Tag archive: Física

Out of Place: Space/Time and Quantum (In)security (The Disorder of Things)

APRIL 21, 2015 – DRLJSHEPHERD

A demon lives behind my left eye. As a migraine sufferer, I have developed a very personal relationship with my pain and its perceived causes. On a bad day, with a crippling sensitivity to light, nausea, and the feeling that the blood flowing to my brain has slowed to a crawl and is the poisoned consistency of pancake batter, I feel the presence of this demon keenly.

On the first day of the Q2 Symposium, however, which I was delighted to attend recently, the demon was in a tricksy mood, rather than out for blood: this was a vestibular migraine. The symptoms of this particular neurological condition are dizziness, loss of balance, and sensitivity to motion. Basically, when the demon manifests in this way, I feel constantly as though I am falling: falling over, falling out of place. The Q Symposium, hosted by James Der Derian and the marvellous team at the University of Sydney’s Centre for International Security Studies, was intended, over the course of two days and a series of presentations, interventions, and media engagements, to unsettle, to make participants think differently about space/time and security, thinking through quantum rather than classical theory, but I do not think that this is what the organisers had in mind.

[Image: cabins and corridors at the Q Station, Sydney]

At the Q Station, located in Sydney where the Q Symposium was held, my pain and my present aligned: I felt out of place, I felt I was falling out of place. I did not expect to like the Q Station. It is the former quarantine station used by the colonial administration to isolate immigrants they suspected of carrying infectious diseases. Its location, on the North Head of Sydney and now within the Sydney Harbour National Park, was chosen for strategic reasons – it is secluded, easy to manage, a passageway point on the journey through to the inner harbour – but it has a much longer historical relationship with healing and disease. The North Head is a site of Aboriginal cultural significance; the space was used by the spiritual leaders (koradgee) of the Guringai peoples for healing and burial ceremonies.

So I did not expect to like it, as such an overt symbol of the colonisation of Aboriginal lands, but it disarmed me. It is a place of great natural beauty, and it has been revived with respect, I felt, for the rich spiritual heritage of the space that extended long prior to the establishment of the Quarantine Station in 1835. When we Q2 Symposium participants were welcomed to country and invited to participate in a smoking ceremony to protect us as we passed through the space, we were reminded of this history and thus reminded – gently, respectfully (perhaps more respectfully than we deserved) – that this is not ‘our’ place. We were out of place.

We were all out of place at the Q2 Symposium. That is the point. Positioning us thus was deliberate; we were to see whether voluntary quarantine would produce new interactions and new insights, guided by the Q Vision: to see how quantum theory ‘responds to global events like natural and unnatural disasters, regime change and diplomatic negotiations that phase-shift with media interventions from states to sub-states, local to global, public to private, organised to chaotic, virtual to real and back again, often in a single news cycle’. It was two days of rich intellectual exploration and conversation, and – as is the case when these experiments work – beautiful connections began to develop between those conversations and the people conversing, conversations about peace, security, and innovation, big conversations about space, and time.

I felt out of place. Mine is not the language of quantum theory. I learned so much from listening to my fellow participants, but I was insecure; as the migraine took hold on the first day, I was not only physically but intellectually feeling as though I was continually falling out of the moment, struggling to maintain the connections between what I was hearing and what I thought I knew.

Quantum theory departs from classical theory in the proposition of entanglement and the uncertainty principle:

This principle states the impossibility of simultaneously specifying the precise position and momentum of any particle. In other words, physicists cannot measure the position of a particle, for example, without causing a disturbance in the velocity of that particle. Knowledge about position and velocity are said to be complementary, that is, they cannot be precise at the same time.
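In symbols – a standard textbook statement rather than anything from the symposium materials – the Heisenberg relation puts a hard lower bound on the product of the two uncertainties:

\[
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
\]

where \(\Delta x\) and \(\Delta p\) are the uncertainties in position and momentum and \(\hbar\) is the reduced Planck constant; squeezing one of them down necessarily pushes the other up.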

I do not know anything about quantum theory – I found it hard to follow even the beginner’s guides provided by the eloquent speakers at the Symposium – but I know a lot about uncertainty. I also feel that I know something about entanglement, perhaps not as it is conceived of within quantum physics, but perhaps that is the point of events such as the Q Symposium: to encourage us to allow the unfamiliar to flow through and around us until the stream snags, to produce an idea or at least a moment of alternative cognition.

My moment of alternative cognition was caused by foetal microchimerism, a connection that flashed for me while I was listening to a physicist talk about entanglement. Scientists have shown that during gestation, foetal cells migrate into the body of the mother and can be found in the brain, spleen, liver, and elsewhere decades later. There are (possibly) parts of my son in my brain, literally as well as simply metaphorically (as the latter was already clear). I am entangled with him in ways that I cannot comprehend. Listening to the speakers discuss entanglement, all I could think was, This is what entanglement means to me, it is in my body.

Perhaps I am not proposing entanglement as Schrödinger does, as ‘the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought’. Perhaps I am just using the concept of entanglement to denote the inextricable, inexplicable, relationality that I have with my son, my family, my community, humanity. It is this entanglement that undoes me, to use Judith Butler’s most eloquent phrase, in the face of grief, violence, and injustice. Perhaps this is the value of the quantum: to make connections that are not possible within the confines of classical thought.

I am not a scientist. I am a messy body out of place, my ‘self’ apparently composed of bodies out of place. My world is not reducible. My uncertainty is vast. All of these things make me insecure, challenge how I move through professional time and space as I navigate the academy. But when I return home from my time in quarantine and joyfully reconnect with my family, I am grounded by how I perceive my entanglement. It is love, not science, that makes me a better scholar.

[Image: sign reading ‘laboratory and mortuary’ at the Q Station, Sydney]

I was inspired by what I heard, witnessed, discussed at the Q2 Symposium. I was – and remain – inspired by the vision of the organisers, the refusal to be bound by classical logics in any field that turns into a drive, a desire to push our exploration of security, peace, and war in new directions. We need new directions; our classical ideas have failed us, and failed humanity, a point made by Colin Wight during his remarks on the final panel at the Symposium. Too often we continue to act as though the world is our laboratory; we have ‘all these theories yet the bodies keep piling up…‘.

But if this is the case, I must ask: do we need a quantum turn to get us to a space within which we can admit entanglement, admit uncertainty, admit that we are out of place? We are never (only) our ‘selves’: we are always both wave and particle and all that is in between and it is our being entangled that renders us human. We know this from philosophy, from art and the humanities. Can we not learn this from art? Must we turn to science (again)? I felt diminished by the asking of these questions, insecure, but I did not feel that these questions were out of place.

The Angra 3 Nuclear Plant and Operação Lava Jato (JC)

For physicist Heitor Scalambrini Costa, allegations of bribes in the construction of the plant, and technical objections concerning the obsolescence of its outdated equipment, are serious matters that must be investigated urgently

Despite all the international debate about the problems and risks of nuclear installations, which intensified after the Fukushima disaster (11 March 2011), the position of the authorities at the Ministry of Mines and Energy, of the nuclear-sector “lobbyists”, and of the contractors and equipment suppliers is surprising: all continue to insist on installing four more nuclear plants in the country by 2030, two of them in the Brazilian Northeast, on top of the construction of Angra 3, which has already been approved.

In the case of Angra 3, the estimated cost of the works was R$ 7.2 billion in 2008; it jumped to R$ 10.4 billion at the end of 2010; by July 2013, according to Eletronuclear, it exceeded R$ 13 billion; and by 2018, the year of its scheduled completion, it should reach R$ 14.9 billion. Obviously, the doubling of this nuclear plant's construction costs has a decisive impact on the average selling price of electricity in the country.

The history of the nuclear industry in Brazil shows that it has always been, and remains, highly dependent on public subsidies. The financing conditions for Angra 3 are undoubtedly perverse, with hidden government subsidies later to be disguised in electricity bills. And those who will pay that bill are us, the consumers, who already pay some of the highest electricity rates in the world.

With Operação Lava Jato, launched in March 2014 to investigate a vast money-laundering and embezzlement scheme involving Petrobras, the country's major contractors and a number of politicians, the real and far from public-spirited interests behind the decision to build large energy projects, such as the Belo Monte hydroelectric plant and the Angra 3 nuclear plant, are beginning to be laid bare.

Ever since the decision to build it under the troubled Brazil-Germany nuclear agreement, the Angra 3 plant has been surrounded by mystery, controversy, uncertainty and the lack of transparency common in the Brazilian nuclear sector.

The plant's civil works were awarded to Construtora Andrade Gutierrez under a contract signed on 16 June 1983 (Figueiredo government, 1979-1985). In April 1986 the works were halted for lack of funds, high costs and doubts about the suitability and risks of this energy source. Even so, the contractor received payments of roughly US$ 20 million per year for decades.

After 23 years at a standstill, work on Angra 3 resumed in 2009 (Lula government, 2003-2010). The Lula government chose not to hold new tenders and revalidated the bid won by Andrade Gutierrez in 1983. Although it held no new tenders, Eletronuclear negotiated updated prices with all suppliers and service providers. The project and its equipment became considerably more expensive: in dollar terms, its value jumped from US$ 1.8 billion to roughly US$ 3.3 billion.

Faced with the decision to keep the contract with Andrade Gutierrez, competing contractors, especially Camargo Corrêa, tried in vain to persuade the government to reconsider, arguing that in the intervening period a technological revolution had cut the cost of civil works for nuclear plants by up to 40%. The full bench of the Tribunal de Contas da União, reviewing the matter in September 2008, likewise did not block the revalidation of the contracts, although it found that Angra 3 showed “signs of serious irregularity” without, however, recommending that the project be halted.

The civil works contract was not the only one taken out of the freezer by the Lula government. For the supply of imported goods and services, the manufacturer Areva was chosen, a company formed by the merger of Germany's Siemens KWU and France's Framatome. Strictly speaking, Areva never even signed the contract; it was chosen because it inherited the original agreement from KWU.

The assembly contracts, in turn, were signed on 2 September 2014 with the following consortia: the ANGRA 3 consortium, responsible for the electromechanical assembly of the systems associated with the plant's primary circuit (the systems associated with nuclear steam generation), made up of Construtora Queiroz Galvão S.A., EBE – Empresa Brasileira de Engenharia S.A. and Techint Engenharia S.A.; and the UNA 3 consortium, responsible for the assembly of the plant's conventional systems, made up of Construtora Andrade Gutierrez S.A., Construtora Norberto Odebrecht S.A., Construções e Comércio Camargo Corrêa S.A. and UTC Engenharia S.A.

Eletronuclear's current planning foresees Angra 3 entering operation in May 2018. But this target will likely have to be revised after the works were all but halted at the end of April 2014 over alleged unpaid debts to the contractor (Dilma government, 2011-2014).

After all these setbacks to such a controversial project, we learned of the allegations made by an executive of the contractor Camargo Corrêa, who began cooperating with the Operação Lava Jato investigations and told prosecutors, during negotiations for a plea-bargain agreement, of an alleged bribe paid to the former Minister of Mines and Energy, Edson Lobão, in connection with hiring Camargo Corrêa to carry out works at the Angra 3 plant.

If these accusations are confirmed, it will be clear to Brazilian society that the real interests behind the construction of Angra 3 and four more nuclear plants were chiefly motivated by the large sums that public officials received as bribes. It is worth remembering that in this case Minister Lobão had command authority over the public company responsible for the project, Eletronuclear, a subsidiary of Eletrobrás.

After this episode we can no longer ignore the technical objections, such as the warnings about the obsolescence of the technologically outdated equipment (compromising its operation and increasing the risk of a nuclear disaster); nor the warnings that the cost of the project could rise during construction, which has in fact already happened; nor the questions about the loan made by Caixa Econômica Federal for the construction of Angra 3.

The expectation is that all the allegations will be investigated and responsibilities established. The matter is in itself extremely serious, and sufficient grounds for halting nuclear activities in the country, in particular the construction of Angra 3, and for freezing new installations. It cannot be accepted that the decision to build nuclear power plants in the country was made at a mere bargaining counter.

Heitor Scalambrini Costa holds a degree in Physics from the Universidade de Campinas/SP, a master's in Nuclear Sciences and Technologies from the Universidade Federal de Pernambuco, and a doctorate in Energetics from the Université d'Aix-Marseille III (Droit, Écon. et Sciences, 1992). He is currently an associate professor at the Universidade Federal de Pernambuco.

Time and Events (Knowledge Ecology)

March 24, 2015 / Adam Robbert


[Image: Mohammad Reza Domiri Ganji]

I just came across Massimo Pigliucci’s interesting review of Mangabeira Unger and Lee Smolin’s book The Singular Universe and the Reality of Time. There are more than a few Whiteheadian themes explored throughout the review, including Unger and Smolin’s (U&S) view that time should be read as an abstraction from events and that the “laws” of the universe are better conceptualized as habits or contingent causal connections secured by the ongoingness of those events rather than as eternal, abstract formalisms. (This entangling of laws with phenomena, of events with time, is one of the ways we can think towards an ecological metaphysics.)

But what I am particularly interested in is the short discussion on Platonism and mathematical realism. I sometimes think of mathematical realism as the view that numbers, and thus the abstract formalisms they create, are real, mind-independent entities, and that, given this view, mathematical equations are discovered (i.e., they actually exist in the world) rather than created (i.e., humans made them up to fill this or that pragmatic need). The review makes it clear, though, that this definition doesn’t push things far enough for the mathematical realist. Instead, the mathematical realist argues for not just the mind-independent existence of numbers but also their nature-independence—math as independent not just of all knowers but of all natural phenomena, past, present, or future.

U&S present an alternative to mathematical realisms of this variety that I find compelling and more consistent with the view that laws are habits and that time is an abstraction from events. Here’s the reviewer’s take on U&S’s argument (the review starts with a quote from U&S and then unpacks it a bit):

“The third idea is the selective realism of mathematics. (We use realism here in the sense of relation to the one real natural world, in opposition to what is often described as mathematical Platonism: a belief in the real existence, apart from nature, of mathematical entities.) Now dominant conceptions of what the most basic natural science is and can become have been formed in the context of beliefs about mathematics and of its relation to both science and nature. The laws of nature, the discerning of which has been the supreme object of science, are supposed to be written in the language of mathematics.” (p. xii)

But they are not, because there are no “laws” and because mathematics is a human (very useful) invention, not a mysterious sixth sense capable of probing a deeper reality beyond the empirical. This needs some unpacking, of course. Let me start with mathematics, then move to the issue of natural laws.

I was myself, until recently, intrigued by mathematical Platonism [8]. It is a compelling idea, which makes sense of the “unreasonable effectiveness of mathematics” as Eugene Wigner famously put it [9]. It is a position shared by a good number of mathematicians and philosophers of mathematics. It is based on the strong gut feeling that mathematicians have that they don’t invent mathematical formalisms, they “discover” them, in a way analogous to what empirical scientists do with features of the outside world. It is also supported by an argument analogous to the defense of realism about scientific theories and advanced by Hilary Putnam: it would be nothing short of miraculous, it is suggested, if mathematics were the arbitrary creation of the human mind, and yet time and again it turns out to be spectacularly helpful to scientists [10].

But there are, of course, equally (more?) powerful counterarguments, which are in part discussed by Unger in the first part of the book. To begin with, the whole thing smells a bit too uncomfortably of mysticism: where, exactly, is this realm of mathematical objects? What is its ontological status? Moreover, and relatedly, how is it that human beings have somehow developed the uncanny ability to access such realm? We know how we can access, however imperfectly and indirectly, the physical world: we evolved a battery of sensorial capabilities to navigate that world in order to survive and reproduce, and science has been a continuous quest for expanding the power of our senses by way of more and more sophisticated instrumentation, to gain access to more and more (and increasingly less relevant to our biological fitness!) aspects of the world.

Indeed, it is precisely this analogy with science that powerfully hints to an alternative, naturalistic interpretation of the (un)reasonable effectiveness of mathematics. Math too started out as a way to do useful things in the world, mostly to count (arithmetics) and to measure up the world and divide it into manageable chunks (geometry). Mathematicians then developed their own (conceptual, as opposed to empirical) tools to understand more and more sophisticated and less immediate aspects of the world, in the process eventually abstracting entirely from such a world in pursuit of internally generated questions (what we today call “pure” mathematics).

U&S do not by any means deny the power and effectiveness of mathematics. But they also remind us that precisely what makes it so useful and general — its abstraction from the particularities of the world, and specifically its inability to deal with temporal asymmetries (mathematical equations in fundamental physics are time-symmetric, and asymmetries have to be imported as externally imposed background conditions) — also makes it subordinate to empirical science when it comes to understanding the one real world.

This empiricist reading of mathematics offers a refreshing respite to the resurgence of a certain Idealism in some continental circles (perhaps most interestingly spearheaded by Quentin Meillassoux). I’ve heard mention a few times now that the various factions squaring off within continental philosophy’s avant garde can be roughly approximated as a renewed encounter between Kantian finitude and Hegelian absolutism. It’s probably a bit too stark of a binary, but there’s a sense in which the stakes of these arguments really do center on the ontological status of mathematics in the natural world. It’s not a direct focus of my own research interests, really, but it’s a fascinating set of questions nonetheless.

Cosmic rays confirm that Fukushima's core melted down (El País)

A muon detector reveals the interior of two damaged reactors in Japan

23 MAR 2015 – 17:04 CET

Image provided by Tepco of the detection work.

While Chernobyl is still struggling to cover the remains of its tragedy with a second sarcophagus, Fukushima is only taking the first steps towards fully controlling and dismantling the reactors damaged in 2011, a task that will take about four decades. Beyond the endless water leaks that plague the plant's operators, the main objective is to determine the exact condition of the radioactive fuel that remained out of control for several days, causing the worst atomic catastrophe in decades. Now, thanks to cosmic rays, we have confirmation that the core of Fukushima's reactor 1 melted completely, and that the fuel in reactor 2 also partially melted.

Decommissioning the plant has already cost Japan 1.45 billion euros

Those melted uranium rods are so dangerous that it has not been possible to reach the heart of the damaged reactors to determine their exact condition. Indirect measurements indicated a core-meltdown scenario, but a new technique drawing on particle physics has, for now, helped to X-ray two of the damaged reactors. It uses a detector of muons, elementary particles produced when cosmic rays penetrate the atmosphere, which reach the Earth's surface by the thousands. These particles slow down when they strike very dense objects, such as nuclear fuel, and they can be detected with a kind of radiographic plate placed at the sides of the reactor.
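To make the counting idea concrete, here is a minimal sketch of transmission muon radiography. It is not the actual Tepco / Los Alamos analysis, just the principle described above: compare the number of muons arriving along each line of sight with the number expected through an empty building, and flag the directions where too few get through, since the deficit points to very dense material such as nuclear fuel. The detector geometry, flux figures and threshold below are illustrative assumptions.

```python
import numpy as np

# Toy sketch of transmission muon radiography, illustrating the technique in the
# text above. Not the real Fukushima analysis: geometry, fluxes and the
# attenuation threshold are invented for illustration.

rng = np.random.default_rng(0)

n_angles = 40              # lines of sight across the reactor building (assumed)
days = 30                  # exposure time in days (assumed)
expected_per_day = 100.0   # muons expected per line of sight with no dense material (assumed)

# Assumed "true" transmission per line of sight: 1.0 means an empty path,
# lower values where the path crosses very dense material such as fuel.
true_transmission = np.ones(n_angles)
true_transmission[15:20] = 0.45   # pretend a lump of dense fuel sits in these directions

# Simulate the counts the detector records along each line of sight (Poisson statistics).
counts = rng.poisson(expected_per_day * days * true_transmission)

# Estimate the transmission and flag directions that look anomalously opaque.
estimated = counts / (expected_per_day * days)
opaque = np.where(estimated < 0.7)[0]     # threshold is an arbitrary choice

print("estimated transmission per line of sight:")
print(np.round(estimated, 2))
print("directions flagged as containing dense material:", opaque.tolist())
```

Where the fuel has melted and fallen out of the field of view, the counts along those directions go back up, which is essentially how the images described below were read.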

Passing through the whole assembly, the muons have shown that no fuel remains in the core of reactor 1. That is, while the core went uncooled during the accident, the uranium rods melted completely and fell through the bottom of the vessel that contained them. That is why they do not appear in the image obtained by physicists from several Japanese universities, who developed this technique together with scientists from the Los Alamos laboratory and Toshiba, the company responsible for the Fukushima decommissioning work.

Because the muon detector plate sits at ground level, the image it has returned of this reactor only tells us that the fuel melted and is no longer in place; it does not reveal its state in the reactor's basement, or whether it has breached, through the floor, the robust containment that separates the core from the outside. Tepco has since released the results of this examination for reactor 2, which showed a partial breakdown of the core when its image was compared with that of a reactor in normal condition.

Scientists cannot tell how far the reactor's molten core has fallen

“The results reaffirm our earlier view that a considerable amount of fuel had melted inside,” Hiroshi Miyano, one of the scientists, told AFP. “But there is no evidence that the fuel has melted through the containment buildings and reached the outside.” To make sure, the next step will be to use robots that can slip into every corner of the buildings.

The cost of the Fukushima decommissioning to the Japanese so far was disclosed today: 1.45 billion euros from the public purse, according to a government report cited by the Kyodo news agency. A little over a third of that money has been spent on efforts to control the continuous seepage and water leaks flooding the plant's surroundings.

Why Hollywood had to Fudge The General Relativity-Based Wormhole Scenes in Interstellar (The Physics arXiv Blog)

The Physics arXiv Blog

Interstellar is the only Hollywood movie to use the laws of physics to create film footage of the most extreme regions of the Universe. Now the film’s scientific advisor, Kip Thorne, reveals why they fudged the final footage

Wormholes are tunnel-like structures that link regions of spacetime. In effect, they are shortcuts from one part of the universe to another. Theoretical physicists have studied their properties for decades but despite all this work, nobody quite knows if they can exist in our universe or whether matter could pass through them if they did.

That hasn’t stopped science fiction writers making liberal use of wormholes as a convenient form of transport over otherwise unnavigable distances. And where science fiction writers roam, Hollywood isn’t far behind. Wormholes have played starring roles in films such as Star Trek, Stargate and even Bill & Ted’s Excellent Adventure. But none of these films depicts wormholes as they might actually look in real life.

All that has now changed thanks to the work of film director Christopher Nolan and Kip Thorne, a theoretical physicist at the California Institute of Technology in Pasadena, who collaborated on the science fiction film Interstellar, which was released in 2014.

Nolan wanted the film to be as realistic as possible and so invited Thorne, an expert on black holes and wormholes, to help create the footage. Thorne was intrigued by the possibility of studying wormholes visually, given that they are otherwise entirely theoretical. The result, he thought, could be a useful way of teaching students about general relativity.

So Thorne agreed to collaborate with a special effects team at Double Negative in London to create realistic footage. And today they publish a paper on the arXiv about the collaboration and what they learnt.

Interstellar is an epic tale. It begins with the discovery of a wormhole near Saturn and the decision to send a team of astronauts through it in search of a habitable planet that humans can populate because Earth is dying.

A key visual element of the story is the view through the wormhole of a different galaxy and the opposite view of Saturn. But what would these views look like?

One way to create computer generated images is to trace all the rays of light in a given scene and then determine which rays enter a camera placed at a given spot. But this is hugely inefficient because most of the rays never enter the camera.

A much more efficient method is to allow time to run backwards and trace the trajectories of light rays leaving the camera and travelling back to their source. In that way, the computing power is focused only on light rays that contribute to the final image.

So Thorne derived the various equations from general relativity that would determine the trajectory of the rays through a wormhole and the team at Double Negative created a computer model that simulated this, which they could run backwards. They also experimented with wormholes of different shapes, for example with long thin throats or much shorter ones and so on.
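As a rough illustration of that backwards-tracing idea – and emphatically not the Double Negative renderer that Thorne and the team actually built – the sketch below integrates null geodesics in the simplest textbook wormhole, the Ellis metric, starting each ray at the camera and following it until it emerges on one side of the throat or the other. The metric, the throat radius and the camera position are all assumptions of the toy model.

```python
import numpy as np

# Minimal backward ray-tracing sketch for an Ellis wormhole, restricted to the
# equatorial plane. Not the renderer described in the paper; it only illustrates
# the idea of tracing rays from the camera back towards their source.

b = 1.0        # wormhole throat radius (assumed)
l_cam = 10.0   # camera position in the radial coordinate l (assumed)

def r2(l):
    # Areal radius squared of the Ellis metric: r(l)^2 = l^2 + b^2
    return l * l + b * b

def trace_backwards(alpha, dlam=0.01, steps=50_000):
    """Follow one null geodesic leaving the camera at angle `alpha` (radians,
    measured from the direction of the throat) back towards its source."""
    E = 1.0
    L = E * np.sqrt(r2(l_cam)) * np.sin(alpha)   # conserved angular momentum
    l, dl = l_cam, -E * np.cos(alpha)            # start heading towards the throat
    phi = 0.0
    for _ in range(steps):
        # Radial geodesic equation for the Ellis metric: l'' = L^2 l / (l^2 + b^2)^2
        ddl = L * L * l / r2(l) ** 2
        dl += ddl * dlam
        l += dl * dlam
        phi += (L / r2(l)) * dlam
        if l < -l_cam:
            return "far side (the other galaxy)"
        if l > l_cam:
            return "camera side (our own sky)"
    return "still near the throat"

if __name__ == "__main__":
    for deg in (2, 5, 10, 30, 60):
        print(f"pixel {deg:2d} deg off-axis -> light came from: {trace_backwards(np.radians(deg))}")
```

With these toy parameters, rays leaving the camera within a few degrees of the throat turn out to have come from the far side, while everything else is a lensed view of the sky on the camera's own side; the real renderer does the same kind of integration in full 3D for every pixel of every frame.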

The results provided some fascinating insights into the way a wormhole might appear in the universe. But it also threw up some challenges for the film makers.

One problem was that Nolan chose to use footage of the view through a short wormhole, which produced fascinatingly distorted images of the distant galaxy. However, the footage of travelling through such a wormhole was too short. “The trip was quick and not terribly interesting, visually — not at all what Nolan wanted for his movie,” say Thorne and co.

But the journey through a longer wormhole was like travelling through a tunnel and very similar to things seen in other movies. “None of the clips, for any choice of parameters, had the compelling freshness that Nolan sought,” they admit.

In particular, when travelling through a wormhole, the object at the end becomes larger, scaling up from its centre and growing in size until it fills the frame. That turns out to be hard to process visually. “Because there is no parallax or other relative motion in the frame, to the audience it looks like the camera is zooming into the center of the wormhole,” say Thorne and co.

But camera zoom was utterly unlike the impression the film-makers wanted to portray, which was the sense of travelling through a shortcut from one part of the universe to another. “To foster that understanding, Nolan asked the visual effects team to convey a sense of travel through an exotic environment, one that was thematically linked to the exterior appearance of the wormhole but also incorporated elements of passing landscapes and the sense of a rapidly approaching destination,” they say.

So for the final cut, they asked visual effects artists to add some animation that gave this sense of motion. “The end result was a sequence of shots that told a story comprehensible by a general audience while resembling the wormhole’s interior,” they say.

In other words, they had to fudge it. Nevertheless, the remarkable attention to detail is a testament to the scientific commitment of the director and his team. And Thorne is adamant that the entire process of creating the footage will be an inspiration to students of film-making and of general relativity.

Of course, whether wormholes really do look like any of this is hard to say. The current thinking is that the laws of physics probably forbid the creation of wormholes like the one in Interstellar.

However, there are several ideas that leave open the possibility that wormholes might exist. The first is that wormholes may exist on the quantum scale, so a sufficiently advanced technology could enlarge them in some way.

The second is that our universe may be embedded in a larger multidimensional cosmos called a brane. That opens the possibility of travelling into other dimensions and then back into our own.

But the possibility that wormholes could exist in these scenarios reflects our ignorance of the physics involved rather than any important insight. Nevertheless, there’s no harm in a little speculation!

Ref: arxiv.org/abs/1502.03809 : Visualizing Interstellar’s Wormhole

Physics’s pangolin (AEON)

Trying to resolve the stubborn paradoxes of their field, physicists craft ever more mind-boggling visions of reality

by Margaret Wertheim

Illustration by Claire Scully

Margaret Wertheim is an Australian-born science writer and director of the Institute For Figuring in Los Angeles. Her latest book is Physics on the Fringe (2011).

Theoretical physics is beset by a paradox that remains as mysterious today as it was a century ago: at the subatomic level things are simultaneously particles and waves. Like the duck-rabbit illusion first described in 1899 by the Polish-born American psychologist Joseph Jastrow, subatomic reality appears to us as two different categories of being.

But there is another paradox in play. Physics itself is riven by the competing frameworks of quantum theory and general relativity, whose differing descriptions of our world eerily mirror the wave-particle tension. When it comes to the very big and the extremely small, physical reality appears to be not one thing, but two. Where quantum theory describes the subatomic realm as a domain of individual quanta, all jitterbug and jumps, general relativity depicts happenings on the cosmological scale as a stately waltz of smooth flowing space-time. General relativity is like Strauss — deep, dignified and graceful. Quantum theory, like jazz, is disconnected, syncopated, and dazzlingly modern.

Physicists are deeply aware of the schizophrenic nature of their science and long to find a synthesis, or unification. Such is the goal of a so-called ‘theory of everything’. However, to non-physicists, these competing lines of thought, and the paradoxes they entrain, can seem not just bewildering but absurd. In my experience as a science writer, no other scientific discipline elicits such contradictory responses.

In string cosmology, the totality of existing universes exceeds the number of particles in our universe by more than 400 orders of magnitude

This schism was brought home to me starkly some months ago when, in the course of a fortnight, I happened to participate in two public discussion panels, one with a cosmologist at Caltech, Pasadena, the other with a leading literary studies scholar from the University of Southern California. On the panel with the cosmologist, a researcher whose work I admire, the discussion turned to time, about which he had written a recent, and splendid, book. Like philosophers, physicists have struggled with the concept of time for centuries, but now, he told us, they had locked it down mathematically and were on the verge of a final state of understanding. In my Caltech friend’s view, physics is a progression towards an ever more accurate and encompassing Truth. My literary theory panellist was having none of this. A Lewis Carroll scholar, he had joined me for a discussion about mathematics in relation to literature, art and science. For him, maths was a delightful form of play, a ludic formalism to be admired and enjoyed; but any claims physicists might make about truth in their work were, in his view, ‘nonsense’. This mathematically based science, he said, was just ‘another kind of storytelling’.

On the one hand, then, physics is taken to be a march toward an ultimate understanding of reality; on the other, it is seen as no different in status to the understandings handed down to us by myth, religion and, no less, literary studies. Because I spend my time about equally in the realms of the sciences and arts, I encounter a lot of this dualism. Depending on whom I am with, I find myself engaging in two entirely different kinds of conversation. Can we all be talking about the same subject?

Many physicists are Platonists, at least when they talk to outsiders about their field. They believe that the mathematical relationships they discover in the world about us represent some kind of transcendent truth existing independently from, and perhaps a priori to, the physical world. In this way of seeing, the universe came into being according to a mathematical plan, what the British physicist Paul Davies has called ‘a cosmic blueprint’. Discovering this ‘plan’ is a goal for many theoretical physicists and the schism in the foundation of their framework is thus intensely frustrating. It’s as if the cosmic architect has designed a fiendish puzzle in which two apparently incompatible parts must be fitted together. Both are necessary, for both theories make predictions that have been verified to a dozen or so decimal places, and it is on the basis of these theories that we have built such marvels as microchips, lasers, and GPS satellites.

Quite apart from the physical tensions that exist between them, relativity and quantum theory each pose philosophical problems. Are space and time fundamental qualities of the universe, as general relativity suggests, or are they byproducts of something even more basic, something that might arise from a quantum process? Looking at quantum mechanics, huge debates swirl around the simplest situations. Does the universe split into multiple copies of itself every time an electron changes orbit in an atom, or every time a photon of light passes through a slit? Some say yes, others say absolutely not.

Theoretical physicists can’t even agree on what the celebrated waves of quantum theory mean. What is doing the ‘waving’? Are the waves physically real, or are they just mathematical representations of probability distributions? Are the ‘particles’ guided by the ‘waves’? And, if so, how? The dilemma posed by wave-particle duality is the tip of an epistemological iceberg on which many ships have been broken and wrecked.

Undeterred, some theoretical physicists are resorting to increasingly bold measures in their attempts to resolve these dilemmas. Take the ‘many-worlds’ interpretation of quantum theory, which proposes that every time a subatomic action takes place the universe splits into multiple, slightly different, copies of itself, with each new ‘world’ representing one of the possible outcomes.

When this idea was first proposed in 1957 by the American physicist Hugh Everett, it was considered an almost lunatic-fringe position. Even 20 years later, when I was a physics student, many of my professors thought it was a kind of madness to go down this path. Yet in recent years the many-worlds position has become mainstream. The idea of a quasi-infinite, ever-proliferating array of universes has been given further credence as a result of being taken up by string theorists, who argue that every mathematically possible version of the string theory equations corresponds to an actually existing universe, and estimate that there are 10 to the power of 500 different possibilities. To put this in perspective: physicists believe that in our universe there are approximately 10 to the power of 80 subatomic particles. In string cosmology, the totality of existing universes exceeds the number of particles in our universe by more than 400 orders of magnitude.
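The ‘more than 400 orders of magnitude’ is simply the ratio of the two estimates quoted above:

\[
\frac{10^{500}}{10^{80}} = 10^{420}
\]

that is, roughly \(10^{420}\) universes in the string landscape for every single particle in our own.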

Nothing in our experience compares to this unimaginably vast number. Every universe that can be mathematically imagined within the string parameters — including ones in which you exist with a prehensile tail, to use an example given by the American string theorist Brian Greene — is said to be manifest somewhere in a vast supra-spatial array ‘beyond’ the space-time bubble of our own universe.

What is so epistemologically daring here is that the equations are taken to be the fundamental reality. The fact that the mathematics allows for gazillions of variations is seen to be evidence for gazillions of actual worlds.

Perhaps what we are encountering here is not so much the edge of reality, but the limits of the physicists’ category system

This kind of reification of equations is precisely what strikes some humanities scholars as childishly naive. At the very least, it raises serious questions about the relationship between our mathematical models of reality, and reality itself. While it is true that in the history of physics many important discoveries have emerged from revelations within equations — Paul Dirac’s formulation for antimatter being perhaps the most famous example — one does not need to be a cultural relativist to feel sceptical about the idea that the only way forward now is to accept an infinite cosmic ‘landscape’ of universes that embrace every conceivable version of world history, including those in which the Middle Ages never ended or Hitler won.

In the 30 years since I was a student, physicists’ interpretations of their field have increasingly tended toward literalism, while the humanities have tilted towards postmodernism. Thus a kind of stalemate has ensued. Neither side seems inclined to contemplate more nuanced views. It is hard to see ways out of this tunnel, but in the work of the late British anthropologist Mary Douglas I believe we can find a tool for thinking about some of these questions.

On the surface, Douglas’s great book Purity and Danger (1966) would seem to have nothing do with physics; it is an inquiry into the nature of dirt and cleanliness in cultures across the globe. Douglas studied taboo rituals that deal with the unclean, but her book ends with a far-reaching thesis about human language and the limits of all language systems. Given that physics is couched in the language-system of mathematics, her argument is worth considering here.

In a nutshell, Douglas notes that all languages parse the world into categories; in English, for instance, we call some things ‘mammals’ and other things ‘lizards’ and have no trouble recognising the two separate groups. Yet there are some things that do not fit neatly into either category: the pangolin, or scaly anteater, for example. Though pangolins are warm-blooded like mammals and birth their young, they have armoured bodies like some kind of bizarre lizard. Such definitional monstrosities are not just a feature of English. Douglas notes that all category systems contain liminal confusions, and she proposes that such ambiguity is the essence of what is seen to be impure or unclean.

Whatever doesn’t parse neatly in a given linguistic system can become a source of anxiety to the culture that speaks this language, calling forth special ritual acts whose function, Douglas argues, is actually to acknowledge the limits of language itself. In the Lele culture of the Congo, for example, this epistemological confrontation takes place around a special cult of the pangolin, whose initiates ritualistically eat the abominable animal, thereby sacralising it and processing its ‘dirt’ for the entire society.

‘Powers are attributed to any structure of ideas,’ Douglas writes. We all tend to think that our categories of understanding are necessarily real. ‘The yearning for rigidity is in us all,’ she continues. ‘It is part of our human condition to long for hard lines and clear concepts’. Yet when we have them, she says, ‘we have to either face the fact that some realities elude them, or else blind ourselves to the inadequacy of the concepts’. It is not just the Lele who cannot parse the pangolin: biologists are still arguing about where it belongs on the genetic tree of life.

As Douglas sees it, cultures themselves can be categorised in terms of how well they deal with linguistic ambiguity. Some cultures accept the limits of their own language, and of language itself, by understanding that there will always be things that cannot be cleanly parsed. Others become obsessed with ever-finer levels of categorisation as they try to rid their system of every pangolin-like ‘duck-rabbit’ anomaly. For such societies, Douglas argues, a kind of neurosis ensues, as the project of categorisation takes ever more energy and mental effort. If we take this analysis seriously, then, in Douglas’ terms, might it be that particle-waves are our pangolins? Perhaps what we are encountering here is not so much the edge of reality, but the limits of the physicists’ category system.

In its modern incarnation, physics is grounded in the language of mathematics. It is a so-called ‘hard’ science, a term meant to imply that physics is unfuzzy — unlike, say, biology whose classification systems have always been disputed. Based in mathematics, the classifications of physicists are supposed to have a rigour that other sciences lack, and a good deal of the near-mystical discourse that surrounds the subject hinges on ideas about where the mathematics ‘comes from’.

According to Galileo Galilei and other instigators of what came to be known as the Scientific Revolution, nature was ‘a book’ that had been written by God, who had used the language of mathematics because it was seen to be Platonically transcendent and timeless. While modern physics is no longer formally tied to Christian faith, its long association with religion lingers in the many references that physicists continue to make about ‘the mind of God’, and many contemporary proponents of a ‘theory of everything’ remain Platonists at heart.

It’s a startling thought, in an age when we can read the speed of our cars from our digitised dashboards, that somebody had to discover ‘velocity’

In order to articulate a more nuanced conception of what physics is, we need to offer an alternative to Platonism. We need to explain how the mathematics ‘arises’ in the world, in ways other than assuming that it was put there by some kind of transcendent being or process. To approach this question dispassionately, it is necessary to abandon the beautiful but loaded metaphor of the cosmic book — and all its authorial resonances — and focus, not on the creation of the world, but on the creation of physics as a science.

When we say that ‘mathematics is the language of physics’, we mean that physicists consciously comb the world for patterns that are mathematically describable; these patterns are our ‘laws of nature’. Since mathematical patterns proceed from numbers, much of the physicist’s task involves finding ways to extract numbers from physical phenomena. In the 16th and 17th centuries, philosophical discussion referred to this as the process of ‘quantification’; today we call it measurement. One way of thinking about modern physics is as an ever more sophisticated process of quantification that multiplies and diversifies the ways we extract numbers from the world, thus giving us the raw material for our quest for patterns or ‘laws’. This is no trivial task. Indeed, the history of physics has turned on the question of what can be measured and how.

Stop for a moment and take a look around you. What do you think can be quantified? What colours and forms present themselves to your eye? Is the room bright or dark? Does the air feel hot or cold? Are birds singing? What other sounds do you hear? What textures do you feel? What odours do you smell? Which, if any, of these qualities of experience might be measured?

In the early 14th century, a group of scholarly monks known as the calculatores at the University of Oxford began to think about this problem. One of their interests was motion, and they were the first to recognise the qualities we now refer to as ‘velocity’ and ‘acceleration’ — the former being the rate at which a body changes position, the latter, the rate at which the velocity itself changes. It’s a startling thought, in an age when we can read the speed of our cars from our digitised dashboards, that somebody had to discover ‘velocity’.

Yet despite the calculatores’ advances, the science of kinematics made barely any progress until Galileo and his contemporaries took up the baton in the late-16th century. In the intervening time, the process of quantification had to be extracted from a burden of dreams in which it became, frankly, bogged down. For along with motion, the calculatores were also interested in qualities such as sin and grace and they tried to find ways to quantify these as well. Between the calculatores and Galileo, students of quantification had to work out what they were going to exclude from the project. To put it bluntly, in order for the science of physics to get underway, the vision had to be narrowed.

How, exactly, this narrowing was to be achieved was articulated by the 17th-century French mathematician and philosopher René Descartes. What could a mathematically based science describe? Descartes’s answer was that the new natural philosophers must restrict themselves to studying matter in motion through space and time. Maths, he said, could describe the extended realm — or res extensa. Thoughts, feelings, emotions and moral consequences, he located in the ‘realm of thought’, or res cogitans, declaring them inaccessible to quantification, and thus beyond the purview of science. In making this distinction, Descartes did not divide mind from body (that had been done by the Greeks), he merely clarified the subject matter for a new physical science.

So what else apart from motion could be quantified? To a large degree, progress in physics has been made by slowly extending the range of answers. Take colour. At first blush, redness would seem to be an ineffable and irreducible quale. In the late 19th century, however, physicists discovered that each colour in the rainbow, when refracted through a prism, corresponds to a different wavelength of light. Red light has a wavelength of around 700 nanometres, violet light around 400 nanometres. Colour can be correlated with numbers — both the wavelength and frequency of an electromagnetic wave. Here we have one half of our duality: the wave.
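The correlation is the elementary wave relation between frequency and wavelength – standard values, not figures given in the essay:

\[
f = \frac{c}{\lambda}, \qquad
f_{\text{red}} \approx \frac{3.0\times10^{8}\ \text{m/s}}{700\times10^{-9}\ \text{m}} \approx 4.3\times10^{14}\ \text{Hz}, \qquad
f_{\text{violet}} \approx 7.5\times10^{14}\ \text{Hz}
\]

so ‘red’ can be handed to the quantification project as either a wavelength or a frequency.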

The discovery of electromagnetic waves was in fact one of the great triumphs of the quantification project. In the 1820s, Michael Faraday noticed that, if he sprinkled iron filings around a magnet, the fragments would spontaneously assemble into a pattern of lines that, he conjectured, were caused by a ‘magnetic field’. Physicists today accept fields as a primary aspect of nature but at the start of the Industrial Revolution, when philosophical mechanism was at its peak, Faraday’s peers scoffed. Invisible fields smacked of magic. Yet, later in the 19th century, James Clerk Maxwell showed that magnetic and electric fields were linked by a precise set of equations — today known as Maxwell’s Laws — that enabled him to predict the existence of radio waves. The quantification of these hitherto unsuspected aspects of our world — these hidden invisible ‘fields’ — has led to the whole gamut of modern telecommunications on which so much of modern life is now staged.
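For reference, the ‘precise set of equations’ in their modern vector-calculus form – a standard statement, not one spelled out in the essay – is:

\[
\nabla\cdot\mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla\cdot\mathbf{B} = 0, \qquad
\nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t}, \qquad
\nabla\times\mathbf{B} = \mu_0\mathbf{J} + \mu_0\varepsilon_0\frac{\partial\mathbf{E}}{\partial t}
\]

In empty space these combine into a wave equation whose propagation speed, \(1/\sqrt{\mu_0\varepsilon_0}\), matches the measured speed of light, which is how Maxwell could predict that radio waves exist before anyone had produced one.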

Turning to the other side of our duality – the particle – with a burgeoning array of electrical and magnetic equipment, physicists in the late 19th and early 20th centuries began to probe matter. They discovered that atoms were composed of parts holding positive and negative charge. The negative electrons were found to revolve around a positive nucleus in pairs, with each member of the pair in a slightly different state, or ‘spin’. Spin turns out to be a fundamental quality of the subatomic realm. Matter particles, such as electrons, have a spin value of one half. Particles of light, or photons, have a spin value of one. In short, one of the qualities that distinguishes ‘matter’ from ‘energy’ is the spin value of its particles.

We have seen how light acts like a wave, yet experiments over the past century have shown that under many conditions it behaves instead like a stream of particles. In the photoelectric effect (the explanation of which won Albert Einstein his Nobel Prize in 1921), individual photons knock electrons out of their atomic orbits. In Thomas Young’s infamous double-slit experiment of 1805, light behaves simultaneously like waves and particles. Here, a stream of detectably separate photons is mysteriously guided by a wave whose effect becomes manifest over a long period of time. What is the source of this wave and how does it influence billions of isolated photons separated by great stretches of time and space? The late Nobel laureate Richard Feynman — a pioneer of quantum field theory — stated in 1965 that the double-slit experiment lay at ‘the heart of quantum mechanics’. Indeed, physicists have been debating how to interpret its proof of light’s duality for the past 200 years.
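In the standard small-angle treatment – a textbook result, not something derived in the essay – the ‘wave’ shows up as the distribution of photon arrivals on the screen:

\[
I(x) \propto \cos^{2}\!\left(\frac{\pi d x}{\lambda L}\right), \qquad \Delta x_{\text{fringe}} = \frac{\lambda L}{d}
\]

where \(d\) is the slit separation, \(L\) the distance to the screen and \(\lambda\) the wavelength. Each photon lands at a single point, but the accumulated points trace out exactly this interference pattern.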

Just as waves of light sometimes behave like particles of matter, particles of matter can sometimes behave like waves. In many situations, electrons are clearly particles: we fire them from electron guns inside the cathode-ray tubes of old-fashioned TV sets and each electron that hits the screen causes a tiny phosphor to glow. Yet, in orbiting around atoms, electrons behave like three-dimensional waves. Electron microscopes put the wave-quality of these particles to work; here, in effect, they act like short wavelengths of light.

Physics is not just another story about the world: it is a qualitatively different kind of story to those told in the humanities, in myths and religions

Wave-particle duality is a core feature of our world. Or rather, we should say, it is a core feature of our mathematical descriptions of our world. The duck-rabbits are everywhere, colonising the imagery of physicists like, well, rabbits. But what is critical to note here is that however ambiguous our images, the universe itself remains whole and is manifestly not fracturing into schizophrenic shards. It is this tantalising wholeness in the thing itself that drives physicists onward, like an eternally beckoning light that seems so teasingly near yet is always out of reach.

Instrumentally speaking, the project of quantification has led physicists to powerful insights and practical gain: the computer on which you are reading this article would not exist if physicists hadn’t discovered the equations that describe the band-gaps in semiconducting materials. Microchips, plasma screens and cellphones are all byproducts of quantification and, every decade, physicists identify new qualities of our world that are amenable to measurement, leading to new technological possibilities. In this sense, physics is not just another story about the world: it is a qualitatively different kind of story to those told in the humanities, in myths and religions. No language other than maths is capable of expressing interactions between particle spin and electromagnetic field strength. The physicists, with their equations, have shown us new dimensions of our world.

That said, we should be wary of claims about ultimate truth. While quantification, as a project, is far from complete, it is an open question as to what it might ultimately embrace. Let us look again at the colour red. Red is not just an electromagnetic phenomenon, it is also a perceptual and contextual phenomenon. Stare for a minute at a green square then look away: you will see an afterimage of a red square. No red light has been presented to your eyes, yet your brain will perceive a vivid red shape. As Goethe argued in the late-18th century, and Edwin Land (who invented Polaroid film in 1932) echoed, colour cannot be reduced to purely prismatic effects. It exists as much in our minds as in the external world. To put this into a personal context, no understanding of the electromagnetic spectrum will help me to understand why certain shades of yellow make me nauseous, while electric orange fills me with joy.

Descartes was no fool; by parsing reality into the res extensa and res cogitans he captured something critical about human experience. You do not need to be a hard-core dualist to imagine that subjective experience might not be amenable to mathematical law. For Douglas, ‘the attempt to force experience into logical categories of non-contradiction’ is the ‘final paradox’ of an obsessive search for purity. ‘But experience is not amenable [to this narrowing],’ she insists, and ‘those who make the attempt find themselves led into contradictions.’

Quintessentially, the qualities that are amenable to quantification are those that are shared. All electrons are essentially the same: given a set of physical circumstances, every electron will behave like any other. But humans are not like this. It is our individuality that makes us so infuriatingly human, and when science attempts to reduce us to the status of electrons it is no wonder that professors of literature scoff.

Douglas’s point about attempting to corral experience into logical categories of non-contradiction has obvious application to physics, particularly to recent work on the interface between quantum theory and relativity. One of the most mysterious findings of quantum science is that two or more subatomic particles can be ‘entangled’. Once particles are entangled, what we do to one immediately affects the other, even if the particles are hundreds of kilometres apart. Yet this contradicts a basic premise of special relativity, which states that no signal can travel faster than the speed of light. Entanglement suggests that either quantum theory or special relativity, or both, will have to be rethought.

More challenging still, consider what might happen if we tried to send two entangled photons to two separate satellites orbiting in space, as a team of Chinese physicists, working with the entanglement theorist Anton Zeilinger, is currently hoping to do. Here the situation is compounded by the fact that what happens in near-Earth orbit is affected by both special and general relativity. The details are complex, but suffice it to say that special relativity suggests that the motion of the satellites will cause time to appear to slow down, while the effect of the weaker gravitational field in space should cause time to speed up. Given this, it is impossible to say which of the photons would be received first at which satellite. To an observer on the ground, both photons should appear to arrive at the same time. Yet to an observer on satellite one, the photon at satellite two should appear to arrive first, while to an observer on satellite two the photon at satellite one should appear to arrive first. We are in a mire of contradiction and no one knows what would in fact happen here. If the Chinese experiment goes ahead, we might find that some radical new physics is required.
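To get a feel for the size of the two competing effects, here is a rough Python sketch for a GPS-like orbit; the altitude and the circular-orbit speed are illustrative assumptions of mine, not parameters of the proposed photon experiment:

import math

G, c = 6.674e-11, 2.998e8              # SI units
M_earth, R_earth = 5.972e24, 6.371e6   # kg, m
r = R_earth + 20.2e6                   # a GPS-like orbital radius (assumed)
v = math.sqrt(G * M_earth / r)         # circular orbital speed, roughly 3.9 km/s
day = 86_400                           # seconds

# Special relativity: the moving clock runs slow by roughly v^2/(2c^2).
slow = v**2 / (2 * c**2) * day * 1e6                           # microseconds per day
# General relativity: the higher clock, in weaker gravity, runs fast.
fast = G * M_earth / c**2 * (1 / R_earth - 1 / r) * day * 1e6  # microseconds per day

print(round(slow, 1), round(fast, 1))  # ~7 microseconds slow vs ~46 microseconds fast

For a GPS-like satellite the gravitational speed-up wins, but the point of the passage above is that for two entangled photons and two different satellites there is no observer-independent answer to which photon arrived first.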

You will notice that the ambiguity in these examples focuses on the issue of time — as do many paradoxes relating to relativity and quantum theory. Time indeed is a huge conundrum throughout physics, and paradoxes surround it at many levels of being. In Time Reborn: From the Crisis in Physics to the Future of the Universe (2013) the American physicist Lee Smolin argues that for 400 years physicists have been thinking about time in ways that are fundamentally at odds with human experience and therefore wrong. In order to extricate ourselves from some of the deepest paradoxes in physics, he says, its very foundations must be reconceived. In an op-ed in New Scientist in April this year, Smolin wrote:
The idea that nature consists fundamentally of atoms with immutable properties moving through unchanging space, guided by timeless laws, underlies a metaphysical view in which time is absent or diminished. This view has been the basis for centuries of progress in science, but its usefulness for fundamental physics and cosmology has come to an end.

In order to resolve the contradictions between how physicists describe time and how we experience time, Smolin says physicists must abandon the notion of time as an unchanging ideal and embrace an evolutionary concept of natural laws.

This is radical stuff, and Smolin is well-known for his contrarian views — he has been an outspoken critic of string theory, for example. But at the heart of his book is a worthy idea: Smolin is against the reflexive reification of equations. As our mathematical descriptions of time are so starkly in conflict with our lived experience of time, it is our descriptions that will have to change, he says.

To put this into Douglas’s terms, the powers that have been attributed to physicists’ structure of ideas have been overreaching. ‘Attempts to force experience into logical categories of non-contradiction’ have, she would say, inevitably failed. From the contemplation of wave-particle pangolins we have been led to the limits of the linguistic system of physicists. Like Smolin, I have long believed that the ‘block’ conception of time that physics proposes is inadequate, and I applaud this thrilling, if also at times highly speculative, book. Yet, if we can fix the current system by reinventing its axioms, then (assuming that Douglas is correct) even the new system will contain its own pangolins.

In the early days of quantum mechanics, Niels Bohr liked to say that we might never know what ‘reality’ is. Bohr used John Wheeler’s coinage, calling the universe ‘a great smoky dragon’, and claiming that all we could do with our science was to create ever more predictive models. Bohr’s positivism has gone out of fashion among theoretical physicists, replaced by an increasingly hard-core Platonism. To say, as some string theorists do, that every possible version of their equations must be materially manifest strikes me as a kind of berserk literalism, reminiscent of the old Ptolemaics who used to think that every mathematical epicycle in their descriptive apparatus must represent a physically manifest cosmic gear.

We are veering here towards Douglas’s view of neurosis. Will we accept, at some point, that there are limits to the quantification project, just as there are to all taxonomic schemes? Or will we be drawn into ever more complex and expensive quests — CERN mark two, Hubble, the sequel — as we try to root out every lingering paradox? In Douglas’s view, ambiguity is an inherent feature of language that we must face up to, at some point, or drive ourselves into distraction.

3 June 2013

Physics and Hollywood (Folha de S.Paulo)

HENRIQUE GOMES

22/02/2015  03h15

SUMMARY: “Interstellar” is part of a wave of films grounded in science. In it, the search for survival takes humans close to a black hole, a premise for speculation tied to the research of Stephen Hawking – the subject of “The Theory of Everything”, which, like Christopher Nolan’s science-fiction film, is competing in Oscar categories today.

*

In 2014 there was a boom of Hollywood films taking science seriously. “The Theory of Everything” and “The Imitation Game” deal with the lives of two important 20th-century scientists: Stephen Hawking and Alan Turing, respectively. A third feature, the science-fiction film “Interstellar”, breaks new ground not only by adhering faithfully to what is known about space-time but by putting that knowledge at the service of the narrative.

I am not talking here about including sound effects in space. That has been done before and does not change a script in any significant way. The people behind “Interstellar” did not settle for ticking the technical boxes just to fend off the usual pedants. They put Homeric effort into countless meetings with the renowned physicist Kip Thorne (who also appears in “The Theory of Everything”) and into simulations of black holes, and they effectively rewrote the script to conform to the physics.

The end result loses nothing – at least when it comes to firing the imagination and producing fantastic effects – to scientific catastrophes such as “Star Trek Into Darkness” (2013) and “Prometheus” (2012). In “Star Trek”, for example, Isaac Newton and even Galileo would be horrified to see a ship go into free fall towards Earth while its crew, simultaneously, fall freely relative to the ship. As anyone who has ever been in a free-falling ship knows well, crew members float, they do not fall. (Stones of different weights fall at the same rate from the Tower of Pisa and from other towers.)

“Interstellar” goes beyond the rules of Hollywood science fiction. Carl Sagan said that “science is not only compatible with spirituality; it is a profound source of spirituality”. “Interstellar” proves what we scientists have long known: the remark applies equally to the human enchantment with the unknown.

Matthew McConaughey as Cooper in “Interstellar” (publicity image)

“Interstellar” and “The Theory of Everything” have a few themes in common.

The first is degeneration – of planet Earth in one, of a neuromuscular system in the other. The deterioration of planet Earth is to be escaped through interstellar exploration, led by the character Cooper (Matthew McConaughey); that of the human body, through the tireless mind of Stephen Hawking, played by the excellent Eddie Redmayne.

The second theme they share is precisely an important part of Hawking’s work.

STARS

Stephen Hawking was born in Oxford in 1942. At 21, already in the first year of his doctorate, he was diagnosed with ALS (amyotrophic lateral sclerosis), a degenerative disease that attacks the nerves’ communication with the muscles but leaves other brain functions intact. Determined to continue his studies, one of the first problems Hawking devoted himself to was the question of what happens when a star is so heavy that it cannot support its own weight.

The star’s collapse concentrates all of its mass at a single point, where the theory stops making sense. Anticipating applications in science-fiction films, physicists called this point a singularity. Within a certain distance of this singular point, the pull of all that concentrated mass is strong enough that not even light can escape. A flashlight switched on at or inside that radius cannot be seen by anyone farther out; nothing escapes from that sphere, which is called a black hole (for obvious reasons).
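That critical distance is the Schwarzschild radius, r_s = 2GM/c^2, and a few lines of Python give its scale. The figure of 100 million solar masses for a Gargantua-like black hole is the one usually quoted for the film and is assumed here, not stated in this article:

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # mass of the Sun, kg

def schwarzschild_radius(mass_kg):
    # Radius of the sphere from which not even light escapes: r_s = 2GM/c^2
    return 2 * G * mass_kg / c**2

print(schwarzschild_radius(M_sun) / 1e3)             # ~3 km for the Sun
print(schwarzschild_radius(1e8 * M_sun) / 1.496e11)  # ~2 astronomical units for 1e8 Suns

A solar-mass black hole would fit inside a city; a hundred-million-solar-mass one has a horizon whose radius is roughly twice the Earth-Sun distance.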

When Hawking was a student, there was only one known solution of Einstein’s equations of general relativity – the equations that describe how concentrations of matter and energy distort the geometry of space-time – that represented a black hole, discovered by the German physicist Schwarzschild. A group of Russian physicists argued that this solution was artificial, born of an arrangement of particles collapsing in perfect synchrony so that they all reached the centre together, thereby forming a point of infinite density: the singularity.

Hawking and Roger Penrose, a mathematician at Oxford, showed that this was in fact a generic feature of Einstein’s equations – and more: that the universe itself would have begun in what came to be called the “cosmological singularity”, at which the notion of time ceases to have meaning. As Hawking says in the film, “it would be the beginning of time itself”.

There is no consensus in modern theoretical physics about what actually happens to someone who approaches a singularity inside a black hole. The biggest obstacle to our understanding is that, at small distances from the singularity, quantum effects have to be taken into account, and – as the well-schooled Jane Hawking remarks in “The Theory of Everything” – quantum theory and general relativity are written in completely different languages. Not that one needs to get that close to the singularity to know that the effects would be drastic.

The criticism of “Interstellar” I have heard most often from amateur (and not-so-amateur) physicists is that – spoiler alert – Cooper would be torn apart on entering Gargantua, a giant black hole. “Torn apart” is perhaps the wrong phrase: “spaghettified” is the technical term.

TIDES

What would kill you as you fell into a black hole is not the absolute strength of gravity. Just like stones thrown from towers by Italian heretics, different parts of your body fall with the same acceleration, even if that acceleration is enormous. This conclusion holds as long as the force of gravity is roughly constant – nearly the same at your feet and at your head. Although that condition is satisfied at the surface of the Earth, the force of gravity is obviously not constant everywhere. It falls off with distance, and the effects of that variation – small even at the scale of the Tower of Pisa – can be observed in much larger bodies. The most familiar example for us Earthlings is the effect of the tides on our planet. The Moon pulls harder on the Earth’s near side, and the oceans swell and subside in step with that pull. Although the Sun’s absolute gravitational force on the Earth is larger, the Moon is much closer to us than the Sun, so the larger gradient of the force is the lunar one, and that is why we feel the Moon’s tidal effects more than the Sun’s.
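The arithmetic behind that last claim is easy to check. Here is a minimal Python sketch using standard values for the masses and distances; the factor 2GMR/d^3 is the usual estimate of the tidal acceleration across one Earth radius:

G = 6.674e-11                        # m^3 kg^-1 s^-2
R_earth = 6.371e6                    # m
M_sun, d_sun = 1.989e30, 1.496e11    # kg, m
M_moon, d_moon = 7.35e22, 3.844e8    # kg, m

def pull(M, d):
    # Absolute gravitational acceleration at the Earth, m/s^2
    return G * M / d**2

def tide(M, d):
    # Tidal (gradient) acceleration across one Earth radius, m/s^2
    return 2 * G * M * R_earth / d**3

print(pull(M_sun, d_sun) / pull(M_moon, d_moon))    # the Sun pulls ~180 times harder...
print(tide(M_moon, d_moon) / tide(M_sun, d_sun))    # ...but the Moon's tide is ~2 times stronger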

For the same reason, as soon as we entered a black hole we would be subject to an immense gravitational force, but, while still far from the central singularity, we would not necessarily feel any tidal force. The absence of dramatic effects at that stage of the fall is aptly known in the community as “no drama”, and up to that point Cooper’s entry into Gargantua would be exactly that: no drama. But not afterwards. Approaching a singularity, even before quantum effects need to be included, the gravitational force can differ so much between feet and head that Cooper would be stretched out – hence the “spaghettification”.

It was not possible, of course, to include this (noodly?) explanation in “Interstellar”. Even so, one of the film’s most astonishing scenes involves precisely the tides on the planet Miller, which orbits Gargantua. In the film, enormous tides strike the protagonists every hour, at rather inconvenient moments. For the tidal effect to come out approximately right, the physicist Kip Thorne calculated the size of the black hole, its spin and the planet’s orbit. The film’s staggering images are the product of calculations.

Even if it is unrealistic to expect the same care in future productions, perhaps the fact that some of these black-hole simulations were news even to the scientific community will encourage a few of the producers-cum-amateur-physicists out there – a rather thin demographic – to follow the example.

But back to Cooper’s fate. We have already seen that he would survive the entry into the black hole unscathed. No drama up to that point. But what about this “spaghettification”? Many commentators, including the popular Neil deGrasse Tyson, argued that we simply do not know what happens inside a black hole. Once past that frontier, the script would acquire diplomatic immunity from the laws of physics, becoming fertile ground for bolder speculation – not to say a “no man’s land”.

Well, with apologies to Tyson, that is not exactly true. We believe general relativity would work very well right up until quantum effects become important (for a black hole the size of Gargantua). On top of that, in the Schwarzschild solution the approach to the singularity is unavoidable. Just as we cannot stop time, we could not keep our distance from the centre: we would inexorably be drawn closer and closer to the singularity, which would loom ever larger before us, as inevitable as the future. In that case, Cooper would be turned into spaghetti before the trumpets of quantum mechanics could sound his (possible) salvation. Fortunately for the entire human race in the film, that is not what happens.

SPINNING TOPS

In 1962, 43 years after the black hole was discovered in the trenches of the Great War, the New Zealand mathematical physicist Roy Kerr, in rather more comfortable circumstances, generalised Schwarzschild’s solution, finding a solution of Einstein’s theory that corresponds to a rotating black hole – spinning like a top.

Later, Hawking and collaborators showed that any black hole settles into the Kerr form, and Gargantua, appropriately, is one of these, spinning extremely fast. But when tops like these spin they drag space-time itself along with them, and there is an unavoidable kind of centrifugal force – the force we feel in a car when taking a tight corner – that grows as the centre of the black hole approaches. At a certain distance from the centre, the tug-of-war between the attractive force and the centrifugal one balances out, and the singularity stops being inevitable.

From that moment on we really do not know what happens, and Cooper is free to do whatever the screenwriters invent. Not that entering a fourth dimension, seeing time as just another direction of space and all the rest have no grounding at all, but from that point onwards we are in the realm of scientific speculation. At least we got there with a clear conscience.

Carl Sagan, in the excellent “Cosmos”, guides us once again: “We will not be afraid to speculate. But we will be careful to distinguish speculation from fact. The cosmos is full beyond measure of elegant truths, of exquisite interrelationships, of the awesome machinery of nature.” The universe is stranger (and more fascinating) than fiction. It is high time we explored a fiction that is scientific in more than name.

HENRIQUE GOMES, 34, holds a PhD in physics from the University of Nottingham (UK) and is a researcher at the Perimeter Institute for Theoretical Physics (Canada).

How The Nature of Information Could Resolve One of The Great Paradoxes Of Cosmology (The Physics Arxiv Blog)

Feb 17, 2015

Stephen Hawking described it as the most spectacular failure of any physical theory in history. Can a new theory of information rescue cosmologists?

One of the biggest puzzles in science is the cosmological constant paradox. This arises when physicists attempt to calculate the energy density of the universe from first principles. Using quantum mechanics, the number they come up with is 10^94 g/cm^3.

And yet the observed energy density, calculated from the density of mass in the cosmos and the way the universe is expanding, is about 10^-27 g/cm^3. In other words, our best theory of the universe misses the mark by 120 orders of magnitude.

That’s left cosmologists somewhat red-faced. Indeed, Stephen Hawking has famously described this as the most spectacular failure of any physical theory in history. This huge discrepancy is all the more puzzling because quantum mechanics makes such accurate predictions in other circumstances. Just why it goes so badly wrong here is unknown.

Today, Chris Fields, an independent researcher formerly with New Mexico State University in Las Cruces, puts forward a simple explanation. His idea is that the discrepancy arises because large objects, such as planets and stars, behave classically rather than demonstrating quantum properties. And he’s provided some simple calculations to make his case.

One of the key properties of quantum objects is that they can exist in a superposition of states until they are observed. When that happens, these many possibilities “collapse” and become one specific outcome, a process known as quantum decoherence.

For example, a photon can be in a superposition of states that allow it to be in several places at the same time. However, as soon as the photon is observed the superposition decoheres and the photon appears in one place.

This process of decoherence must apply to everything that has a specific position, says Fields. Even to large objects such as stars, whose position is known with respect to the cosmic microwave background, the echo of the big bang which fills the universe.

In fact, Fields argues that it is the interaction between the cosmic microwave background and all large objects in the universe that causes them to decohere giving them specific positions which astronomers observe.

But there is an important consequence from having a specific position — there must be some information associated with this location in 3D space. If a location is unknown, then the amount of information must be small. But if it is known with precision, the information content is much higher.

And given that there are some 10^25 stars in the universe, that’s a lot of information. Fields calculates that encoding the location of each star to within 10 cubic kilometres requires some 10^93 bits.

That immediately leads to an entirely new way of determining the energy density of the cosmos. Back in the 1960s, the physicist Rolf Landauer suggested that every bit of information had an energy associated with it, an idea that has gained considerable traction since then.

So Fields uses Landauer’s principle to calculate the energy associated with the locations of all the stars in the universe. This turns out to be about 10^-30 g/cm^3, very similar to the observed energy density of the universe.
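To see roughly where a number of that order can come from, here is a back-of-the-envelope sketch in Python. It applies Landauer’s bound (an energy of kT ln 2 per bit) to the 10^93 bits quoted above and spreads the equivalent mass over the observable universe; using the cosmic microwave background temperature as T and a radius of about 4.4 x 10^28 cm are my assumptions for illustration, not details taken from Fields’ paper:

import math

k_B = 1.381e-23       # Boltzmann constant, J/K
c = 2.998e8           # speed of light, m/s
T_cmb = 2.725         # CMB temperature, K (assumed to be the relevant temperature)
bits = 1e93           # number of bits quoted above

energy = bits * k_B * T_cmb * math.log(2)   # Landauer bound, joules
mass_g = energy / c**2 * 1e3                # equivalent mass via E = mc^2, grams

radius_cm = 4.4e28                          # rough radius of the observable universe (assumed)
volume_cm3 = 4 / 3 * math.pi * radius_cm**3
print(mass_g / volume_cm3)                  # ~1e-30 g/cm^3, the order quoted above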

But here’s the thing. That calculation requires the position of each star to be encoded only to within 10 km^3. Fields also asks how much information is required to encode the position of stars to the much higher resolution associated with the Planck length. “Encoding 10^25 stellar positions at [the Planck length] would incur a free-energy cost ∼ 10^117 larger than that found here,” he says.

That difference is remarkably similar to the 120 orders of magnitude discrepancy between the observed energy density and that calculated using quantum mechanics. Indeed, Fields says that the discrepancy arises because the positions of the stars can be accounted for using quantum mechanics. “It seems reasonable to suggest that the discrepancy between these numbers may be due to the assumption that encoding classical information at [the Planck scale] can be considered physically meaningful.”

That’s a fascinating result that raises important questions about the nature of reality. First, there is the hint in Fields’ ideas that information provides the ghostly bedrock on which the laws of physics are based. That’s an idea that has gained traction among other physicists too.

Then there is the role of energy. One important question is where this energy might have come from in the first place. The process of decoherence seems to create it from nothing.

Cosmologists generally overlook violations of the principle of conservation of energy. After all, the big bang itself is the biggest offender. So don’t expect much hand wringing over this. But Fields’ approach also implies that a purely quantum universe would have an energy density of zero, since nothing would have localised position. That’s bizarre.

Beyond this is the even deeper question of how the universe came to be classical at all, given that cosmologists would have us believe that the big bang was a quantum process. Fields suggests that it is the interaction between the cosmic microwave background and the rest of the universe that causes the quantum nature of the universe to decohere and become classical.

Perhaps. What is all too clear is that there are fundamental and fascinating problems in cosmology — and the role that information plays in reality.

Ref: arxiv.org/abs/1502.03424 : Is Dark Energy An Artifact Of Decoherence?

No Big Bang? Quantum equation predicts universe has no beginning (Phys.org)

Feb 09, 2015 by Lisa Zyga


This is an artist’s concept of the metric expansion of space, where space (including hypothetical non-observable portions of the universe) is represented at each time by the circular sections. Note on the left the dramatic expansion (not to scale) occurring in the inflationary epoch, and at the center the expansion acceleration. The scheme is decorated with WMAP images on the left and with the representation of stars at the appropriate level of development. Credit: NASA


(Phys.org) —The universe may have existed forever, according to a new model that applies quantum correction terms to complement Einstein’s theory of general relativity. The model may also account for dark matter and dark energy, resolving multiple problems at once.

The widely accepted age of the universe, as estimated by general relativity, is 13.8 billion years. In the beginning, everything in existence is thought to have occupied a single infinitely dense point, or singularity. Only after this point began to expand in a “Big Bang” did the universe officially begin.

Although the Big Bang singularity arises directly and unavoidably from the mathematics of general relativity, some scientists see it as problematic because the math can explain only what happened immediately after—not at or before—the singularity.

“The Big Bang singularity is the most serious problem of general relativity because the laws of physics appear to break down there,” Ahmed Farag Ali at Benha University and the Zewail City of Science and Technology, both in Egypt, told Phys.org.

Ali and coauthor Saurya Das at the University of Lethbridge in Alberta, Canada, have shown in a paper published in Physics Letters B that the Big Bang singularity can be resolved by their new model, in which the universe has no beginning and no end.

Old ideas revisited

The physicists emphasize that their quantum correction terms are not applied ad hoc in an attempt to specifically eliminate the Big Bang singularity. Their work is based on ideas by the theoretical physicist David Bohm, who is also known for his contributions to the philosophy of physics. Starting in the 1950s, Bohm explored replacing classical geodesics (the shortest path between two points on a curved surface) with quantum trajectories.

In their paper, Ali and Das applied these Bohmian trajectories to an equation developed in the 1950s by physicist Amal Kumar Raychaudhuri at Presidency University in Kolkata, India. Raychaudhuri was also Das’s teacher when Das was an undergraduate student at that institution in the 1990s.

Using the quantum-corrected Raychaudhuri equation, Ali and Das derived quantum-corrected Friedmann equations, which describe the expansion and evolution of the universe (including the Big Bang) within the context of general relativity. Although it is not a true theory of quantum gravity, the model does contain elements from both quantum theory and general relativity. Ali and Das also expect their results to hold even if and when a full theory of quantum gravity is formulated.

No singularities nor dark stuff

In addition to not predicting a Big Bang singularity, the new model does not predict a “big crunch” singularity, either. In general relativity, one possible fate of the universe is that it starts to shrink until it collapses in on itself in a big crunch and becomes an infinitely dense point once again.

Ali and Das explain in their paper that their model avoids singularities because of a key difference between classical geodesics and Bohmian trajectories. Classical geodesics eventually cross each other, and the points at which they converge are singularities. In contrast, Bohmian trajectories never cross each other, so singularities do not appear in the equations.

In cosmological terms, the scientists explain that the quantum corrections can be thought of as a cosmological constant term (without the need for dark energy) and a radiation term. These terms keep the universe at a finite size, and therefore give it an infinite age. The terms also make predictions that agree closely with current observations of the cosmological constant and density of the universe.

New gravity particle

In physical terms, the model describes the universe as being filled with a quantum fluid. The scientists propose that this fluid might be composed of gravitons—hypothetical massless particles that mediate the force of gravity. If they exist, gravitons are thought to play a key role in a theory of quantum gravity.

In a related paper, Das and another collaborator, Rajat Bhaduri of McMaster University, Canada, have lent further credence to this model. They show that gravitons can form a Bose-Einstein condensate (named after Einstein and another Indian physicist, Satyendranath Bose) at temperatures that were present in the universe at all epochs.

Motivated by the model’s potential to resolve the Big Bang singularity and account for dark matter and dark energy, the physicists plan to analyze their model more rigorously in the future. Their future work includes redoing their study while taking into account small inhomogeneous and anisotropic perturbations, but they do not expect small perturbations to significantly affect the results.

“It is satisfying to note that such straightforward corrections can potentially resolve so many issues at once,” Das said.

More information: Ahmed Farag Ali and Saurya Das. “Cosmology from quantum potential.” Physics Letters B. Volume 741, 4 February 2015, Pages 276–279. DOI: 10.1016/j.physletb.2014.12.057. Also at: arXiv:1404.3093[gr-qc].

Saurya Das and Rajat K. Bhaduri, “Dark matter and dark energy from Bose-Einstein condensate”, preprint: arXiv:1411.0753[gr-qc].

Chemists Confirm the Existence of New Type of Bond (Scientific American)

A “vibrational” chemical bond predicted in the 1980s is demonstrated experimentally

Jan 20, 2015 By Amy Nordrum



Chemistry has many laws, one of which is that the rate of a reaction speeds up as temperature rises. So, in 1989, when chemists experimenting at a nuclear accelerator in Vancouver observed that a reaction between bromine and muonium—a hydrogen isotope—slowed down when they increased the temperature, they were flummoxed.

Donald Fleming, a University of British Columbia chemist involved with the experiment, thought that perhaps as bromine and muonium co-mingled, they formed an intermediate structure held together by a “vibrational” bond—a bond that other chemists had posed as a theoretical possibility earlier that decade. In this scenario, the lightweight muonium atom would move rapidly between two heavy bromine atoms, “like a Ping Pong ball bouncing between two bowling balls,” Fleming says. The oscillating atom would briefly hold the two bromine atoms together and reduce the overall energy, and therefore speed, of the reaction. (With a Fleming working on a bond, you could say the atomic interaction is shaken, not stirred.)

At the time of the experiment, the necessary equipment was not available to examine the milliseconds-long reaction closely enough to determine whether such vibrational bonding existed. Over the past 25 years, however, chemists’ ability to track subtle changes in energy levels within reactions has greatly improved, so Fleming and his colleagues ran their reaction again three years ago in the nuclear accelerator at Rutherford Appleton Laboratory in England. Based on calculations from both experiments and the work of collaborating theoretical chemists at Free University of Berlin and Saitama University in Japan, they concluded that muonium and bromine were indeed forming a new type of temporary bond. Its vibrational nature lowered the total energy of the intermediate bromine-muonium structure—thereby explaining why the reaction slowed even though the temperature was rising.

The team reported its results last December in Angewandte Chemie International Edition, a publication of the German Chemical Society. The work confirms that vibrational bonds—fleeting though they may be—should be added to the list of known chemical bonds. And although the bromine-muonium reaction was an “ideal” system to verify vibrational bonding, Fleming predicts the phenomenon also occurs in other reactions between heavy and light atoms.

This article was originally published with the title “New Vibrations.”

Quantum computers could revolutionise information theory (Fapesp)

January 30, 2015

By Diego Freire

Agência FAPESP – The prospect of quantum computers, with processing power far beyond that of today’s machines, has been driving improvements in one of the most versatile areas of science, with applications across the most diverse fields of knowledge: information theory. To discuss this and other prospects, the Institute of Mathematics, Statistics and Scientific Computing (Imecc) of the University of Campinas (Unicamp) held the SPCoding School from January 19 to 30.

The event took place under FAPESP’s São Paulo School of Advanced Science (ESPCA) programme, which funds short courses on advanced topics in science and technology in the State of São Paulo.

The basis of the information processed by the computers in wide use today is the bit, the smallest unit of data that can be stored or transmitted. Quantum computers, by contrast, work with qubits, which obey the rules of quantum mechanics, the branch of physics that deals with scales at or below that of the atom. Because of this, such machines can carry out a far greater number of calculations simultaneously.
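As a minimal illustration of the difference (a sketch only, not tied to any particular quantum-computing platform), a qubit can be written as a pair of complex amplitudes whose squared magnitudes give the probabilities of reading 0 or 1:

import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

qubit = (ket0 + ket1) / np.sqrt(2)   # an equal superposition of 0 and 1
print(np.abs(qubit) ** 2)            # measurement probabilities: [0.5 0.5]

# n qubits require 2**n amplitudes to describe in general; ten qubits already
# need 1024 complex numbers, which is one loose sense in which quantum hardware
# tracks exponentially more simultaneous possibilities than n classical bits.
n = 10
register = np.zeros(2**n, dtype=complex)
register[0] = 1.0
print(register.size)                 # 1024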

“This quantum understanding of information brings a whole new level of complexity to its encoding. But at the same time that complex analyses, which would take decades, centuries or even thousands of years on ordinary computers, could be carried out in minutes by quantum computers, the technology would also threaten the secrecy of any information that has not been properly protected against this kind of novelty,” Sueli Irene Rodrigues Costa, a professor at IMECC, told Agência FAPESP.

The biggest threat quantum computers pose to current cryptography lies in their capacity to break the codes used to protect important information, such as credit-card data. To avoid this kind of risk, cryptographic systems must also be developed with security in mind, taking the power of quantum computing into account.

“Information and coding theory need to stay one step ahead of the commercial use of quantum computing,” said Rodrigues Costa, who coordinates the Thematic Project “Information security and reliability: theory and practice”, supported by FAPESP.

“This is post-quantum cryptography. As was already shown at the end of the 1990s, today’s cryptographic procedures will not survive quantum computers, because they are not that secure. And this urgency to develop solutions ready for the power of quantum computing is also pushing information theory to advance in ever more directions,” she said.

Some of these solutions were discussed during the SPCoding School programme, many of them aimed at more efficient systems for classical computing, such as the use of error-correcting codes and of lattices for cryptography. For Rodrigues Costa, the advance of information theory in step with the development of quantum computing will bring revolutions to several fields of knowledge.

“Just as information theory has multiple applications today, quantum coding would also raise several areas of science to new levels by making possible even more precise computational simulations of the physical world, handling an exponentially larger number of variables than classical computers,” said Rodrigues Costa.

Information theory concerns the quantification of information and spans fields such as mathematics, electrical engineering and computer science. Its pioneer was the American Claude Shannon (1916-2001), who was the first to treat communication as a mathematical problem.

Revolutions under way

While it prepares for quantum computers, information theory is already driving major changes in how information is encoded and transmitted. Amin Shokrollahi, of the École Polytechnique Fédérale de Lausanne in Switzerland, presented at the SPCoding School new coding techniques for tackling problems such as noise in the information and high energy consumption in data processing, including in chip-to-chip communication inside devices.

Shokrollahi is known in the field for having invented the Raptor codes and co-invented the Tornado codes, which are used in mobile data-transmission standards, with implementations in wireless systems, satellites and IPTV, the method of delivering television signals using the Internet Protocol (IP).

“The growth in the volume of digital data and the need for ever faster communication increase both the susceptibility to various kinds of noise and the energy consumption. New solutions have to be found for this scenario,” he said.

Shokrollahi also presented innovations developed at the Swiss company Kandou Bus, where he is director of research. “We use special algorithms to encode the signals, which are all transferred simultaneously until a decoder recovers the original signals. All of this is done while preventing neighbouring wires from interfering with one another, which produces a significantly lower noise level. The systems also reduce chip size, increase transmission speed and cut energy consumption,” he explained.

According to Rodrigues Costa, similar solutions are being developed for many technologies in wide use across society.

“Mobile phones, for instance, have gained enormously in processing power and versatility, but one of the most frequent complaints among users is that the battery does not last. One strategy is to find more efficient ways of encoding in order to save energy,” she said.

Biological applications

It is not only problems of a technological nature that can be addressed or solved by means of information theory. Vinay Vaishampayan, a professor at the City University of New York in the United States, chaired the SPCoding School panel “Information Theory, Coding Theory and the Real World”, which covered many applications of codes in society – among them, biological ones.

“There is not just one information theory, and its approaches, from the computational to the probabilistic, can be applied to practically every area of knowledge. In the panel we discussed the many research possibilities open to anyone interested in studying these interfaces between codes and the real world,” he told Agência FAPESP.

Vaishampayan singled out biology as an area of great potential in this regard. “Neuroscience raises important questions that can be answered with the help of information theory. We still do not know in depth how neurons communicate with one another or how the brain works as a whole, and neural networks are a very rich field of study from the mathematical point of view as well, as is molecular biology,” he said.

That is because, according to Max Costa, a professor at Unicamp’s School of Electrical and Computer Engineering and one of the speakers, living beings are also made of information.

“We are encoded in the DNA of our cells. Uncovering the secret of that code, the mechanism behind the mappings that are made and recorded in that context, is a problem of enormous interest for a deeper understanding of the process of life,” he said.

For Marcelo Firer, a professor at Imecc and coordinator of the SPCoding School, the event opened up new research possibilities for students and researchers from many fields.

“Participants shared opportunities for engagement around these and many other applications of information and coding theory. The offerings ranged from introductory courses, aimed at students with a solid mathematical background but not necessarily familiar with coding, to more advanced courses, as well as lectures and discussion panels,” said Firer, a member of the coordination team for FAPESP’s Computer Science and Engineering area.

Around 120 students from 70 universities and 25 countries took part in the event. The foreign speakers included researchers from the California Institute of Technology (Caltech), the University of Maryland and Princeton University, in the United States; the Chinese University of Hong Kong, in China; Nanyang Technological University, in Singapore; Technische Universiteit Eindhoven, in the Netherlands; the University of Porto, in Portugal; and Tel Aviv University, in Israel.

More information at www.ime.unicamp.br/spcodingschool.

The Question That Could Unite Quantum Theory With General Relativity: Is Spacetime Countable? (The Physics Arxiv Blog)

Current thinking about quantum gravity assumes that spacetime exists in countable lumps, like grains of sand. That can’t be right, can it?

The Physics arXiv Blog

One of the big problems with quantum gravity is that it generates infinities that have no physical meaning. These come about because quantum mechanics implies that accurate measurements of the universe on the tiniest scales require high energies. But when the scale becomes very small, the energy density associated with a measurement is so great that it should lead to the formation of a black hole, which would paradoxically ruin the measurement that created it.

These kinds of infinities are something of an annoyance. Their paradoxical nature makes them hard to deal with mathematically and difficult to reconcile with our knowledge of the universe, which as far as we can tell, avoids this kind of paradoxical behaviour.

So physicists have invented a way to deal with infinities called renormalisation. In essence, theorists assume that space-time is not infinitely divisible. Instead, there is a minimum scale beyond which nothing can be smaller, the so-called Planck scale. This limit ensures that energy densities never become high enough to create black holes.

This is also equivalent to saying that space-time is discrete, or as a mathematician might put it, countable. In other words, it is possible to allocate a number to each discrete volume of space-time making it countable, like grains of sand on a beach or atoms in the universe. That means space-time is entirely unlike uncountable things, such as straight lines, which are infinitely divisible, or the degrees of freedom in the fields that constitute the basic building blocks of physics, which have been mathematically proven to be uncountable.

This discreteness is certainly useful but it also raises an important question: is it right? Can the universe really be fundamentally discrete, like a computer model? Today, Sean Gryb from Radboud University in the Netherlands argues that an alternative approach is emerging in the form of a new formulation of gravity called shape dynamics. This new approach implies that spacetime is smooth and uncountable, an idea that could have far-reaching consequences for the way we understand the universe.

At the heart of this new theory is the concept of scale invariance. This is the idea that an object or law has the same properties regardless of the scale at which it is viewed.

The current laws of physics generally do not have this property. Quantum mechanics, for example, operates only at the smallest scale, while gravity operates at the largest. So it is easy to see why scale invariance is a property that theorists drool over — a scale invariant description of the universe must encompass both quantum theory and gravity.

Shape dynamics does just this, says Gryb. It does this by ignoring many ordinary features of physical objects, such as their position within the universe. Instead, it focuses on objects’ relationships to each other, such as the angles between them and the shape that this makes (hence the term shape dynamics).

This approach immediately leads to a scale invariant picture of reality. Angles are scale invariant because they are the same regardless of the scale at which they are viewed. So the new thinking is to describe the universe as a series of instantaneous snapshots of the relationships between objects.

The result is a scale invariance that is purely spatial. But this, of course, is very different to the more significant notion of spacetime scale invariance.

So a key part of Gryb’s work is in using the mathematical ideas of symmetry to show that spatial scale invariance can be transformed into spacetime scale invariance.

Specifically, Gryb shows exactly how this works in a closed, expanding universe in which the laws of physics are the same for all inertial observers and for whom the speed of light is finite and constant.

If those last two conditions sound familiar, it’s because they are the postulates Einstein used to derive special relativity. And Gryb’s formulation is equivalent to this. “Observers in Einstein’s special theory of relativity can be reinterpreted as observers in a scale-invariant space,” he says.

That raises some interesting possibilities for a broader theory of the universe, just as special relativity led to a broader theory of gravity in the form of general relativity.

Gryb describes how it is possible to create models of curved space-time by gluing together local patches of flat space-times. “Could it be possible to do something similar in Shape Dynamics; i.e., glue together local patches of conformally flat spaces that could then be related to General Relativity?” he asks.

Nobody has yet succeeded in doing this for a model that includes the three dimensions of space and one of time, but these are early days for shape dynamics, and Gryb and others are working on the problem.

He is clearly excited by the future possibilities, saying that it suggests a new way to think about quantum gravity in scale invariant terms. “This would provide a new mechanism for being able to deal with the uncountably infinite number of degrees of freedom in the gravitational field without introducing discreteness at the Planck scale,” he says.

That’s an exciting new approach. And it is one expounded by a fresh new voice who is able to explain his ideas in a highly readable fashion to a broad audience. There is no way of knowing how this line of thinking will evolve but we’ll look forward to more instalments from Gryb.

Ref: arxiv.org/abs/1501.02671 : Is Spacetime Countable?

The Paradoxes That Threaten To Tear Modern Cosmology Apart (The Physics Arxiv Blog)

Some simple observations about the universe seem to contradict basic physics. Solving these paradoxes could change the way we think about the cosmos

The Physics arXiv Blog on Jan 20

Revolutions in science often come from the study of seemingly unresolvable paradoxes. An intense focus on these paradoxes, and their eventual resolution, is a process that has led to many important breakthroughs.

So an interesting exercise is to list the paradoxes associated with current ideas in science. It’s just possible that these paradoxes will lead to the next generation of ideas about the universe.

Today, Yurij Baryshev at St Petersburg State University in Russia does just this with modern cosmology. The result is a list of paradoxes associated with well-established ideas and observations about the structure and origin of the universe.

Perhaps the most dramatic, and potentially most important, of these paradoxes comes from the idea that the universe is expanding, one of the great successes of modern cosmology. It is based on a number of different observations.

The first is that other galaxies are all moving away from us. The evidence for this is that light from these galaxies is red-shifted. And the greater the distance, the bigger this red-shift.

Astrophysicists interpret this as evidence that more distant galaxies are travelling away from us more quickly. Indeed, the most recent evidence is that the expansion is accelerating.

What’s curious about this expansion is that space, and the vacuum associated with it, must somehow be created in this process. And yet how this can occur is not at all clear. “The creation of space is a new cosmological phenomenon, which has not been tested yet in physical laboratory,” says Baryshev.

What’s more, there is an energy associated with any given volume of the universe. If that volume increases, the inescapable conclusion is that this energy must increase as well. And yet physicists generally think that energy creation is forbidden.

Baryshev quotes the British cosmologist, Ted Harrison, on this topic: “The conclusion, whether we like it or not, is obvious: energy in the universe is not conserved,” says Harrison.

This is a problem that cosmologists are well aware of. And yet ask them about it and they shuffle their feet and stare at the ground. Clearly, any theorist who can solve this paradox will have a bright future in cosmology.

The nature of the energy associated with the vacuum is another puzzle. This is variously called the zero point energy or the energy of the Planck vacuum and quantum physicists have spent some time attempting to calculate it.

These calculations suggest that the energy density of the vacuum is huge, of the order of 10^94 g/cm^3. This energy, being equivalent to mass, ought to have a gravitational effect on the universe.

Cosmologists have looked for this gravitational effect and calculated its value from their observations (they call it the cosmological constant). These calculations suggest that the energy density of the vacuum is about 10^-29 g/cm^3.

Those numbers are difficult to reconcile. Indeed, they differ by 120 orders of magnitude. How and why this discrepancy arises is not known and is the cause of much bemused embarrassment among cosmologists.

Then there is the cosmological red-shift itself, which is another mystery. Physicists often talk about the red-shift as a kind of Doppler effect, like the change in frequency of a police siren as it passes by.

The Doppler effect arises from the relative movement of different objects. But the cosmological red-shift is different because galaxies are stationary in space. Instead, it is space itself that cosmologists think is expanding.

The mathematics that describes these effects is correspondingly different as well, not least because any relative velocity must always be less than the speed of light in conventional physics. And yet the velocity of expanding space can take any value.

Interestingly, the nature of the cosmological red-shift leads to the possibility of observational tests in the next few years. One interesting idea is that the red-shifts of distant objects must increase as they get further away. For a distant quasar, this change may be as much as one centimetre per second per year, something that may be observable with the next generation of extremely large telescopes.

One final paradox is also worth mentioning. This comes from one of the fundamental assumptions behind Einstein’s theory of general relativity—that if you look at the universe on a large enough scale, it must be the same in all directions.

It seems clear that this assumption of homogeneity does not hold on the local scale. Our galaxy is part of a cluster known as the Local Group which is itself part of a bigger supercluster.

This suggests a kind of fractal structure to the universe. In other words, the universe is made up of clusters regardless of the scale at which you look at it.

The problem with this is that it contradicts one of the basic ideas of modern cosmology—the Hubble law. This is the observation that the cosmological red-shift of an object is linearly proportional to its distance from Earth.

It is so profoundly embedded in modern cosmology that most currently accepted theories of universal expansion depend on its linear nature. That’s all okay if the universe is homogeneous (and therefore linear) on the largest scales.

But the evidence is paradoxical. Astrophysicists have measured the linear nature of the Hubble law at distances of a few hundred megaparsecs. And yet the clusters visible on those scales indicate that the universe is not homogeneous there.

And so the argument that the Hubble law’s linearity is a result of the homogeneity of the universe (or vice versa) does not stand up to scrutiny. Once again this is an embarrassing failure for modern cosmology.

It is sometimes tempting to think that astrophysicists have cosmology more or less sewn up, that the Big Bang model, and all that it implies, accounts for everything we see in the cosmos.

Not even close. Cosmologists may have successfully papered over the cracks in their theories in a way that keeps scientists happy for the time being. This sense of success is surely an illusion.

And that is how it should be. If scientists really think they are coming close to a final and complete description of reality, then a simple list of paradoxes can do a remarkable job of putting feet firmly back on the ground.

Ref: arxiv.org/abs/1501.01919 : Paradoxes Of Cosmological Physics In The Beginning Of The 21-St Century

How Mathematicians Used A Pump-Action Shotgun to Estimate Pi (The Physics arXiv Blog)

The Physics arXiv Blog

If you’ve ever wondered how to estimate pi using a Mossberg 500 pump-action shotgun, a sheet of aluminium foil and some clever mathematics, look no further

Imagine the following scenario. The end of civilisation has occurred, zombies have taken over the Earth and all access to modern technology has ended. The few survivors suddenly need to know the value of π and, being a mathematician, they turn to you. What do you do?

If ever you find yourself in this situation, you’ll be glad of the work of Vincent Dumoulin and Félix Thouin at the Université de Montréal in Canada. These guys have worked out how to calculate an approximate value of π using the distribution of pellets from a Mossberg 500 pump-action shotgun, which they assume would be widely available in the event of a zombie apocalypse.

The principle is straightforward. Imagine a square with sides of length 1 and which contains an arc drawn between two opposite corners to form a quarter circle. The area of the square is 1 while the area of the quarter circle is π/4.

Next, sprinkle sand or rice over the square so that it is covered with a random distribution of grains. Then count the number of grains inside the quarter circle and the total number that cover the entire square.

The ratio of these two numbers is an estimate of the ratio between the area of the quarter circle and the square, in other words π/4.

So multiplying this ratio by 4 gives you π, or at least an estimate of it. And that’s it.
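As a quick illustration of the idea (of the sand-grain version, not of the shotgun experiment itself), here is a minimal Monte Carlo estimate of π in Python using pseudo-random “grains”:

import random

def estimate_pi(n_grains=30_000, seed=0):
    # Scatter points uniformly over the unit square and count those falling
    # inside the quarter circle of radius 1; the ratio estimates pi/4.
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_grains):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4 * inside / n_grains

print(estimate_pi())   # typically within a few tenths of a per cent of 3.14159...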

This technique is known as a Monte Carlo approximation (after the casino where the uncle of the physicist who developed it used to gamble). And it is hugely useful in all kinds of simulations.

Of course, the accuracy of the technique depends on the distribution of the grains on the square. If they are truly random, then a mere 30,000 grains can give you an estimate of π which is within 0.07 per cent of the actual value.

Dumoulin and Thouin’s idea is to use the distribution of shotgun pellets rather than sand or rice (which would presumably be in short supply in the post-apocalyptic world). So these guys set up an experiment consisting of a 28-inch barrel Mossberg 500 pump-action shotgun aimed at a sheet of aluminium foil some 20 metres away.

They loaded the gun with cartridges composed of 3 dram equivalent of powder and 32 grams of #8 lead pellets. When fired from the gun, these pellets have an average muzzle velocity of around 366 metres per second.

Dumoulin and Thouin then fired 200 shots at the aluminium foil, peppering it with 30,857 holes. Finally, they used the position of these holes in the same way as the grains of sand or rice in the earlier example, to calculate the value of π.

They immediately have a problem, however. The distribution of pellets is influenced by all kinds of factors, such as the height of the gun, the distance to the target, wind direction and so on. So this distribution is not random.

To get around this, they are able to fall back on a technique known as importance sampling. This is a trick that allows mathematicians to estimate the properties of one type of distribution while using samples generated by a different distribution.

Of their 30,000 pellet holes, they chose 10,000 at random to perform this estimation trick. They then used the remaining 20,000 pellet holes to get an estimate of π, safe in the knowledge that importance sampling allows the calculation to proceed as if the distribution of pellets had been random.
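The following toy sketch shows the importance-sampling step in Python. It stands in a known Gaussian “spread” for the real pellet pattern and skips the stage where the authors estimate that spread from 10,000 of the holes, so the model and the numbers are illustrative assumptions rather than details from the paper:

import math
import random

rng = random.Random(0)
sigma = 0.25   # assumed spread of the 'gun' around the centre of the unit square

def q(x, y):
    # Unnormalised density of the biased 'pellet' distribution
    return math.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / (2 * sigma ** 2))

# Draw 'pellet holes' from the biased distribution, rejecting shots off the target.
holes = []
while len(holes) < 30_000:
    x, y = rng.gauss(0.5, sigma), rng.gauss(0.5, sigma)
    if 0.0 <= x <= 1.0 and 0.0 <= y <= 1.0:
        holes.append((x, y))

# Importance sampling: weighting each hole by 1/q makes the biased sample behave,
# on average, like a uniform one, so the weighted fraction inside the quarter
# circle again estimates pi/4.
weights = [1.0 / q(x, y) for x, y in holes]
inside = sum(w for (x, y), w in zip(holes, weights) if x * x + y * y <= 1.0)
print(4 * inside / sum(weights))   # typically within about one per cent of pi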

The result? Their value of π is 3.131, which is just 0.33 per cent off the true value. “We feel confident that ballistic Monte Carlo methods constitute reliable ways of computing mathematical constants should a tremendous civilization collapse occur,” they conclude.

Quite! Other methods are also available.

Ref: arxiv.org/abs/1404.1499 : A Ballistic Monte Carlo Approximation of π

Quantum Experiment Shows How Time ‘Emerges’ from Entanglement (The Physics arXiv Blog)

Time is an emergent phenomenon that is a side effect of quantum entanglement, say physicists. And they have the first experimental results to prove it

The Physics arXiv Blog

When the new ideas of quantum mechanics spread through science like wildfire in the first half of the 20th century, one of the first things physicists did was to apply them to gravity and general relativity. The results were not pretty.

It immediately became clear that these two foundations of modern physics were entirely incompatible. When physicists attempted to meld the approaches, the resulting equations were bedeviled with infinities making it impossible to make sense of the results.

Then in the mid-1960s, there was a breakthrough. The physicists John Wheeler and Bryce DeWitt successfully combined the previously incompatible ideas in a key result that has since become known as the Wheeler-DeWitt equation. This is important because it avoids the troublesome infinites—a huge advance.

But it didn’t take physicists long to realise that while the Wheeler-DeWitt equation solved one significant problem, it introduced another. The new problem was that time played no role in this equation. In effect, it says that nothing ever happens in the universe, a prediction that is clearly at odds with the observational evidence.

This conundrum, which physicists call ‘the problem of time’, has proved to be a thorn in the flesh of modern physicists, who have tried to ignore it but with little success.

Then in 1983, the theorists Don Page and William Wootters came up with a novel solution based on the quantum phenomenon of entanglement. This is the exotic property in which two quantum particles share the same existence, even though they are physically separated.

Entanglement is a deep and powerful link and Page and Wootters showed how it can be used to measure time. Their idea was that the way a pair of entangled particles evolve is a kind of clock that can be used to measure change.

But the results depend on how the observation is made. One way to do this is to compare the change in the entangled particles with an external clock that is entirely independent of the universe. This is equivalent to a god-like observer outside the universe measuring the evolution of the particles using an external clock.

In this case, Page and Wootters showed that the particles would appear entirely unchanging—that time would not exist in this scenario.

But there is another way to do it that gives a different result. This is for an observer inside the universe to compare the evolution of the particles with the rest of the universe. In this case, the internal observer would see a change, and this difference between the evolution of the entangled particles and that of everything else is an important measure of time.

This is an elegant and powerful idea. It suggests that time is an emergent phenomenon that comes about because of the nature of entanglement. And it exists only for observers inside the universe. Any god-like observer outside sees a static, unchanging universe, just as the Wheeler-DeWitt equations predict.

Of course, without experimental verification, Page and Wootters’ ideas are little more than a philosophical curiosity. And since it is never possible to have an observer outside the universe, there seemed little chance of ever testing the idea.

Until now. Today, Ekaterina Moreva at the Istituto Nazionale di Ricerca Metrologica (INRIM) in Turin, Italy, and a few pals have performed the first experimental test of Page and Wootters’ ideas. And they confirm that time is indeed an emergent phenomenon for ‘internal’ observers but absent for external ones.

The experiment involves the creation of a toy universe consisting of a pair of entangled photons and an observer that can measure their state in one of two ways. In the first, the observer measures the evolution of the system by becoming entangled with it. In the second, a god-like observer measures the evolution against an external clock which is entirely independent of the toy universe.

The experimental details are straightforward. The entangled photons each have a polarisation which can be changed by passing it through a birefringent plate. In the first set up, the observer measures the polarisation of one photon, thereby becoming entangled with it. He or she then compares this with the polarisation of the second photon. The difference is a measure of time.

In the second set up, the photons again both pass through the birefringent plates which change their polarisations. However, in this case, the observer only measures the global properties of both photons by comparing them against an independent clock.

In this case, the observer cannot detect any difference between the photons without becoming entangled with one or the other. And if there is no difference, the system appears static. In other words, time does not emerge.
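
The logic of the two viewpoints can be captured in a few lines of linear algebra. What follows is a deliberately simplified, hypothetical toy model in the spirit of the Page-Wootters mechanism, with one "clock" qubit and one "system" qubit; it is not a simulation of Moreva's actual optical setup. The observer who conditions on the clock sees two different system states, i.e. evolution, while tracing the clock out leaves a single static mixture.

```python
import numpy as np

def rotation(theta):
    # Stand-in for the polarisation rotation produced by a birefringent plate.
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi0 = ket0                     # system's initial polarisation state
U = rotation(np.pi / 3)         # "one tick" of evolution

# History state: |Phi> = (|0>_clock |psi0> + |1>_clock U|psi0>) / sqrt(2)
phi = (np.kron(ket0, psi0) + np.kron(ket1, U @ psi0)) / np.sqrt(2)
rho = np.outer(phi, phi.conj())

# External view: ignore (trace out) the clock. The system is one fixed mixture; nothing "ticks".
rho_sys = rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)
print("Clock-averaged system state:\n", rho_sys.round(3))

# Internal view: condition on the clock reading 0 or 1. The conditional states differ,
# and that difference is what the internal observer calls the passage of time.
for t, clock in enumerate([ket0, ket1]):
    proj = np.kron(np.outer(clock, clock), np.eye(2))
    cond = proj @ rho @ proj
    cond_sys = cond.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)
    print(f"System state given clock = {t}:\n", (cond_sys / cond_sys.trace()).round(3))
```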

“Although extremely simple, our model captures the two, seemingly contradictory, properties of the Page-Wootters mechanism,” say Moreva and co.

That’s an impressive experiment. Emergence is a popular idea in science. In particular, physicists have recently become excited about the idea that gravity is an emergent phenomenon. So it’s a relatively small step to think that time may emerge in a similar way.

What emergent gravity has lacked, of course, is an experimental demonstration that shows how it works in practice. That’s why Moreva and co’s work is significant. It places an abstract and exotic idea on firm experimental footing for the first time.

Perhaps most significant of all is the implication that quantum mechanics and general relativity are not so incompatible after all. When viewed through the lens of entanglement, the famous ‘problem of time’ just melts away.

The next step will be to extend the idea further, particularly to the macroscopic scale. It’s one thing to show how time emerges for photons, it’s quite another to show how it emerges for larger things such as humans and train timetables.

And therein lies another challenge.

Ref: arxiv.org/abs/1310.4691 : Time From Quantum Entanglement: An Experimental Illustration

Why Physicists Are Saying Consciousness Is A State Of Matter, Like a Solid, A Liquid Or A Gas (The Physics arXiv Blog)


A new way of thinking about consciousness is sweeping through science like wildfire. Now physicists are using it to formulate the problem of consciousness in concrete mathematical terms for the first time

The Physics arXiv Blog

There’s a quiet revolution underway in theoretical physics. For as long as the discipline has existed, physicists have been reluctant to discuss consciousness, considering it a topic for quacks and charlatans. Indeed, the mere mention of the ‘c’ word could ruin careers.

That’s finally beginning to change thanks to a fundamentally new way of thinking about consciousness that is spreading like wildfire through the theoretical physics community. And while the problem of consciousness is far from being solved, it is finally being formulated mathematically as a set of problems that researchers can understand, explore and discuss.

Today, Max Tegmark, a theoretical physicist at the Massachusetts Institute of Technology in Cambridge, sets out the fundamental problems that this new way of thinking raises. He shows how these problems can be formulated in terms of quantum mechanics and information theory. And he explains how thinking about consciousness in this way leads to precise questions about the nature of reality that the scientific process of experiment might help to tease apart.

Tegmark’s approach is to think of consciousness as a state of matter, like a solid, a liquid or a gas. “I conjecture that consciousness can be understood as yet another state of matter. Just as there are many types of liquids, there are many types of consciousness,” he says.

He goes on to show how the particular properties of consciousness might arise from the physical laws that govern our universe. And he explains how these properties allow physicists to reason about the conditions under which consciousness arises and how we might exploit it to better understand why the world around us appears as it does.

Interestingly, the new approach to consciousness has come from outside the physics community, principally from neuroscientists such as Giulio Tononi at the University of Wisconsin in Madison.

In 2008, Tononi proposed that a system demonstrating consciousness must have two specific traits. First, the system must be able to store and process large amounts of information. In other words consciousness is essentially a phenomenon of information.

And second, this information must be integrated in a unified whole so that it is impossible to divide into independent parts. That reflects the experience that each instance of consciousness is a unified whole that cannot be decomposed into separate components.

Both of these traits can be specified mathematically allowing physicists like Tegmark to reason about them for the first time. He begins by outlining the basic properties that a conscious system must have.

Given that it is a phenomenon of information, a conscious system must be able to store information in a memory and retrieve it efficiently.

It must also be able to process this data, like a computer, but one that is much more flexible and powerful than the silicon-based devices we are familiar with.

Tegmark borrows the term computronium to describe matter that can do this and cites other work showing that today’s computers underperform the theoretical limits of computing by some 38 orders of magnitude.

Clearly, there is plenty of room within those theoretical limits for the kind of performance a conscious system would require.

Next, Tegmark discusses perceptronium, defined as the most general substance that feels subjectively self-aware. This substance should not only be able to store and process information but in a way that forms a unified, indivisible whole. That also requires a certain amount of independence in which the information dynamics is determined from within rather than externally.

Finally, Tegmark uses this new way of thinking about consciousness as a lens through which to study one of the fundamental problems of quantum mechanics known as the quantum factorisation problem.

This arises because quantum mechanics describes the entire universe using three mathematical entities: an object known as a Hamiltonian that describes the total energy of the system; a density matrix that describes the relationship between all the quantum states in the system; and Schrodinger’s equation which describes how these things change with time.
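
For reference, the standard way those three objects fit together (this is textbook quantum mechanics, not notation specific to Tegmark's paper) is the Schrödinger equation in its density-matrix (von Neumann) form:

```latex
i\hbar \, \frac{\partial \rho}{\partial t} \;=\; \left[ \hat{H}, \, \rho \right]
```

where \(\hat{H}\) is the Hamiltonian and \(\rho\) the density matrix; the commutator on the right-hand side is what drives all change in the state.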

The problem is that when the entire universe is described in these terms, there are an infinite number of mathematical solutions that include all possible quantum mechanical outcomes and many other even more exotic possibilities.

So the problem is why we perceive the universe as the semi-classical, three dimensional world that is so familiar. When we look at a glass of iced water, we perceive the liquid and the solid ice cubes as independent things even though they are intimately linked as part of the same system. How does this happen? Out of all possible outcomes, why do we perceive this solution?

Tegmark does not have an answer. But what’s fascinating about his approach is that it is formulated using the language of quantum mechanics in a way that allows detailed scientific reasoning. And as a result it throws up all kinds of new problems that physicists will want to dissect in more detail.

Take for example, the idea that the information in a conscious system must be unified. That means the system must contain error-correcting codes that allow any subset of up to half the information to be reconstructed from the rest.

Tegmark points out that any information stored in a special network known as a Hopfield neural net automatically has this error-correcting facility. However, he calculates that a Hopfield net with about 10^11 neurons, roughly the number in a human brain, can store only about 37 bits of integrated information.
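
To make the Hopfield reference concrete, here is a small, self-contained sketch (an illustration of the standard Hebbian Hopfield network, not Tegmark's 37-bit calculation): patterns are stored in the weight matrix, and a corrupted cue is pulled back to the stored pattern, which is the error-correcting behaviour the text refers to.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_patterns = 100, 5
patterns = rng.choice([-1, 1], size=(n_patterns, n_neurons))

# Hebbian storage: W_ij proportional to the sum over patterns of xi_i * xi_j, no self-coupling.
W = (patterns.T @ patterns) / n_neurons
np.fill_diagonal(W, 0.0)

# Corrupt 10 bits of the first stored pattern.
state = patterns[0].copy()
flip = rng.choice(n_neurons, size=10, replace=False)
state[flip] *= -1

# Asynchronous updates: each neuron aligns with its local field until nothing changes.
for _ in range(10):
    changed = False
    for i in rng.permutation(n_neurons):
        new = 1 if W[i] @ state >= 0 else -1
        if new != state[i]:
            state[i] = new
            changed = True
    if not changed:
        break

print("Bits still wrong after recall:", int(np.sum(state != patterns[0])))
```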

“This leaves us with an integration paradox: why does the information content of our conscious experience appear to be vastly larger than 37 bits?” asks Tegmark.

That’s a question that many scientists might end up pondering in detail. For Tegmark, this paradox suggests that his mathematical formulation of consciousness is missing a vital ingredient. “This strongly implies that the integration principle must be supplemented by at least one additional principle,” he says. Suggestions please in the comments section!

And yet the power of this approach is in the assumption that consciousness does not lie beyond our ken; that there is no “secret sauce” without which it cannot be tamed.

At the beginning of the 20th century, a group of young physicists embarked on a quest to explain a few strange but seemingly small anomalies in our understanding of the universe. In deriving the new theories of relativity and quantum mechanics, they ended up changing the way we comprehend the cosmos. These physicists, at least some of them, are now household names.

Could it be that a similar revolution is currently underway at the beginning of the 21st century?

Ref: arxiv.org/abs/1401.1219 : Consciousness as a State of Matter

Telepathic particles (Folha de S.Paulo)

CASSIO LEITE VIEIRA

illustration JOSÉ PATRÍCIO

28/12/2014 03h08

SUMMARY Fifty years ago, the Northern Irish physicist John Bell (1928-90) arrived at a result that demonstrates the "spooky" nature of reality in the atomic and subatomic world. His theorem is today seen as the most effective weapon against espionage, something that will guarantee, in a perhaps not-too-distant future, the absolute privacy of information.

*

A South American country wants to keep its strategic information private, but finds itself forced to buy the equipment for that task from a far more technologically advanced country. Those devices, however, may be "bugged".

An almost obvious question then arises: will there ever be 100% guaranteed privacy? Yes. And that holds even for a country that buys its anti-espionage technology from the "enemy".

What makes the affirmative answer above possible is a result that has been called the most profound in science: Bell's theorem, which addresses one of the sharpest and most penetrating philosophical questions ever asked, one that underpins knowledge itself: what is reality? The theorem, which this year marked its 50th anniversary, guarantees that reality, in its most intimate dimension, is unimaginably strange.

José Patricio

The story of the theorem, of its experimental confirmation and of its modern applications has several beginnings. Perhaps the most fitting one here is a paper published in 1935 by the German-born physicist Albert Einstein (1879-1955) and two collaborators, the Russian Boris Podolsky (1896-1966) and the American Nathan Rosen (1909-95).

Known as the EPR paradox (after the initials of the authors' surnames), the thought experiment described there summed up Einstein's long-standing dissatisfaction with the direction quantum mechanics, the theory of phenomena at the atomic scale, had taken. At first, what left a bitter taste for the author of relativity was the fact that this theory, developed in the 1920s, provides only the probability that a phenomenon will occur. This contrasted with the "certainty" (determinism) of so-called classical physics, which governs macroscopic phenomena.

Einstein was, in truth, wary of his own creature, for he had been one of the fathers of quantum theory. After some initial reluctance, he eventually digested the indeterminism of quantum mechanics. One thing, however, he could never swallow: non-locality, the exceedingly strange fact that something here can instantaneously influence something there, even if that "there" is very far away. Einstein believed that distant things had independent realities.

Einstein went so far as to compare non-locality (it is worth stressing that this is only an analogy) to a kind of telepathy. But the most famous name Einstein gave to this strangeness was "spooky action at a distance".

ENTANGLEMENT

The essence of the EPR argument is this: under special conditions, two particles that have interacted and then separated end up in a state called entangled, as if they were "telepathic twins". Less pictorially, the particles are said to be connected (or correlated, as physicists prefer) and remain so even after the interaction.

The greater strangeness comes next: if one of the particles in the pair is disturbed (that is, subjected to any measurement, as physicists say), the other "feels" that disturbance instantaneously. And this is independent of the distance between the two particles. They could be light-years apart.

The authors of the EPR paradox argued that it was impossible to imagine that nature would allow an instantaneous connection between the two objects. And, through a complex chain of logic, Einstein, Podolsky and Rosen concluded: quantum mechanics must be incomplete. And therefore provisional.

FASTER THAN LIGHT?

A hasty (but very common) reading of the EPR paradox says that instantaneous action (non-local, in the vocabulary of physics) is impossible because it would violate Einstein's relativity: nothing can travel faster than light in a vacuum, 300,000 km/s.

Non-locality, however, would act only at the microscopic scale; it cannot be used, for example, to send or receive messages. In the macroscopic world, if we want to do that, we have to use signals, and signals never travel faster than light in a vacuum. In other words, relativity is preserved.

Non-locality has to do with persistent (and mysterious) connections between two objects: interfering with (altering, changing, etc.) one of them interferes with (alters, changes, etc.) the other. Instantaneously. The simple act of observing one of them affects the state of the other.

Einstein did not like the final version of the 1935 paper, which he only saw once it was in print (the writing had been left to Podolsky). He had imagined a less philosophical text. A few months later came the reply to EPR from the Danish physicist Niels Bohr (1885-1962). A few years earlier, Einstein and Bohr had starred in what many regard as one of the most important philosophical debates in history; its subject was the "soul of nature", in the words of one philosopher of physics.

In his reply to EPR, Bohr reaffirmed both the completeness of quantum mechanics and his anti-realist view of the atomic universe: one cannot say that a quantum entity (electron, proton, photon, etc.) has a property before that property is measured. In other words, such a property would not be real, would not be lying hidden, waiting for a measuring device or any other interference (even a glance) from the observer. On this point, Einstein would later quip: "Does the Moon only exist when we look at it?"

AUTHORITY

One way to understand what a deterministic theory is goes as follows: it is a theory in which the property to be measured is assumed to be present (or "hidden") in the object and can be determined with certainty. Physicists give this kind of theory a very apt name: hidden-variables theory.

In a hidden-variables theory, the property in question (known or not) exists; it is real. Hence philosophers sometimes label this scenario realism. Einstein preferred the term "objective reality": things exist without needing to be observed.

But in the 1930s a theorem had supposedly proved that a hidden-variables version of quantum mechanics was impossible. The feat belonged to one of the greatest mathematicians of all time, the Hungarian John von Neumann (1903-57). And, as is not rare in the history of science, the argument from authority prevailed over the authority of the argument.

Von Neumann's theorem was perfect from the mathematical point of view, but "wrong, silly" and "childish" (as it was later described) from the standpoint of physics, because it started from a mistaken premise. We know today that Einstein was suspicious of that premise: "Do we have to accept this as true?", he asked two colleagues. But he took it no further.

Von Neumann's theorem did serve, however, to all but trample the deterministic (and therefore hidden-variables) version of quantum mechanics put forward in 1927 by the French nobleman Louis de Broglie (1892-1987), winner of the 1929 Nobel Prize in Physics, who ended up abandoning that line of research.

For exactly two decades, Von Neumann's theorem and the ideas of Bohr, who had gathered around himself an influential school of remarkable young physicists, discouraged attempts to find a deterministic version of quantum mechanics.

But in 1952 the American physicist David Bohm (1917-92), inspired by De Broglie's ideas, presented a hidden-variables version of quantum mechanics, today called Bohmian quantum mechanics in homage to the researcher, who worked in the 1950s at the University of São Paulo (USP) while being persecuted in the US by McCarthyism.

Bohmian quantum mechanics had two essential characteristics: 1) it was deterministic (that is, a hidden-variables theory); 2) it was non-local (that is, it admitted action at a distance), which made Einstein, a convinced localist, lose his initial interest in it.

PROTAGONIST

Enter the main character of this story: the Northern Irish physicist John Stewart Bell, who, on learning of Bohmian mechanics, became certain of one thing: the "impossible had been done". More than that: Von Neumann was wrong.

Bohm's quantum mechanics, ignored at first by the physics community, had just fallen on fertile ground: ever since his university days Bell had been mulling over, as a "hobby", the philosophical foundations of quantum mechanics (EPR, Von Neumann, De Broglie, etc.). And he had taken sides in those debates: he was an avowed Einsteinian and found Bohr obscure.

Bell was born on 28 June 1928 in Belfast, into an Anglican family of modest means. He was expected to stop studying at 14, but at the insistence of his mother, who had noticed the intellectual gifts of the second of her four children, he was sent to a technical secondary school, where he learned practical skills (carpentry, building, librarianship, etc.).

Having finished at 16, he tried office jobs, but fate had him end up as a technician preparing experiments in the physics department of Queen's University, also in Belfast.

The lecturers soon noticed the technician's interest in physics and began to encourage him, pointing him to readings and lectures. With a scholarship, Bell graduated in 1948 in experimental physics and, the following year, in mathematical physics; in both cases, with honours.

From 1949 to 1960 Bell worked at the AERE (Atomic Energy Research Establishment) in Harwell, in the United Kingdom. There he met his future wife, the physicist Mary Ross, his interlocutor in several of his works on physics. "When I look over these papers again, I see her everywhere," Bell said at a tribute in 1987, three years before he died of a cerebral haemorrhage.

He defended his doctorate in 1956, after a spell at the University of Birmingham, under the supervision of the German-British physicist Rudolf Peierls (1907-95). The thesis includes a proof of a very important theorem of physics (the CPT theorem), which had been discovered shortly before by a contemporary of his.

THE THEOREM

Disagreeing with the direction research was taking at the AERE, the couple decided to swap stable jobs for temporary positions at CERN, the European Organization for Nuclear Research, in Geneva, Switzerland. He went to the theoretical physics division; she, to the accelerator division.

Bell spent 1963 and 1964 working in the US. There he found time to devote himself to his intellectual "hobby" and to gestate the result that would mark his career and, decades later, bring him fame.

He asked himself the following question: was the non-locality of Bohm's hidden-variables theory a feature of any realist theory of quantum mechanics? In other words, if things exist without being observed, must they necessarily establish among themselves that spooky action at a distance?

Bell's theorem, published in 1964, is also known as Bell's inequality. Its mathematics is not complex. Very roughly, we can think of the theorem as an inequality, x ≤ 2 (x less than or equal to two), where "x" stands, for our purposes here, for the results of an experiment.

The most interesting consequences of Bell's theorem would arise if such an experiment violated the inequality, that is, if it showed that x > 2 (x greater than two). In that case, we would have to give up one of two assumptions: 1) realism (things exist without being observed); or 2) locality (the quantum world allows no connections faster than light).
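
In real experiments the inequality usually tested is the CHSH form, for which the bound obeyed by every local hidden-variables theory is exactly the 2 of the text. A minimal sketch (a standard textbook calculation, not taken from the Folha article) of the quantum-mechanical prediction for entangled photon pairs, which exceeds that bound:

```python
import numpy as np

# For the entangled singlet state, quantum mechanics predicts correlations
# E(a, b) = -cos(a - b) between measurements made at analyser angles a and b.
def E(a, b):
    return -np.cos(a - b)

# Measurement angles that maximise the quantum value of the CHSH combination.
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print("CHSH value |S| =", round(abs(S), 3))   # ~2.828 = 2*sqrt(2), greater than 2
```

Local hidden-variables models can never push |S| above 2; the quantum prediction of 2√2 ≈ 2.83 is what the experiments described next set out to test.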

The paper containing the theorem made little impact at first. Bell had written another paper before it, essential to reaching the result, but, owing to an error by the journal's editor, it was only published in 1966.

REBELLION The revival of Bell's ideas, and with them EPR and Bohm, gathered momentum thanks to factors external to physics. Many years after the turbulent late 1960s, the American physicist John Clauser recalled the period: "The Vietnam War dominated the political thoughts of my generation. Being a young physicist during that revolutionary period, I naturally wanted to shake the world."

Science, like the rest of the world, was marked by the spirit of the peace-and-love generation; by the struggle for civil rights; by May 1968; by Eastern philosophies; by psychedelic drugs; by telepathy. In a word: by rebellion. Translated into physics, that meant devoting oneself to a field regarded as heretical in academia: the interpretations (or foundations) of quantum mechanics. But doing so considerably increased the chances that a young physicist would ruin his career: EPR, Bohm and Bell were considered philosophical subjects, not physics.

The final ingredient that allowed this taboo field of study to gather strength was the 1973 oil crisis, which reduced the supply of positions for young researchers, physicists included. Recession was added to rebellion.

Clauser, with three colleagues, Abner Shimony, Richard Holt and Michael Horne, published his first ideas on the subject in 1969, under the title "Proposed Experiment to Test Hidden-Variable Theories". The quartet did so in part because they had realised that Bell's inequality could be tested with photons, which are easier to generate; until then, more complicated experimental arrangements had been envisaged.

In 1972 the proposal became an experiment, carried out by Clauser and Stuart Freedman (1944-2012), and Bell's inequality was violated.

The world seemed to be non-local (ironically, Clauser was a localist!). But only seemed: for about a decade the experiment remained misunderstood and therefore disregarded by the physics community. Even so, those results helped reinforce something important: the foundations of quantum mechanics were not just philosophy. They were also experimental physics.

A CHANGE OF SCENE

Improvements in optical equipment (including lasers) allowed an experiment carried out in 1982 to become a classic of the field.

Shortly before, the French physicist Alain Aspect had decided to embark on a late doctorate, despite already being an experienced experimental physicist. He chose Bell's theorem as his topic and went to meet his Northern Irish colleague at CERN. In an interview with the physicist Ivan dos Santos Oliveira, of the Centro Brasileiro de Pesquisas Físicas in Rio de Janeiro, and with the author of this text, Aspect recounted the following exchange between himself and Bell. "Do you have a permanent position?", Bell asked. "Yes," said Aspect. Otherwise, "you would be under a lot of pressure not to do the experiment," Bell said.

The dialogue recounted by Aspect shows that, almost two decades after the seminal 1964 paper, the subject was still shrouded in prejudice.

In an experiment performed with pairs of entangled photons, nature once again displayed its non-local character: Bell's inequality was violated. The data showed x > 2. In 2007, for example, the group of the Austrian physicist Anton Zeilinger verified the violation of the inequality using photons separated by no less than 144 km.

In the interview in Brazil, Aspect said that until then the theorem had been known to very few physicists, but that it would become famous after his doctoral thesis, on whose examining committee, incidentally, Bell sat.

STRANGE

Why, after all, does nature permit Einstein's "telepathy"? It is at the very least strange to think that a particle disturbed here can somehow alter the state of its companion at the far end of the universe.

There are several ways to interpret the consequences of what Bell did. To begin with, some (badly) mistaken ones: 1) non-locality cannot exist, because it would violate relativity; 2) hidden-variables theories of quantum mechanics (Bohm, De Broglie, etc.) are completely ruled out; 3) quantum mechanics really is indeterministic; 4) irrealism, the view that things exist only when observed, is the final word. The list is long.

When the theorem was published, a shallow (and erroneous) reading held that it did not matter, since Von Neumann's theorem had already ruled out hidden variables and quantum mechanics would therefore indeed be indeterministic. Among those who do not accept non-locality there are even some who go so far as to say that Einstein, Bohm and Bell did not understand what they themselves had done.

The American philosopher of physics Tim Maudlin, of New York University, offers a long list of such misconceptions in two excellent papers, "What Bell Did" (arxiv.org/abs/1408.1826) and "Reply to Werner" (in which he responds to comments on the former, arxiv.org/abs/1408.1828).

For Maudlin, a renowned figure in his field, Bell's theorem and its violation mean one thing only: nature is non-local ("spooky"), and so there is no hope for locality as Einstein would have wished; in that sense, one can say that Bell showed Einstein to be wrong. Accordingly, any deterministic (realist) theory that reproduces the experimental results obtained so far by quantum mechanics (incidentally, the most precise theory in the history of science) will necessarily have to be non-local.

From Aspect to the present day, major technological developments have made possible something unthinkable a few decades ago: studying a single quantum entity (an atom, electron, photon, etc.) in isolation. This gave rise to the field of quantum information, which encompasses quantum cryptography, the kind that promises absolute data security, and quantum computers, extremely fast machines. In a way, it is philosophy turned into experimental physics.

Many of these advances are owed, at bottom, to the rebelliousness of a generation of young physicists who wanted to defy the "system".

A delightful account of that period can be found in "How the Hippies Saved Physics" (W. W. Norton & Company, 2011), by the American historian of physics David Kaiser. A detailed historical analysis appears in "Quantum Dissidents: Research on the Foundations of Quantum Theory circa 1970" (bit.ly/1xyipTJ, subscribers only), by the historian of physics Olival Freire Jr., of the Universidade Federal da Bahia.

For readers more interested in the philosophical angle, there are the two award-winning volumes of "Conceitos de Física Quântica" (Editora Livraria da Física, 2003), by the physicist and philosopher Osvaldo Pessoa Jr., of USP.

PRIVACY

By this point the reader may be wondering what Bell's theorem has to do with 100% guaranteed privacy.

In the future it is quite likely that information will be sent and received in the form of entangled photons. Recent research in quantum cryptography shows that it would be enough to subject these particles of light to a test of Bell's inequality. If the inequality is violated, there is no possibility that the message has been eavesdropped upon. And the test does not depend on the equipment used to send or receive the photons. The theoretical basis for this can be found, for example, in "The Ultimate Physical Limits of Privacy", by Artur Ekert and Renato Renner (bit.ly/1gFjynG, subscribers only).

In a perhaps not-too-distant future, Bell's theorem may become the most powerful weapon against espionage. That is a tremendous comfort for a world that seems to be heading toward zero privacy. It is also an immense outgrowth of a philosophical question which, according to the American physicist Henry Stapp, a specialist in the foundations of quantum mechanics, became "the most profound result of science". Deservedly so. After all, why did nature opt for "spooky action at a distance"?

The answer is a mystery. It is a pity that the question is not even mentioned in undergraduate physics courses in Brazil.

CÁSSIO LEITE VIEIRA, 54, a journalist at the Instituto Ciência Hoje (RJ), is the author of "Einstein – O Reformulador do Universo" (Odysseus).
JOSÉ PATRÍCIO, 54, a visual artist from Pernambuco, is taking part in the exhibition "Asas a Raízes" at Caixa Cultural in Rio, from 17/1 to 15/3.

Earth Wrapped In ‘Star Trek Force Field’, Scientists Discover (Huff Post)

AP

Posted: 26/11/2014 20:17 GMT Updated: 26/11/2014 20:59 GMT

Scientists discover Earth shield

Earth is wrapped in an invisible force field that scientists have compared with the “shields” featured in Star Trek. A US team discovered the barrier, some 7,200 miles above the Earth’s surface, which blocks high-energy electrons that threaten astronauts and satellites.

Scientists identified an “extremely sharp” boundary within the Van Allen radiation belts, two large doughnut-shaped rings held in place by the Earth’s magnetic field that are filled with fast-moving particles. Lead researcher Professor Daniel Baker, from the University of Colorado at Boulder, said: “It’s almost like these electrons are running into a glass wall in space.

Artist’s impression of a doughnut-shaped brick wall illustrating the invisible “shield” discovered by US scientists, which sits 7,200 miles above the Earth in the Van Allen radiation belts and blocks high-energy electrons that threaten astronauts and satellites

“Somewhat like the shields created by force fields on Star Trek that were used to repel alien weapons, we are seeing an invisible shield blocking these electrons. It’s an extremely puzzling phenomenon.”

The team originally thought the highly charged electrons, which loop around the Earth at more than 100,000 miles per second, would slowly drift downward into the upper atmosphere. But a pair of probes launched in 2012 to investigate the Van Allen belts showed that the electrons are stopped in their tracks before they get that far.

The nature of the force field remains an unsolved mystery. It does not appear to be linked to magnetic field lines or human-generated radio signals, and scientists are not convinced that a cloud of cold electrically charged gas called the plasmasphere that stretches thousands of miles into the outer Van Allen belt can fully explain the phenomenon either.

Prof Baker added: “I think the key here is to keep observing the region in exquisite detail, which we can do because of the powerful instruments on the Van Allen probes.” The research is reported in the journal Nature.

Transitions between states of matter: It’s more complicated, scientists find (Science Daily)

Date: November 6, 2014

Source: New York University

Summary: The seemingly simple process of phase changes — those transitions between states of matter — is more complex than previously known. New work reveals the need to rethink one of science’s building blocks and, with it, how some of the basic principles underlying the behavior of matter are taught in our classrooms.

Melting ice. The seemingly simple process of phase changes — those transitions between states of matter — is more complex than previously known. Credit: © shefkate / Fotolia

The seemingly simple process of phase changes — those transitions between states of matter — is more complex than previously known, according to research based at Princeton University, Peking University and New York University.

Their study, which appears in the journal Science, reveals the need to rethink one of science’s building blocks and, with it, how some of the basic principles underlying the behavior of matter are taught in our classrooms. The researchers examined the way that a phase change, specifically the melting of a solid, occurs at a microscopic level and discovered that the transition is far more involved than earlier models had accounted for.

“This research shows that phase changes can follow multiple pathways, which is counter to what we’ve previously known,” explains Mark Tuckerman, a professor of chemistry and applied mathematics at New York University and one of the study’s co-authors. “This means the simple theories about phase transitions that we teach in classes are just not right.”

According to Tuckerman, scientists will need to change the way they think of and teach on phase changes.

The work stems from a 10-year project at Princeton to develop a mathematical framework and computer algorithms to study complex behavior in systems, explained senior author Weinan E, a professor in Princeton’s Department of Mathematics and Program in Applied and Computational Mathematics. Phase changes proved to be a crucial test case for their algorithm, E said. E and Tuckerman worked with Amit Samanta, a postdoctoral researcher at Princeton now at Lawrence Livermore National Laboratory, and Tang-Qing Yu, a postdoctoral researcher at NYU’s Courant Institute of Mathematical Sciences.

“It was a test case for the rather powerful set of tools that we have developed to study hard questions about complex phenomena such as phase transitions,” E said. “The melting of a relatively simple atomic solid such as a metal, proved to be enormously rich. With the understanding we have gained from this case, we next aim to probe more complex molecular solids such as ice.”

The findings reveal that phase transitions can occur via multiple and competing pathways and that the transitions involve at least two steps. The study shows that, along one of these pathways, the first step in the transition process is the formation of point defects — local defects that occur at or around a single lattice site in a crystalline solid. These defects turn out to be highly mobile. In a second step, the point defects randomly migrate and occasionally meet to form large, disordered defect clusters.

This mechanism predicts that “the disordered cluster grows from the outside in rather than from the inside out, as current explanations suggest,” Tuckerman notes. “Over time, these clusters grow and eventually become sufficiently large to cause the transition from solid to liquid.”

Along an alternative pathway, the defects grow into thin lines of disorder (called “dislocations”) that reach across the system. Small liquid regions then pool along these dislocations; these regions expand outward from the dislocation, engulfing more and more of the solid, until the entire system becomes liquid.

This study modeled this process by tracing copper and aluminum metals from an atomic solid to an atomic liquid state. The researchers used advanced computer models and algorithms to reexamine the process of phase changes on a microscopic level.

“Phase transitions have always been something of a mystery because they represent such a dramatic change in the state of matter,” Tuckerman observes. “When a system changes from solid to liquid, the properties change substantially.”

He adds that this research shows the surprising incompleteness of previous models of nucleation and phase changes, and helps to fill in existing gaps in basic scientific understanding.

This work is supported by the Office of Naval Research (N00014-13-1-0338), the Army Research Office (W911NF-11-1-0101), the Department of Energy (DE-SC0009248, DE-AC52-07NA27344), and the National Science Foundation of China (CHE-1301314).


Journal Reference:

  1. A. Samanta, M. E. Tuckerman, T.-Q. Yu, W. E. Microscopic mechanisms of equilibrium melting of a solid. Science, 2014; 346 (6210): 729 DOI: 10.1126/science.1253810

You’re powered by quantum mechanics. No, really… (The Guardian)

For years biologists have been wary of applying the strange world of quantum mechanics, where particles can be in two places at once or connected over huge distances, to their own field. But it can help to explain some amazing natural phenomena we take for granted

Jim Al-Khalili and Johnjoe McFadden

The Observer, Sunday 26 October 2014

A European robin in flight

According to quantum biology, the European robin has a ‘sixth sense’ in the form of a protein in its eye sensitive to the orientation of the Earth’s magnetic field, allowing it to ‘see’ which way to migrate. Photograph: Helmut Heintges/ Helmut Heintges/Corbis

Every year, around about this time, thousands of European robins escape the oncoming harsh Scandinavian winter and head south to the warmer Mediterranean coasts. How they find their way unerringly on this 2,000-mile journey is one of the true wonders of the natural world. For unlike many other species of migratory birds, marine animals and even insects, they do not rely on landmarks, ocean currents, the position of the sun or a built-in star map. Instead, they are among a select group of animals that use a remarkable navigation sense – remarkable for two reasons. The first is that they are able to detect tiny variations in the direction of the Earth’s magnetic field – astonishing in itself, given that this magnetic field is 100 times weaker than even that of a measly fridge magnet. The second is that robins seem to be able to “see” the Earth’s magnetic field via a process that even Albert Einstein referred to as “spooky”. The birds’ in-built compass appears to make use of one of the strangest features of quantum mechanics.

Over the past few years, the European robin, and its quantum “sixth sense”, has emerged as the pin-up for a new field of research, one that brings together the wonderfully complex and messy living world and the counterintuitive, ethereal but strangely orderly world of atoms and elementary particles in a collision of disciplines that is as astonishing and unexpected as it is exciting. Welcome to the new science of quantum biology.

Most people have probably heard of quantum mechanics, even if they don’t really know what it is about. Certainly, the idea that it is a baffling and difficult scientific theory understood by just a tiny minority of smart physicists and chemists has become part of popular culture. Quantum mechanics describes a reality on the tiniest scales that is, famously, very weird indeed; a world in which particles can exist in two or more places at once, spread themselves out like ghostly waves, tunnel through impenetrable barriers and even possess instantaneous connections that stretch across vast distances.

But despite this bizarre description of the basic building blocks of the universe, quantum mechanics has been part of all our lives for a century. Its mathematical formulation was completed in the mid-1920s and has given us a remarkably complete account of the world of atoms and their even smaller constituents, the fundamental particles that make up our physical reality. For example, the ability of quantum mechanics to describe the way that electrons arrange themselves within atoms underpins the whole of chemistry, material science and electronics; and is at the very heart of most of the technological advances of the past half-century. Without the success of the equations of quantum mechanics in describing how electrons move through materials such as semiconductors we would not have developed the silicon transistor and, later, the microchip and the modern computer.

However, if quantum mechanics can so beautifully and accurately describe the behaviour of atoms with all their accompanying weirdness, then why aren’t all the objects we see around us, including us – which are after all only made up of these atoms – also able to be in two places at once, pass through impenetrable barriers or communicate instantaneously across space? One obvious difference is that the quantum rules apply to single particles or systems consisting of just a handful of atoms, whereas much larger objects consist of trillions of atoms bound together in mindboggling variety and complexity. Somehow, in ways we are only now beginning to understand, most of the quantum weirdness washes away ever more quickly the bigger the system is, until we end up with the everyday objects that obey the familiar rules of what physicists call the “classical world”. In fact, when we want to detect the delicate quantum effects in everyday-size objects we have to go to extraordinary lengths to do so – freezing them to within a whisker of absolute zero and performing experiments in near-perfect vacuums.

Quantum effects were certainly not expected to play any role inside the warm, wet and messy world of living cells, so most biologists have thus far ignored quantum mechanics completely, preferring their traditional ball-and-stick models of the molecular structures of life. Meanwhile, physicists have been reluctant to venture into the messy and complex world of the living cell; why should they when they can test their theories far more cleanly in the controlled environment of the lab where they at least feel they have a chance of understanding what is going on?


Erwin Schrödinger, whose book What is Life? suggested that the macroscopic order of life was based on order at its quantum level. Photograph: Bettmann/CORBIS

Yet, 70 years ago, the Austrian Nobel prize-winning physicist and quantum pioneer, Erwin Schrödinger, suggested in his famous book, What is Life?, that, deep down, some aspects of biology must be based on the rules and orderly world of quantum mechanics. His book inspired a generation of scientists, including the discoverers of the double-helix structure of DNA, Francis Crick and James Watson. Schrödinger proposed that there was something unique about life that distinguishes it from the rest of the non-living world. He suggested that, unlike inanimate matter, living organisms can somehow reach down to the quantum domain and utilise its strange properties in order to operate the extraordinary machinery within living cells.

Schrödinger’s argument was based on the paradoxical fact that the laws of classical physics, such as those of Newtonian mechanics and thermodynamics, are ultimately based on disorder. Consider a balloon. It is filled with trillions of molecules of air all moving entirely randomly, bumping into one another and the inside wall of the balloon. Each molecule is governed by orderly quantum laws, but when you add up the random motions of all the molecules and average them out, their individual quantum behaviour washes out and you are left with the gas laws that predict, for example, that the balloon will expand by a precise amount when heated. This is because heat energy makes the air molecules move a little bit faster, so that they bump into the walls of the balloon with a bit more force, pushing the walls outward a little bit further. Schrödinger called this kind of law “order from disorder” to reflect the fact that this apparent macroscopic regularity depends on random motion at the level of individual particles.
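
As a back-of-the-envelope illustration of that "precise amount" (a textbook ideal-gas estimate, not a figure from the book): at constant pressure the volume scales with absolute temperature,

```latex
\frac{V_2}{V_1} \;=\; \frac{T_2}{T_1}
\qquad\Longrightarrow\qquad
V_2 \;\approx\; V_1 \times \frac{303\ \mathrm{K}}{293\ \mathrm{K}} \;\approx\; 1.034\,V_1 ,
```

so warming a balloon from 20 °C to 30 °C expands it by roughly 3.4%, a regularity that emerges only because trillions of random molecular motions are being averaged.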

But what about life? Schrödinger pointed out that many of life’s properties, such as heredity, depend on molecules made of comparatively few particles – certainly too few to benefit from the order-from-disorder rules of thermodynamics. But life was clearly orderly. Where did this orderliness come from? Schrödinger suggested that life was based on a novel physical principle whereby its macroscopic order is a reflection of quantum-level order, rather than the molecular disorder that characterises the inanimate world. He called this new principle “order from order”. But was he right?

Up until a decade or so ago, most biologists would have said no. But as 21st-century biology probes the dynamics of ever-smaller systems – even individual atoms and molecules inside living cells – the signs of quantum mechanical behaviour in the building blocks of life are becoming increasingly apparent. Recent research indicates that some of life’s most fundamental processes do indeed depend on weirdness welling up from the quantum undercurrent of reality. Here are a few of the most exciting examples.

Enzymes are the workhorses of life. They speed up chemical reactions so that processes that would otherwise take thousands of years proceed in seconds inside living cells. Life would be impossible without them. But how they accelerate chemical reactions by such enormous factors, often more than a trillion-fold, has been an enigma. Experiments over the past few decades, however, have shown that enzymes make use of a remarkable trick called quantum tunnelling to accelerate biochemical reactions. Essentially, the enzyme encourages electrons and protons to vanish from one position in a biomolecule and instantly rematerialise in another, without passing through the gap in between – a kind of quantum teleportation.
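
For scale, the standard textbook estimate (not specific to any particular enzyme) for tunnelling through a rectangular barrier of height V and width L by a particle of mass m and energy E < V is

```latex
T \;\approx\; e^{-2\kappa L},
\qquad
\kappa \;=\; \frac{\sqrt{2m\,(V - E)}}{\hbar},
```

an exponential suppression in both mass and distance, which is why electrons and protons can tunnel across the sub-nanometre gaps inside an active site while anything much heavier effectively never does.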

And before you throw your hands up in incredulity, it should be stressed that quantum tunnelling is a very familiar process in the subatomic world and is responsible for such processes as radioactive decay of atoms and even the reason the sun shines (by turning hydrogen into helium through the process of nuclear fusion). Enzymes have made every single biomolecule in your cells and every cell of every living creature on the planet, so they are essential ingredients of life. And they dip into the quantum world to help keep us alive.

Another vital process in biology is of course photosynthesis. Indeed, many would argue that it is the most important biochemical reaction on the planet, responsible for turning light, air, water and a few minerals into grass, trees, grain, apples, forests and, ultimately, the rest of us who eat either the plants or the plant-eaters.

The initiating event is the capture of light energy by a chlorophyll molecule and its conversion into chemical energy that is harnessed to fix carbon dioxide and turn it into plant matter. The process whereby this light energy is transported through the cell has long been a puzzle because it can be so efficient – close to 100% and higher than any artificial energy transport process.


Sunlight shines through chestnut tree leaves. Quantum biology can explain why photosynthesis in plants is so efficient. Photograph: Getty Images/Visuals Unlimited

The first step in photosynthesis is the capture of a tiny packet of energy from sunlight that then has to hop through a forest of chlorophyll molecules to make its way to a structure called the reaction centre where its energy is stored. The problem is understanding how the packet of energy appears to so unerringly find the quickest route through the forest. An ingenious experiment, first carried out in 2007 in Berkeley, California, probed what was going on by firing short bursts of laser light at photosynthetic complexes. The research revealed that the energy packet was not hopping haphazardly about, but performing a neat quantum trick. Instead of behaving like a localised particle travelling along a single route, it behaves quantum mechanically, like a spread-out wave, and samples all possible routes at once to find the quickest way.

A third example of quantum trickery in biology – the one we introduced in our opening paragraph – is the mechanism by which birds and other animals make use of the Earth’s magnetic field for navigation. Studies of the European robin suggest that it has an internal chemical compass that utilises an astonishing quantum concept called entanglement, which Einstein dismissed as “spooky action at a distance”. This phenomenon describes how two separated particles can remain instantaneously connected via a weird quantum link. The current best guess is that this takes place inside a protein in the bird’s eye, where quantum entanglement makes a pair of electrons highly sensitive to the angle of orientation of the Earth’s magnetic field, allowing the bird to “see” which way it needs to fly.

All these quantum effects have come as a big surprise to most scientists who believed that the quantum laws only applied in the microscopic world. All delicate quantum behaviour was thought to be washed away very quickly in bigger objects, such as living cells, containing the turbulent motion of trillions of randomly moving particles. So how does life manage its quantum trickery? Recent research suggests that rather than avoiding molecular storms, life embraces them, rather like the captain of a ship who harnesses turbulent gusts and squalls to maintain his ship upright and on course.

Just as Schrödinger predicted, life seems to be balanced on the boundary between the sensible everyday world of the large and the weird and wonderful quantum world, a discovery that is opening up an exciting new field of 21st-century science.

Life on the Edge: The Coming of Age of Quantum Biology by Jim Al-Khalili and Johnjoe McFadden will be published by Bantam Press on 6 November.

‘Superglue’ for the atmosphere: How sulfuric acid increases cloud formation (Science Daily)

Date: October 8, 2014

Source: Goethe-Universität Frankfurt am Main

Summary: It has been known for several years that sulfuric acid contributes to the formation of tiny aerosol particles, which play an important role in the formation of clouds. A new study shows that dimethylamine can tremendously enhance new particle formation. The formation of neutral (i.e. uncharged) nucleating clusters of sulfuric acid and dimethylamine was observed for the first time.

Clouds. Credit: Copyright Michele Hogan

It has been known for several years that sulfuric acid contributes to the formation of tiny aerosol particles, which play an important role in the formation of clouds. The new study by Kürten et al. shows that dimethylamine can tremendously enhance new particle formation. The formation of neutral (i.e. uncharged) nucleating clusters of sulfuric acid and dimethylamine was observed for the first time.

Previously, it was only possible to detect neutral clusters containing up to two sulfuric acid molecules. However, in the present study molecular clusters containing up to 14 sulfuric acid and 16 dimethylamine molecules were detected and their growth by attachment of individual molecules was observed in real time, starting from just one molecule. Moreover, these measurements were made at concentrations of sulfuric acid and dimethylamine corresponding to atmospheric levels (less than 1 molecule of sulfuric acid per 1 x 10^13 molecules of air).

The capability of sulfuric acid molecules together with water and ammonia to form clusters and particles has been recognized for several years. However, clusters which form in this manner can vaporize under the conditions which exist in the atmosphere. In contrast, the system of sulfuric acid and dimethylamine forms particles much more efficiently because even the smallest clusters are essentially stable against evaporation. In this respect, dimethylamine can act as “superglue”: when interacting with sulfuric acid, every collision between a cluster and a sulfuric acid molecule bonds them together irreversibly. Sulphuric acid as well as amines in the present-day atmosphere have mainly anthropogenic sources.

Sulphuric acid is derived mainly from the oxidation of sulphur dioxide while amines stem, for example, from animal husbandry. The method used to measure the neutral clusters utilizes a combination of a mass spectrometer and a chemical ionization source, which was developed by the University of Frankfurt and the University of Helsinki. The measurements were made by an international collaboration at the CLOUD (Cosmics Leaving OUtdoor Droplets) chamber at CERN (European Organization for Nuclear Research).

The results allow for very detailed insight into a chemical system which could be relevant for atmospheric particle formation. Aerosol particles influence Earth’s climate through cloud formation: Clouds can only form if so-called cloud condensation nuclei (CCN) are present, which act as seeds for condensing water molecules. Globally about half the CCN originate from a secondary process which involves the formation of small clusters and particles in the very first step followed by growth to sizes of at least 50 nanometers.

The observed process of particle formation from sulfuric acid and dimethylamine could also be relevant for the formation of CCN. A high concentration of CCN generally leads to the formation of clouds with a high concentration of small droplets; whereas fewer CCN lead to clouds with few large droplets. Earth’s radiation budget, climate as well as precipitation patterns can be influenced in this manner. The deployed method will also open a new window for future measurements of particle formation in other chemical systems.


Journal Reference:

  1. A. Kurten, T. Jokinen, M. Simon, M. Sipila, N. Sarnela, H. Junninen, A. Adamov, J. Almeida, A. Amorim, F. Bianchi, M. Breitenlechner, J. Dommen, N. M. Donahue, J. Duplissy, S. Ehrhart, R. C. Flagan, A. Franchin, J. Hakala, A. Hansel, M. Heinritzi, M. Hutterli, J. Kangasluoma, J. Kirkby, A. Laaksonen, K. Lehtipalo, M. Leiminger, V. Makhmutov, S. Mathot, A. Onnela, T. Petaja, A. P. Praplan, F. Riccobono, M. P. Rissanen, L. Rondo, S. Schobesberger, J. H. Seinfeld, G. Steiner, A. Tome, J. Trostl, P. M. Winkler, C. Williamson, D. Wimmer, P. Ye, U. Baltensperger, K. S. Carslaw, M. Kulmala, D. R. Worsnop, J. Curtius. Neutral molecular cluster formation of sulfuric acid-dimethylamine observed in real time under atmospheric conditions. Proceedings of the National Academy of Sciences, 2014; DOI: 10.1073/pnas.1404853111

New math and quantum mechanics: Fluid mechanics suggests alternative to quantum orthodoxy (Science Daily)

Date: September 12, 2014

Source: Massachusetts Institute of Technology

Summary: The central mystery of quantum mechanics is that small chunks of matter sometimes seem to behave like particles, sometimes like waves. For most of the past century, the prevailing explanation of this conundrum has been what’s called the “Copenhagen interpretation” — which holds that, in some sense, a single particle really is a wave, smeared out across the universe, that collapses into a determinate location only when observed. But some founders of quantum physics — notably Louis de Broglie — championed an alternative interpretation, known as “pilot-wave theory,” which posits that quantum particles are borne along on some type of wave. According to pilot-wave theory, the particles have definite trajectories, but because of the pilot wave’s influence, they still exhibit wavelike statistics. Now a professor of applied mathematics believes that pilot-wave theory deserves a second look.


Close-ups of an experiment conducted by John Bush and his student Daniel Harris, in which a bouncing droplet of fluid was propelled across a fluid bath by waves it generated. Credit: Dan Harris

The central mystery of quantum mechanics is that small chunks of matter sometimes seem to behave like particles, sometimes like waves. For most of the past century, the prevailing explanation of this conundrum has been what’s called the “Copenhagen interpretation” — which holds that, in some sense, a single particle really is a wave, smeared out across the universe, that collapses into a determinate location only when observed.

But some founders of quantum physics — notably Louis de Broglie — championed an alternative interpretation, known as “pilot-wave theory,” which posits that quantum particles are borne along on some type of wave. According to pilot-wave theory, the particles have definite trajectories, but because of the pilot wave’s influence, they still exhibit wavelike statistics.

John Bush, a professor of applied mathematics at MIT, believes that pilot-wave theory deserves a second look. That’s because Yves Couder, Emmanuel Fort, and colleagues at the University of Paris Diderot have recently discovered a macroscopic pilot-wave system whose statistical behavior, in certain circumstances, recalls that of quantum systems.

Couder and Fort’s system consists of a bath of fluid vibrating at a rate just below the threshold at which waves would start to form on its surface. A droplet of the same fluid is released above the bath; where it strikes the surface, it causes waves to radiate outward. The droplet then begins moving across the bath, propelled by the very waves it creates.
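
For readers who want a feel for the feedback loop just described, here is a toy numerical sketch of a self-propelled “walker”. It is emphatically not the model used by Couder, Fort, or Bush: the wave shape, the exponential memory decay, the drag factor, and every parameter value are illustrative assumptions, loosely inspired by published stroboscopic walker models.

# Toy sketch of a 1D "walker" (illustrative only, not the researchers' model):
# at each bounce the droplet (i) deposits a wave centred on its current
# position and (ii) is pushed by the local slope of the sum of all waves
# deposited at earlier bounces.
import numpy as np
from scipy.special import j1

k = 2 * np.pi      # wavenumber of the deposited waves (arbitrary units)
memory = 30.0      # number of past bounces that still contribute appreciably
amplitude = 0.05   # wave amplitude per bounce
dt = 1.0           # one bounce per time step

def wave_slope(x, impacts, n):
    """Slope at x of the superposed waves left behind by all past impacts."""
    s = 0.0
    for m, xm in enumerate(impacts):
        decay = np.exp(-(n - m) / memory)
        # d/dx of J0(k|x - xm|) is -k * J1(k|x - xm|) * sign(x - xm)
        s += -amplitude * decay * k * j1(k * abs(x - xm)) * np.sign(x - xm)
    return s

x, v = 0.0, 0.01   # initial position and a tiny initial kick
impacts = []
for n in range(200):
    impacts.append(x)                      # the droplet leaves a wave where it lands
    v += -wave_slope(x, impacts, n) * dt   # and is pushed downhill along that wave field
    v *= 0.9                               # crude drag from the bath
    x += v * dt

print(f"final position {x:.3f}, final speed {abs(v):.4f}")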

“This system is undoubtedly quantitatively different from quantum mechanics,” Bush says. “It’s also qualitatively different: There are some features of quantum mechanics that we can’t capture, some features of this system that we know aren’t present in quantum mechanics. But are they philosophically distinct?”

Tracking trajectories

Bush believes that the Copenhagen interpretation sidesteps the technical challenge of calculating particles’ trajectories by denying that they exist. “The key question is whether a real quantum dynamics, of the general form suggested by de Broglie and the walking drops, might underlie quantum statistics,” he says. “While undoubtedly complex, it would replace the philosophical vagaries of quantum mechanics with a concrete dynamical theory.”

Last year, Bush and one of his students — Jan Molacek, now at the Max Planck Institute for Dynamics and Self-Organization — did for their system what the quantum pioneers couldn’t do for theirs: They derived an equation relating the dynamics of the pilot waves to the particles’ trajectories.
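
For context (this is the textbook quantum pilot-wave relation, not the fluid-system equation Bush and Molacek derived), de Broglie’s guidance condition ties the particle velocity to the phase of the wave. Writing the wave function in polar form, \( \psi = R\,e^{iS/\hbar} \), the particle follows

\[
  \frac{d\mathbf{x}}{dt} \;=\; \frac{\nabla S}{m} \;=\; \frac{\hbar}{m}\,\operatorname{Im}\!\left(\frac{\nabla \psi}{\psi}\right),
\]

so the wave “pilots” the particle along a definite trajectory, while an ensemble of such trajectories reproduces the familiar wavelike statistics.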

In their work, Bush and Molacek had two advantages over the quantum pioneers, Bush says. First, in the fluidic system, both the bouncing droplet and its guiding wave are plainly visible. If the droplet passes through a slit in a barrier — as it does in the re-creation of a canonical quantum experiment — the researchers can accurately determine its location. The only way to perform a measurement on an atomic-scale particle is to strike it with another particle, which changes its velocity.

The second advantage is the relatively recent development of chaos theory. Pioneered by MIT’s Edward Lorenz in the 1960s, chaos theory holds that many macroscopic physical systems are so sensitive to initial conditions that, even though they can be described by a deterministic theory, they evolve in unpredictable ways. A weather-system model, for instance, might yield entirely different results if the wind speed at a particular location at a particular time is 10.01 mph or 10.02 mph.
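
The weather example can be made concrete with the Lorenz (1963) equations, the canonical chaotic system Lorenz extracted from a convection model. The sketch below, my own minimal illustration with arbitrary integration settings, evolves two copies of the system whose starting points differ by one part in a million and shows that they end up in entirely different states.

# Minimal illustration of sensitive dependence on initial conditions using the
# Lorenz (1963) system with its standard chaotic parameters (sigma=10, rho=28,
# beta=8/3). The 1e-6 offset and the integration settings are arbitrary choices.
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(state, dt=0.01, steps=3000):
    """Crude fixed-step fourth-order Runge-Kutta integration."""
    for _ in range(steps):
        k1 = lorenz(state)
        k2 = lorenz(state + 0.5 * dt * k1)
        k3 = lorenz(state + 0.5 * dt * k2)
        k4 = lorenz(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return state

a = integrate(np.array([1.0, 1.0, 1.0]))
b = integrate(np.array([1.0 + 1e-6, 1.0, 1.0]))  # shifted by one part in a million
print(a)
print(b)
print("separation:", np.linalg.norm(a - b))  # comparable to the size of the attractor itself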

The fluidic pilot-wave system is also chaotic. It’s impossible to measure a bouncing droplet’s position accurately enough to predict its trajectory very far into the future. But in a recent series of papers, Bush, MIT professor of applied mathematics Ruben Rosales, and graduate students Anand Oza and Dan Harris applied their pilot-wave theory to show how chaotic pilot-wave dynamics leads to the quantumlike statistics observed in their experiments.

What’s real?

In a review article appearing in the Annual Review of Fluid Mechanics, Bush explores the connection between Couder’s fluidic system and the quantum pilot-wave theories proposed by de Broglie and others.

The Copenhagen interpretation is essentially the assertion that in the quantum realm, there is no description deeper than the statistical one. When a measurement is made on a quantum particle and the wave function collapses, the determinate state that the particle assumes is totally random. According to the Copenhagen interpretation, the statistics don’t just describe the reality; they are the reality.

But despite the ascendancy of the Copenhagen interpretation, the intuition that physical objects, no matter how small, can be in only one location at a time has been difficult for physicists to shake. Albert Einstein, who famously doubted that God plays dice with the universe, worked for a time on what he called a “ghost wave” theory of quantum mechanics, thought to be an elaboration of de Broglie’s theory. Speaking at the 1976 Nobel Conference, Murray Gell-Mann declared that Niels Bohr, the chief exponent of the Copenhagen interpretation, “brainwashed an entire generation of physicists into believing that the problem had been solved.” John Bell, the Irish physicist whose famous theorem is often mistakenly taken to repudiate all “hidden-variable” accounts of quantum mechanics, was, in fact, himself a proponent of pilot-wave theory. “It is a great mystery to me that it was so soundly ignored,” he said.

Then there’s David Griffiths, a physicist whose “Introduction to Quantum Mechanics” is standard in the field. In that book’s afterword, Griffiths says that the Copenhagen interpretation “has stood the test of time and emerged unscathed from every experimental challenge.” Nonetheless, he concludes, “It is entirely possible that future generations will look back, from the vantage point of a more sophisticated theory, and wonder how we could have been so gullible.”

“The work of Yves Couder and the related work of John Bush … provides the possibility of understanding previously incomprehensible quantum phenomena, involving ‘wave-particle duality,’ in purely classical terms,” says Keith Moffatt, a professor emeritus of mathematical physics at Cambridge University. “I think the work is brilliant, one of the most exciting developments in fluid mechanics of the current century.”

Journal Reference:

  1. John W.M. Bush. Pilot-Wave Hydrodynamics. Annual Review of Fluid Mechanics, 2014; DOI: 10.1146/annurev-fluid-010814-014506

Quantum theory, multiple universes, and the fate of human consciousness after death (Biocentrism, Robert Lanza)

[Blog editor’s note: the Portuguese title of this article is not faithful to the original English title and is sensationalist in character. As this blog is a press archive, I have not altered the title.]

Scientists prove human reincarnation (Duniverso)

Undated; accessed 14 September 2014. For as long as the world has existed we have debated and tried to discover what lies beyond death. This time quantum science explains and proves that there is indeed (non-physical) life after the death of any human being. A book entitled “Biocentrism: How Life and Consciousness Are the Keys to Understanding the Nature of the Universe” caused a stir on the Internet because it contained the notion that life does not end when the body dies and can last forever. The author of this publication, the scientist Dr. Robert Lanza, voted the third most important living scientist by the NY Times, has no doubt that this is possible.

Beyond time and space

Lanza is a specialist in regenerative medicine and the scientific director of the Advanced Cell Technology Company. In the past he became known for his extensive research on stem cells and for several successful experiments in cloning endangered animal species. Not long ago, however, the scientist turned to physics, quantum mechanics, and astrophysics. This explosive mixture gave rise to the new theory of biocentrism, which he has been preaching ever since. Biocentrism teaches that life and consciousness are fundamental to the universe: it is consciousness that creates the material universe, not the other way around. Lanza points to the structure of the universe itself and says that its laws, forces, and constants appear to be fine-tuned for life, that is, that intelligence existed before matter. He also claims that space and time are not objects or things but tools of our animal understanding. Lanza says that we carry space and time around with us “like turtles with shells”, meaning that when the shell comes off, we still exist.

The theory suggests that the death of consciousness simply does not exist. It exists only as a thought, because people identify themselves with their bodies. They believe that the body will die sooner or later and think that their consciousness will disappear with it. If the body generates consciousness, then consciousness dies when the body dies. But if the body receives consciousness in the same way that a cable box receives satellite signals, then of course consciousness does not end with the death of the physical vehicle. In fact, consciousness exists outside the constraints of time and space. It is able to be anywhere: in the human body and outside of it. In other words, it is non-local, in the same sense that quantum objects are non-local. Lanza also believes that multiple universes can exist simultaneously. In one universe the body may be dead, while in another it continues to exist, absorbing the consciousness that migrated into that universe. This means that a dead person, travelling through the same tunnel, ends up not in hell or in heaven but in a world similar to the one he or she inhabited, only this time alive. And so on, infinitely, almost like a cosmic afterlife effect.

Multiple worlds

It is not only mere mortals who want to live forever; some renowned scientists share Lanza’s opinion. They are the physicists and astrophysicists who tend to accept the existence of parallel worlds and who suggest the possibility of multiple universes. The multiverse (multi-universe) is the scientific concept of the theory they defend. They believe that no physical laws exist that would prohibit the existence of parallel worlds.


The first to speak of this was the science fiction writer H. G. Wells, in 1895, with the story “The Door in the Wall”. Sixty-two years later the idea was developed by Dr. Hugh Everett in his graduate thesis at Princeton University. It basically postulates that at any given moment the universe splits into countless similar instances, and that in the next moment these “newborn” universes split in a similar fashion. In some of these worlds we may be present, reading this article in one universe and watching TV in another. In the 1980s Andrei Linde, a scientist at the Lebedev Physical Institute, developed the theory of multiple universes. Now a professor at Stanford University, Linde explains it this way: space consists of many inflating spheres which give rise to similar spheres, and those, in turn, produce spheres in even greater numbers, and so on to infinity. Within the universe they are separated from one another. They are not aware of each other’s existence, but they represent parts of one and the same physical universe. The physicist Laura Mersini-Houghton of the University of North Carolina and her colleagues argue that the anomalies in the cosmic background exist because our universe is influenced by other universes nearby, and that holes and gaps are a direct result of attacks on us by neighbouring universes.

Soul

So there is an abundance of places, or other universes, to which our soul could migrate after death, according to the theory of neo-biocentrism. But does the soul exist? Is there any scientific theory of consciousness that could accommodate such a claim? According to Dr. Stuart Hameroff, a near-death experience happens when the quantum information that inhabits the nervous system leaves the body and dissipates into the universe. Contrary to what materialists maintain, Dr. Hameroff offers an alternative explanation of consciousness that may perhaps appeal to the rational scientific mind as well as to personal intuition. Consciousness resides, according to Hameroff and the British physicist Sir Roger Penrose, in the microtubules of brain cells, which are the primary sites of quantum processing. After death this information is released from the body, meaning that your consciousness goes with it. They have argued that our experience of consciousness is the result of quantum-gravity effects in these microtubules, a theory they named orchestrated objective reduction. Consciousness, or at least proto-consciousness, is theorized by them to be a fundamental property of the universe, present even at the first moment of the universe, during the Big Bang. “In one such scheme proto-conscious experience is a basic property of physical reality accessible to a quantum process associated with brain activity.” Our souls are in fact built from the very fabric of the universe and may have existed since the beginning of time. Our brains are just receivers and amplifiers for the proto-consciousness that is intrinsic to the fabric of space-time. So there really is a part of your consciousness that is non-material and will live on after the death of your physical body.

Dr. Hameroff told the Science Channel’s Through the Wormhole documentary: “Let’s say the heart stops beating, the blood stops flowing, and the microtubules lose their quantum state. The quantum information within the microtubules is not destroyed, it can’t be destroyed, it just distributes and dissipates to the universe at large.” Robert Lanza adds here that it does not only exist in a single universe; it perhaps exists in another universe. If the patient is resuscitated, this quantum information can return to the microtubules and the patient says: “I had a near-death experience.” He adds: “If he is not revived and the patient dies, it is possible that this quantum information can exist outside the body, perhaps indefinitely, as a soul.” This account of quantum consciousness explains things like near-death experiences, astral projection, out-of-body experiences, and even reincarnation without any need to resort to religious ideology. The energy of your consciousness is potentially recycled back into a different body at some point, and in the meantime it exists outside the physical body on some other level of reality, and possibly in another universe.

And you, what do you think? Do you agree with Lanza?

A big hug!

Recommended by: Pedro Lopes Martins. Article originally published in English on the website SPIRIT SCIENCE AND METAPHYSICS.

*   *   *

Scientists Claim That Quantum Theory Proves Consciousness Moves To Another Universe At Death

STEVEN BANCARZ, JANUARY 7, 2014

A book titled “Biocentrism: How Life and Consciousness Are the Keys to Understanding the Nature of the Universe” has stirred up the Internet, because it contained a notion that life does not end when the body dies, and it can last forever. The author of this publication, scientist Dr. Robert Lanza who was voted the 3rd most important scientist alive by the NY Times, has no doubts that this is possible.

Lanza is an expert in regenerative medicine and scientific director of the Advanced Cell Technology Company. He was previously known for his extensive research on stem cells, and for several successful experiments on cloning endangered animal species. But not so long ago, the scientist became involved with physics, quantum mechanics and astrophysics. This explosive mixture has given birth to the new theory of biocentrism, which the professor has been preaching ever since.

Biocentrism teaches that life and consciousness are fundamental to the universe. It is consciousness that creates the material universe, not the other way around. Lanza points to the structure of the universe itself, and to the fact that the laws, forces, and constants of the universe appear to be fine-tuned for life, implying that intelligence existed prior to matter. He also claims that space and time are not objects or things, but rather tools of our animal understanding. Lanza says that we carry space and time around with us “like turtles with shells,” meaning that when the shell comes off (space and time), we still exist.

The theory implies that death of consciousness simply does not exist. It only exists as a thought because people identify themselves with their body. They believe that the body is going to perish, sooner or later, thinking their consciousness will disappear too. If the body generates consciousness, then consciousness dies when the body dies. But if the body receives consciousness in the same way that a cable box receives satellite signals, then of course consciousness does not end at the death of the physical vehicle. In fact, consciousness exists outside of the constraints of time and space. It is able to be anywhere: in the human body and outside of it. In other words, it is non-local in the same sense that quantum objects are non-local. Lanza also believes that multiple universes can exist simultaneously. In one universe, the body can be dead. And in another it continues to exist, absorbing consciousness which migrated into this universe. This means that a dead person, while traveling through the same tunnel, ends up not in hell or in heaven, but in a similar world he or she once inhabited, but this time alive. And so on, infinitely. It’s almost like a cosmic Russian doll afterlife effect.

Multiple worlds

This hope-instilling but extremely controversial theory by Lanza has many unwitting supporters, not just mere mortals who want to live forever, but also some well-known scientists. These are the physicists and astrophysicists who tend to agree with the existence of parallel worlds and who suggest the possibility of multiple universes. The multiverse (multi-universe) is the so-called scientific concept which they defend. They believe that no physical laws exist which would prohibit the existence of parallel worlds.

The first was the science fiction writer H.G. Wells, who raised the idea in 1895 in his story “The Door in the Wall”. Sixty-two years later, this idea was developed by Dr. Hugh Everett in his graduate thesis at Princeton University. It basically posits that at any given moment the universe divides into countless similar instances, and the next moment these “newborn” universes split in a similar fashion. In some of these worlds you may be present: reading this article in one universe, or watching TV in another. The triggering factor for these multiplying worlds is our actions, explained Everett: if we make a choice, one universe instantly splits into two with different versions of the outcome.

In the 1980s, Andrei Linde, a scientist from the Lebedev Physical Institute, developed the theory of multiple universes. He is now a professor at Stanford University. Linde explained: space consists of many inflating spheres, which give rise to similar spheres, and those, in turn, produce spheres in even greater numbers, and so on to infinity. In the universe, they are spaced apart. They are not aware of each other’s existence. But they represent parts of the same physical universe.

The fact that our universe is not alone is supported by data received from the Planck space telescope. Using the data, scientists have created the most accurate map of the microwave background, the so-called cosmic relic background radiation, which has remained since the inception of our universe. They also found that the universe has a lot of dark recesses represented by some holes and extensive gaps. Theoretical physicist Laura Mersini-Houghton of the University of North Carolina and her colleagues argue that the anomalies of the microwave background exist due to the fact that our universe is influenced by other universes existing nearby, and that holes and gaps are a direct result of attacks on us by neighbouring universes.

Soul

So, there is an abundance of places or other universes where our soul could migrate after death, according to the theory of neo-biocentrism. But does the soul exist? Is there any scientific theory of consciousness that could accommodate such a claim? According to Dr. Stuart Hameroff, a near-death experience happens when the quantum information that inhabits the nervous system leaves the body and dissipates into the universe. Contrary to materialistic accounts of consciousness, Dr. Hameroff offers an alternative explanation of consciousness that can perhaps appeal to both the rational scientific mind and personal intuitions.

Consciousness resides, according to Hameroff and British physicist Sir Roger Penrose, in the microtubules of the brain cells, which are the primary sites of quantum processing. Upon death, this information is released from your body, meaning that your consciousness goes with it. They have argued that our experience of consciousness is the result of quantum gravity effects in these microtubules, a theory which they dubbed orchestrated objective reduction (Orch-OR). Consciousness, or at least proto-consciousness, is theorized by them to be a fundamental property of the universe, present even at the first moment of the universe during the Big Bang. “In one such scheme proto-conscious experience is a basic property of physical reality accessible to a quantum process associated with brain activity.” Our souls are in fact constructed from the very fabric of the universe, and may have existed since the beginning of time. Our brains are just receivers and amplifiers for the proto-consciousness that is intrinsic to the fabric of space-time. So is there really a part of your consciousness that is non-material and will live on after the death of your physical body?

Dr. Hameroff told the Science Channel’s Through the Wormhole documentary: “Let’s say the heart stops beating, the blood stops flowing, the microtubules lose their quantum state. The quantum information within the microtubules is not destroyed, it can’t be destroyed, it just distributes and dissipates to the universe at large.” Robert Lanza would add here that not only does it exist in the universe, it exists perhaps in another universe. If the patient is resuscitated, revived, this quantum information can go back into the microtubules and the patient says, “I had a near-death experience.”

He adds: “If they’re not revived, and the patient dies, it’s possible that this quantum information can exist outside the body, perhaps indefinitely, as a soul.”

This account of quantum consciousness explains things like near-death experiences, astral projection, out-of-body experiences, and even reincarnation without needing to appeal to religious ideology. The energy of your consciousness potentially gets recycled back into a different body at some point, and in the meantime it exists outside of the physical body on some other level of reality, and possibly in another universe.

Robert Lanza on Biocentrism:

Sources:
http://www.learning-mind.com/quantum-theory-proves-that-consciousness-moves-to-another-universe-after-death/
http://en.wikipedia.org/wiki/Biocentric_universe
http://www.dailymail.co.uk/sciencetech/article-2225190/Can-quantum-physics-explain-bizarre-experiences-patients-brought-brink-death.html#axzz2JyudSqhB
http://www.news.com.au/news/quantum-scientists-offer-proof-soul-exists/story-fnenjnc3-1226507686757
http://www.psychologytoday.com/blog/biocentrism/201112/does-the-soul-exist-evidence-says-yes
http://www.hameroff.com/penrose-hameroff/fundamentality.html

– See more at: http://www.spiritscienceandmetaphysics.com/scientists-claim-that-quantum-theory-proves-consciousness-moves-to-another-universe-at-death/

Physicists, alchemists, and ayahuasca shamans: A study of grammar and the body (Cultural Admixtures)


Are there any common denominators that may underlie the practices of leading physicists and scientists, Renaissance alchemists, and indigenous Amazonian ayahuasca healers? There are obviously a myriad of things that these practices do not have in common. Yet through an analysis of the body and the senses, and of styles of grammar and social practice, these seemingly very different modes of existence may be triangulated to reveal a curious set of logics at play. Ways in which practitioners identify their subjectivities (or ‘selves’) with nonhuman entities and ‘natural’ processes are detailed in the three contexts. A logic of identification illustrates similarities, and also differences, in the practices of advanced physics, Renaissance alchemy, and ayahuasca healing.

Physics and the “I” and “You” of experimentation


A small group of physicists at a leading American university in the early 1990s are investigating magnetic temporality and atomic spins in a crystalline lattice, undertaking experiments within the field of condensed matter physics. The scientists collaborate, presenting experimental or theoretical findings on blackboards, overhead projectors, printed pages, and various other forms of visual media. Miguel, a researcher, describes to a colleague the experiments he has just conducted. He points down and then up across a visual representation of the experiment while describing part of it: “We lowered the field [and] raised the field”. In response, his collaborator Ron replies using what is a common type of informal scientific language, a style that identifies, conflates, or brings together the researcher and the object being researched. In the following reply, the pronoun ‘he’ refers both to Miguel and to the object or process under investigation. Ron asks, “Is there a possibility that he hasn’t seen anything real? I mean is there a [he points to the diagram]”. Miguel sharply interjects, “I-, i-, it is possible… I am amazed by his measurement because when I come down I’m in the domain state”. Here Miguel is referring to a physical process of temperature change, a cooling that moves ‘down’ to the ‘domain state’. Ron replies, “You quench from five to two tesla, a magnet, a superconducting magnet”. What is central here, with regard to the common denominators explored here, is the way in which the scientists collaborate through figurative styles of language that blur the borders between physicist and physical process or state.

The collaboration between Miguel and Ron was filmed and examined by the linguistic ethnographers Elinor Ochs, Sally Jacoby, and Patrick Gonzales (1994, 1996:328). In the experiment, Ochs et al. illustrate, the physicists refer to ‘themselves as the thematic agents and experiencers of [the physical] phenomena’ (Ochs et al. 1996:335). By employing the pronouns ‘you’, ‘he’, and ‘I’ to refer to the physical processes and states under investigation, the physicists identify their own subjectivities, bodies, and investigations with the objects they are studying.

In the physics laboratory, members are trying to understand physical worlds that are not directly accessible by any of their perceptual abilities. To bridge this gap, it seems, they take embodied interpretive journeys across and through see-able, touchable two-dimensional artefacts that conventionally symbolize those worlds… Their sensory-motor gesturing is a means not only of representing (possible) worlds but also of imagining or vicariously experiencing them… Through verbal and gestural (re)enactments of constructed physical processes, physicist and physical entity are conjoined in simultaneous, multiple constructed worlds: the here-and-now interaction, the visual representation, and the represented physical process. The indeterminate grammatical constructions, along with gestural journeys through visual displays, constitute physicist and physical entity as coexperiencers of dynamic processes and, therefore, as coreferents of the personal pronoun. (Ochs et al 1994:163,164)

When Miguel says “I am in the domain state” he is using a type of ‘private, informal scientific discourse’ that has been observed in many other types of scientific practice (Latour & Woolgar 1987; Gilbert & Mulkay 1984). This style of erudition and scientific collaboration has evidently become established in state-of-the-art universities given the utility it provides with regard to empirical problems and the development of scientific ideas.

What could this style of practice have in common with the healing practices of Amazonian shamans drinking the powerful psychoactive brew ayahuasca? Before moving on to an analysis of grammar and the body in types of ayahuasca use, the practice of Renaissance alchemy is introduced given the bridge or resemblance it offers between these scientific practices and certain notions of healing.

Renaissance alchemy, “As above so below”


Heinrich Khunrath: 1595 engraving Amphitheatre

Graduating from the Basel Medical Academy in 1588, the physician Heinrich Khunrath defended a thesis concerning a particular development of the relationship between alchemy and medicine. Inspired by key figures in Roman and Greek medicine, by alchemists and practitioners of the hermetic arts, and by botanists, philosophers, and others, Khunrath went on to produce innovative and influential texts and illustrations that informed various trajectories in medical and occult practice.

Alchemy flourished in the Renaissance period and was drawn upon by elites such as Queen Elizabeth I and the Holy Roman Emperor Rudolf II. Central to the practices of Renaissance alchemists was the belief that all metals sprang from one source deep within the earth and that this process could be reversed, so that every metal could potentially be turned into gold. The process of ‘transmutation’, or reversal of nature, it was claimed, could also lead to the elixir of life, the philosopher’s stone, or eternal youth and immortality. It was a spiritual pursuit of purification and regeneration which depended heavily on natural-science experimentation.

Alchemical experiments were typically undertaken in a laboratory and alchemists were often contracted by elites for pragmatic purposes related to mining, medical services, and the production of chemicals, metals, and gemstones (Nummedal 2007). Allison Coudert describes and distills the practice of Renaissance alchemy with a basic overview of the relationship between an alchemist and the ‘natural entities’ of his practice.

All the ingredients mentioned in alchemical recipes—the minerals, metals, acids, compounds, and mixtures—were in truth only one, the alchemist himself. He was the base matter in need of purification from the fire; and the acid needed to accomplish this transformation came from his own spiritual malaise and longing for wholeness and peace. The various alchemical processes… were steps in the mysterious process of spiritual regeneration. (cited in Hanegraaff 1996:395)

The physician-alchemist Khunrath worked within a laboratory/oratory that included various alchemical apparatuses, including ‘smelting equipment for the extraction of metal from ore… glass vessels, ovens… [a] furnace or athanor… [and] a mirror’. Khunrath spoke of using the mirror as a ‘physico-magical instrument for setting a coal or lamp-fire alight by the heat of the sun’ (Forshaw 2005:205). Urszula Szulakowska argues that this use of the mirror embodies the general alchemical process and purpose of Khunrath’s practice. The functions of his practice and of his alchemical illustrations and glyphs (such as his engraving Amphitheatre above) are aimed towards various outcomes of transmutation or reversal of nature. Khunrath’s engravings and illustrations, Szulakowska (2000:9) argues:

are intended to excite the imagination of the viewer so that a mystic alchemy can take place through the act of visual contemplation… Khunrath’s theatre of images, like a mirror, catoptrically reflects the celestial spheres to the human mind, awakening the empathetic faculty of the human spirit which unites, through the imagination, with the heavenly realms. Thus, the visual imagery of Khunrath’s treatises has become the alchemical quintessence, the spiritualized matter of the philosopher’s stone.

Khunrath called himself a ‘lover of both medicines’, referring to the inseparability of material and spiritual forms of medicine.  Illustrating the centrality of alchemical practice in his medical approach, he described his ‘down-to-earth Physical-Chemistry of Nature’ as:

[T]he art of chemically dissolving, purifying and rightly reuniting Physical Things by Nature’s method; the Universal (Macro-Cosmically, the Philosopher’s Stone; Micro-Cosmically, the parts of the human body…) and ALL the particulars of the inferior globe. (cited in Forshaw 2005:205).

In Renaissance alchemy there is a certain kind of laboratory visionary mixing that happens between the human body and the human temperaments and ‘entities’ and processes of the natural world. This is condensed in the hermetic dictum “As above, so below” where the signatures of nature (‘above’) may be found in the human body (‘below’). The experiments involved certain practices of perception, contemplation, and language, that were undertaken in laboratory settings.

The practice of Renaissance alchemy, illustrated in recipes, glyphs, and instructional texts, includes styles of grammar in which minerals, metals, and other natural entities are animated with subjectivity and human temperaments. Lead “wants” or “desires” to transmute into gold; antimony feels a wilful “attraction” to silver (Kaiser 2010; Waite 1894). This form of grammar is entailed in the doctrine of medico-alchemical practice described by Khunrath above. Under certain circumstances and conditions, minerals, metals, and other natural entities may embody aspects of ‘Yourself’, or the subjectivity of the alchemist, and vice versa.

Renaissance alchemical language and practice bears a certain resemblance to the contemporary practices of physicists and scientists and the ways in which they identify themselves with the objects and processes of their experiments. The methods of physicists differ considerably insofar as they use metaphors and trade spiritual for figurative approaches when ‘journeying through’ cognitive tasks, embodied gestures, and visual representations of empirical or natural processes. It is no coincidence that contemporary state-of-the-art scientists employ forms of alchemical language and practice in advanced types of experimentation: alchemical and hermetic thought and practice were highly influential in the emergence of modern forms of science (Moran 2006; Newman 2006; Hanegraaff 2012).

Ayahuasca shamanism and shapeshifting


Pablo Amaringo

In the Amazon jungle there exists a type of practice radically different from the Renaissance alchemical traditions. Yet, as we will see, the practices of indigenous Amazonian shamans and Renaissance alchemists share certain similarities, particularly in the way that ‘natural entities’ and the subjectivity of the practitioner may merge or swap positions. This is evidenced in the grammar and language of shamanic healing songs and in Amazonian cosmologies more generally.

In the late 1980s, Cambridge anthropologist Graham Townsley was undertaking PhD fieldwork with the indigenous Amazonian Yaminahua on the Yurua river. His research was focused on ways in which forms of social organisation are embedded in cosmology and the practice of everyday life. Yaminahua healing practices are embedded in broad animistic cosmological frames and at the centre of these healing practices is song. ‘What Yaminahua shamans do, above everything else, is sing’, Townsley explains, and this ritual singing is typically done while under the effects of the psychoactive concoction ayahuasca.

The psychoactive drink provides shamans with a means of drawing upon the healing assistance of benevolent spirit persons of the natural world (such as plant-persons, animal-persons, sun-persons etc.) and of banishing malevolent spirit persons that are affecting the wellbeing of a patient. The Yaminahua practice of ayahuasca shamanism resembles broader types of Amazonian shamanism. Shapeshifting, or the metamorphosis of human persons into nonhuman persons (such as jaguar-persons and anaconda-persons) is central to understandings of illness and to practices of healing in various types of Amazonian shamanism (Chaumeil 1992; Praet 2009; Riviere 1994).

The grammatical styles and sensory experiences of indigenous ayahuasca curing rituals and songs bear some similarities to the logic of identification noted in the sections on physics and alchemy above. Townsley (1993) describes a Yaminahua ritual in which a shaman attempts to heal a patient who was still bleeding several days after giving birth. The healing songs that the shaman sings (called wai, a word which also means ‘path’ and ‘myth’, or abodes of the spirits) make very little reference to the illness they are meant to heal. The shaman’s songs do not communicate meanings to the patient; rather, they embody complex metaphors and analogies, or what the Yaminahua call ‘twisted language’, a language comprehensible only to shamans. ‘Perceptual resemblances’ inform the logic of Yaminahua twisted language: for example, “white-collared peccaries” stands for fish, given the similarities between the gills of the fish and the designs on the peccary’s neck. The use of visual or sensory resonance in shamanic song metaphors is not arbitrary but central to the practice of Yaminahua ayahuasca healing.

Ayahuasca typically produces a powerful visionary experience. The shaman’s use of complex metaphors in ritual song helps him shape his visions and bring a level of control to the visionary content. Resembling the common denominators and logic of identification explored above, the songs allow the shaman to perceive from the various perspectives that the meanings of the metaphors (or the spirits) afford.

Everything said about shamanic songs points to the fact that as they are sung the shaman actively visualizes the images referred to by the external analogy of the song, but he does this through a carefully controlled “seeing as” the different things actually named by the internal metaphors of his song. This “seeing as” in some way creates a space in which powerful visionary experience can occur. (Townsley 1993:460)

The use of analogies and metaphors provides a particularly powerful means of navigating the visionary experience of ayahuasca. There appears to be a kind of pragmatics involved in the use of metaphor over literal meanings. For instance, a shaman states, “twisted language brings me close but not too close [to the meanings of the metaphors]–with normal words I would crash into things–with twisted ones I circle around them–I can see them clearly” (Townsley 1993:460). Through this method of “seeing as”, the shaman embodies a variety of animal and nature spirits, or yoshi in Yaminahua, including anaconda-yoshi, jaguar-yoshi and solar or sun-yoshi, in order to perform acts of healing and various other shamanic activities.

While Yaminahua shamans use metaphors to control visions and shapeshift (or “see as”), they, and Amazonians more generally, reportedly understand shapeshifting in literal terms. For example, Lenaerts describes this notion of ‘seeing like the spirits’ and the ‘physical’ or literal view that the Ashéninka hold with regard to the practice of ayahuasca-induced shapeshifting.

What is at stake here is a temporary bodily process, whereby a human being assumes the embodied point of view of another species… There is no need to appeal to any sort of metaphoric sense here. A literal interpretation of this process of disembodiment/re-embodiment is absolutely consistent with all what an Ashéninka knows and directly feels during this experience, in a quite physical sense. (2006:13)

The practices of indigenous ayahuasca shamans are centred on an ability to shapeshift and ‘see nonhumans as they [nonhumans] see themselves’ (Viveiros de Castro 2004:468). Practitioners not only identify with nonhuman persons or ‘natural entities’ but they embody their point of view with the help of psychoactive plants and  ‘twisted language’ in song.

Some final thoughts

Through a brief exploration of techniques employed by advanced physicists, Renaissance alchemists, and Amazonian ayahuasca shamans, a logic of identification may be observed in which practitioners embody different means of transcending themselves and becoming the objects or spirits of their respective practices. While the physicists tend to embody secular principles and relate to this logic of identification in a purely figurative or metaphorical sense, Renaissance alchemists and Amazonian shamans embody epistemological stances that afford much more weight to the existential qualities and ‘persons’ or ‘spirits’ of their respective practices. A cognitive value in employing forms of language and sensory experience that momentarily take the practitioner beyond him or herself is evidenced by these three different practices. However, there is arguably more at stake here than values confined to cogito. The boundaries of bodies, subjectivities and humanness in each of these practices become porous, blurred, and are transcended while the contours of various forms of possibility are exposed, defined, and acted upon — possibilities that inform the outcomes of the practices and the definitions of the human they imply.

References

Chaumeil, Jean-Pierre 1992, ‘Varieties of Amazonian Shamanism’. Diogenes, Vol. 158, p. 101
Forshaw, P. 2008, ‘“Paradoxes, Absurdities, and Madness”: Conflicts over Alchemy, Magic and Medicine in the Works of Andreas Libavius and Heinrich Khunrath’. Early Science and Medicine, Vol. 1, p. 53
Forshaw, P. 2006, ‘Alchemy in the Amphitheatre: Some Considerations of the Alchemical Content of the Engravings in Heinrich Khunrath’s Amphitheatre of Eternal Wisdom’, in Jacob Wamberg (ed.), Art and Alchemy, pp. 195-221
Gilbert, G. N. & Mulkay, M. 1984, Opening Pandora’s Box: A Sociological Analysis of Scientists’ Discourse. Cambridge, Cambridge University Press
Hanegraaff, W. 2012, Esotericism and the Academy: Rejected Knowledge in Western Culture. Cambridge, Cambridge University Press
Hanegraaff, W. 1996, New Age Religion and Western Culture: Esotericism in the Mirror of Secular Thought. New York, SUNY Press
Latour, B. & Woolgar, S. 1987, Laboratory Life: The Social Construction of Scientific Facts. Cambridge, MA, Harvard University Press
Lenaerts, M. 2006, ‘Substance, relationships and the omnipresence of the body: an overview of Ashéninka ethnomedicine (Western Amazonia)’. Journal of Ethnobiology and Ethnomedicine, Vol. 2 (1), 49. http://www.ethnobiomed.com/content/2/1/49
Moran, B. 2006, Distilling Knowledge: Alchemy, Chemistry, and the Scientific Revolution. Cambridge, MA, Harvard University Press
Newman, W. 2006, Atoms and Alchemy: Chymistry and the Experimental Origins of the Scientific Revolution. Chicago, University of Chicago Press
Nummedal, T. 2007, Alchemy and Authority in the Holy Roman Empire. Chicago, University of Chicago Press
Ochs, E., Gonzales, P. & Jacoby, S. 1996, ‘“When I come down I’m in the domain state”: Grammar and graphic representation in the interpretive activities of physicists’, in Ochs, E., Schegloff, E. & Thompson, S. (eds), Interaction and Grammar. Cambridge, Cambridge University Press
Ochs, E., Gonzales, P. & Jacoby, S. 1994, ‘Interpretive Journeys: How Physicists Talk and Travel through Graphic Space’. Configurations, (1), p. 151
Praet, I. 2009, ‘Shamanism and ritual in South America: an inquiry into Amerindian shape-shifting’. Journal of the Royal Anthropological Institute, Vol. 15, pp. 737-754
Riviere, P. 1994, ‘WYSINWYG in Amazonia’. Journal of the Anthropological Society of Oxford, Vol. 25
Szulakowska, U. 2000, The Alchemy of Light: Geometry and Optics in Late Renaissance Alchemical Illustration. Leiden, Brill
Townsley, G. 1993, ‘Song Paths: The Ways and Means of Yaminahua Shamanic Knowledge’. L’Homme, Vol. 33, p. 449
Viveiros de Castro, E. 2004, ‘Exchanging Perspectives: The Transformation of Objects into Subjects in Amerindian Ontologies’. Common Knowledge, Vol. 10 (3), pp. 463-484
Waite, A. 1894, The Hermetic and Alchemical Writings of Aureolus Philippus Theophrastus Bombast, of Hohenheim, called Paracelsus the Great. Cornell University Library, ebook