Tag archive: Quantum mechanics

Nobody understands what consciousness is or how it works. Nobody understands quantum mechanics either. Could that be more than coincidence? (BBC)

What is going on in our brains? (Credit: Mehau Kulyk/Science Photo Library)


Quantum mechanics is the best theory we have for describing the world at the nuts-and-bolts level of atoms and subatomic particles. Perhaps the most renowned of its mysteries is the fact that the outcome of a quantum experiment can change depending on whether or not we choose to measure some property of the particles involved.

When this “observer effect” was first noticed by the early pioneers of quantum theory, they were deeply troubled. It seemed to undermine the basic assumption behind all science: that there is an objective world out there, irrespective of us. If the way the world behaves depends on how – or if – we look at it, what can “reality” really mean?


Some of those researchers felt forced to conclude that objectivity was an illusion, and that consciousness has to be allowed an active role in quantum theory. To others, that did not make sense. Surely, Albert Einstein once complained, the Moon does not exist only when we look at it!

Today some physicists suspect that, whether or not consciousness influences quantum mechanics, it might in fact arise because of it. They think that quantum theory might be needed to fully understand how the brain works.

Might it be that, just as quantum objects can apparently be in two places at once, so a quantum brain can hold onto two mutually-exclusive ideas at the same time?

These ideas are speculative, and it may turn out that quantum physics has no fundamental role either for or in the workings of the mind. But if nothing else, these possibilities show just how strangely quantum theory forces us to think.

The famous double-slit experiment (Credit: Victor de Schwanberg/Science Photo Library)


The most famous intrusion of the mind into quantum mechanics comes in the “double-slit experiment”. Imagine shining a beam of light at a screen that contains two closely-spaced parallel slits. Some of the light passes through the slits, whereupon it strikes another screen.

Light can be thought of as a kind of wave, and when waves emerge from two slits like this they can interfere with each other. If their peaks coincide, they reinforce each other, whereas if a peak and a trough coincide, they cancel out. This wave interference produces a series of alternating bright and dark stripes on the back screen, where the light waves are either reinforced or cancelled out.
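The fringe geometry follows from a one-line formula: waves from the two slits arrive at a point on the screen with a path difference of roughly d·x/L, and the intensity rises and falls with the resulting phase. A minimal numerical sketch, with wavelength, slit separation, and screen distance as illustrative values not taken from the article:

```python
import numpy as np

# Two-slit interference in the small-angle approximation. The path
# difference between the slits at screen position x is about d*x/L,
# so the intensity is proportional to cos^2 of half the phase difference.
wavelength = 500e-9   # 500 nm light (illustrative)
d = 50e-6             # slit separation (illustrative)
L = 1.0               # slit-to-screen distance (illustrative)

x = np.linspace(-0.02, 0.02, 401)              # positions on the back screen
phase = 2 * np.pi * d * x / (L * wavelength)   # phase difference between paths
intensity = np.cos(phase / 2) ** 2             # 1 = bright stripe, 0 = dark

fringe_spacing = wavelength * L / d            # distance between bright stripes
print(f"fringe spacing: {fringe_spacing * 1e3:.1f} mm")
```

With these numbers the bright stripes sit 10 mm apart; plotting `intensity` against `x` reproduces the alternating pattern described above.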


This interference was recognised as a characteristic of wave behaviour more than 200 years ago, well before quantum theory existed.

The double-slit experiment can also be performed with quantum particles like electrons, tiny charged particles that are components of atoms. In a counter-intuitive twist, these particles can behave like waves. That means they can undergo diffraction when a stream of them passes through the two slits, producing an interference pattern.

Now suppose that the quantum particles are sent through the slits one by one, and their arrival at the screen is likewise seen one by one. There is then apparently nothing for each particle to interfere with along its route – yet the pattern of particle impacts that builds up over time still reveals interference bands.

The implication seems to be that each particle passes simultaneously through both slits and interferes with itself. This combination of “both paths at once” is known as a superposition state.
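The difference between a superposition and mere ignorance of the path shows up in the arithmetic: quantum amplitudes add before being squared, so a cross term (the interference) appears that classical probabilities lack. A toy calculation, with amplitudes chosen purely for illustration:

```python
import numpy as np

# Complex amplitudes for "via slit A" and "via slit B". At a dark fringe
# the two paths arrive exactly out of phase (relative phase pi).
a = 1 / np.sqrt(2)                       # amplitude via slit A
b = np.exp(1j * np.pi) / np.sqrt(2)      # amplitude via slit B

p_quantum = abs(a + b) ** 2              # amplitudes interfere first: ~0
p_classical = abs(a) ** 2 + abs(b) ** 2  # probabilities just add: 1.0

print(p_quantum, p_classical)
```

Detecting which slit the particle used removes the cross term, leaving only the classical sum, which is exactly the disappearance of the fringes described next.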

But here is the really odd thing.

The double-slit experiment (Credit: GIPhotoStock/Science Photo Library)


If we place a detector inside or just behind one slit, we can find out whether any given particle goes through it or not. In that case, however, the interference vanishes. Simply by observing a particle’s path – even if that observation should not disturb the particle’s motion – we change the outcome.

The physicist Pascual Jordan, who worked with quantum guru Niels Bohr in Copenhagen in the 1920s, put it like this: “observations not only disturb what has to be measured, they produce it… We compel [a quantum particle] to assume a definite position.” In other words, Jordan said, “we ourselves produce the results of measurements.”

If that is so, objective reality seems to go out of the window.

And it gets even stranger.

Particles can be in two states (Credit: Victor de Schwanberg/Science Photo Library)


If nature seems to be changing its behaviour depending on whether we “look” or not, we could try to trick it into showing its hand. To do so, we could measure which path a particle took through the double slits, but only after it has passed through them. By then, it ought to have “decided” whether to take one path or both.


An experiment for doing this was proposed in the 1970s by the American physicist John Wheeler, and this “delayed choice” experiment was performed in the following decade. It uses clever techniques to make measurements on the paths of quantum particles (generally, particles of light, called photons) after they should have chosen whether to take one path or a superposition of two.

It turns out that, just as Bohr confidently predicted, it makes no difference whether we delay the measurement or not. As long as we measure the photon’s path before its arrival at a detector is finally registered, we lose all interference.

It is as if nature “knows” not just if we are looking, but if we are planning to look.

Eugene Wigner (Credit: Emilio Segre Visual Archives/American Institute of Physics/Science Photo Library)


Whenever, in these experiments, we discover the path of a quantum particle, its cloud of possible routes “collapses” into a single well-defined state. What’s more, the delayed-choice experiment implies that the sheer act of noticing, rather than any physical disturbance caused by measuring, can cause the collapse. But does this mean that true collapse has only happened when the result of a measurement impinges on our consciousness?


That possibility was admitted in the 1930s by the Hungarian physicist Eugene Wigner. “It follows that the quantum description of objects is influenced by impressions entering my consciousness,” he wrote. “Solipsism may be logically consistent with present quantum mechanics.”

Wheeler even entertained the thought that the presence of living beings, which are capable of “noticing”, has transformed what was previously a multitude of possible quantum pasts into one concrete history. In this sense, Wheeler said, we become participants in the evolution of the Universe since its very beginning. In his words, we live in a “participatory universe.”

To this day, physicists do not agree on the best way to interpret these quantum experiments, and to some extent what you make of them is (at the moment) up to you. But one way or another, it is hard to avoid the implication that consciousness and quantum mechanics are somehow linked.

Beginning in the 1980s, the British physicist Roger Penrose suggested that the link might work in the other direction. Whether or not consciousness can affect quantum mechanics, he said, perhaps quantum mechanics is involved in consciousness.

Physicist and mathematician Roger Penrose (Credit: Max Alexander/Science Photo Library)


What if, Penrose asked, there are molecular structures in our brains that can alter their state in response to a single quantum event? Could these structures then adopt a superposition state, just like the particles in the double-slit experiment? And might those quantum superpositions then show up in the ways neurons are triggered to communicate via electrical signals?

Maybe, says Penrose, our ability to sustain seemingly incompatible mental states is no quirk of perception, but a real quantum effect.


After all, the human brain seems able to handle cognitive processes that still far exceed the capabilities of digital computers. Perhaps we can even carry out computational tasks that are impossible on ordinary computers, which use classical digital logic.

Penrose first proposed that quantum effects feature in human cognition in his 1989 book The Emperor’s New Mind. The idea is called Orch-OR, which is short for “orchestrated objective reduction”. The phrase “objective reduction” means that, as Penrose believes, the collapse of quantum interference and superposition is a real, physical process, like the bursting of a bubble.

Orch-OR draws on Penrose’s suggestion that gravity is responsible for the fact that everyday objects, such as chairs and planets, do not display quantum effects. Penrose believes that quantum superpositions become impossible for objects much larger than atoms, because their gravitational effects would then force two incompatible versions of space-time to coexist.

Penrose developed this idea further with the American anesthesiologist Stuart Hameroff. In his 1994 book Shadows of the Mind, he suggested that the structures involved in this quantum cognition might be protein strands called microtubules. These are found in most of our cells, including the neurons in our brains. Penrose and Hameroff argue that vibrations of microtubules can adopt a quantum superposition.

But there is no evidence that such a thing is remotely feasible.

Microtubules inside a cell (Credit: Dennis Kunkel Microscopy/Science Photo Library)


It has been suggested that the idea of quantum superpositions in microtubules is supported by experiments described in 2013, but in fact those studies made no mention of quantum effects.

Besides, most researchers think that the Orch-OR idea was ruled out by a study published in 2000. Physicist Max Tegmark calculated that quantum superpositions of the molecules involved in neural signaling could not survive for even a fraction of the time needed for such a signal to get anywhere.


Quantum effects such as superposition are easily destroyed, because of a process called decoherence. This is caused by the interactions of a quantum object with its surrounding environment, through which the “quantumness” leaks away.

Decoherence is expected to be extremely rapid in warm and wet environments like living cells.

Nerve signals are electrical pulses, caused by the passage of electrically-charged atoms across the walls of nerve cells. If one of these atoms were in a superposition and then collided with a neuron, Tegmark showed that the superposition should decay in less than one billion billionth of a second. It takes at least ten thousand trillion times as long for a neuron to discharge a signal.
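Taking the article's round figures at face value, the mismatch can be checked in two lines (these are the orders of magnitude from the text above, not Tegmark's detailed calculation):

```python
# "One billion billionth of a second" is 1e-18 s; "ten thousand trillion"
# is a factor of 1e16. The implied minimum neural signalling time follows.
decoherence_time = 1e-18   # seconds, upper bound from the text
factor = 1e16              # how much longer neural firing takes

neural_time = decoherence_time * factor
print(f"implied neural signalling time: at least {neural_time:.0e} s")
```

Any superposition would therefore be long gone, some sixteen orders of magnitude before the neuron finishes firing.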

As a result, ideas about quantum effects in the brain are viewed with great skepticism.

However, Penrose is unmoved by those arguments and stands by the Orch-OR hypothesis. And despite Tegmark’s prediction of ultra-fast decoherence in cells, other researchers have found evidence for quantum effects in living beings. Some argue that quantum mechanics is harnessed by migratory birds that use magnetic navigation, and by green plants when they use sunlight to make sugars in photosynthesis.

Besides, the idea that the brain might employ quantum tricks shows no sign of going away. For there is now another, quite different argument for it.

Could phosphorus sustain a quantum state? (Credit: Phil Degginger/Science Photo Library)


In a study published in 2015, physicist Matthew Fisher of the University of California at Santa Barbara argued that the brain might contain molecules capable of sustaining more robust quantum superpositions. Specifically, he thinks that the nuclei of phosphorus atoms may have this ability.

Phosphorus atoms are everywhere in living cells. They often take the form of phosphate ions, in which one phosphorus atom joins up with four oxygen atoms.

Such ions are the basic unit of energy within cells. Much of the cell’s energy is stored in molecules called ATP, which contain a string of three phosphate groups joined to an organic molecule. When one of the phosphates is cut free, energy is released for the cell to use.

Cells have molecular machinery for assembling phosphate ions into groups and cleaving them off again. Fisher suggested a scheme in which two phosphate ions might be placed in a special kind of superposition called an “entangled state”.


The phosphorus nuclei have a quantum property called spin, which makes them rather like little magnets with poles pointing in particular directions. In an entangled state, the spin of one phosphorus nucleus depends on that of the other.

Put another way, entangled states are really superposition states involving more than one quantum particle.
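What "the spin of one nucleus depends on that of the other" means numerically can be seen with the textbook two-spin singlet state (a standard toy example, not Fisher's specific Posner-molecule model):

```python
import numpy as np

# Singlet state of two spin-1/2 particles in the basis (uu, ud, du, dd):
# (|up,down> - |down,up>) / sqrt(2). The Born rule gives the probability
# of each joint outcome as the squared magnitude of its amplitude.
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)
probs = np.abs(singlet) ** 2            # [0, 0.5, 0.5, 0]

outcomes = ["up,up", "up,down", "down,up", "down,down"]
rng = np.random.default_rng(seed=1)
samples = rng.choice(outcomes, size=6, p=probs)
print(samples)   # only "up,down" and "down,up" ever appear
```

Each spin on its own is a 50/50 coin flip, yet the pair is always anti-aligned; that correlation, which has no single-particle description, is the entanglement.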

Fisher says that the quantum-mechanical behaviour of these nuclear spins could plausibly resist decoherence on human timescales. He agrees with Tegmark that quantum vibrations, like those postulated by Penrose and Hameroff, will be strongly affected by their surroundings “and will decohere almost immediately”. But nuclear spins do not interact very strongly with their surroundings.

All the same, quantum behaviour in the phosphorus nuclear spins would have to be “protected” from decoherence.

Quantum particles can have different spins (Credit: Richard Kail/Science Photo Library)


This might happen, Fisher says, if the phosphorus atoms are incorporated into larger objects called “Posner molecules”. These are clusters of six phosphate ions, combined with nine calcium ions. There is some evidence that they can exist in living cells, though this is currently far from conclusive.


In Posner molecules, Fisher argues, phosphorus spins could resist decoherence for a day or so, even in living cells. That means they could influence how the brain works.

The idea is that Posner molecules can be swallowed up by neurons. Once inside, the Posner molecules could trigger the firing of a signal to another neuron, by falling apart and releasing their calcium ions.

Because of entanglement in Posner molecules, two such signals might thus in turn become entangled: a kind of quantum superposition of a “thought”, you might say. “If quantum processing with nuclear spins is in fact present in the brain, it would be an extremely common occurrence, happening pretty much all the time,” Fisher says.

He first got this idea when he started thinking about mental illness.

A capsule of lithium carbonate (Credit: Custom Medical Stock Photo/Science Photo Library)


“My entry into the biochemistry of the brain started when I decided three or four years ago to explore how on earth the lithium ion could have such a dramatic effect in treating mental conditions,” Fisher says.


Lithium drugs are widely used for treating bipolar disorder. They work, but nobody really knows how.

“I wasn’t looking for a quantum explanation,” Fisher says. But then he came across a paper reporting that lithium drugs had different effects on the behaviour of rats, depending on what form – or “isotope” – of lithium was used.

On the face of it, that was extremely puzzling. In chemical terms, different isotopes behave almost identically, so if the lithium worked like a conventional drug the isotopes should all have had the same effect.

Nerve cells are linked at synapses (Credit: Sebastian Kaulitzki/Science Photo Library)


But Fisher realised that the nuclei of the atoms of different lithium isotopes can have different spins. This quantum property might affect the way lithium drugs act. For example, if lithium substitutes for calcium in Posner molecules, the lithium spins might “feel” and influence those of phosphorus atoms, and so interfere with their entanglement.


If this is true, it would help to explain why lithium can treat bipolar disorder.

At this point, Fisher’s proposal is no more than an intriguing idea. But there are several ways in which its plausibility can be tested, starting with the idea that phosphorus spins in Posner molecules can keep their quantum coherence for long periods. That is what Fisher aims to do next.

All the same, he is wary of being associated with the earlier ideas about “quantum consciousness”, which he sees as highly speculative at best.

Consciousness is a profound mystery (Credit: Sciepro/Science Photo Library)


Physicists are not terribly comfortable with finding themselves inside their theories. Most hope that consciousness and the brain can be kept out of quantum theory, and perhaps vice versa. After all, we do not even know what consciousness is, let alone have a theory to describe it.


It does not help that there is now a New Age cottage industry devoted to notions of "quantum consciousness", claiming that quantum mechanics offers plausible rationales for such things as telepathy and telekinesis.

As a result, physicists are often embarrassed to even mention the words “quantum” and “consciousness” in the same sentence.

But setting that aside, the idea has a long history. Ever since the “observer effect” and the mind first insinuated themselves into quantum theory in the early days, it has been devilishly hard to kick them out. A few researchers think we might never manage to do so.

In 2016, Adrian Kent of the University of Cambridge in the UK, one of the most respected “quantum philosophers”, speculated that consciousness might alter the behaviour of quantum systems in subtle but detectable ways.

We do not understand how thoughts work (Credit: Andrzej Wojcicki/Science Photo Library)


Kent is very cautious about this idea. “There is no compelling reason of principle to believe that quantum theory is the right theory in which to try to formulate a theory of consciousness, or that the problems of quantum theory must have anything to do with the problem of consciousness,” he admits.


But he says that it is hard to see how a description of consciousness based purely on pre-quantum physics can account for all the features it seems to have.

One particularly puzzling question is how our conscious minds can experience unique sensations, such as the colour red or the smell of frying bacon. With the exception of people with visual impairments, we all know what red is like, but we have no way to communicate the sensation and there is nothing in physics that tells us what it should be like.

Sensations like this are called “qualia”. We perceive them as unified properties of the outside world, but in fact they are products of our consciousness – and that is hard to explain. Indeed, in 1995 philosopher David Chalmers dubbed it “the hard problem” of consciousness.

How does our consciousness work? (Credit: Victor Habbick Visions/Science Photo Library)


“Every line of thought on the relationship of consciousness to physics runs into deep trouble,” says Kent.

This has prompted him to suggest that “we could make some progress on understanding the problem of the evolution of consciousness if we supposed that consciousnesses alters (albeit perhaps very slightly and subtly) quantum probabilities.”

“Quantum consciousness” is widely derided as mystical woo, but it just will not go away

In other words, the mind could genuinely affect the outcomes of measurements.

It does not, in this view, exactly determine “what is real”. But it might affect the chance that each of the possible actualities permitted by quantum mechanics is the one we do in fact observe, in a way that quantum theory itself cannot predict. Kent says that we might look for such effects experimentally.

He even bravely estimates the chances of finding them. “I would give credence of perhaps 15% that something specifically to do with consciousness causes deviations from quantum theory, with perhaps 3% credence that this will be experimentally detectable within the next 50 years,” he says.

If that happens, it would transform our ideas about both physics and the mind. That seems a chance worth exploring.


Quantum algorithm proves more effective than any classical analogue (Revista Fapesp)

December 11, 2015

José Tadeu Arantes | Agência FAPESP – The quantum computer may cease to be a dream and become reality within the next 10 years. The expectation is that this will bring a drastic reduction in processing time, since quantum algorithms offer more efficient solutions for certain computational tasks than any corresponding classical algorithms.

Until now, it was believed that the key to quantum computing lay in correlations between two or more systems. An example of quantum correlation is "entanglement", which occurs when pairs or groups of particles are generated or interact in such a way that the quantum state of each particle cannot be described independently, since it depends on the system as a whole.

A recent study showed, however, that even an isolated quantum system, that is, one with no correlations with other systems, is sufficient to implement a quantum algorithm faster than its classical analogue. An article describing the study was published in early October this year in Scientific Reports, a Nature group journal: Computational speed-up with a single qudit.

The work, at once theoretical and experimental, started from an idea put forward by physicist Mehmet Zafer Gedik of Sabanci Üniversitesi in Istanbul, Turkey, and was carried out as a collaboration between Turkish and Brazilian researchers. Felipe Fernandes Fanchini, of the Faculdade de Ciências of the Universidade Estadual Paulista (Unesp) in Bauru, is one of the article's authors. His participation in the study took place within the project "Controle quântico em sistemas dissipativos" (Quantum control in dissipative systems), supported by FAPESP.

"This work makes an important contribution to the debate about which resource is responsible for the superior processing power of quantum computers," Fanchini told Agência FAPESP.

"Starting from Gedik's idea, we carried out an experiment in Brazil using the nuclear magnetic resonance (NMR) system of the Universidade de São Paulo (USP) in São Carlos. It was a collaboration between researchers from three universities: Sabanci, Unesp and USP. We demonstrated that a quantum circuit with a single physical system, with three or more energy levels, can determine the parity of a numerical permutation by evaluating the function only once. That is unthinkable in a classical protocol."

According to Fanchini, what Gedik proposed was a very simple quantum algorithm that, basically, determines the parity of a sequence. Parity indicates whether a sequence is in a given order or not. For example, if we take the digits 1, 2 and 3 and establish that the sequence 1-2-3 is in order, then the sequences 2-3-1 and 3-1-2, obtained by cyclic permutations of the digits, are in the same order.

This is easy to see if we imagine the digits arranged on a circle. Given the first sequence, one rotation in one direction yields the next sequence, and another rotation yields the third. The sequences 1-3-2, 3-2-1 and 2-1-3, however, require acyclic permutations to be produced. So, if we agree that the first three sequences are "even", the other three are "odd".

"In classical terms, observing a single digit, that is, making a single measurement, does not tell us whether the sequence is even or odd. For that, at least two observations are needed. What Gedik showed was that, in quantum terms, a single measurement is enough to determine the parity. That is why the quantum algorithm is faster than any classical equivalent. And this algorithm can be realised with a single particle, which means its efficiency does not depend on any kind of quantum correlation," Fanchini said.
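The classical half of this argument is easy to sketch, following the 1-2-3 example above (plain Python, purely to illustrate the parity being computed; it is not the quantum algorithm):

```python
from itertools import permutations

def is_even(seq, reference=(1, 2, 3)):
    """A sequence is "even" if it is a cyclic rotation of the reference."""
    n = len(reference)
    rotations = {reference[i:] + reference[:i] for i in range(n)}
    return tuple(seq) in rotations

for p in permutations((1, 2, 3)):
    print(p, "even" if is_even(p) else "odd")
# 1-2-3, 2-3-1 and 3-1-2 come out even; 1-3-2, 3-2-1 and 2-1-3 come out odd.
```

Seeing any single digit of, say, 2-3-1 is compatible with both an even and an odd sequence, which is why a classical observer needs at least two looks, while the quantum protocol with a single three-level system needs only one function evaluation.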

The algorithm in question does not say what the sequence is, but it does say whether it is even or odd. This is only possible when there are three or more levels: with only two levels, something like 1-2 or 2-1, an even or odd sequence cannot be defined. "Recently, the quantum computing community has been exploring a key concept of quantum theory: 'contextuality'. Since contextuality also only operates with three or more levels, we suspect it may be behind the effectiveness of our algorithm," the researcher added.

The concept of contextuality

"The concept of 'contextuality' is best understood by comparing the ideas of measurement in classical and quantum physics. In classical physics, measurement is assumed to do nothing more than reveal characteristics the measured system already possessed, such as a particular length or mass. In quantum physics, the result of a measurement depends not only on the characteristic being measured, but also on how the measurement was set up, and on all previous measurements. That is, the result depends on the context of the experiment, and 'contextuality' is the quantity that describes this context," Fanchini explained.

In the history of physics, contextuality was recognised as a necessary feature of quantum theory through the famous Bell's theorem. According to this theorem, published in 1964 by the Northern Irish physicist John Stewart Bell (1928-1990), no physical theory based on local variables can reproduce all the predictions of quantum mechanics. In other words, physical phenomena cannot be described in strictly local terms, since they express the whole.

"It is important to stress that another article [Contextuality supplies the 'magic' for quantum computation], published in Nature in June 2014, points to contextuality as the possible source of the power of quantum computing. Our study goes in the same direction, presenting a concrete algorithm more efficient than anything imaginable along classical lines."

Full-scale architecture for a quantum computer in silicon (Science Daily)

Scalable 3-D silicon chip architecture based on single atom quantum bits provides a blueprint to build operational quantum computers

October 30, 2015
University of New South Wales
Researchers have designed a full-scale architecture for a quantum computer in silicon. The new concept provides a pathway for building an operational quantum computer with error correction.

This picture shows from left to right Dr Matthew House, Sam Hile (seated), Scientia Professor Sven Rogge and Scientia Professor Michelle Simmons of the ARC Centre of Excellence for Quantum Computation and Communication Technology at UNSW. Credit: Deb Smith, UNSW Australia

Australian scientists have designed a 3D silicon chip architecture based on single atom quantum bits, which is compatible with atomic-scale fabrication techniques — providing a blueprint to build a large-scale quantum computer.

Scientists and engineers from the Australian Research Council Centre of Excellence for Quantum Computation and Communication Technology (CQC2T), headquartered at the University of New South Wales (UNSW), are leading the world in the race to develop a scalable quantum computer in silicon — a material well-understood and favoured by the trillion-dollar computing and microelectronics industry.

Teams led by UNSW researchers have already demonstrated a unique fabrication strategy for realising atomic-scale devices and have developed the world’s most efficient quantum bits in silicon using either the electron or nuclear spins of single phosphorus atoms. Quantum bits — or qubits — are the fundamental data components of quantum computers.

One of the final hurdles to scaling up to an operational quantum computer is the architecture. Here it is necessary to figure out how to precisely control multiple qubits in parallel, across an array of many thousands of qubits, and constantly correct for ‘quantum’ errors in calculations.

Now, the CQC2T collaboration, involving theoretical and experimental researchers from the University of Melbourne and UNSW, has designed such a device. In a study published today in Science Advances, the CQC2T team describes a new silicon architecture, which uses atomic-scale qubits aligned to control lines — which are essentially very narrow wires — inside a 3D design.

“We have demonstrated we can build devices in silicon at the atomic-scale and have been working towards a full-scale architecture where we can perform error correction protocols — providing a practical system that can be scaled up to larger numbers of qubits,” says UNSW Scientia Professor Michelle Simmons, study co-author and Director of the CQC2T.

“The great thing about this work, and architecture, is that it gives us an endpoint. We now know exactly what we need to do in the international race to get there.”

In the team’s conceptual design, they have moved from a one-dimensional array of qubits, positioned along a single line, to a two-dimensional array, positioned on a plane that is far more tolerant to errors. This qubit layer is “sandwiched” in a three-dimensional architecture, between two layers of wires arranged in a grid.

By applying voltages to a sub-set of these wires, multiple qubits can be controlled in parallel, performing a series of operations using far fewer controls. Importantly, with their design, they can perform the 2D surface code error correction protocols in which any computational errors that creep into the calculation can be corrected faster than they occur.

“Our Australian team has developed the world’s best qubits in silicon,” says University of Melbourne Professor Lloyd Hollenberg, Deputy Director of the CQC2T who led the work with colleague Dr Charles Hill. “However, to scale up to a full operational quantum computer we need more than just many of these qubits — we need to be able to control and arrange them in such a way that we can correct errors quantum mechanically.”

“In our work, we’ve developed a blueprint that is unique to our system of qubits in silicon, for building a full-scale quantum computer.”

In their paper, the team proposes a strategy to build the device, which leverages the CQC2T’s internationally unique capability of atomic-scale device fabrication. They have also modelled the voltages that must be applied to the grid wires to address individual qubits and make the processor work.


“This architecture gives us the dense packing and parallel operation essential for scaling up the size of the quantum processor,” says Scientia Professor Sven Rogge, Head of the UNSW School of Physics. “Ultimately, the structure is scalable to millions of qubits, required for a full-scale quantum processor.”


In classical computers, data is rendered as binary bits, which are always in one of two states: 0 or 1. However, a qubit can exist in both of these states at once, a condition known as a superposition. A qubit operation exploits this quantum weirdness by allowing many computations to be performed in parallel (a two-qubit system performs the operation on 4 values, a three-qubit system on 8, and so on).
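The counting argument can be sketched in a few lines (illustrative only): an n-qubit register is described by a vector of 2**n complex amplitudes, one for every classical bit string.

```python
import numpy as np

# Minimal sketch of why n qubits span 2**n values at once. A classical n-bit
# register holds one of 2**n values; an n-qubit state vector carries a
# complex amplitude for every one of them simultaneously.

def uniform_superposition(n_qubits):
    """State with equal amplitude on all 2**n basis states."""
    dim = 2 ** n_qubits
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

state = uniform_superposition(3)
print(len(state))  # 8 amplitudes for a three-qubit system
print(round(float(np.sum(np.abs(state) ** 2)), 6))  # probabilities sum to 1.0
```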

As a result, quantum computers will far exceed today’s most powerful supercomputers, and offer enormous advantages for a range of complex problems, such as rapidly scouring vast databases, modelling financial markets, optimising huge metropolitan transport networks, and modelling complex biological molecules.

How to build a quantum computer in silicon

Experiment Provides Further Evidence That Reality Doesn’t Exist Until We Measure It (IFLScience)

June 2, 2015 | by Stephen Luntz

Photo credit: Pieter Kuiper via Wikimedia Commons. A comparison of double-slit interference patterns with different widths. Similar patterns produced by atoms have confirmed the dominant model of quantum mechanics.

Physicists have succeeded in confirming one of the theoretical aspects of quantum physics: Subatomic objects switch between particle and wave states when observed, while remaining in a dual state beforehand.

In the macroscopic world, we are used to waves being waves and solid objects being particle-like. However, quantum theory holds that for the very small this distinction breaks down. Light can behave either as a wave, or as a particle. The same goes for objects with mass like electrons.

This raises the question of what determines when a photon or electron will behave like a wave or a particle. How, anthropomorphizing madly, do these things “decide” which they will be at a particular time?

The dominant model of quantum mechanics holds that it is when a measurement is taken that the “decision” takes place. Erwin Schrödinger came up with his famous thought experiment using a cat to ridicule this idea. Physicists think that quantum behavior breaks down on a large scale, so Schrödinger’s cat would not really be both alive and dead—however, in the world of the very small, strange theories like this seem to be the only way to explain what we see.

In 1978, John Wheeler proposed a series of thought experiments to make sense of what happens when a photon has to either behave in a wave-like or particle-like manner. At the time, it was considered doubtful that these could ever be implemented in practice, but in 2007 such an experiment was achieved.

Now, Dr. Andrew Truscott of the Australian National University has reported the same thing in Nature Physics, but this time using a helium atom, rather than a photon.

“A photon is in a sense quite simple,” Truscott told IFLScience. “An atom has significant mass and couples to magnetic and electric fields, so it is much more in tune with its environment. It is more of a classical particle in a sense, so this was a test of whether a more classical particle would behave in the same way.”

Truscott’s experiment involved creating a Bose-Einstein condensate of around a hundred helium atoms. He conducted the experiment first with this condensate, but says the possibility that atoms were influencing each other made it important to repeat it after ejecting all but one. The atom was passed through a “grate” made by two laser beams, which can scatter an atom much as a solid grating scatters light. Such grates have been shown to cause atoms either to pass through one arm, like a particle, or through both, like a wave.

A random number generator was then used to determine whether a second grating would appear further along the atom’s path. Crucially, the number was only generated after the atom had passed the first grate.

The second grating, when applied, caused an interference pattern in the measurement of the atom further along the path. Without the second grating, the atom had no such pattern.
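The logic of the experiment can be sketched with a toy Mach-Zehnder interferometer (a standard textbook model, not Truscott’s actual apparatus): with the second “grating” in place, the detection probabilities depend on the phase difference between the two paths, producing interference; without it, the split is flat, as for a particle taking one path.

```python
import numpy as np

# Toy Mach-Zehnder model of the delayed-choice idea (illustration only).
# The first "grating" is a 50/50 beam splitter; inserting the second one
# makes the detection probability depend on the relative phase between the
# two paths (interference); omitting it gives a flat 50/50 split.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # 50/50 beam splitter

def detector_probs(phase, second_splitter):
    state = H @ np.array([1.0, 0.0])                    # split onto two paths
    state = np.diag([1.0, np.exp(1j * phase)]) @ state  # relative phase
    if second_splitter:
        state = H @ state                               # recombine the paths
    return np.abs(state) ** 2

print(np.round(detector_probs(0.0, True), 3))    # [1. 0.]  full interference
print(np.round(detector_probs(np.pi, True), 3))  # [0. 1.]
print(np.round(detector_probs(np.pi, False), 3)) # [0.5 0.5] no second splitter
```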

An optical version of Wheeler’s delayed choice experiment (left) and an atomic version as used by Truscott (right). Credit: Manning et al.

Truscott says that there are two possible explanations for the behavior observed. Either, as most physicists think, the atom decided whether it was a wave or a particle when measured, or “a future event (the method of detection) causes the photon to decide its past.”

In the bizarre world of quantum mechanics, events rippling back in time may not seem that much stranger than things like “spooky action at a distance” or even something being a wave and a particle at the same time. However, Truscott said, “this experiment can’t prove that that is the wrong interpretation, but it seems wrong, and given what we know from elsewhere, it is much more likely that only when we measure the atoms do their observable properties come into reality.”

Out of Place: Space/Time and Quantum (In)security (The Disorder of Things)


A demon lives behind my left eye. As a migraine sufferer, I have developed a very personal relationship with my pain and its perceived causes. On a bad day, with a crippling sensitivity to light, nausea, and the feeling that the blood flowing to my brain has slowed to a crawl and is the poisoned consistency of pancake batter, I feel the presence of this demon keenly.

On the first day of the Q2 Symposium, however, which I was delighted to attend recently, the demon was in a tricksy mood, rather than out for blood: this was a vestibular migraine. The symptoms of this particular neurological condition are dizziness, loss of balance, and sensitivity to motion. Basically, when the demon manifests in this way, I feel constantly as though I am falling: falling over, falling out of place. The Q Symposium, hosted by James Der Derian and the marvellous team at the University of Sydney’s Centre for International Security Studies, was intended, over the course of two days and a series of presentations, interventions, and media engagements, to unsettle, to make participants think differently about space/time and security, thinking through quantum rather than classical theory, but I do not think that this is what the organisers had in mind.

Photo of cabins and corridors at Q Station, Sydney.

At the Q Station, located in Sydney where the Q Symposium was held, my pain and my present aligned: I felt out of place, I felt I was falling out of place. I did not expect to like the Q Station. It is the former quarantine station used by the colonial administration to isolate immigrants they suspected of carrying infectious diseases. Its location, on the North Head of Sydney and now within the Sydney Harbour National Park, was chosen for strategic reasons – it is secluded, easy to manage, a passageway point on the journey through to the inner harbour – but it has a much longer historical relationship with healing and disease. The North Head is a site of Aboriginal cultural significance; the space was used by the spiritual leaders (koradgee) of the Guringai peoples for healing and burial ceremonies.

So I did not expect to like it, as such an overt symbol of the colonisation of Aboriginal lands, but it disarmed me. It is a place of great natural beauty, and it has been revived with respect, I felt, for the rich spiritual heritage of the space that extended long prior to the establishment of the Quarantine Station in 1835. When we Q2 Symposium participants were welcomed to country and invited to participate in a smoking ceremony to protect us as we passed through the space, we were reminded of this history and thus reminded – gently, respectfully (perhaps more respectfully than we deserved) – that this is not ‘our’ place. We were out of place.

We were all out of place at the Q2 Symposium. That is the point. Positioning us thus was deliberate; we were to see whether voluntary quarantine would produce new interactions and new insights, guided by the Q Vision, to see how quantum theory ‘responds to global events like natural and unnatural disasters, regime change and diplomatic negotiations that phase-shift with media interventions from states to sub-states, local to global, public to private, organised to chaotic, virtual to real and back again, often in a single news cycle’. It was two days of rich intellectual exploration and conversation, and – as is the case when these experiments work – beautiful connections began to develop between those conversations and the people conversing, conversations about peace, security, and innovation, big conversations about space, and time.

I felt out of place. Mine is not the language of quantum theory. I learned so much from listening to my fellow participants, but I was insecure; as the migraine took hold on the first day, I was not only physically but intellectually feeling as though I was continually falling out of the moment, struggling to maintain the connections between what I was hearing and what I thought I knew.

Quantum theory departs from classical theory in the proposition of entanglement and the uncertainty principle:

This principle states the impossibility of simultaneously specifying the precise position and momentum of any particle. In other words, physicists cannot measure the position of a particle, for example, without causing a disturbance in the velocity of that particle. Knowledge about position and velocity are said to be complementary, that is, they cannot be precise at the same time.
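In symbols, the principle quoted above is Heisenberg’s inequality, where $\Delta x$ and $\Delta p$ are the uncertainties in position and momentum and $\hbar$ is the reduced Planck constant:

```latex
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}
```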

I do not know anything about quantum theory – I found it hard to follow even the beginner’s guides provided by the eloquent speakers at the Symposium – but I know a lot about uncertainty. I also feel that I know something about entanglement, perhaps not as it is conceived of within quantum physics, but perhaps that is the point of events such as the Q Symposium: to encourage us to allow the unfamiliar to flow through and around us until the stream snags, to produce an idea or at least a moment of alternative cognition.

My moment of alternative cognition was caused by foetal microchimerism, a connection that flashed for me while I was listening to a physicist talk about entanglement. Scientists have shown that during gestation, foetal cells migrate into the body of the mother and can be found in the brain, spleen, liver, and elsewhere decades later. There are (possibly) parts of my son in my brain, literally as well as simply metaphorically (as the latter was already clear). I am entangled with him in ways that I cannot comprehend. Listening to the speakers discuss entanglement, all I could think was, This is what entanglement means to me, it is in my body.

Perhaps I am not proposing entanglement as Schrödinger does, as ‘the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought’. Perhaps I am just using the concept of entanglement to denote the inextricable, inexplicable, relationality that I have with my son, my family, my community, humanity. It is this entanglement that undoes me, to use Judith Butler’s most eloquent phrase, in the face of grief, violence, and injustice. Perhaps this is the value of the quantum: to make connections that are not possible within the confines of classical thought.

I am not a scientist. I am a messy body out of place, my ‘self’ apparently composed of bodies out of place. My world is not reducible. My uncertainty is vast. All of these things make me insecure, challenge how I move through professional time and space as I navigate the academy. But when I return home from my time in quarantine and joyfully reconnect with my family, I am grounded by how I perceive my entanglement. It is love, not science, that makes me a better scholar.

Photo of a sign that says ‘laboratory and mortuary’ at Q Station, Sydney.

I was inspired by what I heard, witnessed, discussed at the Q2 Symposium. I was – and remain – inspired by the vision of the organisers, the refusal to be bound by classical logics in any field that turns into a drive, a desire to push our exploration of security, peace, and war in new directions. We need new directions; our classical ideas have failed us, and failed humanity, a point made by Colin Wight during his remarks on the final panel at the Symposium. Too often we continue to act as though the world is our laboratory; we have ‘all these theories yet the bodies keep piling up…‘.

But if this is the case, I must ask: do we need a quantum turn to get us to a space within which we can admit entanglement, admit uncertainty, admit that we are out of place? We are never (only) our ‘selves’: we are always both wave and particle and all that is in between and it is our being entangled that renders us human. We know this from philosophy, from art and the humanities. Can we not learn this from art? Must we turn to science (again)? I felt diminished by the asking of these questions, insecure, but I did not feel that these questions were out of place.

No Big Bang? Quantum equation predicts universe has no beginning (Phys.org)

Feb 09, 2015 by Lisa Zyga


This is an artist’s concept of the metric expansion of space, where space (including hypothetical non-observable portions of the universe) is represented at each time by the circular sections. Note on the left the dramatic expansion (not to scale) occurring in the inflationary epoch, and at the center the expansion acceleration. The scheme is decorated with WMAP images on the left and with the representation of stars at the appropriate level of development. Credit: NASA

(Phys.org) —The universe may have existed forever, according to a new model that applies quantum correction terms to complement Einstein’s theory of general relativity. The model may also account for dark matter and dark energy, resolving multiple problems at once.

The widely accepted age of the universe, as estimated by general relativity, is 13.8 billion years. In the beginning, everything in existence is thought to have occupied a single infinitely dense point, or singularity. Only after this point began to expand in a “Big Bang” did the universe officially begin.

Although the Big Bang singularity arises directly and unavoidably from the mathematics of general relativity, some scientists see it as problematic because the math can explain only what happened immediately after—not at or before—the singularity.

“The Big Bang singularity is the most serious problem of general relativity because the laws of physics appear to break down there,” Ahmed Farag Ali at Benha University and the Zewail City of Science and Technology, both in Egypt, told Phys.org.

Ali and coauthor Saurya Das at the University of Lethbridge in Alberta, Canada, have shown in a paper published in Physics Letters B that the Big Bang singularity can be resolved by their new model, in which the universe has no beginning and no end.

Old ideas revisited

The physicists emphasize that their quantum correction terms are not applied ad hoc in an attempt to specifically eliminate the Big Bang singularity. Their work is based on ideas by the theoretical physicist David Bohm, who is also known for his contributions to the philosophy of physics. Starting in the 1950s, Bohm explored replacing classical geodesics (the shortest path between two points on a curved surface) with quantum trajectories.

In their paper, Ali and Das applied these Bohmian trajectories to an equation developed in the 1950s by physicist Amal Kumar Raychaudhuri at Presidency University in Kolkata, India. Raychaudhuri was also Das’s teacher when Das was an undergraduate at that institution in the 1990s.

Using the quantum-corrected Raychaudhuri equation, Ali and Das derived quantum-corrected Friedmann equations, which describe the expansion and evolution of the universe (including the Big Bang) within the context of general relativity. Although it is not a true theory of quantum gravity, the model does contain elements from both quantum theory and general relativity. Ali and Das also expect their results to hold even if and when a full theory of quantum gravity is formulated.

No singularities nor dark stuff

In addition to not predicting a Big Bang singularity, the new model does not predict a “big crunch” singularity, either. In general relativity, one possible fate of the universe is that it starts to shrink until it collapses in on itself in a big crunch and becomes an infinitely dense point once again.

Ali and Das explain in their paper that their model avoids singularities because of a key difference between classical geodesics and Bohmian trajectories. Classical geodesics eventually cross each other, and the points at which they converge are singularities. In contrast, Bohmian trajectories never cross each other, so singularities do not appear in the equations.

In cosmological terms, the scientists explain that the quantum corrections can be thought of as a cosmological constant term (without the need for dark energy) and a radiation term. These terms keep the universe at a finite size, and therefore give it an infinite age. The terms also make predictions that agree closely with current observations of the cosmological constant and density of the universe.

New gravity particle

In physical terms, the model describes the universe as being filled with a quantum fluid. The scientists propose that this fluid might be composed of gravitons—hypothetical massless particles that mediate the force of gravity. If they exist, gravitons are thought to play a key role in a theory of quantum gravity.

In a related paper, Das and another collaborator, Rajat Bhaduri of McMaster University, Canada, have lent further credence to this model. They show that gravitons can form a Bose-Einstein condensate (named after Einstein and another Indian physicist, Satyendranath Bose) at temperatures that were present in the universe at all epochs.

Motivated by the model’s potential to resolve the Big Bang singularity and account for dark matter and dark energy, the physicists plan to analyze their model more rigorously in the future. Their future work includes redoing the study while taking into account small inhomogeneous and anisotropic perturbations, but they do not expect small perturbations to significantly affect the results.

“It is satisfying to note that such straightforward corrections can potentially resolve so many issues at once,” Das said.

More information: Ahmed Farag Ali and Saurya Das. “Cosmology from quantum potential.” Physics Letters B. Volume 741, 4 February 2015, Pages 276–279. DOI: 10.1016/j.physletb.2014.12.057. Also at: arXiv:1404.3093[gr-qc].

Saurya Das and Rajat K. Bhaduri, “Dark matter and dark energy from Bose-Einstein condensate”, preprint: arXiv:1411.0753[gr-qc].

Chemists Confirm the Existence of New Type of Bond (Scientific American)

A “vibrational” chemical bond predicted in the 1980s is demonstrated experimentally

Jan 20, 2015 By Amy Nordrum

Credit: Allevinatis/Flickr

Chemistry has many laws, one of which is that the rate of a reaction speeds up as temperature rises. So, in 1989, when chemists experimenting at a nuclear accelerator in Vancouver observed that a reaction between bromine and muonium—a hydrogen isotope—slowed down when they increased the temperature, they were flummoxed.

Donald Fleming, a University of British Columbia chemist involved with the experiment, thought that perhaps as bromine and muonium co-mingled, they formed an intermediate structure held together by a “vibrational” bond—a bond that other chemists had posed as a theoretical possibility earlier that decade. In this scenario, the lightweight muonium atom would move rapidly between two heavy bromine atoms, “like a Ping Pong ball bouncing between two bowling balls,” Fleming says. The oscillating atom would briefly hold the two bromine atoms together and reduce the overall energy, and therefore speed, of the reaction. (With a Fleming working on a bond, you could say the atomic interaction is shaken, not stirred.)

At the time of the experiment, the necessary equipment was not available to examine the milliseconds-long reaction closely enough to determine whether such vibrational bonding existed. Over the past 25 years, however, chemists’ ability to track subtle changes in energy levels within reactions has greatly improved, so Fleming and his colleagues ran their reaction again three years ago in the nuclear accelerator at Rutherford Appleton Laboratory in England. Based on calculations from both experiments and the work of collaborating theoretical chemists at Free University of Berlin and Saitama University in Japan, they concluded that muonium and bromine were indeed forming a new type of temporary bond. Its vibrational nature lowered the total energy of the intermediate bromine-muonium structure—thereby explaining why the reaction slowed even though the temperature was rising.

The team reported its results last December in Angewandte Chemie International Edition, a publication of the German Chemical Society. The work confirms that vibrational bonds—fleeting though they may be—should be added to the list of known chemical bonds. And although the bromine-muonium reaction was an “ideal” system to verify vibrational bonding, Fleming predicts the phenomenon also occurs in other reactions between heavy and light atoms.

This article was originally published with the title “New Vibrations.”

Quantum computers could revolutionize information theory (Fapesp)

January 30, 2015

By Diego Freire

Agência FAPESP – The prospect of quantum computers, with processing power far beyond today’s machines, has driven advances in one of the most versatile areas of science, with applications across many fields of knowledge: information theory. To discuss this and other prospects, the Institute of Mathematics, Statistics and Scientific Computing (Imecc) of the University of Campinas (Unicamp) held the SPCoding School from January 19 to 30.

The event took place under FAPESP’s São Paulo School of Advanced Science (ESPCA) program, which funds short courses on advanced topics in science and technology in the State of São Paulo.

The information processed by today’s widely used computers is based on the bit, the smallest unit of data that can be stored or transmitted. Quantum computers, by contrast, work with qubits, which follow the rules of quantum mechanics, the branch of physics dealing with dimensions at or below the atomic scale. Because of this, these machines can carry out a far greater number of calculations simultaneously.

“This quantum understanding of information adds a whole new layer of complexity to coding. But at the same time that complex analyses, which would take decades, centuries or even thousands of years on ordinary computers, could be carried out in minutes by quantum computers, this technology would also threaten the secrecy of information that has not been properly protected against it,” Sueli Irene Rodrigues Costa, a professor at Imecc, told Agência FAPESP.

The greatest threat quantum computers pose to current cryptography lies in their ability to break the codes used to protect sensitive information, such as credit card data. To avoid this kind of risk, cryptographic systems must also be designed with the power of quantum computing in mind.

“Information and coding theory need to stay one step ahead of the commercial use of quantum computing,” said Rodrigues Costa, who coordinates the Thematic Project “Security and reliability of information: theory and practice,” supported by FAPESP.

“This is post-quantum cryptography. As was demonstrated in the late 1990s, today’s cryptographic procedures will not survive quantum computers because they are not secure enough. And this urgency to develop solutions ready for the power of quantum computing is also pushing information theory forward in many directions,” she said.

Some of these solutions were addressed during the SPCoding School program, many of them aimed at more efficient systems for classical computing, such as the use of error-correcting codes and of lattices for cryptography. For Rodrigues Costa, the advance of information theory alongside the development of quantum computing will bring revolutions to several fields of knowledge.
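The idea behind the error-correcting codes mentioned above can be shown with the simplest possible example, a 3-fold repetition code (an illustration only; practical systems and post-quantum lattice schemes use far more sophisticated codes):

```python
# Minimal sketch of error correction: 3-fold repetition with majority-vote
# decoding. Redundancy lets the receiver undo a single bit flip per triple.

def encode(bits):
    return [b for b in bits for _ in range(3)]    # repeat each bit 3 times

def decode(coded):
    out = []
    for i in range(0, len(coded), 3):
        triple = coded[i:i + 3]
        out.append(1 if sum(triple) >= 2 else 0)  # majority vote
    return out

msg = [1, 0, 1, 1]
sent = encode(msg)
sent[4] ^= 1                # the channel flips one bit of the second triple
print(decode(sent) == msg)  # True: the single error is corrected
```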

“Just as information theory has many applications today, quantum coding would also lift several areas of science to new levels by enabling even more accurate computer simulations of the physical world, handling an exponentially larger number of variables than classical computers,” said Rodrigues Costa.

Information theory involves the quantification of information and spans fields such as mathematics, electrical engineering and computer science. Its pioneer was the American Claude Shannon (1916-2001), who was the first to treat communication as a mathematical problem.

Revolutions under way

While preparing for quantum computers, information theory is driving major changes in how information is encoded and transmitted. Amin Shokrollahi, of the École Polytechnique Fédérale de Lausanne, in Switzerland, presented at the SPCoding School new coding techniques for problems such as noise in information and high energy consumption in data processing, including chip-to-chip communication within devices.

Shokrollahi is known in the field for inventing Raptor codes and co-inventing Tornado codes, which are used in mobile transmission standards, with implementations in wireless systems, satellites and IPTV, the method of delivering television content over the Internet Protocol (IP).

“The growth in the volume of digital data and the need for ever faster communication increase both the susceptibility to various kinds of noise and the energy consumption. New solutions are needed in this scenario,” he said.

Shokrollahi also presented innovations developed at the Swiss company Kandou Bus, where he is director of research. “We use special algorithms to encode the signals, which are all transferred simultaneously until a decoder recovers the original signals. All of this is done while preventing neighboring wires from interfering with one another, producing a significantly lower noise level. The systems also reduce chip size, increase transmission speed and cut energy consumption,” he explained.

According to Rodrigues Costa, similar solutions are also being developed for many technologies in wide use across society.

“Mobile phones, for example, have gained enormously in processing power and versatility, but one of users’ most frequent complaints is that the battery does not last. One strategy is to find ways of coding more efficiently in order to save energy,” she said.

Biological applications

Problems of a technological nature are not the only ones that can be addressed or solved through information theory. Vinay Vaishampayan, a professor at the City University of New York, in the United States, chaired the SPCoding School panel “Information Theory, Coding Theory and the Real World,” which covered several applications of codes in society, among them biological ones.

“There is not just one information theory, and its approaches, from computational to probabilistic, can be applied to virtually every field of knowledge. On the panel we discussed the many research possibilities open to anyone interested in studying these interfaces between codes and the real world,” he told Agência FAPESP.

Vaishampayan singled out biology as an area of great potential in this scenario. “Neuroscience raises important questions that can be answered with the help of information theory. We still do not know in depth how neurons communicate with one another or how the brain works as a whole, and neural networks are a very rich field of study from the mathematical point of view as well, as is molecular biology,” he said.

That is because, according to Max Costa, a professor at Unicamp’s School of Electrical and Computer Engineering and one of the speakers, living beings are also made of information.

“We are encoded in the DNA of our cells. Uncovering the secret of that code, the mechanism behind the mappings that are made and recorded in this context, is a problem of enormous interest for a deeper understanding of the process of life,” he said.

For Marcelo Firer, a professor at Imecc and coordinator of the SPCoding School, the event opened up new research possibilities for students and researchers from many fields.

“Participants shared opportunities for engagement around these and many other applications of information and coding theory. The offerings ranged from introductory courses, aimed at students with solid mathematical training but not necessarily familiar with coding, to more advanced courses, as well as lectures and discussion panels,” said Firer, a member of the coordination of FAPESP’s Computer Science and Engineering area.

About 120 students from 70 universities and 25 countries took part in the event. Foreign speakers included researchers from the California Institute of Technology (Caltech), the University of Maryland and Princeton University, in the United States; the Chinese University of Hong Kong, in China; Nanyang Technological University, in Singapore; the Technische Universiteit Eindhoven, in the Netherlands; the University of Porto, in Portugal; and Tel Aviv University, in Israel.


The Question That Could Unite Quantum Theory With General Relativity: Is Spacetime Countable? (The Physics Arxiv Blog)

Current thinking about quantum gravity assumes that spacetime exists in countable lumps, like grains of sand. That can’t be right, can it?

The Physics arXiv Blog

One of the big problems with quantum gravity is that it generates infinities that have no physical meaning. These come about because quantum mechanics implies that accurate measurements of the universe on the tiniest scales require high energies. But when the scale becomes very small, the energy density associated with a measurement is so great that it should lead to the formation of a black hole, which would paradoxically ruin the measurement that created it.

These kinds of infinities are something of an annoyance. Their paradoxical nature makes them hard to deal with mathematically and difficult to reconcile with our knowledge of the universe, which as far as we can tell, avoids this kind of paradoxical behaviour.

So physicists have invented a way to deal with infinities called renormalisation. In essence, theorists assume that space-time is not infinitely divisible. Instead, there is a minimum scale beyond which nothing can be smaller, the so-called Planck scale. This limit ensures that energy densities never become high enough to create black holes.
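The Planck scale invoked here can be estimated by asking when a particle’s reduced Compton wavelength $\hbar/(mc)$ becomes comparable to its Schwarzschild radius $\sim Gm/c^2$, i.e. when probing a region requires enough energy to collapse it into a black hole:

```latex
\frac{\hbar}{m c} \sim \frac{G m}{c^{2}}
\quad\Longrightarrow\quad
\ell_{P} = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6 \times 10^{-35}\ \text{m}
```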

This is also equivalent to saying that space-time is discrete, or as a mathematician might put it, countable. In other words, it is possible to allocate a number to each discrete volume of space-time making it countable, like grains of sand on a beach or atoms in the universe. That means space-time is entirely unlike uncountable things, such as straight lines which are infinitely divisible, or the degrees of freedom of the fields that constitute the basic building blocks of physics, which have been mathematically proven to be uncountable.

This discreteness is certainly useful but it also raises an important question: is it right? Can the universe really be fundamentally discrete, like a computer model? Today, Sean Gryb from Radboud University in the Netherlands argues that an alternative approach is emerging in the form of a new formulation of gravity called shape dynamics. This new approach implies that spacetime is smooth and uncountable, an idea that could have far-reaching consequences for the way we understand the universe.

At the heart of this new theory is the concept of scale invariance. This is the idea that an object or law has the same properties regardless of the scale at which it is viewed.

The current laws of physics generally do not have this property. Quantum mechanics, for example, operates only at the smallest scale, while gravity operates at the largest. So it is easy to see why scale invariance is a property that theorists drool over — a scale invariant description of the universe must encompass both quantum theory and gravity.

Shape dynamics does just this, says Gryb. It does this by ignoring many ordinary features of physical objects, such as their position within the universe. Instead, it focuses on objects’ relationships to each other, such as the angles between them and the shape that this makes (hence the term shape dynamics).

This approach immediately leads to a scale invariant picture of reality. Angles are scale invariant because they are the same regardless of the scale at which they are viewed. So the new thinking is to describe the universe as a series of instantaneous snapshots of the relationships between objects.
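The claim about angles is easy to verify directly: rescaling every coordinate by a common factor leaves every angle untouched. A small self-contained check (the triangle and scale factor are arbitrary choices):

```python
import math

def angle(p, q, r):
    """Angle at vertex q (radians) formed by points p, q, r."""
    v1 = (p[0] - q[0], p[1] - q[1])
    v2 = (r[0] - q[0], r[1] - q[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.acos(dot / (math.hypot(*v1) * math.hypot(*v2)))

# A triangle, and the same triangle uniformly rescaled by a factor of 1000
tri = [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)]
scaled = [(1000 * x, 1000 * y) for x, y in tri]

for i in range(3):
    a = angle(tri[i - 1], tri[i], tri[(i + 1) % 3])
    b = angle(scaled[i - 1], scaled[i], scaled[(i + 1) % 3])
    assert math.isclose(a, b)  # every angle survives the rescaling
```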

The result is a scale invariance that is purely spatial. But this, of course, is very different to the more significant notion of spacetime scale invariance.

So a key part of Gryb’s work is in using the mathematical ideas of symmetry to show that spatial scale invariance can be transformed into spacetime scale invariance.

Specifically, Gryb shows exactly how this works in a closed, expanding universe in which the laws of physics are the same for all inertial observers and for whom the speed of light is finite and constant.

If those last two conditions sound familiar, it’s because they are the postulates Einstein used to derive special relativity. And Gryb’s formulation is equivalent to this. “Observers in Einstein’s special theory of relativity can be reinterpreted as observers in a scale-invariant space,” he says.

That raises some interesting possibilities for a broader theory of gravity, just as special relativity led to a broader theory of gravity in the form of general relativity.

Gryb describes how it is possible to create models of curved space-time by gluing together local patches of flat space-times. “Could it be possible to do something similar in Shape Dynamics; i.e., glue together local patches of conformally flat spaces that could then be related to General Relativity?” he asks.

Nobody has yet succeeded in doing this in a model that includes the three dimensions of space and one of time, but these are early days for shape dynamics, and Gryb and others are working on the problem.

He is clearly excited by the future possibilities, saying that it suggests a new way to think about quantum gravity in scale invariant terms. “This would provide a new mechanism for being able to deal with the uncountably infinite number of degrees of freedom in the gravitational field without introducing discreteness at the Planck scale,” he says.

That’s an exciting new approach. And it is one expounded by a fresh new voice who is able to explain his ideas in a highly readable fashion to a broad audience. There is no way of knowing how this line of thinking will evolve but we’ll look forward to more instalments from Gryb.

Ref: Is Spacetime Countable?

Quantum Experiment Shows How Time ‘Emerges’ from Entanglement (The Physics arXiv Blog)

Time is an emergent phenomenon that is a side effect of quantum entanglement, say physicists. And they have the first experimental results to prove it

The Physics arXiv Blog

When the new ideas of quantum mechanics spread through science like wildfire in the first half of the 20th century, one of the first things physicists did was to apply them to gravity and general relativity. The results were not pretty.

It immediately became clear that these two foundations of modern physics were entirely incompatible. When physicists attempted to meld the approaches, the resulting equations were bedeviled with infinities, making it impossible to make sense of the results.

Then in the mid-1960s, there was a breakthrough. The physicists John Wheeler and Bryce DeWitt successfully combined the previously incompatible ideas in a key result that has since become known as the Wheeler-DeWitt equation. This is important because it avoids the troublesome infinities—a huge advance.

But it didn’t take physicists long to realise that while the Wheeler-DeWitt equation solved one significant problem, it introduced another. The new problem was that time played no role in this equation. In effect, it says that nothing ever happens in the universe, a prediction that is clearly at odds with the observational evidence.

This conundrum, which physicists call ‘the problem of time’, has proved to be a thorn in the side of modern physicists, who have tried to ignore it but with little success.

Then in 1983, the theorists Don Page and William Wootters came up with a novel solution based on the quantum phenomenon of entanglement. This is the exotic property in which two quantum particles share the same existence, even though they are physically separated.

Entanglement is a deep and powerful link and Page and Wootters showed how it can be used to measure time. Their idea was that the way a pair of entangled particles evolve is a kind of clock that can be used to measure change.

But the results depend on how the observation is made. One way to do this is to compare the change in the entangled particles with an external clock that is entirely independent of the universe. This is equivalent to a god-like observer outside the universe measuring the evolution of the particles using that external clock.

In this case, Page and Wootters showed that the particles would appear entirely unchanging—that time would not exist in this scenario.

But there is another way to do it that gives a different result. This is for an observer inside the universe to compare the evolution of the particles with the rest of the universe. In this case, the internal observer would see a change, and this difference in the evolution of entangled particles compared with everything else is an important measure of time.

This is an elegant and powerful idea. It suggests that time is an emergent phenomenon that comes about because of the nature of entanglement. And it exists only for observers inside the universe. Any god-like observer outside sees a static, unchanging universe, just as the Wheeler-DeWitt equations predict.
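The two viewpoints can be sketched numerically. Below is a minimal toy model in the spirit of Page and Wootters, using a qubit “clock” entangled with a qubit “system”; the construction and names are illustrative, not the photon experiment, but they capture the mechanism: the global state is annihilated by the total Hamiltonian, so an external observer sees nothing change, while conditioning on a clock reading yields a system state that rotates with that reading.

```python
import numpy as np

# Two qubits, a "clock" and a "system", in the singlet state
# |Psi> = (|0>_c |1>_s - |1>_c |0>_s) / sqrt(2)
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)  # basis |00>,|01>,|10>,|11>

# Total Hamiltonian H = sz (x) I + I (x) sz annihilates the singlet,
# a Wheeler-DeWitt-like constraint: the global state is stationary.
sz = np.diag([1.0, -1.0])
I = np.eye(2)
H = np.kron(sz, I) + np.kron(I, sz)
assert np.allclose(H @ psi, 0)  # external observer: nothing ever happens

def conditional_system_state(t):
    """System state relative to clock reading |t> = cos(t)|0> + sin(t)|1>."""
    clock_reading = np.array([np.cos(t), np.sin(t)])
    # <t|_c Psi : project out the clock, keep the system amplitudes
    amps = np.array([clock_reading @ psi[0::2], clock_reading @ psi[1::2]])
    return amps / np.linalg.norm(amps)

# The relative state rotates with the clock reading: time "emerges"
print(conditional_system_state(0.0))
print(conditional_system_state(np.pi / 2))
```

The internal observer, entangled with the clock, sees the system evolve; the god-like observer, checking only the global state against the constraint, sees a frozen universe.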

Of course, without experimental verification, Page and Wootters’ ideas are little more than a philosophical curiosity. And since it is never possible to have an observer outside the universe, there seemed little chance of ever testing the idea.

Until now. Today, Ekaterina Moreva at the Istituto Nazionale di Ricerca Metrologica (INRIM) in Turin, Italy, and a few pals have performed the first experimental test of Page and Wootters’ ideas. And they confirm that time is indeed an emergent phenomenon for ‘internal’ observers but absent for external ones.

The experiment involves the creation of a toy universe consisting of a pair of entangled photons and an observer that can measure their state in one of two ways. In the first, the observer measures the evolution of the system by becoming entangled with it. In the second, a god-like observer measures the evolution against an external clock which is entirely independent of the toy universe.

The experimental details are straightforward. The entangled photons each have a polarisation which can be changed by passing them through a birefringent plate. In the first set up, the observer measures the polarisation of one photon, thereby becoming entangled with it. He or she then compares this with the polarisation of the second photon. The difference is a measure of time.

In the second set up, the photons again both pass through the birefringent plates which change their polarisations. However, in this case, the observer only measures the global properties of both photons by comparing them against an independent clock.

In this case, the observer cannot detect any difference between the photons without becoming entangled with one or the other. And if there is no difference, the system appears static. In other words, time does not emerge.

“Although extremely simple, our model captures the two, seemingly contradictory, properties of the Page-Wootters mechanism,” say Moreva and co.

That’s an impressive experiment. Emergence is a popular idea in science. In particular, physicists have recently become excited about the idea that gravity is an emergent phenomenon. So it’s a relatively small step to think that time may emerge in a similar way.

What emergent gravity has lacked, of course, is an experimental demonstration that shows how it works in practice. That’s why Moreva and co’s work is significant. It places an abstract and exotic idea on firm experimental footing for the first time.

Perhaps most significant of all is the implication that quantum mechanics and general relativity are not so incompatible after all. When viewed through the lens of entanglement, the famous ‘problem of time’ just melts away.

The next step will be to extend the idea further, particularly to the macroscopic scale. It’s one thing to show how time emerges for photons, it’s quite another to show how it emerges for larger things such as humans and train timetables.

And therein lies another challenge.

Ref: Time From Quantum Entanglement: An Experimental Illustration

Why Physicists Are Saying Consciousness Is A State Of Matter, Like a Solid, A Liquid Or A Gas (The Physics arXiv Blog)

A new way of thinking about consciousness is sweeping through science like wildfire. Now physicists are using it to formulate the problem of consciousness in concrete mathematical terms for the first time

The Physics arXiv Blog

There’s a quiet revolution underway in theoretical physics. For as long as the discipline has existed, physicists have been reluctant to discuss consciousness, considering it a topic for quacks and charlatans. Indeed, the mere mention of the ‘c’ word could ruin careers.

That’s finally beginning to change thanks to a fundamentally new way of thinking about consciousness that is spreading like wildfire through the theoretical physics community. And while the problem of consciousness is far from being solved, it is finally being formulated mathematically as a set of problems that researchers can understand, explore and discuss.

Today, Max Tegmark, a theoretical physicist at the Massachusetts Institute of Technology in Cambridge, sets out the fundamental problems that this new way of thinking raises. He shows how these problems can be formulated in terms of quantum mechanics and information theory. And he explains how thinking about consciousness in this way leads to precise questions about the nature of reality that the scientific process of experiment might help to tease apart.

Tegmark’s approach is to think of consciousness as a state of matter, like a solid, a liquid or a gas. “I conjecture that consciousness can be understood as yet another state of matter. Just as there are many types of liquids, there are many types of consciousness,” he says.

He goes on to show how the particular properties of consciousness might arise from the physical laws that govern our universe. And he explains how these properties allow physicists to reason about the conditions under which consciousness arises and how we might exploit it to better understand why the world around us appears as it does.

Interestingly, the new approach to consciousness has come from outside the physics community, principally from neuroscientists such as Giulio Tononi at the University of Wisconsin in Madison.

In 2008, Tononi proposed that a system demonstrating consciousness must have two specific traits. First, the system must be able to store and process large amounts of information. In other words consciousness is essentially a phenomenon of information.

And second, this information must be integrated in a unified whole so that it is impossible to divide into independent parts. That reflects the experience that each instance of consciousness is a unified whole that cannot be decomposed into separate components.

Both of these traits can be specified mathematically allowing physicists like Tegmark to reason about them for the first time. He begins by outlining the basic properties that a conscious system must have.

Given that it is a phenomenon of information, a conscious system must be able to store information in a memory and retrieve it efficiently.

It must also be able to process this data, like a computer but one that is much more flexible and powerful than the silicon-based devices we are familiar with.

Tegmark borrows the term computronium to describe matter that can do this and cites other work showing that today’s computers underperform the theoretical limits of computing by some 38 orders of magnitude.
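A figure of that order can be roughly reproduced from the Margolus-Levitin bound, which limits a system of mean energy E to about 2E/(πħ) elementary operations per second. The numbers below follow the well-known “ultimate laptop” thought experiment (1 kg of matter with all its rest energy devoted to computation); the present-day ops/s figure is a rough assumption for illustration, not a measured benchmark:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s

# Margolus-Levitin bound: at most 2E / (pi * hbar) operations per second
mass = 1.0                      # kg of "computronium" (thought experiment)
E = mass * c**2                 # rest energy available to computation, J
max_ops = 2 * E / (math.pi * hbar)

today_ops = 1e12                # rough ops/s for a present-day machine (assumption)
gap = math.log10(max_ops / today_ops)
print(f"Theoretical limit: {max_ops:.2e} ops/s")
print(f"Orders of magnitude of headroom: {gap:.1f}")
```

With these assumptions the headroom comes out near 38-39 orders of magnitude, in line with the figure Tegmark cites.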

Clearly, there is enormous room for improvement, headroom that leaves space for the far greater performance conscious systems would require.

Next, Tegmark discusses perceptronium, defined as the most general substance that feels subjectively self-aware. This substance should not only be able to store and process information but do so in a way that forms a unified, indivisible whole. That also requires a certain amount of independence, in which the information dynamics is determined from within rather than externally.

Finally, Tegmark uses this new way of thinking about consciousness as a lens through which to study one of the fundamental problems of quantum mechanics known as the quantum factorisation problem.

This arises because quantum mechanics describes the entire universe using three mathematical entities: an object known as a Hamiltonian that describes the total energy of the system; a density matrix that describes the relationship between all the quantum states in the system; and Schrodinger’s equation which describes how these things change with time.

The problem is that when the entire universe is described in these terms, there are an infinite number of mathematical solutions that include all possible quantum mechanical outcomes and many other even more exotic possibilities.

So the problem is why we perceive the universe as the semi-classical, three-dimensional world that is so familiar. When we look at a glass of iced water, we perceive the liquid and the solid ice cubes as independent things even though they are intimately linked as part of the same system. How does this happen? Out of all possible outcomes, why do we perceive this solution?

Tegmark does not have an answer. But what’s fascinating about his approach is that it is formulated using the language of quantum mechanics in a way that allows detailed scientific reasoning. And as a result it throws up all kinds of new problems that physicists will want to dissect in more detail.

Take for example, the idea that the information in a conscious system must be unified. That means the system must contain error-correcting codes that allow any subset of up to half the information to be reconstructed from the rest.

Tegmark points out that any information stored in a special network known as a Hopfield neural net automatically has this error-correcting facility. However, he calculates that a Hopfield net about the size of the human brain, with 10^11 neurons, can store only 37 bits of integrated information.
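The error-correcting behaviour of a Hopfield net is easy to see in miniature. The sketch below is a generic toy implementation (the network size, pattern count and noise level are arbitrary choices, not Tegmark's model): Hebbian weights store a few random patterns, and iterated thresholding pulls a corrupted pattern back toward the stored memory.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                                        # neurons
patterns = rng.choice([-1, 1], size=(3, N))    # stored memories

# Hebbian learning: W_ij = (1/N) * sum_p x_i^p x_j^p, zero diagonal
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0)

def recall(state, steps=10):
    """Iterate the threshold dynamics until (hopefully) a stored memory."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Corrupt 25% of one stored pattern, then let the net clean it up
noisy = patterns[0].astype(float).copy()
flip = rng.choice(N, size=N // 4, replace=False)
noisy[flip] *= -1
recovered = recall(noisy)
print("bits matching stored pattern:", int((recovered == patterns[0]).sum()), "of", N)
```

The point of the exercise: the whole network, not any single neuron, carries each memory, which is exactly the "integrated" storage Tononi's criterion demands.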

“This leaves us with an integration paradox: why does the information content of our conscious experience appear to be vastly larger than 37 bits?” asks Tegmark.

That’s a question that many scientists might end up pondering in detail. For Tegmark, this paradox suggests that his mathematical formulation of consciousness is missing a vital ingredient. “This strongly implies that the integration principle must be supplemented by at least one additional principle,” he says. Suggestions please in the comments section!

And yet the power of this approach is in the assumption that consciousness does not lie beyond our ken; that there is no “secret sauce” without which it cannot be tamed.

At the beginning of the 20th century, a group of young physicists embarked on a quest to explain a few strange but seemingly small anomalies in our understanding of the universe. In deriving the new theories of relativity and quantum mechanics, they ended up changing the way we comprehend the cosmos. These physicists, at least some of them, are now household names.

Could it be that a similar revolution is currently underway at the beginning of the 21st century?

Ref: Consciousness as a State of Matter

Telepathic Particles (Folha de S.Paulo)

illustration: JOSÉ PATRÍCIO

28/12/2014 03h08

SUMMARY Fifty years ago, the Northern Irish physicist John Bell (1928-90) arrived at a result that demonstrates the “ghostly” nature of reality in the atomic and subatomic world. His theorem is today seen as the most effective weapon against espionage, something that will guarantee, in a perhaps not-too-distant future, the absolute privacy of information.

A South American country wants to keep its strategic information private, but finds itself forced to buy the equipment for that task from a far more technologically advanced country. Those devices, however, may be “bugged”.

An almost obvious question then arises: will there ever be 100% guaranteed privacy? Yes. And that holds even for a country that buys its anti-espionage technology from the “enemy”.

What makes the affirmative answer above possible is a result that has been called the most profound in science: Bell’s theorem, which addresses one of the sharpest and most penetrating philosophical questions ever posed, one that underpins knowledge itself: what is reality? The theorem, which this year marked its 50th anniversary, guarantees that reality, in its most intimate dimension, is unimaginably strange.

The history of the theorem, of its experimental confirmation and of its modern applications has several beginnings. Perhaps the most appropriate one here is a paper published in 1935 by the German-born physicist Albert Einstein (1879-1955) and two collaborators, the Russian Boris Podolsky (1896-1966) and the American Nathan Rosen (1909-95).

Known as the EPR paradox (from the initials of the authors’ surnames), the thought experiment described there summed up Einstein’s long-standing dissatisfaction with the direction quantum mechanics, the theory of phenomena at the atomic scale, had taken. What initially left a bitter taste for the author of relativity was the fact that this theory, developed in the 1920s, provides only the probability that a phenomenon will occur. That contrasted with the “certainty” (determinism) of so-called classical physics, which governs macroscopic phenomena.

Einstein was, in truth, a stranger to his own creature, for he had been one of the fathers of quantum theory. With some initial reluctance, he eventually digested the indeterminism of quantum mechanics. One thing, however, he could never swallow: non-locality, that is, the exceedingly strange fact that something here can instantaneously influence something over there, even if that “there” is very far away. Einstein believed that distant things had independent realities.

Einstein went so far as to compare non-locality (it is worth stressing that this is only an analogy) to a kind of telepathy. But the most famous name Einstein gave this strangeness was “spooky action at a distance”.


The essence of the EPR argument is this: under special conditions, two particles that have interacted and separated end up in a state called entangled, as if they were “telepathic twins”. Less pictorially, the particles are said to be connected (or correlated, as physicists prefer) and remain so even after the interaction.

The greater strangeness comes now: if one of the particles in the pair is disturbed (that is, subjected to any measurement, as physicists say), the other “feels” that disturbance instantaneously. And this is independent of the distance between the two particles. They could be light-years apart.

The authors of the EPR paradox argued that it was impossible to imagine nature permitting an instantaneous connection between the two objects. And, through a complex logical argument, Einstein, Podolsky and Rosen concluded: quantum mechanics must be incomplete. Hence, provisional.


A hasty (though very common) reading of the EPR paradox is to say that instantaneous action (non-local, in the vocabulary of physics) is impossible because it would violate Einstein’s relativity: nothing can travel faster than light in a vacuum, 300,000 km/s.

Non-locality, however, would act only in the microscopic realm; it cannot be used, for example, to send or receive messages. In the macroscopic world, if we want to do that, we must use signals that never travel faster than light in a vacuum. That is, relativity is preserved.

Non-locality has to do with persistent (and mysterious) connections between two objects: interfering with (altering, changing, etc.) one of them interferes with (alters, changes, etc.) the other. Instantaneously. The simple act of observing one of them interferes with the state of the other.

Einstein did not like the final version of the 1935 paper, which he saw only in print (the writing had been left to Podolsky). He had imagined a less philosophical text. A few months later came the reply to EPR from the Danish physicist Niels Bohr (1885-1962). A few years earlier, Einstein and Bohr had starred in what many consider one of the most important philosophical debates in history, whose theme was the “soul of nature”, in the words of one philosopher of physics.

In his reply to EPR, Bohr reaffirmed both the completeness of quantum mechanics and his anti-realist view of the atomic universe: one cannot say that a quantum entity (electron, proton, photon, etc.) has a property before that property is measured. In other words, such a property would not be real, would not lie hidden, waiting for a measuring device or any interference (even a glance) from the observer. On this point, Einstein would later quip: “Does the Moon only exist when we look at it?”


One way to understand what a deterministic theory is: a theory in which the property to be measured is presumed to be present (or “hidden”) in the object and can be determined with certainty. Physicists give this kind of theory a quite fitting name: hidden-variable theory.

In a hidden-variable theory, the property in question (known or not) exists; it is real. Hence philosophers sometimes label this scenario realism. Einstein preferred the term “objective reality”: things exist without needing to be observed.

But in the 1930s a theorem had supposedly proved that no version of quantum mechanics could be a hidden-variable theory. The feat belonged to one of the greatest mathematicians of all time, the Hungarian John von Neumann (1903-57). And, as is not rare in the history of science, the argument from authority prevailed over the authority of the argument.

Von Neumann’s theorem was perfect from the mathematical point of view, but “wrong, silly” and “childish” (as it came to be described) from the standpoint of physics, for it started from a mistaken premise. We know today that Einstein was suspicious of that premise: “Must we accept this as true?”, he asked two colleagues. But he went no further.

Von Neumann’s theorem did serve, however, to all but trample the deterministic (hence hidden-variable) version of quantum mechanics produced in 1927 by the French nobleman Louis de Broglie (1892-1987), winner of the 1929 Nobel Prize in Physics, who ended up abandoning that line of research.

For exactly two decades, von Neumann’s theorem and the ideas of Bohr, who gathered around himself an influential school of notable young physicists, discouraged attempts to seek a deterministic version of quantum mechanics.

But in 1952 the American physicist David Bohm (1917-92), inspired by de Broglie’s ideas, presented a hidden-variable version of quantum mechanics, today called Bohmian quantum mechanics in homage to the researcher, who worked in the 1950s at the University of São Paulo (USP) while persecuted in the US by McCarthyism.

Bohmian quantum mechanics had two essential features: 1) it was deterministic (that is, a hidden-variable theory); 2) it was non-local (that is, it admitted action at a distance), which made Einstein, a committed localist, lose his initial interest in it.


Enter the main character of this story: the Northern Irish physicist John Stewart Bell, who, on learning of Bohmian mechanics, became certain of one thing: the “impossible had been done”. More than that: von Neumann was wrong.

Bohm’s quantum mechanics, ignored at first by the physics community, had just fallen on fertile ground: since his university days, Bell had been mulling over, as a “hobby”, the philosophical foundations of quantum mechanics (EPR, von Neumann, de Broglie, etc.). And he had taken sides in those debates: he was an avowed Einsteinian and found Bohr obscure.

Bell was born on June 28, 1928, in Belfast, into an Anglican family of modest means. He should have left school at 14, but at the insistence of his mother, who saw the intellectual gifts of the second of her four children, he was sent to a technical secondary school, where he learned practical skills (carpentry, building, librarianship, etc.).

Graduating at 16, he tried office jobs, but fate would have him end up as a technician preparing experiments in the physics department of Queen’s University, also in Belfast.

The professors soon noticed the technician’s interest in physics and began encouraging him with reading suggestions and classes. On a scholarship, Bell graduated in 1948 in experimental physics and, the following year, in mathematical physics. In both cases, with honours.

From 1949 to 1960, Bell worked at the AERE (Atomic Energy Research Establishment) in Harwell, in the United Kingdom. There he met his future wife, the physicist Mary Ross, his interlocutor in several works on physics. “When I look through these papers again, I see her everywhere,” Bell said at a tribute he received in 1987, three years before his death from a cerebral haemorrhage.

He defended his doctorate in 1956, after a period at the University of Birmingham under the supervision of the German-British physicist Rudolf Peierls (1907-95). The thesis includes a proof of a very important theorem of physics (the CPT theorem), which had been discovered shortly before by a contemporary of his.


Disagreeing with the direction of research at the AERE, the couple decided to trade stable jobs for temporary positions at the European Organization for Nuclear Research (CERN) in Geneva, Switzerland. He went to the theoretical physics division; she, to the accelerator division.

Bell spent 1963 and 1964 working in the US. There he found time to devote himself to his intellectual “hobby” and to gestate the result that would mark his career and, decades later, bring him fame.

He asked himself the following question: would the non-locality of Bohm’s hidden-variable theory be a feature of any realist version of quantum mechanics? In other words, if things exist without being observed, must they necessarily establish among themselves that spooky action at a distance?

Bell’s theorem, published in 1964, is also known as Bell’s inequality. Its mathematics is not complex. In very simplified form, we can think of the theorem as an inequality, x ≤ 2 (x less than or equal to two), where “x” stands, for our purposes here, for the results of an experiment.

The most interesting consequences of Bell’s theorem would arise if such an experiment violated the inequality, that is, showed that x > 2 (x greater than two). In that case, we would have to give up one of two assumptions: 1) realism (things exist without being observed); 2) locality (the quantum world does not allow connections faster than light).
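The bound has a standard concrete form, the CHSH version of Bell's inequality, in which the quantity called "x" above is a sum S of correlations and any local hidden-variable theory forces |S| ≤ 2. A minimal sketch (assuming the textbook singlet-state correlation E(a, b) = −cos 2(a − b) for photon polarisation measurements at analyzer angles a and b) shows the quantum prediction exceeding that bound:

```python
import numpy as np

# Singlet-state correlation between polarisation analyzers at angles a and b
def E(a, b):
    return -np.cos(2 * (a - b))

# CHSH combination: local hidden-variable theories require |S| <= 2
a1, a2 = 0.0, np.pi / 4            # Alice's two analyzer settings
b1, b2 = np.pi / 8, 3 * np.pi / 8  # Bob's two analyzer settings
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(f"|S| = {abs(S):.3f}")  # prints |S| = 2.828, i.e. 2*sqrt(2) > 2
```

These are the angle choices at which quantum mechanics predicts the maximum violation, 2√2 ≈ 2.83, and it is essentially this number that the experiments described below kept finding.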

The paper containing the theorem had little initial impact. Bell had written another paper before it, essential to reaching the result, but owing to a journal editor’s error it was only published in 1966.

REBELLION The revival of Bell’s ideas, and with them EPR and Bohm, gained momentum from factors outside physics. Many years after the turbulent late 1960s, the American physicist John Clauser would recall the period: “The Vietnam War dominated the political thoughts of my generation. Being a young physicist in that revolutionary period, I naturally wanted to shake the world.”

Science, like the rest of the world, was marked by the spirit of the peace-and-love generation; by the civil-rights struggle; by May 1968; by Eastern philosophies; by psychedelic drugs; by telepathy. In a word: by rebellion. Which, translated into physics, meant devoting oneself to an area considered heretical within academia: the interpretations (or foundations) of quantum mechanics. But doing so considerably raised a young physicist’s chances of ruining his career: EPR, Bohm and Bell were considered philosophical subjects, not physics.

The final element in giving this taboo field of study some breathing room was the 1973 oil crisis, which reduced the supply of posts for young researchers, physicists included. To rebellion was added recession.

Clauser, with three colleagues, Abner Shimony, Richard Holt and Michael Horne, published his first ideas on the subject in 1969, under the title “Proposed Experiment to Test Local Hidden-Variable Theories”. The quartet did so partly because they had noticed that Bell’s inequality could be tested with photons, which are easier to generate. Until then, more complicated experimental arrangements had been envisaged.

In 1972 that proposal became an experiment, carried out by Clauser and Stuart Freedman (1944-2012), and Bell’s inequality was violated.

The world seemed to be non-local (ironically, Clauser was a localist!). But only seemed: for about a decade the experiment remained misunderstood and therefore disregarded by the physics community. Yet those results helped reinforce something important: the foundations of quantum mechanics were not just philosophy. They were also experimental physics.


O aperfeiçoamento de equipamentos de óptica (incluindo lasers) permitiu que, em 1982, um experimento se tornasse um clássico da área.

Pouco antes, o físico francês Alain Aspect havia decidido iniciar um doutorado tardio, mesmo sendo um físico experimental experiente. Escolheu como tema o teorema de Bell. Foi ao encontro do colega norte-irlandês no Cern. Em entrevista ao físico Ivan dos Santos Oliveira, do Centro Brasileiro de Pesquisas Físicas, no Rio de Janeiro, e ao autor deste texto, Aspect contou o seguinte diálogo entre ele e Bell. “Você tem um cargo estável?”, perguntou Bell. “Sim”, disse Aspect. Caso contrário, “você seria muito pressionado a não fazer o experimento”, disse Bell.

The exchange Aspect relates shows that, almost two decades after the seminal 1964 paper, the subject was still surrounded by prejudice.

In an experiment performed with pairs of entangled photons, nature once again displayed its nonlocal character: Bell’s inequality was violated, the data showing a correlation parameter S > 2, above the bound obeyed by any local theory. In 2007, for example, the group of the Austrian physicist Anton Zeilinger verified the violation of the inequality using photons separated by 144 km.
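To make that bound concrete, here is a minimal sketch of the CHSH form of Bell’s inequality (the angles below are the standard illustrative choices, not values taken from this article): any local theory obeys S ≤ 2, while quantum mechanics predicts S = 2√2 ≈ 2.83 for entangled photon pairs.

```python
import math

def chsh(E, a, a2, b, b2):
    """CHSH parameter S = |E(a,b) - E(a,b') + E(a',b) + E(a',b')|."""
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

# Quantum prediction for polarisation-entangled photons in the singlet
# state: the correlation depends only on the relative analyser angle.
def E_quantum(a, b):
    return -math.cos(2 * (a - b))  # factor of 2: polarisation, not spin

# Analyser angles (radians) that maximise the violation.
a, a2 = 0.0, math.pi / 4
b, b2 = math.pi / 8, 3 * math.pi / 8

S = chsh(E_quantum, a, a2, b, b2)
print(S)  # 2.828... = 2*sqrt(2), above the local-realist bound of 2
```

The experiments discussed here measured exactly this kind of quantity and found it above 2, which no local hidden-variable model can reproduce.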

In the interview in Brazil, Aspect said that until then the theorem was barely known among physicists, but that it would become famous after his doctoral thesis, on whose examining committee, incidentally, Bell himself sat.


After all, why does nature permit the Einsteinian “telepathy”? It is strange, to say the least, to think that a particle disturbed here can somehow alter the state of its companion at the far reaches of the universe.

There are several ways to interpret the consequences of what Bell did. To begin with, some (badly) mistaken ones: 1) nonlocality cannot exist, because it would violate relativity; 2) hidden-variable theories of quantum mechanics (Bohm, de Broglie etc.) are completely ruled out; 3) quantum mechanics is genuinely indeterministic; 4) irrealism, the view that things exist only when observed, is the final word. The list is long.

When the theorem was published, one shallow (and erroneous) reading held that it did not matter, since von Neumann’s theorem had already ruled out hidden variables and quantum mechanics would therefore indeed be indeterministic. Among those who reject nonlocality there are even some who go so far as to say that Einstein, Bohm and Bell did not understand what they themselves had done.

The American philosopher of physics Tim Maudlin, of New York University, offers a long list of such misconceptions in two excellent articles: “What Bell Did” and “Reply to Werner”, the latter a response to comments on the former.

For Maudlin, renowned in his field, Bell’s theorem and its violation mean one thing only: nature is nonlocal (“spooky”), and there is therefore no hope for locality as Einstein would have wished; in this sense, one may say that Bell showed Einstein was wrong. Consequently, any deterministic (realist) theory that reproduces the experimental results obtained so far by quantum mechanics (incidentally, the most precise theory in the history of science) must necessarily be nonlocal.

From Aspect’s time to today, major technological developments have made possible something unthinkable a few decades ago: studying a single quantum entity (an atom, electron, photon etc.) in isolation. This gave rise to the field of quantum information, which encompasses the study of quantum cryptography, which promises absolutely secure data, and of quantum computers, extremely fast machines. In a sense, it is philosophy turned into experimental physics.

Many of these advances are owed essentially to the rebelliousness of a generation of young physicists who wanted to defy the “system”.

A delightful account of that period can be found in “How the Hippies Saved Physics” (W. W. Norton & Company, 2011), by the American historian of physics David Kaiser. A detailed historical analysis is given in “Quantum Dissidents: Research on the Foundations of Quantum Theory circa 1970” (subscribers only), by the historian of physics Olival Freire Jr., of the Federal University of Bahia.

For readers more interested in the philosophical side, there are the two award-winning volumes of “Conceitos de Física Quântica” (Editora Livraria da Física, 2003), by the physicist and philosopher Osvaldo Pessoa Jr., of USP.


By this point, the reader may be wondering what Bell’s theorem has to do with 100% guaranteed privacy.

In the future, it is (quite) likely that information will be sent and received in the form of entangled photons. Recent research in quantum cryptography guarantees that it would suffice to subject these particles of light to the Bell-inequality test: if the inequality is violated, there is no possibility that the message has been improperly eavesdropped on. And the test is independent of the equipment used to send or receive the photons. The theoretical basis for this can be found, for example, in “The Ultimate Physical Limits of Privacy” (subscribers only), by Artur Ekert and Renato Renner.
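As a toy illustration of the idea (a sketch only, not the actual Ekert-Renner protocol), one can compare the CHSH parameter for an undisturbed entangled pair with the same quantity after a hypothetical intercept-resend eavesdropper measures each photon in transit in a fixed basis: measurement collapses the entanglement, the violation disappears, and the spy is betrayed.

```python
import math

def chsh(E, a, a2, b, b2):
    """CHSH parameter S for a given two-party correlation function E."""
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

def E_singlet(a, b):
    return -math.cos(2 * (a - b))  # undisturbed entangled photon pair

def E_intercepted(a, b, e=0.0):
    # Naive intercept-resend attack: Eve measures one photon at angle e
    # and resends the eigenstate she found. Alice's and Bob's outcomes
    # become conditionally independent given Eve's result.
    return -math.cos(2 * (a - e)) * math.cos(2 * (b - e))

angles = (0.0, math.pi / 4, math.pi / 8, 3 * math.pi / 8)
print(chsh(E_singlet, *angles))      # ~2.83: violation, channel intact
print(chsh(E_intercepted, *angles))  # ~1.41: no violation, spying detected
```

The strength of the device-independent argument is that this check uses only the observed outcome statistics, never the internal workings of the photon sources or detectors.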

In a not-too-distant future, perhaps, Bell’s theorem will become the most powerful weapon against espionage. That is a tremendous comfort in a world that seems headed toward zero privacy. It is also an immense outgrowth of a philosophical question which, according to the American physicist Henry Stapp, a specialist in the foundations of quantum mechanics, became “the most profound result of science”. Deservedly so. After all, why did nature opt for “spooky action at a distance”?

The answer is a mystery. A pity that the question is not even mentioned in undergraduate physics courses in Brazil.

CÁSSIO LEITE VIEIRA, 54, a journalist at Instituto Ciência Hoje (RJ), is the author of “Einstein – O Reformulador do Universo” (Odysseus).
JOSÉ PATRÍCIO, 54, a visual artist from Pernambuco, is taking part in the exhibition “Asas a Raízes” at Caixa Cultural do Rio, from January 17 to March 15.

You’re powered by quantum mechanics. No, really… (The Guardian)

For years biologists have been wary of applying the strange world of quantum mechanics, where particles can be in two places at once or connected over huge distances, to their own field. But it can help to explain some amazing natural phenomena we take for granted


The Observer, Sunday 26 October 2014


According to quantum biology, the European robin has a ‘sixth sense’ in the form of a protein in its eye sensitive to the orientation of the Earth’s magnetic field, allowing it to ‘see’ which way to migrate. Photograph: Helmut Heintges/Corbis

Every year, around about this time, thousands of European robins escape the oncoming harsh Scandinavian winter and head south to the warmer Mediterranean coasts. How they find their way unerringly on this 2,000-mile journey is one of the true wonders of the natural world. For unlike many other species of migratory birds, marine animals and even insects, they do not rely on landmarks, ocean currents, the position of the sun or a built-in star map. Instead, they are among a select group of animals that use a remarkable navigation sense – remarkable for two reasons. The first is that they are able to detect tiny variations in the direction of the Earth’s magnetic field – astonishing in itself, given that this magnetic field is 100 times weaker than even that of a measly fridge magnet. The second is that robins seem to be able to “see” the Earth’s magnetic field via a process that even Albert Einstein referred to as “spooky”. The birds’ in-built compass appears to make use of one of the strangest features of quantum mechanics.

Over the past few years, the European robin, and its quantum “sixth sense”, has emerged as the pin-up for a new field of research, one that brings together the wonderfully complex and messy living world and the counterintuitive, ethereal but strangely orderly world of atoms and elementary particles in a collision of disciplines that is as astonishing and unexpected as it is exciting. Welcome to the new science of quantum biology.

Most people have probably heard of quantum mechanics, even if they don’t really know what it is about. Certainly, the idea that it is a baffling and difficult scientific theory understood by just a tiny minority of smart physicists and chemists has become part of popular culture. Quantum mechanics describes a reality on the tiniest scales that is, famously, very weird indeed; a world in which particles can exist in two or more places at once, spread themselves out like ghostly waves, tunnel through impenetrable barriers and even possess instantaneous connections that stretch across vast distances.

But despite this bizarre description of the basic building blocks of the universe, quantum mechanics has been part of all our lives for a century. Its mathematical formulation was completed in the mid-1920s and has given us a remarkably complete account of the world of atoms and their even smaller constituents, the fundamental particles that make up our physical reality. For example, the ability of quantum mechanics to describe the way that electrons arrange themselves within atoms underpins the whole of chemistry, material science and electronics; and is at the very heart of most of the technological advances of the past half-century. Without the success of the equations of quantum mechanics in describing how electrons move through materials such as semiconductors we would not have developed the silicon transistor and, later, the microchip and the modern computer.

However, if quantum mechanics can so beautifully and accurately describe the behaviour of atoms with all their accompanying weirdness, then why aren’t all the objects we see around us, including us – which are after all only made up of these atoms – also able to be in two places at once, pass through impenetrable barriers or communicate instantaneously across space? One obvious difference is that the quantum rules apply to single particles or systems consisting of just a handful of atoms, whereas much larger objects consist of trillions of atoms bound together in mindboggling variety and complexity. Somehow, in ways we are only now beginning to understand, most of the quantum weirdness washes away ever more quickly the bigger the system is, until we end up with the everyday objects that obey the familiar rules of what physicists call the “classical world”. In fact, when we want to detect the delicate quantum effects in everyday-size objects we have to go to extraordinary lengths to do so – freezing them to within a whisker of absolute zero and performing experiments in near-perfect vacuums.

Quantum effects were certainly not expected to play any role inside the warm, wet and messy world of living cells, so most biologists have thus far ignored quantum mechanics completely, preferring their traditional ball-and-stick models of the molecular structures of life. Meanwhile, physicists have been reluctant to venture into the messy and complex world of the living cell; why should they when they can test their theories far more cleanly in the controlled environment of the lab where they at least feel they have a chance of understanding what is going on?


Erwin Schrödinger, whose book What is Life? suggested that the macroscopic order of life was based on order at its quantum level. Photograph: Bettmann/CORBIS

Yet, 70 years ago, the Austrian Nobel prize-winning physicist and quantum pioneer, Erwin Schrödinger, suggested in his famous book, What is Life?, that, deep down, some aspects of biology must be based on the rules and orderly world of quantum mechanics. His book inspired a generation of scientists, including the discoverers of the double-helix structure of DNA, Francis Crick and James Watson. Schrödinger proposed that there was something unique about life that distinguishes it from the rest of the non-living world. He suggested that, unlike inanimate matter, living organisms can somehow reach down to the quantum domain and utilise its strange properties in order to operate the extraordinary machinery within living cells.

Schrödinger’s argument was based on the paradoxical fact that the laws of classical physics, such as those of Newtonian mechanics and thermodynamics, are ultimately based on disorder. Consider a balloon. It is filled with trillions of molecules of air all moving entirely randomly, bumping into one another and the inside wall of the balloon. Each molecule is governed by orderly quantum laws, but when you add up the random motions of all the molecules and average them out, their individual quantum behaviour washes out and you are left with the gas laws that predict, for example, that the balloon will expand by a precise amount when heated. This is because heat energy makes the air molecules move a little bit faster, so that they bump into the walls of the balloon with a bit more force, pushing the walls outward a little bit further. Schrödinger called this kind of law “order from disorder” to reflect the fact that this apparent macroscopic regularity depends on random motion at the level of individual particles.
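The balloon argument above can be illustrated numerically: the average over N random molecular contributions fluctuates less and less as N grows, roughly as 1/√N, which is why gas laws look deterministic despite the underlying chaos. A minimal sketch (uniform random numbers standing in for molecular speeds; the numbers are illustrative, not taken from the book):

```python
import random

def relative_spread(n, trials=200, seed=1):
    """Relative standard deviation of the mean of n random 'molecular'
    contributions, estimated over many trials. Individual motion is
    random, but the average sharpens as n grows: order from disorder."""
    rng = random.Random(seed)
    means = [sum(rng.random() for _ in range(n)) / n for _ in range(trials)]
    mu = sum(means) / trials
    var = sum((m - mu) ** 2 for m in means) / trials
    return var ** 0.5 / mu

for n in (10, 100, 10000):
    print(n, relative_spread(n))
# the spread falls roughly as 1/sqrt(n): macroscopic regularity
# emerges from microscopic randomness
```

With trillions of molecules rather than ten thousand, the fluctuations become utterly negligible, which is the sense in which classical gas laws are “order from disorder”.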

But what about life? Schrödinger pointed out that many of life’s properties, such as heredity, depend on molecules made of comparatively few particles – certainly too few to benefit from the order-from-disorder rules of thermodynamics. But life was clearly orderly. Where did this orderliness come from? Schrödinger suggested that life was based on a novel physical principle whereby its macroscopic order is a reflection of quantum-level order, rather than the molecular disorder that characterises the inanimate world. He called this new principle “order from order”. But was he right?

Up until a decade or so ago, most biologists would have said no. But as 21st-century biology probes the dynamics of ever-smaller systems – even individual atoms and molecules inside living cells – the signs of quantum mechanical behaviour in the building blocks of life are becoming increasingly apparent. Recent research indicates that some of life’s most fundamental processes do indeed depend on weirdness welling up from the quantum undercurrent of reality. Here are a few of the most exciting examples.

Enzymes are the workhorses of life. They speed up chemical reactions so that processes that would otherwise take thousands of years proceed in seconds inside living cells. Life would be impossible without them. But how they accelerate chemical reactions by such enormous factors, often more than a trillion-fold, has been an enigma. Experiments over the past few decades, however, have shown that enzymes make use of a remarkable trick called quantum tunnelling to accelerate biochemical reactions. Essentially, the enzyme encourages electrons and protons to vanish from one position in a biomolecule and instantly rematerialise in another, without passing through the gap in between – a kind of quantum teleportation.

And before you throw your hands up in incredulity, it should be stressed that quantum tunnelling is a very familiar process in the subatomic world and is responsible for such processes as radioactive decay of atoms and even the reason the sun shines (by turning hydrogen into helium through the process of nuclear fusion). Enzymes have made every single biomolecule in your cells and every cell of every living creature on the planet, so they are essential ingredients of life. And they dip into the quantum world to help keep us alive.
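The strong mass dependence of tunnelling can be seen from the standard square-barrier WKB estimate, T ≈ exp(-2d√(2mV)/ħ). The sketch below uses illustrative barrier numbers (1 eV high, 0.1 nm wide), not values from any specific enzyme; it shows why a light electron tunnels readily while the much heavier proton’s probability collapses by many orders of magnitude.

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
EV = 1.602176634e-19     # one electronvolt in joules
M_E = 9.1093837015e-31   # electron mass, kg
M_P = 1.67262192369e-27  # proton mass, kg

def tunnel_probability(mass, barrier_ev, width_m):
    """WKB estimate T ~ exp(-2 * width * sqrt(2*m*V) / hbar)
    for a square barrier of height V and given width."""
    kappa = math.sqrt(2 * mass * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

# Illustrative 1 eV barrier, 0.1 nm wide (hypothetical numbers).
print(tunnel_probability(M_E, 1.0, 1e-10))  # electron: appreciable (~0.36)
print(tunnel_probability(M_P, 1.0, 1e-10))  # proton: astronomically smaller
```

This exponential sensitivity to mass and distance is part of why enzyme-assisted proton tunnelling is considered remarkable: the protein has to bring donor and acceptor sites extremely close for the effect to matter at all.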

Another vital process in biology is of course photosynthesis. Indeed, many would argue that it is the most important biochemical reaction on the planet, responsible for turning light, air, water and a few minerals into grass, trees, grain, apples, forests and, ultimately, the rest of us who eat either the plants or the plant-eaters.

The initiating event is the capture of light energy by a chlorophyll molecule and its conversion into chemical energy that is harnessed to fix carbon dioxide and turn it into plant matter. The process whereby this light energy is transported through the cell has long been a puzzle because it can be so efficient – close to 100% and higher than any artificial energy transport process.


Sunlight shines through chestnut tree leaves. Quantum biology can explain why photosynthesis in plants is so efficient. Photograph: Getty Images/Visuals Unlimited

The first step in photosynthesis is the capture of a tiny packet of energy from sunlight that then has to hop through a forest of chlorophyll molecules to make its way to a structure called the reaction centre where its energy is stored. The problem is understanding how the packet of energy appears to so unerringly find the quickest route through the forest. An ingenious experiment, first carried out in 2007 in Berkeley, California, probed what was going on by firing short bursts of laser light at photosynthetic complexes. The research revealed that the energy packet was not hopping haphazardly about, but performing a neat quantum trick. Instead of behaving like a localised particle travelling along a single route, it behaves quantum mechanically, like a spread-out wave, and samples all possible routes at once to find the quickest way.

A third example of quantum trickery in biology – the one we introduced in our opening paragraph – is the mechanism by which birds and other animals make use of the Earth’s magnetic field for navigation. Studies of the European robin suggest that it has an internal chemical compass that utilises an astonishing quantum concept called entanglement, which Einstein dismissed as “spooky action at a distance”. This phenomenon describes how two separated particles can remain instantaneously connected via a weird quantum link. The current best guess is that this takes place inside a protein in the bird’s eye, where quantum entanglement makes a pair of electrons highly sensitive to the angle of orientation of the Earth’s magnetic field, allowing the bird to “see” which way it needs to fly.

All these quantum effects have come as a big surprise to most scientists who believed that the quantum laws only applied in the microscopic world. All delicate quantum behaviour was thought to be washed away very quickly in bigger objects, such as living cells, containing the turbulent motion of trillions of randomly moving particles. So how does life manage its quantum trickery? Recent research suggests that rather than avoiding molecular storms, life embraces them, rather like the captain of a ship who harnesses turbulent gusts and squalls to maintain his ship upright and on course.

Just as Schrödinger predicted, life seems to be balanced on the boundary between the sensible everyday world of the large and the weird and wonderful quantum world, a discovery that is opening up an exciting new field of 21st-century science.

Life on the Edge: The Coming of Age of Quantum Biology by Jim Al-Khalili and Johnjoe McFadden will be published by Bantam Press on 6 November.

New math and quantum mechanics: Fluid mechanics suggests alternative to quantum orthodoxy (Science Daily)

Date: September 12, 2014

Source: Massachusetts Institute of Technology

Summary: The central mystery of quantum mechanics is that small chunks of matter sometimes seem to behave like particles, sometimes like waves. For most of the past century, the prevailing explanation of this conundrum has been what’s called the “Copenhagen interpretation” — which holds that, in some sense, a single particle really is a wave, smeared out across the universe, that collapses into a determinate location only when observed. But some founders of quantum physics — notably Louis de Broglie — championed an alternative interpretation, known as “pilot-wave theory,” which posits that quantum particles are borne along on some type of wave. According to pilot-wave theory, the particles have definite trajectories, but because of the pilot wave’s influence, they still exhibit wavelike statistics. Now a professor of applied mathematics believes that pilot-wave theory deserves a second look.

Close-ups of an experiment conducted by John Bush and his student Daniel Harris, in which a bouncing droplet of fluid was propelled across a fluid bath by waves it generated. Credit: Dan Harris

The central mystery of quantum mechanics is that small chunks of matter sometimes seem to behave like particles, sometimes like waves. For most of the past century, the prevailing explanation of this conundrum has been what’s called the “Copenhagen interpretation” — which holds that, in some sense, a single particle really is a wave, smeared out across the universe, that collapses into a determinate location only when observed.

But some founders of quantum physics — notably Louis de Broglie — championed an alternative interpretation, known as “pilot-wave theory,” which posits that quantum particles are borne along on some type of wave. According to pilot-wave theory, the particles have definite trajectories, but because of the pilot wave’s influence, they still exhibit wavelike statistics.

John Bush, a professor of applied mathematics at MIT, believes that pilot-wave theory deserves a second look. That’s because Yves Couder, Emmanuel Fort, and colleagues at the University of Paris Diderot have recently discovered a macroscopic pilot-wave system whose statistical behavior, in certain circumstances, recalls that of quantum systems.

Couder and Fort’s system consists of a bath of fluid vibrating at a rate just below the threshold at which waves would start to form on its surface. A droplet of the same fluid is released above the bath; where it strikes the surface, it causes waves to radiate outward. The droplet then begins moving across the bath, propelled by the very waves it creates.

“This system is undoubtedly quantitatively different from quantum mechanics,” Bush says. “It’s also qualitatively different: There are some features of quantum mechanics that we can’t capture, some features of this system that we know aren’t present in quantum mechanics. But are they philosophically distinct?”

Tracking trajectories

Bush believes that the Copenhagen interpretation sidesteps the technical challenge of calculating particles’ trajectories by denying that they exist. “The key question is whether a real quantum dynamics, of the general form suggested by de Broglie and the walking drops, might underlie quantum statistics,” he says. “While undoubtedly complex, it would replace the philosophical vagaries of quantum mechanics with a concrete dynamical theory.”

Last year, Bush and one of his students — Jan Molacek, now at the Max Planck Institute for Dynamics and Self-Organization — did for their system what the quantum pioneers couldn’t do for theirs: They derived an equation relating the dynamics of the pilot waves to the particles’ trajectories.

In their work, Bush and Molacek had two advantages over the quantum pioneers, Bush says. First, in the fluidic system, both the bouncing droplet and its guiding wave are plainly visible. If the droplet passes through a slit in a barrier — as it does in the re-creation of a canonical quantum experiment — the researchers can accurately determine its location. The only way to perform a measurement on an atomic-scale particle is to strike it with another particle, which changes its velocity.

The second advantage is the relatively recent development of chaos theory. Pioneered by MIT’s Edward Lorenz in the 1960s, chaos theory holds that many macroscopic physical systems are so sensitive to initial conditions that, even though they can be described by a deterministic theory, they evolve in unpredictable ways. A weather-system model, for instance, might yield entirely different results if the wind speed at a particular location at a particular time is 10.01 mph or 10.02 mph.
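The sensitivity described here is easy to demonstrate with the simplest chaotic system, the logistic map (a standard textbook example, not the weather model itself): a perturbation in the seventh decimal place of the initial condition grows until the two deterministic trajectories bear no resemblance to each other.

```python
def trajectory(x0, steps, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x), chaotic at r = 4."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.2, 50)
b = trajectory(0.2 + 1e-7, 50)  # perturb the 7th decimal place

diffs = [abs(x - y) for x, y in zip(a, b)]
print(diffs[1])    # still ~1e-7 after one step
print(max(diffs))  # order one: the trajectories have fully diverged
```

The initial error roughly doubles each iteration, so after a few dozen steps prediction is hopeless even though every step is perfectly deterministic, exactly the property that makes chaotic pilot-wave dynamics a candidate source of quantum-like statistics.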

The fluidic pilot-wave system is also chaotic. It’s impossible to measure a bouncing droplet’s position accurately enough to predict its trajectory very far into the future. But in a recent series of papers, Bush, MIT professor of applied mathematics Ruben Rosales, and graduate students Anand Oza and Dan Harris applied their pilot-wave theory to show how chaotic pilot-wave dynamics leads to the quantumlike statistics observed in their experiments.

What’s real?

In a review article appearing in the Annual Review of Fluid Mechanics, Bush explores the connection between Couder’s fluidic system and the quantum pilot-wave theories proposed by de Broglie and others.

The Copenhagen interpretation is essentially the assertion that in the quantum realm, there is no description deeper than the statistical one. When a measurement is made on a quantum particle, and the wave form collapses, the determinate state that the particle assumes is totally random. According to the Copenhagen interpretation, the statistics don’t just describe the reality; they are the reality.

But despite the ascendancy of the Copenhagen interpretation, the intuition that physical objects, no matter how small, can be in only one location at a time has been difficult for physicists to shake. Albert Einstein, who famously doubted that God plays dice with the universe, worked for a time on what he called a “ghost wave” theory of quantum mechanics, thought to be an elaboration of de Broglie’s theory. In a 1976 lecture, Murray Gell-Mann declared that Niels Bohr, the chief exponent of the Copenhagen interpretation, “brainwashed an entire generation of physicists into believing that the problem had been solved.” John Bell, the Northern Irish physicist whose famous theorem is often mistakenly taken to repudiate all “hidden-variable” accounts of quantum mechanics, was, in fact, himself a proponent of pilot-wave theory. “It is a great mystery to me that it was so soundly ignored,” he said.

Then there’s David Griffiths, a physicist whose “Introduction to Quantum Mechanics” is standard in the field. In that book’s afterword, Griffiths says that the Copenhagen interpretation “has stood the test of time and emerged unscathed from every experimental challenge.” Nonetheless, he concludes, “It is entirely possible that future generations will look back, from the vantage point of a more sophisticated theory, and wonder how we could have been so gullible.”

“The work of Yves Couder and the related work of John Bush … provides the possibility of understanding previously incomprehensible quantum phenomena, involving ‘wave-particle duality,’ in purely classical terms,” says Keith Moffatt, a professor emeritus of mathematical physics at Cambridge University. “I think the work is brilliant, one of the most exciting developments in fluid mechanics of the current century.”

Journal Reference:

  1. John W.M. Bush. Pilot-Wave Hydrodynamics. Annual Review of Fluid Mechanics, 2014 DOI: 10.1146/annurev-fluid-010814-014506

Quantum theory, multiple universes, and the fate of human consciousness after death (Biocentrism, Robert Lanza)

[Blog editor’s note: the Portuguese headline of this piece is not faithful to the original English headline and is sensationalist. As this blog is a press archive, I have not altered it.]

Scientists prove human reincarnation (Duniverso)

Undated; accessed September 14, 2014. For as long as the world has existed, we have debated and tried to discover what lies beyond death. This time, quantum science explains and proves that there is indeed (non-physical) life after the death of any human being. A book titled “Biocentrism: How Life and Consciousness Are the Keys to Understanding the Nature of the Universe” caused a stir on the Internet because it contained the notion that life does not end when the body dies and can last forever. The author of this publication, the scientist Dr. Robert Lanza, voted the third most important living scientist by the NY Times, has no doubt that this is possible.

Beyond time and space

Lanza is a specialist in regenerative medicine and scientific director of the Advanced Cell Technology Company. In the past he became known for his extensive stem-cell research and for several successful experiments on cloning endangered animal species. Not long ago, however, the scientist turned to physics, quantum mechanics and astrophysics. This explosive mixture gave birth to the new theory of biocentrism, which he has been preaching ever since. Biocentrism teaches that life and consciousness are fundamental to the universe: it is consciousness that creates the material universe, not the other way around. Lanza points to the structure of the universe itself and says that its laws, forces and constants appear to be fine-tuned for life, that is, that intelligence existed before matter. He also claims that space and time are not objects or things but tools of our animal understanding. Lanza says that we carry space and time around with us “like turtles with shells”, meaning that when the shell comes off, space and time still exist.

The theory suggests that the death of consciousness simply does not exist. It exists only as a thought, because people identify themselves with their bodies. They believe the body will die sooner or later and assume their consciousness will disappear with it. If the body generates consciousness, then consciousness dies when the body dies. But if the body receives consciousness the way a cable box receives satellite signals, then of course consciousness does not end with the death of the physical vehicle. In fact, consciousness exists outside the constraints of time and space. It is able to be anywhere: inside the human body and outside it. In other words, it is nonlocal, in the same sense that quantum objects are nonlocal. Lanza also believes that multiple universes can exist simultaneously. In one universe the body may be dead while in another it continues to exist, absorbing the consciousness that migrated to that universe. This means that a dead person, travelling through the proverbial tunnel, ends up not in hell or in heaven but in a world similar to the one he or she inhabited, only this time alive. And so on, infinitely, almost like a cosmic afterlife effect.

Many worlds

It is not only mere mortals who want to live forever; some renowned scientists share Lanza’s view. They are the physicists and astrophysicists who tend to accept the existence of parallel worlds and who suggest the possibility of multiple universes. The multiverse (multi-universe) is the scientific concept behind the theory they defend. They believe that no physical laws forbid the existence of parallel worlds.


The first to speak of this was the science-fiction writer H. G. Wells, in 1895, with the book “The Door in the Wall”. Sixty-two years later the idea was developed by Dr. Hugh Everett in his graduate thesis at Princeton University. It basically postulates that at any given moment the universe splits into countless similar instances, and that in the next moment these “newborn” universes split in the same way. In some of those worlds we may be present: reading this article in one universe while watching TV in another. In the 1980s Andrei Linde, then a scientist at the Lebedev Physical Institute, developed the theory of multiple universes. Now a professor at Stanford University, Linde explains it thus: space consists of many inflating spheres that give rise to similar spheres, which in turn produce spheres in even greater numbers, and so on to infinity. Within the universe they are separate; they are unaware of one another’s existence, yet they represent parts of one and the same physical universe. The physicist Laura Mersini-Houghton, of the University of North Carolina, and her colleagues argue that the anomalies in the cosmic background exist because our universe is influenced by other universes nearby, and that holes and gaps are a direct result of collisions with neighbouring universes.


Thus there is an abundance of places, other universes, to which our soul could migrate after death, according to the theory of neo-biocentrism. But does the soul exist? Is there any scientific theory of consciousness that could accommodate such a claim? According to Dr. Stuart Hameroff, a near-death experience happens when the quantum information that inhabits the nervous system leaves the body and dissipates into the universe. Contrary to what materialists hold, Dr. Hameroff offers an alternative explanation of consciousness that may perhaps appeal to the rational scientific mind as well as to personal intuition. Consciousness resides, according to Hameroff and the British physicist Sir Roger Penrose, in the microtubules of brain cells, which are the primary sites of quantum processing. After death this information is released from the body, which means that your consciousness goes with it. They have argued that our experience of consciousness is the result of quantum-gravity effects in these microtubules, a theory they dubbed Orchestrated Objective Reduction. Consciousness, or at least proto-consciousness, is theorised by them to be a fundamental property of the universe, present even at the first moment of the universe, during the Big Bang. “In one such scheme, proto-conscious experience is a basic property of physical reality, accessible to a quantum process associated with brain activity.” Our souls are in fact built from the very fabric of the universe and may have existed since the beginning of time. Our brains are merely receivers and amplifiers for the proto-consciousness intrinsic to the fabric of space-time. So there really is a part of your consciousness that is non-material and will live on after the death of your physical body.

Dr. Hameroff disse ao Canal Science através do documentário Wormhole: “Vamos dizer que o coração pare de bater, o sangue pare de fluir e os microtúbulos percam seu estado quântico. A informação quântica dentro dos microtúbulos não é destruída, não pode ser destruída, ele só distribui e se dissipa com o universo como um todo.” Robert Lanza acrescenta aqui que não só existem em um único universo, ela existe talvez, em outro universo. Se o paciente é ressuscitado, esta informação quântica pode voltar para os microtúbulos e o paciente diz: “Eu tive uma experiência de quase morte”. Ele acrescenta: “Se ele não reviveu e o paciente morre é possível que esta informação quântica possa existir fora do corpo talvez indefinidamente, como uma alma.” Esta conta de consciência quântica explica coisas como experiências de quase morte, projeção astral, experiências fora do corpo e até mesmo a reencarnação sem a necessidade de recorrer a ideologia religiosa. A energia de sua consciência potencialmente é reciclada de volta em um corpo diferente em algum momento e nesse meio tempo ela existe fora do corpo físico em algum outro nível de realidade e possivelmente, em outro universo.

E você o que acha? Concorda com Lanza?

Grande abraço!

Indicação: Pedro Lopes Martins Artigo publicado originalmente em inglês no site SPIRIT SCIENCE AND METAPHYSICS.

*   *   *

Scientists Claim That Quantum Theory Proves Consciousness Moves To Another Universe At Death


A book titled “Biocentrism: How Life and Consciousness Are the Keys to Understanding the Nature of the Universe“ has stirred up the Internet because it contains the claim that life does not end when the body dies, but can last forever. The author of this publication, the scientist Dr. Robert Lanza, who was voted the 3rd most important scientist alive by the NY Times, has no doubt that this is possible.

Lanza is an expert in regenerative medicine and scientific director of the Advanced Cell Technology Company. He is known for his extensive research on stem cells, and he was also famous for several successful experiments on cloning endangered animal species. But not so long ago the scientist became involved with physics, quantum mechanics and astrophysics. This explosive mixture gave birth to the new theory of biocentrism, which the professor has been preaching ever since.

Biocentrism teaches that life and consciousness are fundamental to the universe. It is consciousness that creates the material universe, not the other way around. Lanza points to the structure of the universe itself: the laws, forces, and constants of the universe appear to be fine-tuned for life, implying that intelligence existed prior to matter. He also claims that space and time are not objects or things, but rather tools of our animal understanding. Lanza says that we carry space and time around with us “like turtles with shells,” meaning that when the shell comes off (space and time), we still exist.

The theory implies that the death of consciousness simply does not exist. It exists only as a thought, because people identify themselves with their body. They believe that the body is going to perish, sooner or later, and think their consciousness will disappear too. If the body generates consciousness, then consciousness dies when the body dies. But if the body receives consciousness in the same way that a cable box receives satellite signals, then of course consciousness does not end at the death of the physical vehicle. In fact, consciousness exists outside the constraints of time and space. It is able to be anywhere: in the human body and outside of it. In other words, it is non-local in the same sense that quantum objects are non-local.

Lanza also believes that multiple universes can exist simultaneously. In one universe, the body can be dead. And in another it continues to exist, absorbing the consciousness which migrated into that universe. This means that a dead person, while traveling through the same tunnel, ends up not in hell or in heaven, but in a world similar to the one he or she once inhabited, but this time alive. And so on, infinitely. It’s almost like a cosmic Russian-doll afterlife effect.

Multiple worlds

This hope-instilling but extremely controversial theory by Lanza has many unwitting supporters, not just mere mortals who want to live forever, but also some well-known scientists. These are the physicists and astrophysicists who tend to agree with the existence of parallel worlds and who suggest the possibility of multiple universes. The multiverse (multi-universe) is the scientific concept they defend: they believe that no physical laws exist which would prohibit the existence of parallel worlds.

The first was the science fiction writer H.G. Wells, who raised the idea in 1895 in his story “The Door in the Wall”. Sixty-two years later, the idea was developed by Dr. Hugh Everett in his graduate thesis at Princeton University. It basically posits that at any given moment the universe divides into countless similar instances, and the next moment these “newborn” universes split in a similar fashion. In some of these worlds you may be present: reading this article in one universe, or watching TV in another. The triggering factor for these multiplying worlds is our actions, explained Everett: whenever we make a choice, one universe instantly splits into two with different versions of the outcome.

In the 1980s, Andrei Linde, a scientist from the Lebedev Physical Institute, developed the theory of multiple universes. He is now a professor at Stanford University. Linde explains: space consists of many inflating spheres, which give rise to similar spheres, and those, in turn, produce spheres in even greater numbers, and so on to infinity. In the universe they are spaced apart; they are not aware of each other’s existence, but they represent parts of the same physical universe.

The idea that our universe is not alone is supported by data received from the Planck space telescope. Using the data, scientists have created the most accurate map of the microwave background, the so-called cosmic relic radiation, which has remained since the inception of our universe. They also found that the universe has a lot of dark recesses, represented by holes and extensive gaps. Theoretical physicist Laura Mersini-Houghton of the University of North Carolina and her colleagues argue that these anomalies of the microwave background exist because our universe is influenced by other universes nearby, and that the holes and gaps are a direct result of attacks on us by neighboring universes.


So, according to the theory of neo-biocentrism, there is an abundance of places, or other universes, where our soul could migrate after death. But does the soul exist? Is there any scientific theory of consciousness that could accommodate such a claim? According to Dr. Stuart Hameroff, a near-death experience happens when the quantum information that inhabits the nervous system leaves the body and dissipates into the universe. Contrary to materialistic accounts of consciousness, Dr. Hameroff offers an alternative explanation that can perhaps appeal to both the rational scientific mind and personal intuitions.

Consciousness resides, according to Hameroff and the British physicist Sir Roger Penrose, in the microtubules of brain cells, which are the primary sites of quantum processing. Upon death, this information is released from your body, meaning that your consciousness goes with it. They have argued that our experience of consciousness is the result of quantum-gravity effects in these microtubules, a theory which they dubbed orchestrated objective reduction (Orch-OR). Consciousness, or at least proto-consciousness, is theorized by them to be a fundamental property of the universe, present even at the first moment of the universe during the Big Bang: “In one such scheme proto-conscious experience is a basic property of physical reality accessible to a quantum process associated with brain activity.”

Our souls would thus be constructed from the very fabric of the universe, and may have existed since the beginning of time. Our brains would be just receivers and amplifiers for the proto-consciousness that is intrinsic to the fabric of space-time. So is there really a part of your consciousness that is non-material and will live on after the death of your physical body?

Dr. Hameroff told the Science Channel’s Through the Wormhole documentary: “Let’s say the heart stops beating, the blood stops flowing, the microtubules lose their quantum state. The quantum information within the microtubules is not destroyed, it can’t be destroyed, it just distributes and dissipates to the universe at large.” Robert Lanza would add here that not only does it exist in the universe, it exists perhaps in another universe. If the patient is resuscitated, revived, this quantum information can go back into the microtubules and the patient says, “I had a near-death experience.”

He adds: “If they’re not revived, and the patient dies, it’s possible that this quantum information can exist outside the body, perhaps indefinitely, as a soul.”

This account of quantum consciousness explains things like near-death experiences, astral projection, out-of-body experiences, and even reincarnation without needing to appeal to religious ideology. The energy of your consciousness potentially gets recycled back into a different body at some point, and in the meantime it exists outside of the physical body on some other level of reality, and possibly in another universe.



Quantum physics enables revolutionary imaging method (Science Daily)

Date: August 28, 2014

Source: University of Vienna

Summary: Researchers have developed a fundamentally new quantum imaging technique with strikingly counter-intuitive features. For the first time, an image has been obtained without ever detecting the light that was used to illuminate the imaged object, while the light revealing the image never touches the imaged object.

A new quantum imaging technique generates images with photons that have never touched the object, in this case a sketch of a cat. This alludes to the famous Schrödinger’s cat paradox, in which a cat inside a closed box is said to be simultaneously dead and alive as long as there is no information outside the box to rule out one option over the other. Similarly, the new imaging technique relies on a lack of information regarding where the photons are created and which path they take. Credit: Copyright: Patricia Enigl, IQOQI

Researchers from the Institute for Quantum Optics and Quantum Information (IQOQI), the Vienna Center for Quantum Science and Technology (VCQ), and the University of Vienna have developed a fundamentally new quantum imaging technique with strikingly counterintuitive features. For the first time, an image has been obtained without ever detecting the light that was used to illuminate the imaged object, while the light revealing the image never touches the imaged object.

In general, to obtain an image of an object one has to illuminate it with a light beam and use a camera to sense the light that is either scattered or transmitted through that object. The type of light used to shine onto the object depends on the properties that one would like to image. Unfortunately, in many practical situations the ideal type of light for the illumination of the object is one for which cameras do not exist.

The experiment published in Nature this week for the first time breaks this seemingly self-evident limitation. The object (e.g. the contour of a cat) is illuminated with light that remains undetected. Moreover, the light that forms an image of the cat on the camera never interacts with it. In order to realise their experiment, the scientists use so-called “entangled” pairs of photons. These pairs of photons — which are like interlinked twins — are created when a laser interacts with a non-linear crystal. In the experiment, the laser illuminates two separate crystals, creating one pair of twin photons (consisting of one infrared photon and a “sister” red photon) in either crystal. The object is placed in between the two crystals. The arrangement is such that if a photon pair is created in the first crystal, only the infrared photon passes through the imaged object. Its path then goes through the second crystal where it fully combines with any infrared photons that would be created there.

With this crucial step, there is now, in principle, no possibility to find out which crystal actually created the photon pair. Moreover, there is now no information in the infrared photon about the object. However, due to the quantum correlations of the entangled pairs the information about the object is now contained in the red photons — although they never touched the object. Bringing together both paths of the red photons (from the first and the second crystal) creates bright and dark patterns, which form the exact image of the object.
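The interference logic described above can be sketched numerically. In this toy model (our own illustration, not the authors’ code), each pixel of the object is reduced to a single infrared amplitude transmission T, and the red-photon count rate follows 1 + |T|·cos(phase + arg T): a transparent pixel produces full-visibility fringes on the camera, an opaque pixel produces none, even though only the discarded infrared photons ever met the object.

```python
import numpy as np

# Toy model of imaging with undetected photons (induced coherence).
# Assumption: each object pixel is characterized by one complex
# infrared amplitude transmission T; the red-photon interference
# visibility at that pixel then equals |T|.

def red_photon_rate(T, phase):
    """Normalized red-photon detection rate for a pixel with IR transmission T."""
    return 1.0 + np.abs(T) * np.cos(phase + np.angle(T))

phases = np.linspace(0, 2 * np.pi, 200)

opaque = red_photon_rate(0.0, phases)       # object blocks the IR: no fringes
transparent = red_photon_rate(1.0, phases)  # object absent: full fringes

def visibility(rates):
    """Fringe visibility (max - min) / (max + min)."""
    return (rates.max() - rates.min()) / (rates.max() + rates.min())

print(visibility(opaque))       # 0.0  -> dark pixel in the image
print(visibility(transparent))  # ~1.0 -> bright pixel
```

Scanning the interferometer phase and recording the visibility pixel by pixel reproduces, in this simplified picture, the bright-and-dark image of the cat.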

Stunningly, all of the infrared photons (the only light that illuminated the object) are discarded; the picture is obtained by only detecting the red photons that never interacted with the object. The camera used in the experiment is even blind to the infrared photons that have interacted with the object. In fact, very low light infrared cameras are essentially unavailable on the commercial market. The researchers are confident that their new imaging concept is very versatile and could even enable imaging in the important mid-infrared region. It could find applications where low light imaging is crucial, in fields such as biological or medical imaging.


Journal Reference:

  1. Gabriela Barreto Lemos, Victoria Borish, Garrett D. Cole, Sven Ramelow, Radek Lapkiewicz, Anton Zeilinger. Quantum imaging with undetected photons. Nature, 2014; 512 (7515): 409. DOI: 10.1038/nature13586

The Quantum Cheshire Cat: Can neutrons be located at a different place than their own spin? (Science Daily)

Date: July 29, 2014

Source: Vienna University of Technology, TU Vienna

Summary: Can neutrons be located at a different place than their own spin? A quantum experiment demonstrates a new kind of quantum paradox. The Cheshire Cat featured in Lewis Carroll’s novel “Alice in Wonderland” is a remarkable creature: it disappears, leaving its grin behind. Can an object be separated from its properties? It is possible in the quantum world. In an experiment, neutrons travel along a different path than one of their properties — their magnetic moment. This “Quantum Cheshire Cat” could be used to make high precision measurements less sensitive to external perturbations.

The basic idea of the Quantum Cheshire Cat: In an interferometer, an object is separated from one of its properties – like a cat, moving on a different path than its own grin. Credit: Image courtesy of Vienna University of Technology, TU Vienna

Can neutrons be located at a different place than their own spin? A quantum experiment, carried out by a team of researchers from the Vienna University of Technology, demonstrates a new kind of quantum paradox.

The Cheshire Cat featured in Lewis Carroll’s novel “Alice in Wonderland” is a remarkable creature: it disappears, leaving its grin behind. Can an object be separated from its properties? It is possible in the quantum world. In an experiment, neutrons travel along a different path than one of their properties — their magnetic moment. This “Quantum Cheshire Cat” could be used to make high precision measurements less sensitive to external perturbations.

At Different Places at Once

According to the laws of quantum physics, particles can be in different physical states at the same time. If, for example, a beam of neutrons is divided into two beams using a silicon crystal, it can be shown that the individual neutrons do not have to decide which of the two possible paths they choose. Instead, they can travel along both paths at the same time in a quantum superposition.

“This experimental technique is called neutron interferometry,” says Professor Yuji Hasegawa from the Vienna University of Technology. “It was invented here at our institute in the 1970s, and it has turned out to be the perfect tool to investigate fundamental quantum mechanics.”

To see if the same technique could separate the properties of a particle from the particle itself, Yuji Hasegawa brought together a team including Tobias Denkmayr, Hermann Geppert and Stephan Sponar, together with Alexandre Matzkin from CNRS in France, Professor Jeff Tollaksen from Chapman University in California and Hartmut Lemmel from the Institut Laue-Langevin to develop a brand-new quantum experiment.

The experiment was done at the neutron source at the Institut Laue-Langevin (ILL) in Grenoble, where a unique kind of measuring station is operated by the Viennese team, supported by Hartmut Lemmel from ILL.

Where is the Cat …?

Neutrons are not electrically charged, but they carry a magnetic moment. They have a magnetic direction, the neutron spin, which can be influenced by external magnetic fields.

First, a neutron beam is split into two parts in a neutron interferometer. Then the spins of the two beams are shifted into different directions: the upper neutron beam has a spin parallel to the neutrons’ trajectory, while the spin of the lower beam points in the opposite direction. After the two beams have been recombined, only those neutrons are chosen which have a spin parallel to their direction of motion. All the others are just ignored. “This is called postselection,” says Hermann Geppert. “The beam contains neutrons of both spin directions, but we only analyse part of the neutrons.”

These neutrons, which are found to have a spin parallel to their direction of motion, must clearly have travelled along the upper path; only there do the neutrons have this spin state. This can be shown in the experiment: if the lower beam is sent through a filter which absorbs some of the neutrons, the number of neutrons with spin parallel to their trajectory stays the same. If the upper beam is sent through a filter, then the number of these neutrons is reduced.

… and Where is the Grin?

Things get tricky when the system is used to measure where the neutron spin is located: the spin can be slightly changed using a magnetic field. When the two beams are recombined appropriately, they can amplify or cancel each other. This is exactly what can be seen in the measurement, if the magnetic field is applied at the lower beam — but that is the path which the neutrons considered in the experiment are actually never supposed to take. A magnetic field applied to the upper beam, on the other hand, does not have any effect.

“By preparing the neutrons in a special initial state and then postselecting another state, we can achieve a situation in which both the possible paths in the interferometer are important for the experiment, but in very different ways,” says Tobias Denkmayr. “Along one of the paths, the particles themselves couple to our measurement device, but only the other path is sensitive to magnetic spin coupling. The system behaves as if the particles were spatially separated from their properties.”
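This separation can be made quantitative with so-called weak values, ⟨φ|A|ψ⟩/⟨φ|ψ⟩, for a pre-selected state |ψ⟩ and post-selected state |φ⟩. The sketch below is a toy illustration with simple, hypothetical pre- and post-selection states (not the actual neutron settings): the path weak values say the particle sits entirely on one beam, while the spin observable registers only on the other.

```python
import numpy as np

# Toy "Quantum Cheshire Cat" via weak values.
# The pre-/post-selection states are illustrative choices, not the
# experiment's; the qualitative effect is the same.
ket_L = np.array([1.0, 0.0])   # upper beam path
ket_R = np.array([0.0, 1.0])   # lower beam path
up = np.array([1.0, 0.0])      # spin basis state
dn = np.array([0.0, 1.0])

psi = (1j * np.kron(ket_L, up) + np.kron(ket_R, up)) / np.sqrt(2)  # pre-selected
phi = (np.kron(ket_L, up) + np.kron(ket_R, dn)) / np.sqrt(2)       # post-selected

proj_L = np.outer(ket_L, ket_L)               # projector onto the upper path
proj_R = np.outer(ket_R, ket_R)               # projector onto the lower path
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])  # a spin observable
I2 = np.eye(2)

def weak_value(A):
    """Weak value <phi|A|psi> / <phi|psi>."""
    return (phi.conj() @ A @ psi) / (phi.conj() @ psi)

print(weak_value(np.kron(proj_L, I2)))        # weak value 1: particle on the upper path
print(weak_value(np.kron(proj_R, I2)))        # weak value 0: not on the lower path
print(weak_value(np.kron(proj_L, sigma_x)))   # weak value 0: no spin signal up there
print(weak_value(np.kron(proj_R, sigma_x)))   # nonzero: the "grin" shows up below
```

The particle projector has weak value 1 on one path and 0 on the other, while the spin observable does the reverse, the cat on one path, its grin on the other.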

High Hopes for High-Precision Measurements

This counterintuitive effect is very interesting for high precision measurements, which are very often based on the principle of quantum interference. “When the quantum system has a property you want to measure and another property which makes the system prone to perturbations, the two can be separated using a Quantum Cheshire Cat, and possibly the perturbation can be minimized,” says Stephan Sponar.

The idea of the Quantum Cheshire Cat was first developed by Prof. Jeff Tollaksen and Prof. Yakir Aharonov of Chapman University. An experimental proposal was published last year. The measurements which have now been presented are the first experimental proof of this phenomenon.

Journal Reference:

  1. Tobias Denkmayr, Hermann Geppert, Stephan Sponar, Hartmut Lemmel, Alexandre Matzkin, Jeff Tollaksen, Yuji Hasegawa. Observation of a quantum Cheshire Cat in a matter-wave interferometer experiment. Nature Communications, 2014; 5 DOI: 10.1038/ncomms5492

With a string around the neck (Folha de S.Paulo)

São Paulo, Sunday, November 5, 2006

An American physicist’s book reveals the feud being waged behind the scenes of academia over string theory, and argues that perhaps the Universe is not elegant after all


For some time now the physics community has been split by a silent war, muffled behind the walls of academia. Now, for the first time, two books bring the details of this dispute to the public, calling into question the way modern science is produced and revealing a disease that may be spreading through the entire academic edifice.

“The Trouble With Physics”, by the theoretical physicist Lee Smolin, released last month in the US and not yet translated in Brazil, opens a discussion that many would prefer to keep away from the general public: has modern physics been completely stagnant for three decades?

“The story I will tell”, Smolin writes, “could be read as a tragedy. To be blunt, and to give away the ending: we have failed”, he says, taking on the role of spokesman for an entire generation of scientists. Worse: the reason for the stagnation would be the formation of gangs of scientists, including the most brilliant minds in the world, to keep dissident theorists out of academic posts.

The main accused are the physicists devoted to so-called string theory, which has promised, since the early 1970s, to unify all the forces and particles of the known Universe. “String theory has such a dominant position in the academy”, Smolin writes, “that it is practically career suicide for a young theorist not to join the wave”.

Smolin, a polemical and respected theoretical physicist with a PhD from Harvard and a professorship at Yale, is not alone. The mathematical physicist Peter Woit has also fired a heavy accusation at string physicists, one already apparent in the title of his book: “Not Even Wrong”. That was the worst insult the legendary physicist Wolfgang Pauli reserved for sloppy papers and theses. After all, if a thesis is provably wrong, it at least has the virtue of closing off dead ends in the search for the right path.

But Smolin’s warning is not restricted to the theoretical development of physics. To preserve academic privileges, the community of string theorists has taken over the main universities and research centers, blocking the careers of researchers with alternative approaches. Smolin, who once courted string theory himself, producing 18 papers on the subject, emerges in the scientific arena as a kind of mafia defector, firing his machine gun in every direction.

Standard Model
The most surprising thing is that the confusion began right after decades of continuous advances, in the century that opened with Einstein and the consolidation of quantum mechanics.

The final chapter of that epic, and the root of the mess, was the spectacular success of the so-called Standard Model of elementary forces and particles. That formulation, the work of geniuses such as Richard Feynman, Freeman Dyson, Murray Gell-Mann and others, had as its swan song the theoretical and experimental confirmation of the unification of the weak force with electromagnetism, achieved by the Nobel laureates Abdus Salam and Steven Weinberg. The unification of forces has been the holy grail of physics since Johannes Kepler (unification of the celestial orbits), through Isaac Newton (unification of gravity and orbital motion), James Maxwell (unification of light, electricity and magnetism) and Einstein (unification of energy and matter).

But the imposing edifice of the Standard Model had, and still has, serious cracks. Although it describes all detected and predicted particles and forces with incredible precision, it does not incorporate the force of gravity, and it says nothing about the historic divide between the mutually exclusive worlds of general relativity and quantum mechanics.

Even so, all the physicists in the area of particles and high energies, theorists and experimentalists alike, plunged into the Standard Model’s furious calculations. Absorbed in what is called the normal mode of scientific production (as opposed to periods of revolutionary eruption, such as that of relativity), the most brilliant minds in the world reached a dead end: nearly all of the Standard Model’s experimental predictions were triumphantly confirmed. What to do next?

Good vibrations
That is when the strings emerge. Instead of nearly dimensionless point particles as the basic constituents of matter, along comes the revolutionary idea that the elementary entities are in fact literally tiny vibrating strings: identical to violin strings (in the mathematical sense), but of minuscule dimensions (on the order of a trillion times smaller than a proton) and, more astonishing still, vibrating in a Universe with more than the usual three dimensions. In the latest formulations, no fewer than 11, counting time.

At first the progress was astounding: the force of gravity, disinherited by quantum mechanics and the Standard Model, emerged naturally from the harmonies of the strings, as if resurrecting the Pythagorean intuitions. All forces and particles were described mathematically as particular modes of oscillation of a few basic types of string.

But complications soon began to sprout uncontrollably from the equations as well. Where the Standard Model required 19 constants, adjusted by hand by theorists to match reality, the ramifications of string theory came to require hundreds of them.

In the beginning, the beauty of string theory lay in there being only one parameter, the string tension. Each particle or force would merely be a variation of the basic strings, differing only in tension and mode of vibration. Gravity, for example, would be a closed string, like a rubber band for bundling banknotes. Electrons would be strings oscillating with only one end fixed.

Each adjustment of the geometry made to render the theory compatible with the observable Universe left the model ever more complicated, much like the cosmic model of the Egyptian astronomer Ptolemy, with its additions of cycles and epicycles to explain the motions of the planets.

Then came the final explosion. Soon five alternative string theories appeared. Then came the conjecture of a so-called M-theory, which would group them all as special cases. In the end, string theory, which promised a simplicity and beauty as clear as the celebrated E = mc², turned out to be capable of producing no fewer than 10^500 (a 1 followed by 500 zeros) possible solutions, each representing an alternative Universe with different forces and particles. In other words, there are more solutions to the string theorists’ equations than there are particles and atoms in the entire Universe.

Worse, a crazier fringe of the string theory community finds this quite natural, and now insinuates that the demand for experimental proof is an archaic leftover of science.

“Is it worth trying to teach quantum mechanics to a dog?”, they ask. Would it be equally useless for our brains to try to understand, and prove experimentally, the great mess installed in science over the last 30 years?

Of course, most of the brightest string theorists do not endorse this epistemological impasse. Brian Greene himself, the American physicist who is the main popularizer of the string picture and author of the best-seller (more talked about than read, it is true) “The Elegant Universe”, wrote an article for The New York Times stressing that experimental proof is essential and that the question raised by Smolin is legitimate: “Mathematical rigor and elegance are not sufficient to demonstrate a theory’s relevance. To be considered a correct description of the universe, a theory must make predictions confirmed by experiments. String theory has yet to do this, as a small but noisy group of its critics rightly points out. This is a key question and deserves serious scrutiny.”

While the dialogue between Greene and Smolin has been diplomatic, in the blogs of the scientific communities the war is several notches below that. In the online diary of the Harvard physicist Lubos Motl, for example, posts by the Brazilian cosmologist Christine Dantas (christinedantas.blogspot.com) have even been deleted. “In truth there is no war within the walls of academia,” counters Victor Rivelles, of the Physics Institute of USP. “What is new is that the internet, and particularly the blogs, amplify this discussion, giving the impression that it is much bigger than it really is.”

Exit stage left
To get around the problem, the so-called anthropic principle appeared: among the countless possible Universes, the observable ones would be just those tailor-made for humans. It is an interpretation that slides toward mysticism and returns man to the center of the Universe, as in the Middle Ages.

Lamentably, experimental physics, the ultimate judge of truth since the times of Galileo and Kepler, can do very little here. The dimensions of the elementary strings, and the energies needed to probe them, are out of reach. A particle accelerator built to produce them artificially, as was done in the confirmation of the Standard Model, would have to be larger than the Solar System.

The hopes of all physicists now turn to the Large Hadron Collider (hadrons being protons or neutrons), to be switched on starting next year near Geneva, on the border between Switzerland and France, at the headquarters of CERN (the European Organization for Nuclear Research). For the first time, this accelerator, an ultracold tunnel 27 km in circumference, will reach energies high enough to produce indirect evidence of the existence of a fourth spatial dimension. Lamentably, this neither proves nor refutes string theory, since the postulate of additional dimensions is not exclusive to that model. The quarrel in the physics community may therefore persist.

Pano de fundo
A linha teórica desenvolvida por Smolin, por outro lado, é igualmente nebulosa. Ele é um dos principais articuladores da gravitação quântica de laço, que pretende retomar o enfoque einsteniano de unificação. A teoria geral da relatividade, explica Smolin, independe da geometria do espaço-tempo. Mas para toda a teoria de cordas, e mesmo o modelo padrão, as forças e partículas são como atores num cenário ou pano de fundo de uma paisagem espaço-temporal definida.

É o que ele chama de teorias dependente do fundo. A gravitação quântica de laço, ao contrário, é independente do fundo. É uma conjectura arrojada: em vez de partículas e forças elementares, Smolin sugere que as entidades fundamentais são nós ou laços no tecido do espaço-tempo.

Assim como a teoria de cordas deriva todas as partículas e forças a partir de modos diferentes das cordas elementares vibrarem, Smolin acredita que essas entidades surjam de enroscos no tecido do espaço-tempo. Assim, as dimensões espaciais e a passagem do tempo emergem não como cenário do teatro das partículas, mas como sua gênese. Outra conseqüência da teoria é que o espaço-tempo não é contínuo: ele também é quantizado, existindo tamanhos mínimos, como átomos de espaço-tempo.

Lamentavelmente esses enroscos também são indetectáveis, mesmo nos mais poderosos aceleradores. No fim, pateticamente, Smolin admite que não se saiu melhor do que os teóricos de cordas e que seu livro “é uma forma de procrastinação”.

Mas as questões sociológicas colocadas nos últimos capítulos do livro de Smolin não podem mais ficar no limbo. A acusação da formação de gangues nos centros de pesquisa é agora uma questão pública, que envolve a aplicação do dinheiro dos impostos e a estagnação das ciências e, indiretamente, da tecnologia que ela deveria gerar.

BOOK – "The Trouble With Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next"
Lee Smolin; Houghton Mifflin, 392 pages, US$26.

Discovery of Quantum Vibrations in ‘Microtubules’ Inside Brain Neurons Supports Controversial Theory of Consciousness (Science Daily)

Jan. 16, 2014 — A review and update of a controversial 20-year-old theory of consciousness published in Physics of Life Reviews claims that consciousness derives from deeper level, finer scale activities inside brain neurons. The recent discovery of quantum vibrations in “microtubules” inside brain neurons corroborates this theory, according to review authors Stuart Hameroff and Sir Roger Penrose. They suggest that EEG rhythms (brain waves) also derive from deeper level microtubule vibrations, and that from a practical standpoint, treating brain microtubule vibrations could benefit a host of mental, neurological, and cognitive conditions.

A review and update of a controversial 20-year-old theory of consciousness published in Physics of Life Reviews claims that consciousness derives from deeper level, finer scale activities inside brain neurons. (Credit: © James Steidl / Fotolia)

The theory, called “orchestrated objective reduction” (‘Orch OR’), was first put forward in the mid-1990s by eminent mathematical physicist Sir Roger Penrose, FRS, Mathematical Institute and Wadham College, University of Oxford, and prominent anesthesiologist Stuart Hameroff, MD, Anesthesiology, Psychology and Center for Consciousness Studies, The University of Arizona, Tucson. They suggested that quantum vibrational computations in microtubules were “orchestrated” (“Orch”) by synaptic inputs and memory stored in microtubules, and terminated by Penrose “objective reduction” (‘OR’), hence “Orch OR.” Microtubules are major components of the cell structural skeleton.

Orch OR was harshly criticized from its inception, as the brain was considered too “warm, wet, and noisy” for seemingly delicate quantum processes. However, evidence has now shown warm quantum coherence in plant photosynthesis, bird brain navigation, our sense of smell, and brain microtubules. The recent discovery of warm temperature quantum vibrations in microtubules inside brain neurons by the research group led by Anirban Bandyopadhyay, PhD, at the National Institute of Material Sciences in Tsukuba, Japan (and now at MIT), corroborates the pair’s theory and suggests that EEG rhythms also derive from deeper level microtubule vibrations. In addition, work from the laboratory of Roderick G. Eckenhoff, MD, at the University of Pennsylvania, suggests that anesthesia, which selectively erases consciousness while sparing non-conscious brain activities, acts via microtubules in brain neurons.

“The origin of consciousness reflects our place in the universe, the nature of our existence. Did consciousness evolve from complex computations among brain neurons, as most scientists assert? Or has consciousness, in some sense, been here all along, as spiritual approaches maintain?” ask Hameroff and Penrose in the current review. “This opens a potential Pandora’s Box, but our theory accommodates both these views, suggesting consciousness derives from quantum vibrations in microtubules, protein polymers inside brain neurons, which both govern neuronal and synaptic function, and connect brain processes to self-organizing processes in the fine scale, ‘proto-conscious’ quantum structure of reality.”

After 20 years of skeptical criticism, “the evidence now clearly supports Orch OR,” continue Hameroff and Penrose. “Our new paper updates the evidence, clarifies Orch OR quantum bits, or ‘qubits,’ as helical pathways in microtubule lattices, rebuts critics, and reviews 20 testable predictions of Orch OR published in 1998 — of these, six are confirmed and none refuted.”

An important new facet of the theory is introduced. Microtubule quantum vibrations (e.g. in the megahertz range) appear to interfere and produce much slower EEG “beat frequencies.” Despite a century of clinical use, the underlying origins of EEG rhythms have remained a mystery. Clinical trials of brief transcranial-ultrasound brain stimulation, aimed at microtubule resonances with megahertz mechanical vibrations, have reportedly improved mood, and the technique may prove useful against Alzheimer’s disease and brain injury in the future.
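The “beat frequency” idea itself is ordinary wave arithmetic: two oscillations at nearby frequencies superpose into a signal whose envelope rises and falls at their difference. A minimal sketch of that arithmetic (the megahertz values below are illustrative, not figures from the paper):

```python
def beat_frequency(f1_hz: float, f2_hz: float) -> float:
    """Envelope frequency of the superposition of two pure tones:
    the beat is at the absolute difference of the two frequencies."""
    return abs(f1_hz - f2_hz)

# Two hypothetical vibration modes in the megahertz range,
# differing by only 10 Hz.
f1 = 8_000_000.0  # 8.000000 MHz
f2 = 8_000_010.0  # 8.000010 MHz

print(beat_frequency(f1, f2))  # 10.0 -- an EEG-scale rhythm (alpha band is ~8-12 Hz)
```

This is why, in principle, fast megahertz vibrations could manifest as slow, clinically observable rhythms.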

Lead author Stuart Hameroff concludes, “Orch OR is the most rigorous, comprehensive and successfully-tested theory of consciousness ever put forth. From a practical standpoint, treating brain microtubule vibrations could benefit a host of mental, neurological, and cognitive conditions.”

The review is accompanied by eight commentaries from outside authorities, including an Australian group of Orch OR arch-skeptics. To all, Hameroff and Penrose respond robustly.

Penrose, Hameroff and Bandyopadhyay will explore their theories during a session on “Microtubules and the Big Consciousness Debate” at the Brainstorm Sessions, a public three-day event at the Brakke Grond in Amsterdam, the Netherlands, January 16-18, 2014. They will engage skeptics in a debate on the nature of consciousness, and Bandyopadhyay and his team will couple microtubule vibrations from active neurons to play Indian musical instruments. “Consciousness depends on anharmonic vibrations of microtubules inside neurons, similar to certain kinds of Indian music, but unlike Western music which is harmonic,” Hameroff explains.

Journal References:

  1. Stuart Hameroff and Roger Penrose. Consciousness in the universe: A review of the ‘Orch OR’ theory. Physics of Life Reviews, 2013. DOI: 10.1016/j.plrev.2013.08.002
  2. Stuart Hameroff and Roger Penrose. Reply to criticism of the ‘Orch OR qubit’ – ‘Orchestrated objective reduction’ is scientifically justified. Physics of Life Reviews, 2013. DOI: 10.1016/j.plrev.2013.11.00

Photons Run out of Loopholes: Quantum World Really Is in Conflict With Our Everyday Experience (Science Daily)

Apr. 15, 2013 — A team led by the Austrian physicist Anton Zeilinger has now carried out an experiment with photons in which they have closed an important loophole. The researchers have thus provided the most complete experimental proof that the quantum world is in conflict with our everyday experience.

Lab IQOQI, Vienna 2012. (Credit: Jacqueline Godany)

The results of this study appear this week in the journal Nature (Advance Online Publication/AOP).

When we observe an object, we make a number of intuitive assumptions, among them that the unique properties of the object have been determined prior to the observation and that these properties are independent of the state of other, distant objects. In everyday life, these assumptions are fully justified, but things are different at the quantum level. In the past 30 years, a number of experiments have shown that the behaviour of quantum particles — such as atoms, electrons or photons — can be in conflict with our basic intuition. However, these experiments have never delivered definite answers. Each previous experiment has left open the possibility, at least in principle, that the observed particles ‘exploited’ a weakness of the experimental setup.

Quantum physics is an exquisitely precise tool for understanding the world around us at a very fundamental level. At the same time, it is a basis for modern technology: semiconductors (and therefore computers), lasers, MRI scanners, and numerous other devices are based on quantum-physical effects. However, even after more than a century of intensive research, fundamental aspects of quantum theory are not yet fully understood. On a regular basis, laboratories worldwide report results that seem at odds with our everyday intuition but that can be explained within the framework of quantum theory.

On the trail of the quantum entanglement mystery

The physicists in Vienna report not a new effect, but a deep investigation into one of the most fundamental phenomena of quantum physics, known as ‘entanglement.’ The effect of quantum entanglement is amazing: when measuring a quantum object that has an entangled partner, the state of the one particle depends on measurements performed on the partner. Quantum theory describes entanglement as independent of any physical separation between the particles. That is, entanglement should also be observed when the two particles are sufficiently far apart from each other that, even in principle, no information can be exchanged between them (the speed of communication is fundamentally limited by the speed of light). Testing such predictions regarding the correlations between entangled quantum particles is, however, a major experimental challenge.
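The correlations at stake can be made concrete with the CHSH form of Bell’s argument, the standard textbook formulation (not the Vienna group’s specific analysis). Any local hidden-variable account bounds a certain combination of correlators by 2, while quantum mechanics predicts a correlation of −cos(a − b) for measurements at angles a and b on a singlet pair, which pushes that combination to 2√2 ≈ 2.83:

```python
import math

def E(a: float, b: float) -> float:
    """Quantum correlation for measurements at angles a and b
    on a singlet (maximally entangled) pair."""
    return -math.cos(a - b)

def chsh(a: float, a2: float, b: float, b2: float) -> float:
    """CHSH combination of four correlators; any local
    hidden-variable model must keep |S| <= 2."""
    return E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)

# Angle settings that maximize the quantum-mechanical violation.
S = chsh(0.0, math.pi / 2, math.pi / 4, -math.pi / 4)
print(abs(S))  # 2*sqrt(2) ~ 2.828 -- larger than any local model allows
```

The experimental challenge is precisely to measure these correlators cleanly enough that the gap between 2 and 2√2 cannot be explained away.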

Towards a definitive answer

The young academics in Anton Zeilinger’s group, including Marissa Giustina, Alexandra Mech, Rupert Ursin, Sven Ramelow and Bernhard Wittmann, working in an international collaboration with the National Institute of Standards and Technology/NIST (USA), the Physikalisch-Technische Bundesanstalt (Germany), and the Max-Planck-Institute of Quantum Optics (Germany), have now achieved an important step towards delivering definitive experimental evidence that quantum particles can indeed do things that classical physics does not allow them to do. For their experiment, the team built one of the best sources for entangled photon pairs worldwide and employed highly efficient photon detectors designed by experts at NIST. These technological advances together with a suitable measurement protocol enabled the researchers to detect entangled photons with unprecedented efficiency. In a nutshell: “Our photons can no longer duck out of being measured,” says Zeilinger.

This kind of tight monitoring is important as it closes an important loophole. In previous experiments on photons, there has always been the possibility that although the measured photons do violate the laws of classical physics, such non-classical behaviour would not have been observed if all photons involved in the experiment could have been measured. In the new experiment, this loophole is now closed. “Perhaps the greatest weakness of photons as a platform for quantum experiments is their vulnerability to loss — but we have just demonstrated that this weakness need not be prohibitive,” explains Marissa Giustina, lead author of the paper.
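Why detection efficiency matters can be put in numbers with a textbook back-of-the-envelope model (a standard result due to Garg and Mermin, not the analysis used in the Nature paper). Suppose each side of a CHSH experiment detects its photon only with probability eta, and every non-detection is simply recorded as the outcome +1. For a singlet state the one-sided averages vanish, so the observed CHSH value interpolates between the quantum and trivial cases, and a genuine violation survives only above a critical efficiency of 2/(1 + √2) ≈ 82.8%:

```python
import math

S_QM = 2 * math.sqrt(2)  # ideal CHSH value for a maximally entangled pair

def chsh_observed(eta: float) -> float:
    """Observed CHSH value when each side detects with probability eta
    and every non-detection is recorded as the outcome +1.  Only the
    both-detected (eta**2) and neither-detected ((1-eta)**2) terms
    survive, because the singlet's one-sided averages are zero."""
    return S_QM * eta**2 + 2 * (1 - eta)**2

eta_crit = 2 / (1 + math.sqrt(2))  # ~0.8284, the Garg-Mermin threshold
print(chsh_observed(eta_crit))  # ~2.0: exactly the classical bound
print(chsh_observed(0.95))      # ~2.56: a genuine violation survives
```

Below that threshold, a local model exploiting the undetected events can mimic the quantum statistics, which is why highly efficient detectors were essential to closing this loophole.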

Now one last step

Although the new experiment makes photons the first quantum particles for which, in several separate experiments, every possible loophole has been closed, the grand finale is yet to come, namely, a single experiment in which the photons are deprived of all possibilities of displaying their counterintuitive behaviour through means of classical physics. Such an experiment would also be of fundamental significance for an important practical application: ‘quantum cryptography,’ which relies on quantum mechanical principles and is considered to be absolutely secure against eavesdropping. Eavesdropping is still theoretically possible, however, as long as there are loopholes. Only when all of these are closed is a completely secure exchange of messages possible.

An experiment without any loopholes, says Zeilinger, “is a big challenge, which attracts groups worldwide.” These experiments are not limited to photons, but also involve atoms, electrons, and other systems that display quantum mechanical behaviour. The experiment of the Austrian physicists highlights the photons’ potential. Thanks to these latest advances, the photon is running out of places to hide, and quantum physicists are closer than ever to conclusive experimental proof that quantum physics defies our intuition and everyday experience to the degree suggested by research of the past decades.

This work was completed in a collaboration including the following institutions: Institute for Quantum Optics and Quantum Information — Vienna / IQOQI Vienna (Austrian Academy of Sciences), Quantum Optics, Quantum Nanophysics and Quantum Information, Department of Physics (University of Vienna), Max-Planck-Institute of Quantum Optics, National Institute of Standards and Technology / NIST, Physikalisch-Technische Bundesanstalt, Berlin.

This work was supported by: ERC (Advanced Grant), Austrian Science Fund (FWF), grant Q-ESSENCE, Marie Curie Research Training Network EMALI, and John Templeton Foundation. This work was also supported by NIST Quantum Information Science Initiative (QISI).

Journal Reference:

  1. Marissa Giustina, Alexandra Mech, Sven Ramelow, Bernhard Wittmann, Johannes Kofler, Jörn Beyer, Adriana Lita, Brice Calkins, Thomas Gerrits, Sae Woo Nam, Rupert Ursin, Anton Zeilinger. Bell violation using entangled photons without the fair-sampling assumption. Nature, 2013. DOI: 10.1038/nature12012