Tag archive: Física

New Research Shocks Scientists: Human Emotion Physically Shapes Reality! (IUV)

SUNDAY, 12 MARCH 2017

Published on Life Coach Code on February 26, 2017

Three different studies, conducted by different teams of scientists, proved something really extraordinary. But when new research connected these three discoveries, something shocking was realized, something hiding in plain sight.

Human emotion literally shapes the world around us. Not just our perception of the world, but reality itself.

In the first experiment, human DNA, isolated in a sealed container, was placed near a test subject. Scientists gave the donor emotional stimuli and, fascinatingly enough, the emotions affected the DNA in the other room.

In the presence of negative emotions the DNA tightened. In the presence of positive emotions the coils of the DNA relaxed.

The scientists concluded that “Human emotion produces effects which defy conventional laws of physics.”

In the second, similar but unrelated experiment, a different group of scientists extracted leukocytes (white blood cells) from donors and placed them into chambers so they could measure electrical changes.

In this experiment, the donor was placed in one room and subjected to “emotional stimulation” consisting of video clips, which generated different emotions in the donor.

The DNA was placed in a different room in the same building. Both the donor and his DNA were monitored and as the donor exhibited emotional peaks or valleys (measured by electrical responses), the DNA exhibited the IDENTICAL RESPONSES AT THE EXACT SAME TIME.

There was no lag time, no transmission time. The DNA peaks and valleys EXACTLY MATCHED the peaks and valleys of the donor in time.

The scientists wanted to see how far away they could separate the donor from his DNA and still get this effect. They stopped testing after they separated the DNA and the donor by 50 miles and STILL had the SAME result. No lag time; no transmission time.

The DNA and the donor had identical responses in time. The conclusion was that the donor and the DNA can communicate beyond space and time.

The third experiment proved something pretty shocking!

Scientists observed the effect of DNA on our physical world.

Light photons, which make up the world around us, were observed inside a vacuum. Their natural locations were completely random.

Human DNA was then inserted into the vacuum. Shockingly, the photons were no longer acting randomly: they precisely followed the geometry of the DNA.

Scientists who were studying this described the photons as behaving “surprisingly and counter-intuitively”. They went on to say that “We are forced to accept the possibility of some new field of energy!”

They concluded that human DNA literally shapes the behavior of the light photons that make up the world around us!

So when new research connected all three of these scientific claims, scientists were shocked.

They came to the stunning realization that if our emotions affect our DNA, and our DNA shapes the world around us, then our emotions physically change the world around us.

And not just that, we are connected to our DNA beyond space and time.

We create our reality by choosing it with our feelings.

Science has already proven some pretty MINDBLOWING facts about The Universe we live in. All we have to do is connect the dots.

Sources:
– https://www.youtube.com/watch?v=pq1q58wTolk;
– Science Alert;
– Heart Math;
– Above Top Secret;
– http://www.bibliotecapleyades.net/mistic/esp_greggbraden_11.htm;

Nobody understands what consciousness is or how it works. Nobody understands quantum mechanics either. Could that be more than coincidence? (BBC)

What is going on in our brains? (Credit: Mehau Kulyk/Science Photo Library)

Quantum mechanics is the best theory we have for describing the world at the nuts-and-bolts level of atoms and subatomic particles. Perhaps the most renowned of its mysteries is the fact that the outcome of a quantum experiment can change depending on whether or not we choose to measure some property of the particles involved.

When this “observer effect” was first noticed by the early pioneers of quantum theory, they were deeply troubled. It seemed to undermine the basic assumption behind all science: that there is an objective world out there, irrespective of us. If the way the world behaves depends on how – or if – we look at it, what can “reality” really mean?

The most famous intrusion of the mind into quantum mechanics comes in the “double-slit experiment”

Some of those researchers felt forced to conclude that objectivity was an illusion, and that consciousness has to be allowed an active role in quantum theory. To others, that did not make sense. Surely, Albert Einstein once complained, the Moon does not exist only when we look at it!

Today some physicists suspect that, whether or not consciousness influences quantum mechanics, it might in fact arise because of it. They think that quantum theory might be needed to fully understand how the brain works.

Might it be that, just as quantum objects can apparently be in two places at once, so a quantum brain can hold onto two mutually-exclusive ideas at the same time?

These ideas are speculative, and it may turn out that quantum physics has no fundamental role either for or in the workings of the mind. But if nothing else, these possibilities show just how strangely quantum theory forces us to think.

The famous double-slit experiment (Credit: Victor de Schwanberg/Science Photo Library)

The most famous intrusion of the mind into quantum mechanics comes in the “double-slit experiment”. Imagine shining a beam of light at a screen that contains two closely-spaced parallel slits. Some of the light passes through the slits, whereupon it strikes another screen.

Light can be thought of as a kind of wave, and when waves emerge from two slits like this they can interfere with each other. If their peaks coincide, they reinforce each other, whereas if a peak and a trough coincide, they cancel out. This wave interference produces a series of alternating bright and dark stripes on the back screen, where the light waves are either reinforced or cancelled out.
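For readers who want to see the stripes emerge from the arithmetic, here is a minimal numerical sketch of the textbook two-slit intensity pattern, I(θ) ∝ cos²(π d sin θ / λ); the wavelength and slit separation are illustrative assumptions, not values from the article:

    import numpy as np

    # Illustrative values (assumed for this sketch, not from the article)
    wavelength = 500e-9                      # metres, green light
    d = 50e-6                                # slit separation, metres
    theta = np.linspace(-0.02, 0.02, 1001)   # viewing angle, radians

    # Two-slit interference: I(theta) ~ cos^2(pi * d * sin(theta) / lambda)
    intensity = np.cos(np.pi * d * np.sin(theta) / wavelength) ** 2

    # Adjacent bright stripes are separated by roughly lambda / d radians
    print(f"predicted fringe spacing: {wavelength / d:.1e} rad")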

The implication seems to be that each particle passes simultaneously through both slits

This interference pattern was understood to be a characteristic of wave behaviour over 200 years ago, well before quantum theory existed.

The double-slit experiment can also be performed with quantum particles like electrons: tiny charged particles that are components of atoms. In a counter-intuitive twist, these particles can behave like waves. That means they can undergo diffraction when a stream of them passes through the two slits, producing an interference pattern.

Now suppose that the quantum particles are sent through the slits one by one, and their arrival at the screen is likewise seen one by one. There is then apparently nothing for each particle to interfere with along its route, yet the pattern of particle impacts that builds up over time nevertheless reveals interference bands.

The implication seems to be that each particle passes simultaneously through both slits and interferes with itself. This combination of “both paths at once” is known as a superposition state.

But here is the really odd thing.

The double-slit experiment (Credit: GIPhotoStock/Science Photo Library)

If we place a detector inside or just behind one slit, we can find out whether any given particle goes through it or not. In that case, however, the interference vanishes. Simply by observing a particle’s path – even if that observation should not disturb the particle’s motion – we change the outcome.

The physicist Pascual Jordan, who worked with quantum guru Niels Bohr in Copenhagen in the 1920s, put it like this: “observations not only disturb what has to be measured, they produce it… We compel [a quantum particle] to assume a definite position.” In other words, Jordan said, “we ourselves produce the results of measurements.”

If that is so, objective reality seems to go out of the window.

And it gets even stranger.

Particles can be in two states (Credit: Victor de Schwanberg/Science Photo Library)

If nature seems to be changing its behaviour depending on whether we “look” or not, we could try to trick it into showing its hand. To do so, we could measure which path a particle took through the double slits, but only after it has passed through them. By then, it ought to have “decided” whether to take one path or both.

The sheer act of noticing, rather than any physical disturbance caused by measuring, can cause the collapse

An experiment for doing this was proposed in the 1970s by the American physicist John Wheeler, and this “delayed choice” experiment was performed in the following decade. It uses clever techniques to make measurements on the paths of quantum particles (generally, particles of light, called photons) after they should have chosen whether to take one path or a superposition of two.

It turns out that, just as Bohr confidently predicted, it makes no difference whether we delay the measurement or not. As long as we measure the photon’s path before its arrival at a detector is finally registered, we lose all interference.

It is as if nature “knows” not just if we are looking, but if we are planning to look.

Eugene Wigner (Credit: Emilio Segre Visual Archives/American Institute of Physics/Science Photo Library)

Whenever, in these experiments, we discover the path of a quantum particle, its cloud of possible routes “collapses” into a single well-defined state. What’s more, the delayed-choice experiment implies that the sheer act of noticing, rather than any physical disturbance caused by measuring, can cause the collapse. But does this mean that true collapse has only happened when the result of a measurement impinges on our consciousness?

It is hard to avoid the implication that consciousness and quantum mechanics are somehow linked

That possibility was admitted in the 1930s by the Hungarian physicist Eugene Wigner. “It follows that the quantum description of objects is influenced by impressions entering my consciousness,” he wrote. “Solipsism may be logically consistent with present quantum mechanics.”

Wheeler even entertained the thought that the presence of living beings, which are capable of “noticing”, has transformed what was previously a multitude of possible quantum pasts into one concrete history. In this sense, Wheeler said, we become participants in the evolution of the Universe since its very beginning. In his words, we live in a “participatory universe.”

To this day, physicists do not agree on the best way to interpret these quantum experiments, and to some extent what you make of them is (at the moment) up to you. But one way or another, it is hard to avoid the implication that consciousness and quantum mechanics are somehow linked.

Beginning in the 1980s, the British physicist Roger Penrose suggested that the link might work in the other direction. Whether or not consciousness can affect quantum mechanics, he said, perhaps quantum mechanics is involved in consciousness.

Physicist and mathematician Roger Penrose (Credit: Max Alexander/Science Photo Library)

What if, Penrose asked, there are molecular structures in our brains that are able to alter their state in response to a single quantum event? Could not these structures then adopt a superposition state, just like the particles in the double-slit experiment? And might those quantum superpositions then show up in the ways neurons are triggered to communicate via electrical signals?

Maybe, says Penrose, our ability to sustain seemingly incompatible mental states is no quirk of perception, but a real quantum effect.

Perhaps quantum mechanics is involved in consciousness

After all, the human brain seems able to handle cognitive processes that still far exceed the capabilities of digital computers. Perhaps we can even carry out computational tasks that are impossible on ordinary computers, which use classical digital logic.

Penrose first proposed that quantum effects feature in human cognition in his 1989 book The Emperor’s New Mind. The idea is called Orch-OR, which is short for “orchestrated objective reduction”. The phrase “objective reduction” means that, as Penrose believes, the collapse of quantum interference and superposition is a real, physical process, like the bursting of a bubble.

Orch-OR draws on Penrose’s suggestion that gravity is responsible for the fact that everyday objects, such as chairs and planets, do not display quantum effects. Penrose believes that quantum superpositions become impossible for objects much larger than atoms, because their gravitational effects would then force two incompatible versions of space-time to coexist.

Penrose developed this idea further with American physician Stuart Hameroff. In his 1994 book Shadows of the Mind, he suggested that the structures involved in this quantum cognition might be protein strands called microtubules. These are found in most of our cells, including the neurons in our brains. Penrose and Hameroff argue that vibrations of microtubules can adopt a quantum superposition.

But there is no evidence that such a thing is remotely feasible.

Microtubules inside a cell (Credit: Dennis Kunkel Microscopy/Science Photo Library)

It has been suggested that the idea of quantum superpositions in microtubules is supported by experiments described in 2013, but in fact those studies made no mention of quantum effects.

Besides, most researchers think that the Orch-OR idea was ruled out by a study published in 2000. Physicist Max Tegmark calculated that quantum superpositions of the molecules involved in neural signaling could not survive for even a fraction of the time needed for such a signal to get anywhere.

Other researchers have found evidence for quantum effects in living beings

Quantum effects such as superposition are easily destroyed, because of a process called decoherence. This is caused by the interactions of a quantum object with its surrounding environment, through which the “quantumness” leaks away.

Decoherence is expected to be extremely rapid in warm and wet environments like living cells.

Nerve signals are electrical pulses, caused by the passage of electrically-charged atoms across the walls of nerve cells. If one of these atoms was in a superposition and then collided with a neuron, Tegmark showed that the superposition should decay in less than one billion billionth of a second. It takes at least ten thousand trillion times as long for a neuron to discharge a signal.
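The mismatch is easy to check with back-of-the-envelope arithmetic. A short sketch using the order-of-magnitude figures quoted in the text (approximate by construction):

    # "One billion billionth of a second" read as 1e-18 s, and a neuron
    # discharge taken as at least ten thousand trillion (1e16) times longer.
    decoherence_time = 1e-18                          # seconds
    ratio = 1e16                                      # quoted factor
    neuron_discharge_time = decoherence_time * ratio  # ~0.01 seconds

    print(f"superposition lifetime: {decoherence_time:.0e} s")
    print(f"neuron discharge time : {neuron_discharge_time:.0e} s")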

As a result, ideas about quantum effects in the brain are viewed with great skepticism.

However, Penrose is unmoved by those arguments and stands by the Orch-OR hypothesis. And despite Tegmark’s prediction of ultra-fast decoherence in cells, other researchers have found evidence for quantum effects in living beings. Some argue that quantum mechanics is harnessed by migratory birds that use magnetic navigation, and by green plants when they use sunlight to make sugars in photosynthesis.

Besides, the idea that the brain might employ quantum tricks shows no sign of going away. For there is now another, quite different argument for it.

Could phosphorus sustain a quantum state? (Credit: Phil Degginger/Science Photo Library)

In a study published in 2015, physicist Matthew Fisher of the University of California at Santa Barbara argued that the brain might contain molecules capable of sustaining more robust quantum superpositions. Specifically, he thinks that the nuclei of phosphorus atoms may have this ability.

Phosphorus atoms are everywhere in living cells. They often take the form of phosphate ions, in which one phosphorus atom joins up with four oxygen atoms.

Such ions are the basic unit of energy within cells. Much of the cell’s energy is stored in molecules called ATP, which contain a string of three phosphate groups joined to an organic molecule. When one of the phosphates is cut free, energy is released for the cell to use.

Cells have molecular machinery for assembling phosphate ions into groups and cleaving them off again. Fisher suggested a scheme in which two phosphate ions might be placed in a special kind of superposition called an “entangled state”.

Phosphorus spins could resist decoherence for a day or so, even in living cells

The phosphorus nuclei have a quantum property called spin, which makes them rather like little magnets with poles pointing in particular directions. In an entangled state, the spin of one phosphorus nucleus depends on that of the other.

Put another way, entangled states are really superposition states involving more than one quantum particle.
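What “the spin of one phosphorus nucleus depends on that of the other” means can be made concrete with the simplest entangled state of two spin-1/2 particles, the singlet. The following is a generic textbook illustration, not a model of Posner-molecule chemistry:

    import numpy as np

    # Singlet state (|01> - |10>) / sqrt(2); basis order |00>, |01>, |10>, |11>
    singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

    # Probabilities of the four joint outcomes when both spins are measured
    probs = np.abs(singlet) ** 2
    for label, p in zip(["00", "01", "10", "11"], probs):
        print(f"P({label}) = {p:.2f}")
    # Only "01" and "10" occur: the spins are always found anti-aligned,
    # so measuring one immediately fixes the other.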

Fisher says that the quantum-mechanical behaviour of these nuclear spins could plausibly resist decoherence on human timescales. He agrees with Tegmark that quantum vibrations, like those postulated by Penrose and Hameroff, will be strongly affected by their surroundings “and will decohere almost immediately”. But nuclear spins do not interact very strongly with their surroundings.

All the same, quantum behaviour in the phosphorus nuclear spins would have to be “protected” from decoherence.

Quantum particles can have different spins (Credit: Richard Kail/Science Photo Library)

This might happen, Fisher says, if the phosphorus atoms are incorporated into larger objects called “Posner molecules”. These are clusters of six phosphate ions, combined with nine calcium ions. There is some evidence that they can exist in living cells, though this is currently far from conclusive.

I decided… to explore how on earth the lithium ion could have such a dramatic effect in treating mental conditions

In Posner molecules, Fisher argues, phosphorus spins could resist decoherence for a day or so, even in living cells. That means they could influence how the brain works.

The idea is that Posner molecules can be swallowed up by neurons. Once inside, the Posner molecules could trigger the firing of a signal to another neuron, by falling apart and releasing their calcium ions.

Because of entanglement in Posner molecules, two such signals might thus in turn become entangled: a kind of quantum superposition of a “thought”, you might say. “If quantum processing with nuclear spins is in fact present in the brain, it would be an extremely common occurrence, happening pretty much all the time,” Fisher says.

He first got this idea when he started thinking about mental illness.

A capsule of lithium carbonate (Credit: Custom Medical Stock Photo/Science Photo Library)

“My entry into the biochemistry of the brain started when I decided three or four years ago to explore how on earth the lithium ion could have such a dramatic effect in treating mental conditions,” Fisher says.

At this point, Fisher’s proposal is no more than an intriguing idea

Lithium drugs are widely used for treating bipolar disorder. They work, but nobody really knows how.

“I wasn’t looking for a quantum explanation,” Fisher says. But then he came across a paper reporting that lithium drugs had different effects on the behaviour of rats, depending on what form – or “isotope” – of lithium was used.

On the face of it, that was extremely puzzling. In chemical terms, different isotopes behave almost identically, so if the lithium worked like a conventional drug the isotopes should all have had the same effect.

Nerve cells are linked at synapses (Credit: Sebastian Kaulitzki/Science Photo Library)

But Fisher realised that the nuclei of the atoms of different lithium isotopes can have different spins. This quantum property might affect the way lithium drugs act. For example, if lithium substitutes for calcium in Posner molecules, the lithium spins might “feel” and influence those of phosphorus atoms, and so interfere with their entanglement.

We do not even know what consciousness is

If this is true, it would help to explain why lithium can treat bipolar disorder.

At this point, Fisher’s proposal is no more than an intriguing idea. But there are several ways in which its plausibility can be tested, starting with the idea that phosphorus spins in Posner molecules can keep their quantum coherence for long periods. That is what Fisher aims to do next.

All the same, he is wary of being associated with the earlier ideas about “quantum consciousness”, which he sees as highly speculative at best.

Consciousness is a profound mystery (Credit: Sciepro/Science Photo Library)

Physicists are not terribly comfortable with finding themselves inside their theories. Most hope that consciousness and the brain can be kept out of quantum theory, and perhaps vice versa. After all, we do not even know what consciousness is, let alone have a theory to describe it.

We all know what red is like, but we have no way to communicate the sensation

It does not help that there is now a New Age cottage industry devoted to notions of “quantum consciousness”, claiming that quantum mechanics offers plausible rationales for such things as telepathy and telekinesis.

As a result, physicists are often embarrassed to even mention the words “quantum” and “consciousness” in the same sentence.

But setting that aside, the idea has a long history. Ever since the “observer effect” and the mind first insinuated themselves into quantum theory in the early days, it has been devilishly hard to kick them out. A few researchers think we might never manage to do so.

In 2016, Adrian Kent of the University of Cambridge in the UK, one of the most respected “quantum philosophers”, speculated that consciousness might alter the behaviour of quantum systems in subtle but detectable ways.

We do not understand how thoughts work (Credit: Andrzej Wojcicki/Science Photo Library)

Kent is very cautious about this idea. “There is no compelling reason of principle to believe that quantum theory is the right theory in which to try to formulate a theory of consciousness, or that the problems of quantum theory must have anything to do with the problem of consciousness,” he admits.

Every line of thought on the relationship of consciousness to physics runs into deep trouble

But he says that it is hard to see how a description of consciousness based purely on pre-quantum physics can account for all the features it seems to have.

One particularly puzzling question is how our conscious minds can experience unique sensations, such as the colour red or the smell of frying bacon. With the exception of people with visual impairments, we all know what red is like, but we have no way to communicate the sensation and there is nothing in physics that tells us what it should be like.

Sensations like this are called “qualia”. We perceive them as unified properties of the outside world, but in fact they are products of our consciousness – and that is hard to explain. Indeed, in 1995 philosopher David Chalmers dubbed it “the hard problem” of consciousness.

How does our consciousness work? (Credit: Victor Habbick Visions/Science Photo Library)

“Every line of thought on the relationship of consciousness to physics runs into deep trouble,” says Kent.

This has prompted him to suggest that “we could make some progress on understanding the problem of the evolution of consciousness if we supposed that consciousness alters (albeit perhaps very slightly and subtly) quantum probabilities.”

“Quantum consciousness” is widely derided as mystical woo, but it just will not go away

In other words, the mind could genuinely affect the outcomes of measurements.

It does not, in this view, exactly determine “what is real”. But it might affect the chance that each of the possible actualities permitted by quantum mechanics is the one we do in fact observe, in a way that quantum theory itself cannot predict. Kent says that we might look for such effects experimentally.

He even bravely estimates the chances of finding them. “I would give credence of perhaps 15% that something specifically to do with consciousness causes deviations from quantum theory, with perhaps 3% credence that this will be experimentally detectable within the next 50 years,” he says.

If that happens, it would transform our ideas about both physics and the mind. That seems a chance worth exploring.

Artificial intelligence replaces physicists (Science Daily)

Date:
May 16, 2016
Source:
Australian National University
Summary:
Physicists are putting themselves out of a job, using artificial intelligence to run a complex experiment. The experiment created an extremely cold gas trapped in a laser beam, known as a Bose-Einstein condensate, replicating the experiment that won the 2001 Nobel Prize.

The experiment, featuring the small red glow of a BEC trapped in infrared laser beams. Credit: Stuart Hay, ANU

Physicists are putting themselves out of a job, using artificial intelligence to run a complex experiment.

The experiment, developed by physicists from The Australian National University (ANU) and UNSW ADFA, created an extremely cold gas trapped in a laser beam, known as a Bose-Einstein condensate, replicating the experiment that won the 2001 Nobel Prize.

“I didn’t expect the machine could learn to do the experiment itself, from scratch, in under an hour,” said co-lead researcher Paul Wigley from the ANU Research School of Physics and Engineering.

“A simple computer program would have taken longer than the age of the Universe to run through all the combinations and work this out.”

Bose-Einstein condensates are some of the coldest places in the Universe, far colder than outer space, typically less than a billionth of a degree above absolute zero.

They could be used for mineral exploration or navigation systems as they are extremely sensitive to external disturbances, which allows them to make very precise measurements such as tiny changes in the Earth’s magnetic field or gravity.

The artificial intelligence system’s ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA.

“You could make a working device to measure gravity that you could take in the back of a car, and the artificial intelligence would recalibrate and fix itself no matter what,” he said.

“It’s cheaper than taking a physicist everywhere with you.”

The team cooled the gas to around 1 microkelvin, and then handed control of the three laser beams over to the artificial intelligence to cool the trapped gas down to nanokelvin.
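The control problem the AI solved is an online optimization loop: propose ramp parameters, run the experiment, score the result, repeat. The sketch below shows the bare structure of such a loop with a random-search proposal step and a toy objective; run_cooling_ramp is a hypothetical stand-in for a physical measurement, and the actual work used a far more sophisticated machine-learning optimizer (see the Scientific Reports paper cited below):

    import random

    def run_cooling_ramp(params):
        # Hypothetical stand-in for one experimental run: takes laser ramp
        # parameters and returns a score (e.g. condensate atom number).
        # Toy quadratic objective with an arbitrary optimum, for illustration.
        return -sum((p - 0.3) ** 2 for p in params)

    def online_optimize(n_params=3, n_runs=50):
        # Propose, measure, keep the best setting seen so far. The published
        # optimizer replaces random proposals with a learned surrogate model.
        best_params, best_score = None, float("-inf")
        for _ in range(n_runs):
            candidate = [random.uniform(0.0, 1.0) for _ in range(n_params)]
            score = run_cooling_ramp(candidate)
            if score > best_score:
                best_params, best_score = candidate, score
        return best_params, best_score

    params, score = online_optimize()
    print("best ramp parameters:", [round(p, 3) for p in params])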

Researchers were surprised by the methods the system came up with to ramp down the power of the lasers.

“It did things a person wouldn’t guess, such as changing one laser’s power up and down, and compensating with another,” said Mr Wigley.

“It may be able to come up with complicated ways humans haven’t thought of to get experiments colder and make measurements more precise.”

The new technique will lead to bigger and better experiments, said Dr Hush.

“Next we plan to employ the artificial intelligence to build an even larger Bose-Einstein condensate faster than we’ve ever seen before,” he said.

The research is published in the Nature group journal Scientific Reports.


Journal Reference:

  1. P. B. Wigley, P. J. Everitt, A. van den Hengel, J. W. Bastian, M. A. Sooriyabandara, G. D. McDonald, K. S. Hardman, C. D. Quinlivan, P. Manju, C. C. N. Kuhn, I. R. Petersen, A. N. Luiten, J. J. Hope, N. P. Robins, M. R. Hush. Fast machine-learning online optimization of ultra-cold-atom experiments. Scientific Reports, 2016; 6: 25890. DOI: 10.1038/srep25890

Weasel Apparently Shuts Down World’s Most Powerful Particle Collider (NPR)

April 29, 2016, 11:04 AM ET

GEOFF BRUMFIEL

The Large Hadron Collider uses superconducting magnets to smash sub-atomic particles together at enormous energies. CERN

A small mammal has sabotaged the world’s most powerful scientific instrument.

The Large Hadron Collider, a 17-mile superconducting machine designed to smash protons together at close to the speed of light, went offline overnight. Engineers investigating the mishap found the charred remains of a furry creature near a gnawed-through power cable.

A small mammal, possibly a weasel, gnawed through a power cable at the Large Hadron Collider. Ashley Buttle/Flickr

“We had electrical problems, and we are pretty sure this was caused by a small animal,” says Arnaud Marsollier, head of press for CERN, the organization that runs the $7 billion particle collider in Switzerland. Although they had not conducted a thorough analysis of the remains, Marsollier says they believe the creature was “a weasel, probably.” (Update: An official briefing document from CERN indicates the creature may have been a marten.)

The shutdown comes as the LHC was preparing to collect new data on the Higgs Boson, a fundamental particle it discovered in 2012. The Higgs is believed to endow other particles with mass, and it is considered to be a cornerstone of the modern theory of particle physics.

Researchers have seen some hints in recent data that other, yet-undiscovered particles might also be generated inside the LHC. If those other particles exist, they could revolutionize researchers’ understanding of everything from the laws of gravity to quantum mechanics.

Unfortunately, Marsollier says, scientists will have to wait while workers bring the machine back online. Repairs will take a few days, but getting the machine fully ready to smash might take another week or two. “It may be mid-May,” he says.

These sorts of mishaps are not unheard of, says Marsollier. The LHC is located outside of Geneva. “We are in the countryside, and of course we have wild animals everywhere.” There have been previous incidents, including one in 2009, when a bird is believed to have dropped a baguette onto critical electrical systems.

Nor are the problems exclusive to the LHC: In 2006, raccoons conducted a “coordinated” attack on a particle accelerator in Illinois.

It is unclear whether the animals are trying to stop humanity from unlocking the secrets of the universe.

Of course, small mammals cause problems in all sorts of organizations. Yesterday, a group of children took National Public Radio off the air for over a minute before engineers could restore the broadcast.

Stephen Hawking and Zuckerberg Launch Project to Search for a Habitable Planet (O Globo)

O Globo, April 12, 2016

An illustration of what the craft might look like – Publicity image

NEW YORK, USA – British physicist Stephen Hawking and Russian billionaire Yuri Milner announced on Tuesday a US$ 100 million project to send a craft to the star system closest to Earth, Alpha Centauri, which lies 4.37 light-years away. One of the main goals is to find habitable planets outside our Solar System.

The idea of the “Breakthrough Starshot” project, whose board comprises Milner and Hawking along with Facebook CEO Mark Zuckerberg, is to send a tiny craft, or “nanocraft”, on a 20-year journey, reaching, according to them, one fifth of the speed of light. The program will test the know-how and technologies the project requires.
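Those figures are easy to sanity-check. A short sketch in Python, using only the numbers quoted in the article (the result, about 22 years, is roughly consistent with the quoted 20-year journey):

    # Alpha Centauri distance and target speed, as quoted in the article
    distance_ly = 4.37   # light-years
    speed_c = 0.2        # fraction of the speed of light
    print(f"travel time: {distance_ly / speed_c:.1f} years")  # ~21.9 years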

From left to right: investor Yuri Milner, Stephen Hawking, and the physicists Freeman Dyson and Avi Loeb – LUCAS JACKSON / REUTERS

The program calls for an automated craft weighing little more than a sheet of paper, propelled by a solar sail not much larger than a child’s kite yet only a few hundred atoms thick. While an ordinary sail is driven by the wind, a solar sail for use in space is driven by the radiation emitted by the Sun.

The initial idea is to use thousands of such craft, which would get a “push” from an Earth-based laser emitting still more radiation to help propel them. The project’s challenges are many, among them combining several emitters into one “large laser cannon”, building sails with nanotechnology, and packing all the craft’s components into a tiny silicon package.

“Human history is made of great leaps. Today we are preparing the next great leap, to the stars,” said Yuri Milner in London. Hawking, for his part, said that “the Earth is a wonderful place, but it might not last forever. Sooner or later, we must look to the stars. This project is an important first step on that journey.”

The New Particle That Could Change What We Know About the Universe (BBC Brasil)

From BBC Mundo – 21 March 2016

 

If the existence of a new particle is confirmed, experts believe it could open a door to an “unknown and unexplored” world (Reuters)

The Large Hadron Collider (LHC) – a gigantic particle accelerator on the border between France and Switzerland – has stirred strong emotions among theoretical physicists, a community that is usually very cautious when it comes to new discoveries.

The reason: small “bumps” detected by the Large Hadron Collider. These bumps, showing up in the data that result from accelerating the protons, could signal the existence of a new, unknown particle six times more massive than the Higgs boson (the so-called “God particle”).

And that, for theoretical physicist Gian Giudice, would mean “a door to an unknown and unexplored world”.

“It is not the confirmation of an already established theory,” the researcher, who also works at the European Organization for Nuclear Research (CERN), told New Scientist.

The scientists’ excitement began when, in December 2015, the two experiments operating independently at the LHC recorded the same data after running the collider at close to full capacity (twice the energy needed to detect the Higgs boson).

The recorded data cannot be explained by what is known today of the laws of physics.

After the announcement of these new data, around 280 papers were published trying to explain what the signal might be – and none of them ruled out the theory that it is a new particle.

Some scientists suggest the particle could be a heavy cousin of the Higgs boson, which was discovered in 2012 and explains why matter has mass.

Others have put forward the hypothesis that the Higgs boson is made of smaller particles. And there is also the group who think these “bumps” could come from a graviton, the particle charged with transmitting the force of gravity.

If it really is a graviton, the discovery will be a milestone, because until now it has not been possible to reconcile gravity with the Standard Model of particle physics.

Extraordinary?

For experts, the fact that no one has managed to refute what the physicists detected is a sign that we may be close to discovering something extraordinary.

“If this turns out to be true, it will be a (full) ten on the particle physicists’ Richter scale,” John Ellis of King’s College London, a former head of the theory department at the European Organization for Nuclear Research, told the British newspaper The Guardian. “It would be the tip of an iceberg of new forms of matter.”

For all of Ellis’s excitement, however, scientists do not want to get ahead of themselves.

This new particle would be six times more massive than the Higgs boson (AFP)

When the announcement was first made, some thought it was all just an unfortunate coincidence arising from the way the LHC works.

Two beams of protons are accelerated to close to the speed of light. They travel in different directions and collide at four points, creating different patterns of data.

These differences, bumps or fluctuations in the statistics, are what make it possible to demonstrate the presence of particles.

But we are talking about billions of fluctuations recorded in each experiment, which makes a statistical error likely.

However, the fact that both laboratories detected the same bump is what has made scientists pay closer attention to the matter.

Good news

The Large Hadron Collider comes back online this week (PA)

In addition, scientists from the CMS and ATLAS experiments recently presented new evidence after refining and recalibrating their results.

And neither team could attribute the detected anomaly to a possible statistical error.

This is good news for the experts who believe the discovery is the beginning of something very big.

The bad news is that neither laboratory has managed to explain what this mysterious particle is. More experiments are needed before the event can qualify as a “discovery”.

The good news is that we will not have to wait long to see the end of the story.

This week, the Large Hadron Collider will come out of its hibernation period and go back to firing protons in different directions.

One hypothesis is that this new particle is related to gravity (Thinkstock)

Over the coming months the collider will deliver twice as much data as scientists have had until now.

And it is estimated that by August they may know what this promising new particle is.

A new form of frozen water? (Science Daily)

New study describes what could be the 18th known form of ice

Date:
February 12, 2016
Source:
University of Nebraska-Lincoln
Summary:
A research team has predicted a new molecular form of ice with a record-low density. If the ice can be synthesized, it would become the 18th known crystalline form of water and the first discovered in the US since before World War II.

This illustration shows the ice’s molecular configuration. Credit: Courtesy photo/Yingying Huang and Chongqin Zhu 

Amid the season known for transforming Nebraska into an outdoor ice rink, a University of Nebraska-Lincoln-led research team has predicted a new molecular form of the slippery stuff that even Mother Nature has never borne.

The proposed ice, which the researchers describe in a Feb. 12, 2016 study in the journal Science Advances, would be about 25 percent less dense than a record-low form synthesized by a European team in 2014.

If the ice can be synthesized, it would become the 18th known crystalline form of water — and the first discovered in the United States since before World War II.

“We performed a lot of calculations (focused on) whether this is not just a low-density ice, but perhaps the lowest-density ice to date,” said Xiao Cheng Zeng, an Ameritas University Professor of chemistry who co-authored the study. “A lot of people are interested in predicting a new ice structure beyond the state of the art.”

This newest finding represents the latest in a long line of ice-related research from Zeng, who previously discovered a two-dimensional “Nebraska Ice” that contracts rather than expands when frozen under certain conditions.

Zeng’s newest study, which was co-led by Dalian University of Technology’s Jijun Zhao, used a computational algorithm and molecular simulation to determine the ranges of extreme pressure and temperature under which water would freeze into the predicted configuration. That configuration takes the form of a clathrate — essentially a series of water molecules that form an interlocking cage-like structure.

It was long believed that these cages could maintain their structural integrity only when housing “guest molecules” such as methane, which fills an abundance of natural clathrates found on the ocean floor and in permafrost. Like the European team before them, however, Zeng and his colleagues have calculated that their clathrate would retain its stability even after its guest molecules have been evicted.

Actually synthesizing the clathrate will take some effort. Based on the team’s calculations, the new ice will form only when water molecules are placed inside an enclosed space that is subjected to ultra-high, outwardly expanding pressure.

Just how much? At minus-10 degrees Fahrenheit, the enclosure would need to be surrounded by expansion pressure about four times greater than what is found at the Pacific Ocean’s deepest trench. At minus-460 degrees Fahrenheit, that pressure would need to be even greater — roughly the same amount experienced by a person shouldering 300 jumbo jets at sea level.
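To put a rough number on the first figure: taking the pressure at the bottom of the Challenger Deep to be about 1.1e8 Pa (a standard reference value, not given in the article), the required expansion pressure works out as follows:

    # Assumed reference value (not from the article): pressure at the
    # bottom of the Challenger Deep, roughly 1.1e8 Pa.
    trench_pressure_pa = 1.1e8
    required_tension_pa = 4 * trench_pressure_pa  # "about four times greater"
    print(f"required negative pressure at -10 F: ~{required_tension_pa:.1e} Pa")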

The guest molecules would then need to be extracted via a vacuuming process pioneered by the European team, which Zeng credited with inspiring his own group to conduct the new study.

Yet Zeng said the wonders of ordinary ice — the type that has covered Earth for billions of years — have also motivated his team’s research.

“Water and ice are forever interesting because they have such relevance to human beings and life,” Zeng said. “If you think about it, the low density of natural ice protects the water below it; if it were denser, water would freeze from the bottom up, and no living species could survive. So Mother Nature’s combination is just so perfect.”

If confirmed, the new form of ice will be called “Ice XVII,” a naming quirk that resulted from scientists terming the first two identified forms “Ice I.”

Zeng and Zhao co-authored the Science Advances study with UNL postdoctoral researcher Chongqin Zhu; Yingying Huang, a visiting research fellow from the Dalian University of Technology; and researchers from the Chinese Academy of Sciences and the University of Science and Technology of China.

The team’s research was funded in part by the National Science Foundation and conducted with the assistance of UNL’s Holland Computing Center.


Journal Reference:

  1. Y. Huang, C. Zhu, L. Wang, X. Cao, Y. Su, X. Jiang, S. Meng, J. Zhao, X. C. Zeng. A new phase diagram of water under negative pressure: The rise of the lowest-density clathrate s-III. Science Advances, 2016; 2 (2): e1501010. DOI: 10.1126/sciadv.1501010

Surface physics: How water learns to dance (Science Daily)

Pole dancing water molecules: Researchers have seen this remarkable phenomenon on the surface of an important technological material

Date: December 21, 2015

Source: Vienna University of Technology

Summary: From pole dancing to square dance: Water molecules on perovskite surfaces show interesting patterns of motion. Surface scientists have now managed to image the dance of the atoms.


This is a visualization of the dance of the atoms on a crystal surface. Credit: TU Wien

Perovskites are materials used in batteries, fuel cells, and electronic components, and occur in nature as minerals. Despite their important role in technology, little is known about the reactivity of their surfaces. Professor Ulrike Diebold’s team at TU Wien (Vienna) has answered a long-standing question using scanning tunnelling microscopes and computer simulations: How do water molecules behave when they attach to a perovskite surface? Normally only the outermost atoms at the surface influence this behaviour, but on perovskites the deeper layers are important, too. The results have been published in the journal Nature Materials.

Perovskite dissociates water molecules

“We studied strontium ruthenate — a typical perovskite material,” says Ulrike Diebold. It has a crystalline structure containing oxygen, strontium and ruthenium. When the crystal is broken apart, the outermost layer consists of only strontium and oxygen atoms; the ruthenium is located underneath, surrounded by oxygen atoms.

A water molecule that lands on this surface splits into two parts: A hydrogen atom is stripped off the molecule and attaches to an oxygen atom on the crystal’s surface. This process is known as dissociation. However, although they are physically separated, the pieces continue to interact through a weak “hydrogen bond.”

It is this interaction that causes a strange effect: The OH group cannot move freely, and circles the hydrogen atom like a dancer spinning on a pole. Although this is the first observation of such behaviour, it was not entirely unexpected: “This effect was predicted a few years ago based on theoretical calculations, and we have finally confirmed it with our experiments,” said Diebold.

Dancing requires space

When more water is put on to the surface, the stage becomes too crowded and the spinning stops. “The OH group can only move freely in a circle if none of the neighbouring spaces are occupied,” explains Florian Mittendorfer, who performed the calculations together with PhD student Wernfried Mayr-Schmölzer. At first, when two water molecules are in neighbouring sites, the spinning OH groups collide and get stuck together, forming pairs. Then, as the amount of water is increased, the pairs stick together and form long chains. Eventually, a water molecule cannot find the pair of sites it needs to split up, and so attaches instead as a complete molecule.

The new methods that have been developed and applied by the TU Wien research team have made significant advances in surface research. Whereas researchers were previously reliant on indirect measurements, they can now — with the necessary expertise — directly map and observe the behaviour of individual atoms on the surface. This opens up new possibilities for modern materials research, for example for developing and improving catalysts.


Story Source:

The above post is reprinted from materials provided by Vienna University of Technology. Note: Materials may be edited for content and length.


Journal Reference:

  1. Daniel Halwidl, Bernhard Stöger, Wernfried Mayr-Schmölzer, Jiri Pavelec, David Fobes, Jin Peng, Zhiqiang Mao, Gareth S. Parkinson, Michael Schmid, Florian Mittendorfer, Josef Redinger, Ulrike Diebold. Adsorption of water at the SrO surface of ruthenates. Nature Materials, 2015; DOI: 10.1038/nmat4512

Climate policy: Democracy is not an inconvenience (Nature)

NATURE | COMMENT

Nico Stehr

22 September 2015

Climate scientists are tiring of governance that does not lead to action. But democracy must not be weakened in the fight against global warming, warns Nico Stehr.

Illustration by David Parkins

There are many threats to democracy in the modern era. Not least is the risk posed by the widespread public feeling that politicians are not listening. Such discontent can be seen in the political far right: the Tea Party movement in the United States, the UK Independence Party, the Pegida (Patriotic Europeans Against the Islamization of the West) demonstrators in Germany, and the National Front in France.

More surprisingly, a similar impatience with the political elite is now also present in the scientific community. Researchers are increasingly concerned that no one is listening to their diagnosis of the dangers of human-induced climate change and its long-lasting consequences, despite the robust scientific consensus. As governments continue to fail to take appropriate political action, democracy begins to look to some like an inconvenient form of governance. There is a tendency to want to take decisions out of the hands of politicians and the public, and, given the ‘exceptional circumstances’, put the decisions into the hands of scientists themselves.

This scientific disenchantment with democracy has slipped under the radar of many social scientists and commentators. Attention is urgently needed: the solution to the intractable ‘wicked problem’ of global warming is to enhance democracy, not jettison it.

Voices of discontent

Democratic nations seem to have failed us in the climate arena so far. The past decade’s climate summits in Copenhagen, Cancun, Durban and Warsaw were political washouts. Expectations for the next meeting in Paris this December are low.

Academics increasingly point to democracy as a reason for failure. NASA climate researcher James Hansen was quoted in 2009 in The Guardian as saying: “the democratic process doesn’t quite seem to be working” [1]. In a special issue of the journal Environmental Politics in 2010, political scientist Mark Beeson argued [2] that forms of ‘good’ authoritarianism “may become not only justifiable, but essential for the survival of humanity in anything approaching a civilised form”. The title of an opinion piece published earlier this year in The Conversation, an online magazine funded by universities, sums up the issue: ‘Hidden crisis of liberal democracy creates climate change paralysis’ (see go.nature.com/pqgysr).

The depiction of contemporary democracies as ill-equipped to deal with climate change comes from a range of considerations. These include a deep-seated pessimism about the psychological make-up of humans; the disinclination of people to mobilize on issues that seem far removed; and the presumed lack of intellectual competence of people to grasp complex issues. On top of these there is the presumed scientific illiteracy of most politicians and the electorate; the inability of governments locked into short-term voting cycles to address long-term problems; the influence of vested interests on political agendas; the addiction to fossil fuels; and the feeling among the climate-science community that its message falls on the deaf ears of politicians.

“It is dangerous to blindly believe that science and scientists alone can tell us what to do.”

Such views can be heard from the highest ranks of climate science. Hans Joachim Schellnhuber, founding director of the Potsdam Institute for Climate Impact Research and chair of the German Advisory Council on Global Change, said of the inaction in a 2011 interview with German newspaper Der Spiegel: “comfort and ignorance are the biggest flaws of human character. This is a potentially deadly mix”.

What, then, is the alternative? The solution hinted at by many people leans towards a technocracy, in which decisions are made by those with technical knowledge. This can be seen in a shift in the statements of some co-authors of Intergovernmental Panel on Climate Change reports, who are moving away from a purely advisory role towards policy prescription (see, for example, ref. 3).

We must be careful what we wish for. Nations that have followed the path of ‘authoritarian modernization’, such as China and Russia, cannot claim to have a record of environmental accomplishments. In the past two or three years, China’s system has made it a global leader in renewables (it accounts for more than one-quarter of the planet’s investment in such energies [4]). Despite this, it is struggling to meet ambitious environmental targets and will continue to lead the world for some time in greenhouse-gas emissions. As Chinese citizens become wealthier and more educated, they will surely push for more democratic inclusion in environmental policymaking.

Broad-based support for environmental concerns and subsequent regulations came about in open democratic argument on the value of nature for humanity. Democracies learn from mistakes; autocracies lack flexibility and adaptability [5]. Democratic nations have forged the most effective international agreements, such as the Montreal Protocol against ozone-depleting substances.

Global stage

Impatient scientists often privilege hegemonic players such as world powers, states, transnational organizations, and multinational corporations. They tend to prefer sweeping policies of global mitigation over messier approaches of local adaptation; for them, global knowledge triumphs over local know-how. But societal trends are going in the opposite direction. The ability of large institutions to impose their will on citizens is declining. People are mobilizing around local concerns and efforts [6].

The pessimistic assessment of the ability of democratic governance to cope with and control exceptional circumstances is linked to an optimistic assessment of the potential of large-scale social and economic planning. The uncertainties of social, political and economic events are treated as minor obstacles that can be overcome easily by implementing policies that experts prescribe. But humanity’s capacity to plan ahead effectively is limited. The centralized social and economic planning concept, widely discussed decades ago, has rightly fallen into disrepute [7].

The argument for an authoritarian political approach concentrates on a single effect that governance ought to achieve: a reduction of greenhouse-gas emissions. By focusing on that goal, rather than on the economic and social conditions that go hand-in-hand with it, climate policies are reduced to scientific or technical issues. But these are not the sole considerations. Environmental concerns are tightly entangled with other political, economic and cultural issues that both broaden the questions at hand and open up different ways of approaching them. Scientific knowledge is neither immediately performative nor persuasive.

Enhance engagement

There is but one political system that is able to rationally and legitimately cope with the divergent political interests affected by climate change, and that is democracy. Only a democratic system can sensitively attend to the conflicts within and among nations and communities, decide between different policies, and generally advance the aspirations of different segments of the population. The ultimate and urgent challenge is that of enhancing democracy, for example by reducing social inequality [8].

If not, the threat to civilization will be much more than just changes to our physical environment. The erosion of democracy is an unnecessary suppression of social complexity and rights.

The philosopher Friedrich Hayek, who led the debate against social and economic planning in the mid-twentieth century [9], noted a paradox that applies today. As science advances, it tends to strengthen the idea that we should “aim at more deliberate and comprehensive control of all human activities”. Hayek pessimistically added: “It is for this reason that those intoxicated by the advance of knowledge so often become the enemies of freedom” [10]. We should heed his warning. It is dangerous to blindly believe that science and scientists alone can tell us what to do.

Nature 525, 449–450 (24 September 2015) doi:10.1038/525449a

References

  1. Adam, D. ‘Leading climate scientist: “democratic process isn’t working”’ The Guardian (18 March 2009).
  2. Beeson, M. Environ. Politics 19, 276–294 (2010).
  3. Hansen, J. et al. PLoS ONE http://dx.doi.org/10.1371/journal.pone.0081648 (2013).
  4. REN21. Renewables 2015 Global Status Report (REN21, 2015).
  5. Runciman, D. The Confidence Trap: A History of Democracy in Crisis from World War I to the Present (Princeton Univ. Press, 2013).
  6. Stehr, N. Information, Power and Democracy: Liberty Is a Daughter of Knowledge (Cambridge Univ. Press, 2015).
  7. Pierre, J. Debating Governance: Authority, Steering, and Democracy (Oxford Univ. Press, 2000).
  8. Rosanvallon, P. The Society of Equals (Harvard Univ. Press, 2013).
  9. Hayek, F. A. Nature 148, 580–584 (1941).
  10. Hayek, F. A. The Constitution of Liberty (Routledge, 1960).

ON reúne placas fotográficas que contribuíram para comprovar Teoria da Relatividade (MCTI)

Equipe selecionou 61 placas do Observatório Nacional que documentam eclipse de 1919. Observação em Sobral (CE) ajudou a demonstrar conclusões de Albert Einstein

A team from the Observatório Nacional (ON/MCTI) surveyed the photographic plates that form part of the results of the expedition that observed the total solar eclipse in the town of Sobral (CE) in 1919 and contributed to the confirmation of Albert Einstein’s General Theory of Relativity.

Composed of astronomer Carlos H. Veiga, librarians Katia T. dos Santos and M. Luiza Dias, and analyst Renaldo N. da S. Junior, the group assessed 900 photographic plates from the Observatory’s library collection. For their scientific importance, the 61 plates from the observations of the famous eclipse were selected; they still faithfully preserve the image of the new moon perfectly covering the disk of the Sun, recorded on a very special day for science.

From the second half of the nineteenth century onwards, photographic images were recorded on glass plates. This device, coated with an emulsion containing light-sensitive silver salts, was used not only to record everyday life but also by the astronomical community, up to the last decade of the twentieth century, to observe celestial bodies. Because glass has a low coefficient of thermal expansion, the plates guaranteed the precision and reliability of astronomical measurements over time.

Photography enabled a great advance for astronomy and for the development of astrophysics, taking on the role of a detector and allowing observational data to be compared across the vast time intervals that separate us from large structures such as galaxies. In 1873, a systematic programme of observation of sunspot activity, eclipses and the solar corona was begun.

A morning that changed science

On the morning of 29 May 1919, a celestial phenomenon would, for a few minutes, turn day into night in a quiet town in the Brazilian Northeast. Those few minutes had to be used to the fullest. It was the opportunity to confirm experimentally a new scientific claim predicted by a theory conceived by Einstein (1879-1955), the German-born physicist: general relativity, which can be understood as a theory that explains gravitational phenomena.

Sobral, the town in Ceará, would be the stage that helped confirm an effect predicted by general relativity: the deflection of light, in which a beam of light (in this case, coming from a star) should have its trajectory bent (or deviated) when passing close to a strong gravitational field (here, generated by the Sun).

This deflection of light makes the observed star appear in a position different from its real one. The astronomers’ objective was to measure the small angle formed between these two positions.
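
The article does not quote the figures, but the prediction being tested is the standard general-relativistic deflection of a light ray grazing the solar limb,

\theta = \frac{4 G M_\odot}{c^{2} R_\odot} \approx 1.75'' ,

roughly twice the 0.87″ deflection obtained from a purely Newtonian corpuscular treatment of light, which is what made the 1919 measurement a clean test between the two theories.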

On that day, a total solar eclipse would take place. Calculations predicted that there should be at least one star in the background sky whose light would pass close to the solar limb. With that configuration and good weather conditions, there would be a good chance of confirming the new theory.

Read more and see other images of the historic event, as well as bibliographic references.

(MCTI)

Full-scale architecture for a quantum computer in silicon (Science Daily)

Scalable 3-D silicon chip architecture based on single atom quantum bits provides a blueprint to build operational quantum computers

Date:
October 30, 2015
Source:
University of New South Wales
Summary:
Researchers have designed a full-scale architecture for a quantum computer in silicon. The new concept provides a pathway for building an operational quantum computer with error correction.

This picture shows from left to right Dr Matthew House, Sam Hile (seated), Scientia Professor Sven Rogge and Scientia Professor Michelle Simmons of the ARC Centre of Excellence for Quantum Computation and Communication Technology at UNSW. Credit: Deb Smith, UNSW Australia

Australian scientists have designed a 3D silicon chip architecture based on single atom quantum bits, which is compatible with atomic-scale fabrication techniques — providing a blueprint to build a large-scale quantum computer.

Scientists and engineers from the Australian Research Council Centre of Excellence for Quantum Computation and Communication Technology (CQC2T), headquartered at the University of New South Wales (UNSW), are leading the world in the race to develop a scalable quantum computer in silicon — a material well-understood and favoured by the trillion-dollar computing and microelectronics industry.

Teams led by UNSW researchers have already demonstrated a unique fabrication strategy for realising atomic-scale devices and have developed the world’s most efficient quantum bits in silicon using either the electron or nuclear spins of single phosphorus atoms. Quantum bits — or qubits — are the fundamental data components of quantum computers.

One of the final hurdles to scaling up to an operational quantum computer is the architecture. Here it is necessary to figure out how to precisely control multiple qubits in parallel, across an array of many thousands of qubits, and constantly correct for ‘quantum’ errors in calculations.

Now, the CQC2T collaboration, involving theoretical and experimental researchers from the University of Melbourne and UNSW, has designed such a device. In a study published today in Science Advances, the CQC2T team describes a new silicon architecture, which uses atomic-scale qubits aligned to control lines — which are essentially very narrow wires — inside a 3D design.

“We have demonstrated we can build devices in silicon at the atomic-scale and have been working towards a full-scale architecture where we can perform error correction protocols — providing a practical system that can be scaled up to larger numbers of qubits,” says UNSW Scientia Professor Michelle Simmons, study co-author and Director of the CQC2T.

“The great thing about this work, and architecture, is that it gives us an endpoint. We now know exactly what we need to do in the international race to get there.”

In the team’s conceptual design, they have moved from a one-dimensional array of qubits, positioned along a single line, to a two-dimensional array, positioned on a plane that is far more tolerant to errors. This qubit layer is “sandwiched” in a three-dimensional architecture, between two layers of wires arranged in a grid.

By applying voltages to a sub-set of these wires, multiple qubits can be controlled in parallel, performing a series of operations using far fewer controls. Importantly, with their design, they can perform the 2D surface code error correction protocols in which any computational errors that creep into the calculation can be corrected faster than they occur.
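
As a loose illustration of why shared control lines reduce the wiring burden (a hypothetical sketch, not the CQC2T team’s actual control protocol), a crossbar scheme addresses every qubit sitting at the crossing of an activated row wire and an activated column wire, so a single pattern of voltages operates on many qubits at once:

    # Hypothetical crossbar-addressing sketch: a qubit at (row, col) is
    # selected when both its row wire and its column wire are activated.
    def addressed_qubits(active_rows, active_cols):
        """Grid coordinates selected by one pattern of control voltages."""
        return {(r, c) for r in active_rows for c in active_cols}

    # Two row lines plus three column lines (5 control signals) address
    # six qubits in parallel, instead of six dedicated per-qubit controls.
    print(sorted(addressed_qubits({0, 1}, {2, 5, 7})))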

“Our Australian team has developed the world’s best qubits in silicon,” says University of Melbourne Professor Lloyd Hollenberg, Deputy Director of the CQC2T who led the work with colleague Dr Charles Hill. “However, to scale up to a full operational quantum computer we need more than just many of these qubits — we need to be able to control and arrange them in such a way that we can correct errors quantum mechanically.”

“In our work, we’ve developed a blueprint that is unique to our system of qubits in silicon, for building a full-scale quantum computer.”

In their paper, the team proposes a strategy to build the device, which leverages the CQC2T’s internationally unique capability of atomic-scale device fabrication. They have also modelled the required voltages applied to the grid wires, needed to address individual qubits, and make the processor work.

“This architecture gives us the dense packing and parallel operation essential for scaling up the size of the quantum processor,” says Scientia Professor Sven Rogge, Head of the UNSW School of Physics. “Ultimately, the structure is scalable to millions of qubits, required for a full-scale quantum processor.”

Background

In classical computers, data is rendered as binary bits, which are always in one of two states: 0 or 1. However, a qubit can exist in both of these states at once, a condition known as a superposition. A qubit operation exploits this quantum weirdness by allowing many computations to be performed in parallel (a two-qubit system performs the operation on 4 values, a three-qubit system on 8, and so on).
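
A minimal numerical sketch of that scaling (illustrative only): the state of an n-qubit register is a vector of 2^n complex amplitudes, so each added qubit doubles the size of the superposition that a single operation acts on.

    import numpy as np

    # An n-qubit register is described by 2**n complex amplitudes.
    for n in (1, 2, 3, 10):
        state = np.zeros(2**n, dtype=complex)
        state[0] = 1.0  # start in the |00...0> basis state
        print(f"{n} qubit(s) -> {state.size} amplitudes")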

As a result, quantum computers will far exceed today’s most powerful supercomputers, and offer enormous advantages for a range of complex problems, such as rapidly scouring vast databases, modelling financial markets, optimising huge metropolitan transport networks, and modelling complex biological molecules.

How to build a quantum computer in silicon https://youtu.be/zo1q06F2sbY

What if Dean Radin is right? (The Sceptic’s Dictionary)

by Robert Todd Carroll

Dean Radin, author of The Conscious Universe: The Scientific Truth of Psychic Phenomena (HarperSanFrancisco 1997), says that “psi researchers have resolved a century of skeptical doubts through thousands of replicated laboratory studies” (289) regarding the reality of psychic phenomena such as ESP (extrasensory perception) and PK (psychokinesis). Of course, Radin also considers meta-analysis as the most widely accepted method of measuring replication in science (51). Few scientists would agree with either of these claims. In any case, most American adults—about 75%, according to a 2005 Gallup poll—believe in at least one paranormal phenomenon. Forty-one percent believe in ESP. Fifty-five percent believe in the power of the mind to heal the body. One doesn’t need to be psychic to know that the majority of believers in psi have come to their beliefs through experience or anecdotes, rather than through studying the scientific evidence Radin puts forth in his book.

Radin doesn’t claim that the scientific evidence is going to make more believers. He realizes that the kind of evidence psi researchers have put forth hasn’t persuaded most scientists that there is anything of value in parapsychology. He thinks there is “a general uneasiness about parapsychology” and that because of the “insular nature of scientific disciplines, the vast majority of psi experiments are unknown to most scientists.” He also dismisses critics as skeptics who’ve conducted “superficial reviews.” Anyone familiar with the entire body of research, he says, would recognize he is correct and would see that there are “fantastic theoretical implications” (129) to psi research. Nevertheless, in 2005 the Nobel Committee once again passed over the psi scientists when handing out awards to those who have made significant contributions to our scientific knowledge.

The evidence Radin presents, however, is little more than a hodgepodge of occult statistics. Unable to find a single person who can correctly guess a three-letter word or move a pencil an inch without trickery, the psi researchers have resorted to doing complex statistical analyses of data. Whenever a well-designed study produces data that, by some statistical formula, are not likely due to chance, they attribute the outcome to psi. A well-designed study is one that carefully controls for such things as cheating, sensory leakage (unintentional transfer of information by non-psychic means), inadequate randomization, and other factors that might lead to an artifact (something that looks like it’s due to psi when it’s actually due to something else).

The upshot of the enormous body of data Radin cites is statistical evidence (for what it’s worth) indicating (however tentatively) that some very weak psi effects are present (so weak that not a single individual who participates in a successful study has any inkling of possessing psychic power). Nevertheless, Radin thinks it is appropriate to speculate about the enormous implications of psi for biology, psychology, sociology, philosophy, religion, medicine, technology, warfare, police work, business, and politics. Never mind that nobody has any idea as to how psi might work. That is a minor detail to someone who can write with a straight face (apparently) that:

lots of independent, simple glimpses of the future may one day innocently crash the future. It’s not clear what it means to “crash the future,” but it doesn’t sound good. (297)

No, it certainly doesn’t sound good. But, as somebody once said, “the future will be better tomorrow.”

According to Radin, we may look forward to a future with “psychic garage-door openers” and the ability to “push atoms around” with our minds (292). Radin is not the least bit put off by the criticism that all the other sciences have led us away from superstition and magical thinking, while parapsychology tries to lead us into those pre-scientific modes. Radin notes that “the concept that mind is primary over matter is deeply rooted in Eastern philosophy and ancient beliefs about magic.” However, instead of saying that it is now time to move forward, he rebuffs “Western science” for rejecting such beliefs as “mere superstition.” Magical thinking, he says, “lies close beneath the veneer of the sophisticated modern mind” (293). He even claims that “the fundamental issues [of consciousness] remain as mysterious today as they did five thousand years ago.” We may not have arrived at a final theory of the mind, but a lot of the mystery has evaporated with the progress made in the neurosciences over the past century. None of our advancing knowledge of the mind, however, has been due to contributions from parapsychologists. (Cf. Blackmore 2001).

Radin doesn’t grasp the fact that the concept of mind can be an illusion without being a “meaningless illusion” (294). He seems to have read David Chalmers, but I suggest he and his followers read Daniel Dennett. I’d begin with Sweet Dreams (2005). Consciousness is not “a complete mystery,” as Radin claims (294). The best that Radin can come up with as evidence that psi research has something to offer consciousness studies is the claim that “information can be obtained in ways that bypass the ordinary sensory system altogether” (295). Let’s ignore the fact that this claim begs the question. What neuroscience has uncovered is just how interesting and complex this “ordinary sensory system” turns out to be.

Radin would have us believe that magical thinking is essential to our psychological well being (293). If he’s right, we’ll one day be able to solve all social problems by “mass-mind healings.” And religious claims will get new meaning as people come to understand the psychic forces behind miracles and talking to the dead. According to Radin, when a medium today talks to a spirit “perhaps he is in contact with someone who is alive in the past. From the ‘departed’ person’s perspective, she may find herself communicating with someone from the future, although it is not clear that she would know that” (295). Yes, I don’t think that would be clear, either.

In medicine, Radin expects distant mental healing (which he argues has been scientifically established) to expand to something that “might be called techno-shamanism” (296). He describes this new development as “an exotic, yet rigorously schooled combination of ancient magical principles and future technologies” (296). He expects psi to join magnetic resonance imaging and blood tests as common stock in the world of medicine. “This would translate into huge savings and improved quality of life for millions of people” (192) as “untold billions of dollars in medical costs could be saved” (193). 

Then, of course, there will be the very useful developments that include the ability to telepathically “call a friend in a distant spacecraft, or someone in a deeply submerged submarine” (296). On the other hand, the use of psychic power by the military and by police investigators will depend, Radin says, on “the mood of the times.” If what is popular on television is an indicator of the mood of the times, I predict that there will be full employment for psychic detectives and remote viewers in the future.

Radin looks forward to the day when psi technology “might allow thought control of prosthetics for paraplegics” and “mind-melding techniques to provide people with vast, computer-enhanced memories, lightning-fast mathematical capabilities, and supersensitive perceptions” (197). He even suggests we employ remote viewer Joe McMoneagle to reveal future technological devices he “has sensed in his remote-viewing sessions” (100).

Radin considers a few other benefits that will come from our increased ability to use psi powers: “to guide archeological digs and treasure-hunting expeditions, enhance gambling profits, and provide insight into historical events” (202). However, he does not consider some of the obvious problems and benefits that would occur should psychic ability become common. Imagine the difficulties for the junior high teacher in a room full of adolescents trained in PK. Teachers and parents would be spending most of their psychic energy controlling the hormones of their charges. The female garment and beauty industries would be destroyed as many attractive females would be driven to try to make themselves look ugly to avoid having their clothes constantly removed by psychic perverts and pranksters.

Ben Radford has noted the potential for “gross and unethical violations of privacy,” as people would be peeping into each other’s minds. On the other hand, infidelity and all forms of deception might die out, since nobody could deceive anyone about anything if we were all psychic. Magic would become pointless and “professions that involve deception would be worthless” (Radford 2000). There wouldn’t be any need for undercover work or spies. Every child molester would be identified immediately. No double agent could ever get away with it. There wouldn’t be any more lotteries, since everybody could predict the winning numbers. We wouldn’t need trials of accused persons and the polygraph would be a thing of the past.

Hurricanes, tsunamis, earthquakes, floods, and other signs of intelligent design will become things of the past as billions of humans unite to focus their thoughts on predicting and controlling the forces of nature. We won’t need to build elaborate systems to turn away errant asteroids or comets heading for our planet: billions of us will unite to will the objects on their merry way toward some other oblivion. It is unlikely that human nature will change as we become more psychically able, so warfare will continue but will be significantly changed. Weapons won’t be needed because we’ll be able to rearrange our enemies’ atoms and turn them into mush from the comfort of our living rooms. (Who knows? It might only take a few folks with super psi powers to find Osama bin Laden and turn him into a puddle of irradiated meat.) Disease and old age will become things of the past as we learn to use our thoughts to kill cancer cells and control our DNA.

Space travel will become trivial and heavy lifting will be eliminated as we will be able to teleport anything to anywhere at any time through global consciousness. We’ll be able to transport all the benefits of earthly consciousness to every planet in the universe. There are many other likely effects of global psychic ability that Radin has overlooked, but this is understandable given his heavy workload as Senior Scientist at IONS (The Institute of Noetic Sciences) and as a blogger.

Radin notes only one problem should psi ability become common: we’ll all be dipping into the future and we might “crash the future,” whatever that means. The bright side of crashing the future will be the realization of “true freedom” as we will no longer be doomed to our predestined fate. We will all have the power “to create the future as we wish, rather than blindly follow a predetermined course through our ignorance” (297). That should make even the most cynical Islamic fundamentalist or doomsday Christian take heed. This psi stuff could be dangerous to one’s delusions even as it tickles one’s funny bone and stimulates one’s imagination to aspire to the power of gods and demons.

******      ******      ******

update: Radin has a follow-up book out called Entangled Minds: Extrasensory Experiences in a Quantum Reality. Like The Conscious Universe, this one lays out the scientific evidence for psi as seen from the eyes of a true believer. As noted above, in The Conscious Universe, Radin uses statistics and meta-analysis to prove that psychic phenomena really do exist even if those who have the experiences in the labs are unaware of them. Statistical data show that the world has gone psychic, according to the latest generation of parapsychologists. You may be unconscious of it, but your mind is affecting random number generators all over the world as you read this. The old psychic stuff—thinking about aunt Hildie moments before she calls to tell you to bugger off—is now demonstrated to be true by statistical methods that were validated in 1937 by Burton Camp and meta-validated by Radin 60 years later when he asserted that meta-analysis was the replication parapsychologists had been looking for. The only difference is that now when you think of aunt Hildie it might be moments before she calls her car mechanic and that, too, may be linked to activity in your mind that you are unaware of.

Radin’s second book sees entanglement as a key to understanding extrasensory phenomena. Entanglement is a concept from quantum physics that refers to connections between subatomic particles that persist no matter how far apart the particles are. He notes that some physicists have speculated that the entire universe might be entangled and that the Eastern mystics of old might have been on to something cosmic. His speculations are rather wild but his assertions are rather modest. For example: “I believe that entanglement suggests a scenario that may ultimately lead to a vastly improved understanding of psi” (p. 14) and “I propose that the fabric of reality is comprised [sic] of ‘entangled threads’ that are consistent with the core of psi experience” (p. 19). Skeptics might suggest that studying self-deception and wishful thinking would lead to a vastly improved understanding of psi research and that being consistent with a model is a minimal, necessary condition for taking any model seriously, but hardly sufficient to warrant much faith.

Readers of The Conscious Universe will be pleased to know that Radin has outdone himself on the meta-analysis front. In his second book, he provides a meta-meta-analysis of over 1,000 studies on dream psi, ganzfeld psi, staring, distant intention, dice PK, and RNG PK. He concludes that the odds against chance of getting these results are 10^104 against 1 (p. 276). As Radin says, “there can be little doubt that something interesting is going on” (p. 275). Yes, but I’m afraid it may be going on only in some entangled minds.
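
For readers wondering how such astronomical odds are manufactured, here is a generic illustration (invented numbers, not Radin’s data or his exact procedure): pooling many studies that are each only slightly tilted away from chance can yield a vanishingly small combined p-value.

    from scipy.stats import combine_pvalues

    # 1,000 hypothetical studies, each unimpressive on its own (p = 0.25),
    # pooled with Fisher's method. The combined p-value comes out around
    # 1e-34: "astronomical odds" built from individually weak results.
    pvals = [0.25] * 1000
    print(combine_pvalues(pvals, method='fisher'))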

On the bright side, Radin continues to ignore Gary Schwartz and self-proclaimed psychics like John Edward, Sylvia Browne, Uri Geller, and Ted Owens. He still has a fondness for remote viewers like Joe McMoneagle, however, who seems impressive if you don’t understand subjective validation, are willing to ignore the vast majority of his visions, and aren’t bothered by vagueness in the criteria as to what counts as a “hit” in remote viewing. Even a broken clock is right twice a day.

Radin predicts that some day “psi research will be taught in universities with the same aplomb as today’s elementary economics and biology” (p. 295). Perhaps psi research will be taught in the same classroom as intelligent design, though this seems unlikely as parapsychology attempts to reduce all supernatural and paranormal phenomena to physics. Maybe they could both be taught in the same curriculum: things that explain everything but illuminate nothing.

note: If the reader wants to see a more complete review of Radin’s work, please read my reviews of his books. Links are given below.

further reading

book reviews by Robert T. Carroll

The Conscious Universe: The Scientific Truth of Psychic Phenomena
by Dean Radin (HarperOne 1997)

Entangled Minds: Extrasensory Experiences in a Quantum Reality
by Dean Radin (Paraview Pocket Books 2006)

The End of Materialism: How Evidence of the Paranormal is Bringing Science and Spirit Together by Charles T. Tart, Ph.D. (New Harbinger 2009)

Spook: Science Tackles the Afterlife 
by Mary Roach (W. W. Norton 2005).

The Afterlife Experiments: Breakthrough Scientific Evidence of Life After Death
by Gary Schwartz (Atria 2003)

Ghost Hunters – William James and the Hunt for Scientific Proof of Life After Death
by Deborah Blum (Penguin Press 2006).

books and articles

Blackmore, Susan. (2001) “What Can the Paranormal Teach Us About Consciousness?” Skeptical Inquirer, March/April.

Blackmore, Susan (2003). Consciousness: An Introduction. Oxford University Press.

Good, I. J. (1997). Review of The Conscious Universe. Nature, October 23, with links to responses by Radin, Brian Josephson, and Nick Herbert.

Larsen, Claus. (2002). An evening with Dean Radin.

Pedersen, Morten Monrad. (2003). Book Review of Dean Radin’s The Conscious Universe

Radin, Dean. (1997). The Conscious Universe – The Scientific Truth of Psychic Phenomena. HarperCollins.

Radin, Dean. (2006). Entangled Minds: Extrasensory Experiences in a Quantum Reality. Paraview Pocket Books.

Radford, Benjamin. (2000). “Worlds in Collision – Applying Reality to the Paranormal,” Skeptical Inquirer, November/December.

Last updated 01-Aug-2015

Climate Debate Needs More Social Science, New Book Argues (Inside Science)

Image credit: Matt Jiggins via Flickr | http://bit.ly/1M6iSlZ

Physical scientists aren’t trained for all the political and moral issues.
Oct 2 2015 – 10:00am

By: Joel N. Shurkin, Contributor

(Inside Science) — The notion that Earth’s climate is changing—and that the threat to the world is serious—goes back to the 1980s, when a consensus began to form among climate scientists as temperatures began to rise noticeably. Thirty years later, that consensus is solid, yet climate change and the disruption it may cause remain divisive political issues, and millions of people remain unconvinced.

A new book argues that social scientists should play a greater role in helping natural scientists convince people of the reality of climate change and drive policy.

Climate Change and Society consists of 13 essays on why the debate needs the voices of social scientists, including political scientists, psychologists, anthropologists, and sociologists. It is edited by Riley E. Dunlap, professor of sociology at Oklahoma State University in Stillwater, and Robert J. Brulle, professor of sociology and environmental science at Drexel University in Philadelphia.

Brulle said the physical scientists tend to frame climate change “as a technocratic and managerial problem.”

“Contrast that to the Pope,” he said.

Pope Francis sees it as a “political, moral issue that won’t be settled by a group of experts sitting in a room,” said Brulle, who emphasized that it will be settled by the political process. Sociologists agree.

Sheila Jasanoff also agrees. She is the Pforzheimer professor of science and technology studies at the Harvard Kennedy School in Cambridge, Massachusetts, and did not participate in the book.

She said that understanding how people behave differently depending on their belief system is important.

“Denial is a somewhat mystical thing in people’s heads,” Jasanoff said. “One can bring tools of sociology of knowledge and belief—or social studies—to understand how commitments to particular statements of nature are linked with understanding how you would feel compelled to behave if nature were that way.”

Parts of the world where climate change is considered a result of the colonial past may resist taking drastic action at the behest of the former colonial rulers. Jasanoff said that governments will have to convince these groups that climate change is a present danger and attention must be paid.

Some who agree there is a threat are reluctant to advocate for drastic economic changes because they believe the world will be rescued by innovation and technology, Jasanoff said. Even among industrialized countries, views about the potential of technology differ.

Understanding these attitudes is what social scientists do, the book’s authors maintain.

“One of the most pressing contributions our field can make is to legitimate big questions, especially the ability of the current global economic system to take the steps needed to avoid catastrophic climate change,” editors of the book wrote.

The issue also is deeply embedded in the social science of economics and in the problem of “have” and “have-not” societies in consumerism and the economy.

For example, Bangladesh sits at sea level, and if the seas rise enough, nearly the entire country could disappear in the waters. Hurricane Katrina brought hints of the consequences of that reality to New Orleans, a city that now sits below sea level. The heaviest burden of the storm’s effects fell on the poor neighborhoods, Brulle said.

“The people of Bangladesh will suffer more than the people on the Upper East Side of Manhattan,” Brulle said. He said they have to be treated differently, which is not something many physical scientists studying the processes behind sea level rise have to factor into their research.

“Those of us engaged in the climate fight need valuable insight from political scientists and sociologists and psychologists and economists just as surely as from physicists,” agreed Bill McKibben, an environmentalist and author who is a scholar in residence at Middlebury College in Vermont. “It’s very clear carbon is warming the planet; it’s very unclear what mix of prods and preferences might nudge us to use much less.”


Joel Shurkin is a freelance writer in Baltimore. He was former science writer at the Philadelphia Inquirer and was part of the team that won a Pulitzer Prize for covering Three Mile Island. He has nine published books and is working on a tenth. He has taught journalism at Stanford University, the University of California at Santa Cruz and the University of Alaska Fairbanks. He tweets at @shurkin.

Thermodynamics, W.H. Auden and Philip K. Dick (Immanent Forms)

JUNE 5, 2015 – 

I was struck this morning by the similarity between two twentieth-century passages about entropy. The first is from W.H. Auden’s poem “As I Walked Out One Evening,” and the second from Philip K. Dick’s Do Androids Dream of Electric Sheep? If I were a betting man, I’d put money on PKD having read Auden. The cupboard and the teacup, especially, drew my attention, but it is also worth noting that the passage in PKD immediately precedes J.R. Isidore’s vision of the “tomb world,” a variation on Auden’s “land of the dead.”

Whether or not the passage in PKD is an explicit allusion or homage to Auden, I find it interesting that PKD’s passage, which several times mentions the irradiated dust of nuclear fallout, so closely resembles Auden’s pre-nuclear poem. The psychological issue, in each case, is not humanity’s ability to destroy itself (despite the post-apocalyptic setting of Androids) but the problem of being, as Carl Sagan puts it, “a way for the cosmos to know itself.” How do we live with our knowledge of geologic or cosmological time (scales on which all of human history occupies a mere blip) and, simultaneously, assert the meaningfulness of individual lives? More after the break, but first, the passages:

W.H. Auden, from “As I Walked Out One Evening” (1940):

But all the clocks in the city

Began to whirr and chime:

‘O let not Time deceive you,

You cannot conquer Time.

‘In the burrows of the Nightmare

Where Justice naked is,

Time watches from the shadow

And coughs when you would kiss.

‘In headaches and in worry

Vaguely life leaks away,

And Time will have his fancy

To-morrow or to-day.

‘Into many a green valley

Drifts the appalling snow;

Time breaks the threaded dances

And the diver’s brilliant bow.

‘O plunge your hands in water,

Plunge them in up to the wrist;

Stare, stare in the basin

And wonder what you’ve missed.

‘The glacier knocks in the cupboard,

The desert sighs in the bed,

And the crack in the tea-cup opens

A lane to the land of the dead.

‘Where the beggars raffle the banknotes

And the Giant is enchanting to Jack,

And the Lily-white Boy is a Roarer,

And Jill goes down on her back.

Philip K. Dick, from Do Androids Dream of Electric Sheep? (1968):

“…he saw the dust and the ruin of the apartment as it lay spreading out everywhere–he heard the kipple coming, the final disorder of all forms, the absence which would win out. It grew around him as he stood holding the empty ceramic cup; the cupboards of the kitchen creaked and split and he felt the floor beneath his feet give.

Reaching out, he touched the wall. His hand broke the surface; gray particles trickled and hurried down, fragments of plaster resembling the radioactive dust outside. He seated himself at the table and, like rotten, hollow tubes the legs of the chair bent; standing quickly, he set down the cup and tried to reform the chair, tried to press it back into its right shape. The chair came apart in his hands, the screws which had previously connected its several sections ripping out and hanging loose. He saw, on the table, the ceramic cup crack; webs of fine lines grew like the shadows of a vine, and then a chip dropped from the edge of the cup, exposing the rough, unglazed interior.”

Nietzsche frequently and disparately writes about this problem in terms of “eternal recurrence”: the natural cycles of life and death that repeat themselves across long stretches of time dwarf the appearance of any individual member of a single species on one planet. In The Birth of Tragedy (an early work that Nietzsche distances himself from, but still a valuable touchstone in his thought), Nietzsche frames this as a problem of identification. We identify with our individual selves, but those selves are also part of the large natural cycles whose inevitable continuation will destroy the individual. We can attempt to identify with the cycle itself as a claim to immortality. As Sagan says, “Some part of our being knows this is where we came from. We long to return, and we can, because the cosmos is also within us. We’re made of star stuff.”

On the other hand, identifying with the cosmos as a whole diminishes the significance of our own disappearance within the natural cycle. As homo sapiens sapiens we may be part of the terran biosphere in the solar system (itself a secondary star system formed from the stuff of previous supernovas), but as Carl or Friedrich or Wystan or Dick, our individual deaths, like our lives, are not interchangeable. Hannah Arendt, in The Human Condition (1958), refers to this quality as “uniqueness”: “In man, otherness, which he shares with everything that is, and distinctness, which he shares with everything alive, become uniqueness, and human plurality is the paradoxical plurality of unique beings.” We act together, speak together, and, in the process, we forge identities that are irreducible to our membership in a class of objects or a biological species. We exercise what Nietzsche calls the “principle of individuation”: we create individual selves that will never be repeated in the eternal recurrence of natural cycles.

Taking this a step farther, our potential identification with the cosmos as a whole is only possible because we have individual consciousnesses that can identify/form identities. Nietzsche argues that simply disavowing our individual selves in favor of universal being/becoming prevents the cosmos from knowing or being known. The individual (what he calls Apollonian) may be a temporary, fleeting form, but for us to experience our place within the universal (what he calls Dionysian), we must hold our individual selves in tension with those larger processes.

The highest forms of art are born, Nietzsche argues, when Apollo and Dionysus are locked in conflict. We are individuals who will die, and our unique lives will be gone. We are also part of, constitutive of, and coextensive with the dynamic unfolding of the universe as a whole. A few billion years from now, the sun will die and take the Earth (and Mercury and Venus) with it, but even that will not be the end of our story. The productive problem we face is finding meaning that can emerge from both biography and cosmology and their vast differences in scale.

Arendt has some very interesting things to say about entropy and the apparently miraculous rescue of human life and worldliness from the seemingly inevitable destruction of natural cycles. I am tempted to end with her, but, for this post, I want to give Auden the final word. His poem begins with lovers declaring that they will love forever, and the entropic wisdom of the city’s chiming clocks interrupts those declarations. The meaning of that interruption, however, is not a simple rejection of subjective folly in favor of a more objective, longer view. It leaves the lovers (and the listeners who are left long after the lovers leave) with a peculiar form of responsibility:

‘O look, look in the mirror,

O look in your distress:

Life remains a blessing

Although you cannot bless.

‘O stand, stand at the window

As the tears scald and start;

You shall love your crooked neighbour

With your crooked heart.’

It was late, late in the evening,

The lovers they were gone;

The clocks had ceased their chiming,

And the deep river ran on.

Experiment Provides Further Evidence That Reality Doesn’t Exist Until We Measure It (IFLScience)

June 2, 2015 | by Stephen Luntz

photo credit: Pieter Kuiper via Wikimedia Commons. A comparison of double slit interference patterns with different widths. Similar patterns produced by atoms have confirmed the dominant model of quantum mechanics 

Physicists have succeeded in confirming one of the theoretical aspects of quantum physics: Subatomic objects switch between particle and wave states when observed, while remaining in a dual state beforehand.

In the macroscopic world, we are used to waves being waves and solid objects being particle-like. However, quantum theory holds that for the very small this distinction breaks down. Light can behave either as a wave, or as a particle. The same goes for objects with mass like electrons.

This raises the question of what determines when a photon or electron will behave like a wave or a particle. How, anthropomorphizing madly, do these things “decide” which they will be at a particular time?

The dominant model of quantum mechanics holds that it is when a measurement is taken that the “decision” takes place. Erwin Schrödinger came up with his famous thought experiment using a cat to ridicule this idea. Physicists think that quantum behavior breaks down on a large scale, so Schrödinger’s cat would not really be both alive and dead—however, in the world of the very small, strange theories like this seem to be the only way to explain what we see.

In 1978, John Wheeler proposed a series of thought experiments to make sense of what happens when a photon has to either behave in a wave-like or particle-like manner. At the time, it was considered doubtful that these could ever be implemented in practice, but in 2007 such an experiment was achieved.

Now, Dr. Andrew Truscott of the Australian National University has reported the same thing in Nature Physics, but this time using a helium atom, rather than a photon.

“A photon is in a sense quite simple,” Truscott told IFLScience. “An atom has significant mass and couples to magnetic and electric fields, so it is much more in tune with its environment. It is more of a classical particle in a sense, so this was a test of whether a more classical particle would behave in the same way.”

Truscott’s experiment involved creating a Bose-Einstein condensate of around a hundred helium atoms. He conducted the experiment first with this condensate, but says the possibility that atoms were influencing each other made it important to repeat it after ejecting all but one. The atom was passed through a “grate” made by two laser beams, which can scatter an atom much as a solid grating scatters light. These have been shown to cause atoms to either pass through one arm, like a particle, or both, like a wave.

A random number generator was then used to determine whether a second grating would appear further along the atom’s path. Crucially, the number was only generated after the atom had passed the first grate.

The second grating, when applied, caused an interference pattern in the measurement of the atom further along the path. Without the second grating, the atom had no such pattern.
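
A toy two-path calculation (my own sketch, not the ANU team’s analysis) reproduces the logic: when the second grating recombines the paths, the detection probability depends on their relative phase, giving fringes; without it, the phase dependence vanishes.

    import numpy as np

    # Toy model: the first grating splits the atom into two paths of equal
    # amplitude; "phase" is the relative phase accumulated between them.
    def detection_probability(phase, second_grating):
        amp1 = 1 / np.sqrt(2)                   # amplitude along path 1
        amp2 = np.exp(1j * phase) / np.sqrt(2)  # amplitude along path 2
        if second_grating:
            # Paths recombine: amplitudes add, so fringes appear vs. phase.
            return abs((amp1 + amp2) / np.sqrt(2)) ** 2
        # No recombination: probabilities add, so the result is flat.
        return 0.5 * abs(amp1) ** 2 + 0.5 * abs(amp2) ** 2

    for phase in (0.0, np.pi / 2, np.pi):
        print(f"phase={phase:4.2f}  with grating: "
              f"{detection_probability(phase, True):4.2f}  "
              f"without: {detection_probability(phase, False):4.2f}")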

An optical version of Wheeler’s delayed choice experiment (left) and an atomic version as used by Truscott (right). Credit: Manning et al.

Truscott says that there are two possible explanations for the behavior observed. Either, as most physicists think, the atom decided whether it was a wave or a particle when measured, or “a future event (the method of detection) causes the photon to decide its past.”

In the bizarre world of quantum mechanics, events rippling back in time may not seem that much stranger than things like “spooky action at a distance” or even something being a wave and a particle at the same time. However, Truscott said, “this experiment can’t prove that that is the wrong interpretation, but it seems wrong, and given what we know from elsewhere, it is much more likely that only when we measure the atoms do their observable properties come into reality.”

Is the universe a hologram? (Science Daily)

Date:
April 27, 2015
Source:
Vienna University of Technology
Summary:
The ‘holographic principle,’ the idea that a universe with gravity can be described by a quantum field theory in fewer dimensions, has been used for years as a mathematical tool in strange curved spaces. New results suggest that the holographic principle also holds in flat spaces. Our own universe could in fact be two dimensional and only appear three dimensional — just like a hologram.

Is our universe a hologram? Credit: TU Wien 

At first glance, there is not the slightest doubt: to us, the universe looks three dimensional. But one of the most fruitful theories of theoretical physics in the last two decades is challenging this assumption. The “holographic principle” asserts that a mathematical description of the universe actually requires one fewer dimension than it seems. What we perceive as three dimensional may just be the image of two dimensional processes on a huge cosmic horizon.

Up until now, this principle has only been studied in exotic spaces with negative curvature. This is interesting from a theoretical point of view, but such spaces are quite different from the space in our own universe. Results obtained by scientists at TU Wien (Vienna) now suggest that the holographic principle even holds in a flat spacetime.

The Holographic Principle

Everybody knows holograms from credit cards or banknotes. They are two dimensional, but to us they appear three dimensional. Our universe could behave quite similarly: “In 1997, the physicist Juan Maldacena proposed the idea that there is a correspondence between gravitational theories in curved anti-de Sitter spaces on the one hand and quantum field theories in spaces with one fewer dimension on the other,” says Daniel Grumiller (TU Wien).

Gravitational phenomena are described in a theory with three spatial dimensions, the behaviour of quantum particles is calculated in a theory with just two spatial dimensions — and the results of both calculations can be mapped onto each other. Such a correspondence is quite surprising. It is like finding out that equations from an astronomy textbook can also be used to repair a CD-player. But this method has proven to be very successful. More than ten thousand scientific papers about Maldacena’s “AdS/CFT correspondence” have been published to date.

Correspondence Even in Flat Spaces

For theoretical physics, this is extremely important, but it does not seem to have much to do with our own universe. Apparently, we do not live in such an anti-de Sitter space. These spaces have quite peculiar properties: they are negatively curved, and any object thrown away on a straight line will eventually return. “Our universe, in contrast, is quite flat — and at astronomical distances, it has positive curvature,” says Daniel Grumiller.

However, Grumiller has suspected for quite some time that a correspondence principle could also hold true for our real universe. To test this hypothesis, gravitational theories have to be constructed, which do not require exotic anti-de-sitter spaces, but live in a flat space. For three years, he and his team at TU Wien (Vienna) have been working on that, in cooperation with the University of Edinburgh, Harvard, IISER Pune, the MIT and the University of Kyoto. Now Grumiller and colleagues from India and Japan have published an article in the journal Physical Review Letters, confirming the validity of the correspondence principle in a flat universe.

Calculated Twice, Same Result

“If quantum gravity in a flat space allows for a holographic description by a standard quantum theory, then there must be physical quantities that can be calculated in both theories — and the results must agree,” says Grumiller. One key feature of quantum mechanics in particular, quantum entanglement, has to appear in the gravitational theory.

When quantum particles are entangled, they cannot be described individually. They form a single quantum object, even if they are located far apart. There is a measure for the amount of entanglement in a quantum system, called “entropy of entanglement.” Together with Arjun Bagchi, Rudranil Basu and Max Riegler, Daniel Grumiller managed to show that this entropy of entanglement takes the same value in flat quantum gravity and in a low-dimensional quantum field theory.
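
The press release leaves the definition implicit; the standard quantity meant here is the von Neumann entropy of the reduced state \rho_A of a subsystem A,

S_A = -\mathrm{Tr}\,(\rho_A \log \rho_A) ,

which is zero when A is unentangled with the rest of the system and grows as the entanglement increases.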

“This calculation affirms our assumption that the holographic principle can also be realized in flat spaces. It is evidence for the validity of this correspondence in our universe,” says Max Riegler (TU Wien). “The fact that we can even talk about quantum information and entropy of entanglement in a theory of gravity is astounding in itself, and would hardly have been imaginable only a few years back. That we are now able to use this as a tool to test the validity of the holographic principle, and that this test works out, is quite remarkable,” says Daniel Grumiller.

This, however, does not yet prove that we are indeed living in a hologram — but apparently there is growing evidence for the validity of the correspondence principle in our own universe.


Journal Reference:

  1. Arjun Bagchi, Rudranil Basu, Daniel Grumiller, Max Riegler. Entanglement Entropy in Galilean Conformal Field Theories and Flat Holography. Physical Review Letters, 2015; 114 (11). DOI: 10.1103/PhysRevLett.114.111602

Out of Place: Space/Time and Quantum (In)security (The Disorder of Things)

APRIL 21, 2015 – DRLJSHEPHERD

A demon lives behind my left eye. As a migraine sufferer, I have developed a very personal relationship with my pain and its perceived causes. On a bad day, with a crippling sensitivity to light, nausea, and the feeling that the blood flowing to my brain has slowed to a crawl and is the poisoned consistency of pancake batter, I feel the presence of this demon keenly.

On the first day of the Q2 Symposium, however, which I was delighted to attend recently, the demon was in a tricksy mood, rather than out for blood: this was a vestibular migraine. The symptoms of this particular neurological condition are dizziness, loss of balance, and sensitivity to motion. Basically, when the demon manifests in this way, I feel constantly as though I am falling: falling over, falling out of place. The Q Symposium, hosted by James Der Derian and the marvellous team at the University of Sydney’s Centre for International Security Studies, was intended, over the course of two days and a series of presentations, interventions, and media engagements, to unsettle, to make participants think differently about space/time and security, thinking through quantum rather than classical theory, but I do not think that this is what the organisers had in mind.

photo of cabins and corridors at Q Station, Sydney

At the Q Station, located in Sydney where the Q Symposium was held, my pain and my present aligned: I felt out of place, I felt I was falling out of place. I did not expect to like the Q Station. It is the former quarantine station used by the colonial administration to isolate immigrants they suspected of carrying infectious diseases. Its location, on the North Head of Sydney and now within the Sydney Harbour National Park, was chosen for strategic reasons – it is secluded, easy to manage, a passageway point on the journey through to the inner harbour – but it has a much longer historical relationship with healing and disease. The North Head is a site of Aboriginal cultural significance; the space was used by the spiritual leaders (koradgee) of the Guringai peoples for healing and burial ceremonies.

So I did not expect to like it, as such an overt symbol of the colonisation of Aboriginal lands, but it disarmed me. It is a place of great natural beauty, and it has been revived with respect, I felt, for the rich spiritual heritage of the space that extended long prior to the establishment of the Quarantine Station in 1835. When we Q2 Symposium participants were welcomed to country and invited to participate in a smoking ceremony to protect us as we passed through the space, we were reminded of this history and thus reminded – gently, respectfully (perhaps more respectfully than we deserved) – that this is not ‘our’ place. We were out of place.

We were all out of place at the Q2 Symposium. That is the point. Positioning us thus was deliberate; we were to see whether voluntary quarantine would produce new interactions and new insights, guided by the Q Vision, to see how quantum theory ‘responds to global events like natural and unnatural disasters, regime change and diplomatic negotiations that phase-shift with media interventions from states to sub-states, local to global, public to private, organised to chaotic, virtual to real and back again, often in a single news cycle’. It was two days of rich intellectual exploration and conversation, and – as is the case when these experiments work – beautiful connections began to develop between those conversations and the people conversing, conversations about peace, security, and innovation, big conversations about space, and time.

I felt out of place. Mine is not the language of quantum theory. I learned so much from listening to my fellow participants, but I was insecure; as the migraine took hold on the first day, I was not only physically but intellectually feeling as though I was continually falling out of the moment, struggling to maintain the connections between what I was hearing and what I thought I knew.

Quantum theory departs from classical theory in the proposition of entanglement and the uncertainty principle:

This principle states the impossibility of simultaneously specifying the precise position and momentum of any particle. In other words, physicists cannot measure the position of a particle, for example, without causing a disturbance in the velocity of that particle. Knowledge about position and velocity are said to be complementary, that is, they cannot be precise at the same time.
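
In symbols, the relation the passage describes is Heisenberg’s inequality for position and momentum,

\Delta x \, \Delta p \geq \frac{\hbar}{2} ,

which bounds the product of the two uncertainties from below: the more sharply one is specified, the larger the other must be.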

I do not know anything about quantum theory – I found it hard to follow even the beginner’s guides provided by the eloquent speakers at the Symposium – but I know a lot about uncertainty. I also feel that I know something about entanglement, perhaps not as it is conceived of within quantum physics, but perhaps that is the point of events such as the Q Symposium: to encourage us to allow the unfamiliar to flow through and around us until the stream snags, to produce an idea or at least a moment of alternative cognition.

My moment of alternative cognition was caused by foetal microchimerism, a connection that flashed for me while I was listening to a physicist talk about entanglement. Scientists have shown that during gestation, foetal cells migrate into the body of the mother and can be found in the brain, spleen, liver, and elsewhere decades later. There are (possibly) parts of my son in my brain, literally as well as simply metaphorically (as the latter was already clear). I am entangled with him in ways that I cannot comprehend. Listening to the speakers discuss entanglement, all I could think was, This is what entanglement means to me, it is in my body.

Perhaps I am not proposing entanglement as Schrödinger does, as ‘the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought’. Perhaps I am just using the concept of entanglement to denote the inextricable, inexplicable, relationality that I have with my son, my family, my community, humanity. It is this entanglement that undoes me, to use Judith Butler’s most eloquent phrase, in the face of grief, violence, and injustice. Perhaps this is the value of the quantum: to make connections that are not possible within the confines of classical thought.

I am not a scientist. I am a messy body out of place, my ‘self’ apparently composed of bodies out of place. My world is not reducible. My uncertainty is vast. All of these things make me insecure, challenge how I move through professional time and space as I navigate the academy. But when I return home from my time in quarantine and joyfully reconnect with my family, I am grounded by how I perceive my entanglement. It is love, not science, that makes me a better scholar.

photo of sign that says 'laboratory and mortuary' from Q station, sydney.

I was inspired by what I heard, witnessed, discussed at the Q2 Symposium. I was – and remain – inspired by the vision of the organisers, the refusal to be bound by classical logics in any field that turns into a drive, a desire to push our exploration of security, peace, and war in new directions. We need new directions; our classical ideas have failed us, and failed humanity, a point made by Colin Wight during his remarks on the final panel at the Symposium. Too often we continue to act as though the world is our laboratory; we have ‘all these theories yet the bodies keep piling up…‘.

But if this is the case, I must ask: do we need a quantum turn to get us to a space within which we can admit entanglement, admit uncertainty, admit that we are out of place? We are never (only) our ‘selves’: we are always both wave and particle and all that is in between and it is our being entangled that renders us human. We know this from philosophy, from art and the humanities. Can we not learn this from art? Must we turn to science (again)? I felt diminished by the asking of these questions, insecure, but I did not feel that these questions were out of place.

The Angra 3 Nuclear Power Plant and Operation Lava Jato (JC)

For physicist Heitor Scalambrini Costa, allegations of bribes in the plant’s construction and technical objections regarding the obsolescence of its outdated equipment are serious matters that must be investigated urgently

Despite all the movement on the international scene concerning the problems and risks of nuclear installations, which intensified after the Fukushima disaster (11 March 2011), the position of the authorities at the Ministry of Mines and Energy, of the nuclear-sector “lobbyists”, and of the contractors and equipment suppliers is surprising: all of them keep insisting on installing four more nuclear plants in the country by 2030, two of them in the Brazilian Northeast, in addition to the construction of Angra 3, which has already been approved.

In the case of Angra 3, the estimated cost of the project was R$ 7.2 billion in 2008; it jumped to R$ 10.4 billion at the end of 2010; in July 2013, according to Eletronuclear, it exceeded R$ 13 billion; and by 2018, the year of its scheduled completion, it should reach R$ 14.9 billion. Obviously, the doubling of this nuclear plant’s construction costs has a decisive impact on the average price at which electricity is sold in the country.
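
The “doubling” follows directly from the figures quoted, in nominal reais with no adjustment for inflation:

\frac{14.9}{7.2} \approx 2.07 ,

so the 2018 estimate is slightly more than twice the 2008 one.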

The history of the nuclear industry in Brazil shows that it always was, and remains, an industry highly dependent on public subsidies. The financing conditions of Angra 3 are undoubtedly perverse, with hidden government subsidies to be disguised later in electricity bills. And the ones who will pay that bill are us, the users, who already pay some of the highest electricity rates in the world.

With Operation Lava Jato, launched in March 2014 to investigate a vast money-laundering and embezzlement scheme involving Petrobras, the country’s major construction companies and several politicians, the real, far-from-republican interests behind the decision to build large energy projects, such as the Belo Monte hydroelectric plant and the Angra 3 nuclear plant, are beginning to be laid bare.

Ever since the decision to build it under the troubled Brazil-Germany nuclear agreement, the Angra 3 plant has been surrounded by the mystery, controversy, uncertainty, and lack of transparency that are common in the Brazilian nuclear sector.

The plant's civil works were awarded to Construtora Andrade Gutierrez under a contract signed on 16 June 1983 (Figueiredo administration, 1979-1985). In April 1986 the works were halted owing to lack of funds, high costs, and doubts about the suitability and risks of this energy source. Even so, the contractor received payments of roughly US$ 20 million a year for decades.

After 23 years at a standstill, work on Angra 3 resumed in 2009 (Lula administration, 2003-2010). The Lula government chose not to hold new tenders and revalidated the bid won by Andrade Gutierrez in 1983. Although there were no new tenders, Eletronuclear negotiated updated prices with all suppliers and service providers. The works and their equipment became considerably more expensive: in dollars, the value jumped from US$ 1.8 billion to approximately US$ 3.3 billion.

Faced with the decision to keep the contract with Andrade Gutierrez, competing builders, especially Camargo Corrêa, tried in vain to persuade the government to reconsider, arguing that a technological revolution over that period had cut the cost of civil works for nuclear plants by up to 40%. Likewise the full bench of the Tribunal de Contas da União, when it reviewed the matter in September 2008, did not block the revalidation of the contracts, although it found that Angra 3 showed "signs of serious irregularity", without, however, recommending that the project be halted.

The civil-works contract was not the only one taken out of the freezer by the Lula government. For the supply of imported goods and services, the manufacturer Areva, a company resulting from the merger of Germany's Siemens KWU and France's Framatome, was chosen. Strictly speaking, Areva did not even sign the contract: it was chosen because it inherited the original agreement from KWU.

The assembly contracts, in turn, were signed on 2 September 2014 with the following consortia: the ANGRA 3 consortium, responsible for the electromechanical assembly of the systems associated with the plant's primary circuit (the systems associated with nuclear steam generation), formed by Construtora Queiroz Galvão S.A., EBE – Empresa Brasileira de Engenharia S.A., and Techint Engenharia S.A.; and the UNA 3 consortium, responsible for the assembly of the plant's conventional systems, formed by Construtora Andrade Gutierrez S.A., Construtora Norberto Odebrecht S.A., Construções e Comércio Camargo Corrêa S.A., and UTC Engenharia S.A.

Eletronuclear's current schedule has Angra 3 entering operation in May 2018. But that target will have to be revised after the works came to a near standstill at the end of April 2014 over allegedly unpaid debts to the contractors (Dilma administration, 2011-2014).

After all these setbacks to such a controversial project, we learned of the allegations made by an executive of the contractor Camargo Corrêa, who began cooperating with the Operação Lava Jato investigations and told prosecutors, during negotiations for a plea-bargain agreement, of an alleged bribe paid to the former Minister of Mines and Energy, Edson Lobão, in connection with the hiring of Camargo Corrêa for the Angra 3 works.

If these accusations are confirmed, it will be clear to Brazilian society that the real interest behind building Angra 3 and four more nuclear plants was, above all, the large sums that public officials received as bribes. It is worth remembering that, in this case, Minister Lobão had authority over the state-owned company responsible for the project, Eletronuclear, a subsidiary of Eletrobrás.

After this episode we can no longer ignore the technical objections, such as the complaints about the obsolescence of the technologically outdated equipment (which compromises the plant's operation and increases the risk of a nuclear disaster); nor the warnings that the cost of the project could balloon during construction, which has in fact already happened; nor the questions raised about the loan granted by Caixa Econômica Federal for the construction of Angra 3.

The expectation is that all the allegations will be investigated and those responsible held to account. The fact itself is extremely serious, and sufficient grounds for halting nuclear activities in the country, in particular the construction of Angra 3, with a freeze on new installations. It cannot be accepted that the decision to build nuclear plants in the country was made at a mere bargaining counter.

Heitor Scalambrini Costa holds a degree in Physics from the Universidade de Campinas/SP, a master's in Nuclear Sciences and Technologies from the Universidade Federal de Pernambuco, and a doctorate in Energetics from the Université d'Aix-Marseille III (1992). He is currently an associate professor at the Universidade Federal de Pernambuco.

Time and Events (Knowledge Ecology)

March 24, 2015 / Adam Robbert

[Image: Mohammad Reza Domiri Ganji]

I just came across Massimo Pigliucci’s interesting review of Mangabeira Unger and Lee Smolin’s book The Singular Universe and the Reality of Time. There are more than a few Whiteheadian themes explored throughout the review, including Unger and Smolin’s (U&S) view that time should be read as an abstraction from events and that the “laws” of the universe are better conceptualized as habits or contingent causal connections secured by the ongoingness of those events rather than as eternal, abstract formalisms. (This entangling of laws with phenomena, of events with time, is one of the ways we can think towards an ecological metaphysics.)

But what I am particularly interested in is the short discussion on Platonism and mathematical realism. I sometimes think of mathematical realism as the view that numbers, and thus the abstract formalisms they create, are real, mind-independent entities, and that, given this view, mathematical equations are discovered (i.e., they actually exist in the world) rather than created (i.e., humans made them up to fill this or that pragmatic need). The review makes it clear, though, that this definition doesn’t push things far enough for the mathematical realist. Instead, the mathematical realist argues for not just the mind-independent existence of numbers but also their nature-independence—math as independent not just of all knowers but of all natural phenomena, past, present, or future.

U&S present an alternative to mathematical realisms of this variety that I find compelling and more consistent with the view that laws are habits and that time is an abstraction from events. Here’s the reviewer’s take on U&S’s argument (the review starts with a quote from U&S and then unpacks it a bit):

“The third idea is the selective realism of mathematics. (We use realism here in the sense of relation to the one real natural world, in opposition to what is often described as mathematical Platonism: a belief in the real existence, apart from nature, of mathematical entities.) Now dominant conceptions of what the most basic natural science is and can become have been formed in the context of beliefs about mathematics and of its relation to both science and nature. The laws of nature, the discerning of which has been the supreme object of science, are supposed to be written in the language of mathematics.” (p. xii)

But they are not, because there are no “laws” and because mathematics is a human (very useful) invention, not a mysterious sixth sense capable of probing a deeper reality beyond the empirical. This needs some unpacking, of course. Let me start with mathematics, then move to the issue of natural laws.

I was myself, until recently, intrigued by mathematical Platonism [8]. It is a compelling idea, which makes sense of the “unreasonable effectiveness of mathematics” as Eugene Wigner famously put it [9]. It is a position shared by a good number of mathematicians and philosophers of mathematics. It is based on the strong gut feeling that mathematicians have that they don’t invent mathematical formalisms, they “discover” them, in a way analogous to what empirical scientists do with features of the outside world. It is also supported by an argument analogous to the defense of realism about scientific theories and advanced by Hilary Putnam: it would be nothing short of miraculous, it is suggested, if mathematics were the arbitrary creation of the human mind, and yet time and again it turns out to be spectacularly helpful to scientists [10].

But there are, of course, equally (more?) powerful counterarguments, which are in part discussed by Unger in the first part of the book. To begin with, the whole thing smells a bit too uncomfortably of mysticism: where, exactly, is this realm of mathematical objects? What is its ontological status? Moreover, and relatedly, how is it that human beings have somehow developed the uncanny ability to access such realm? We know how we can access, however imperfectly and indirectly, the physical world: we evolved a battery of sensorial capabilities to navigate that world in order to survive and reproduce, and science has been a continuous quest for expanding the power of our senses by way of more and more sophisticated instrumentation, to gain access to more and more (and increasingly less relevant to our biological fitness!) aspects of the world.

Indeed, it is precisely this analogy with science that powerfully hints to an alternative, naturalistic interpretation of the (un)reasonable effectiveness of mathematics. Math too started out as a way to do useful things in the world, mostly to count (arithmetics) and to measure up the world and divide it into manageable chunks (geometry). Mathematicians then developed their own (conceptual, as opposed to empirical) tools to understand more and more sophisticated and less immediate aspects of the world, in the process eventually abstracting entirely from such a world in pursuit of internally generated questions (what we today call “pure” mathematics).

U&S do not by any means deny the power and effectiveness of mathematics. But they also remind us that precisely what makes it so useful and general — its abstraction from the particularities of the world, and specifically its inability to deal with temporal asymmetries (mathematical equations in fundamental physics are time-symmetric, and asymmetries have to be imported as externally imposed background conditions) — also makes it subordinate to empirical science when it comes to understanding the one real world.

This empiricist reading of mathematics offers a refreshing respite to the resurgence of a certain Idealism in some continental circles (perhaps most interestingly spearheaded by Quentin Meillassoux). I’ve heard mention a few times now that the various factions squaring off within continental philosophy’s avant garde can be roughly approximated as a renewed encounter between Kantian finitude and Hegelian absolutism. It’s probably a bit too stark of a binary, but there’s a sense in which the stakes of these arguments really do center on the ontological status of mathematics in the natural world. It’s not a direct focus of my own research interests, really, but it’s a fascinating set of questions nonetheless.

Cosmic rays confirm that the heart of Fukushima melted down (El País)

A muon detector shows the interior of two damaged reactors in Japan

23 MAR 2015 – 17:04 CET

Image provided by Tepco of this detection work.

While Chernobyl is still struggling to cover the remains of its tragedy with a second sarcophagus, at Fukushima they are only taking the first steps toward fully controlling and dismantling the reactors damaged in 2011, a task that will take some four decades. Beyond the endless water leaks that torment the plant's operators, the main objective is to determine the exact state of the radioactive fuel that remained out of control for several days, causing the worst atomic catastrophe in decades. Now, thanks to cosmic rays, we have confirmation that the core of Fukushima's reactor 1 melted down completely and that the fuel of reactor 2 also partially melted.

Decommissioning work at the plant has already cost Japan 1.45 billion euros

Those melted uranium rods are so dangerous that it has not been possible to reach the heart of the damaged reactors to determine their exact state. Indirect measurements indicated a core-meltdown scenario, but a new technique drawing on particle physics has now helped to X-ray, so far, two of the damaged reactors. It uses a detector of muons, elementary particles created when cosmic rays penetrate the atmosphere, thousands of which reach the Earth's surface. These particles slow down when they strike very dense objects, such as nuclear fuel, and they can be detected with a kind of radiographic plate placed at the sides of the reactor.
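As a rough illustration of the transmission idea (a toy sketch of my own, with invented geometry, rates, and attenuation law, nothing like Tepco's actual analysis): dense fuel removes muons from the vertical flux, so a detector below sees a deficit under it.

import numpy as np

# Toy muon-transmission radiography. Muons rain down from above; dense
# material (nuclear fuel) absorbs or scatters more of them, so a detector
# below sees a deficit in the columns where the fuel sits. All numbers
# here (geometry, flux, attenuation) are invented for illustration.
rng = np.random.default_rng(0)

reactor = np.zeros((40, 60))      # 2D cross-section: 0 = air/structure
reactor[25:32, 20:30] = 5.0       # a lump of dense fuel (assumed position)

N_MUONS = 200_000
columns = rng.integers(0, 60, N_MUONS)       # vertical muon tracks
thickness = reactor.sum(axis=0)              # material along each column
p_survive = np.exp(-0.05 * thickness)        # toy attenuation law
hits = rng.random(N_MUONS) < p_survive[columns]

counts = np.bincount(columns[hits], minlength=60)
# Columns with markedly fewer surviving muons reveal the dense region:
print("".join("#" if c < 0.8 * counts.max() else "." for c in counts))

Columns printed as '#' show a marked muon deficit, a one-dimensional cartoon of the 'radiograph' described above. At Fukushima the telling result was the opposite: the absence of such a shadow where the fuel of reactor 1 should have been.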

Passing through the whole setup, the muons have shown that no fuel remains in the heart of reactor number 1. That is, while the core went uncooled during the accident, the uranium rods melted completely and fell through the bottom of the vessel that contained them. That is why they do not appear in the picture obtained by the physicists of several Japanese universities, who developed this technique together with scientists from the Los Alamos laboratory and Toshiba, the company responsible for the dismantling of Fukushima.

Because the muon-detecting plate sits at ground level, the image it returned of this reactor only shows that the fuel melted and is no longer in place; it does not reveal where the fuel now lies in the reactor's basement, or whether it has breached, through the floor, the robust containment that separates the core from the outside. Subsequently, Tepco released the result of the same examination of reactor 2, which showed a partial breakdown of the core when its image was compared with that of a reactor in normal condition.

Scientists cannot tell how far the reactor's molten core has fallen

"The results reaffirm our previous idea that a considerable amount of fuel had melted inside," Hiroshi Miyano, one of the scientists, told AFP. "But there is no evidence that the fuel has melted through the containment buildings and reached the outside." To make sure, the next step will be to use robots that can slip into every corner of the buildings.

Today the cost so far of dismantling Fukushima for the Japanese was made public: 1.45 billion euros from the public purse, according to a government report picked up by the Kyodo news agency. A little over a third of that money has been spent on efforts to control the continuous seepage and leaks of water that flood the plant's surroundings.

Why Hollywood had to Fudge The General Relativity-Based Wormhole Scenes in Interstellar (The Physics arXiv Blog)

Interstellar is the only Hollywood movie to use the laws of physics to create film footage of the most extreme regions of the Universe. Now the film’s scientific advisor, Kip Thorne, reveals why they fudged the final footage

Wormholes are tunnel-like structures that link regions of spacetime. In effect, they are shortcuts from one part of the universe to another. Theoretical physicists have studied their properties for decades but despite all this work, nobody quite knows if they can exist in our universe or whether matter could pass through them if they did.

That hasn’t stopped science fiction writers making liberal use of wormholes as a convenient form of transport over otherwise unnavigable distances. And where science fiction writers roam, Hollywood isn’t far behind. Wormholes have played starring roles in films such as Star Trek, Stargate and even Bill & Ted’s Excellent Adventure. But none of these films depicts wormholes the way they might look in real life.

All that has now changed thanks to the work of film director Christopher Nolan and Kip Thorne, a theoretical physicist at the California Institute of Technology in Pasadena, who collaborated on the science fiction film Interstellar, which was released in 2014.

Nolan wanted the film to be as realistic as possible and so invited Thorne, an expert on black holes and wormholes, to help create the footage. Thorne was intrigued by the possibility of studying wormholes visually, given that they are otherwise entirely theoretical. The result, he thought, could be a useful way of teaching students about general relativity.

So Thorne agreed to collaborate with a special effects team at Double Negative in London to create realistic footage. And today they publish a paper on the arXiv about the collaboration and what they learnt.

Interstellar is an epic tale. It begins with the discovery of a wormhole near Saturn and the decision to send a team of astronauts through it in search of a habitable planet that humans can populate because Earth is dying.

A key visual element of the story is the view through the wormhole of a different galaxy and the opposite view of Saturn. But what would these views look like?

One way to create computer generated images is to trace all the rays of light in a given scene and then determine which rays enter a camera placed at a given spot. But this is hugely inefficient because most of the rays never enter the camera.

A much more efficient method is to allow time to run backwards and trace the trajectories of light rays leaving the camera and travelling back to their source. In that way, the computing power is focused only on light rays that contribute to the final image.

So Thorne derived the various equations from general relativity that would determine the trajectory of the rays through a wormhole and the team at Double Negative created a computer model that simulated this, which they could run backwards. They also experimented with wormholes of different shapes, for example with long thin throats or much shorter ones and so on.
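To make the backward-tracing idea concrete, here is a deliberately tiny Python sketch of my own. It assumes a toy thin-lens bending law and made-up throat size and field of view; it is nothing like the relativistic ray equations Thorne derived, but it shows the structure: one ray per pixel, marched backwards from the camera, sampling a background sky.

import numpy as np

# Toy backward ray tracer. Instead of following light from the scene to
# the camera (wasteful, since most rays miss it), march each pixel's ray
# backwards from the camera and ask which patch of background sky it came
# from. The "wormhole" is a crude thin-lens deflection, not Thorne's
# general-relativistic ray equations; FOV and THROAT are invented numbers.
WIDTH, HEIGHT = 80, 40      # character-cell "image"
FOV = 0.8                   # half field of view, radians (assumed)
THROAT = 0.15               # angular radius of the toy throat (assumed)

def background(theta, phi):
    """A fake striped 'galaxy' so that distortion is visible."""
    return (int(8 * theta / FOV) + int(8 * phi / FOV)) % 2

def trace_pixel(px, py):
    # Direction of the backward ray leaving the camera for this pixel.
    theta = FOV * (2 * px / WIDTH - 1)
    phi = FOV * (2 * py / HEIGHT - 1)
    b = np.hypot(theta, phi)            # angular impact parameter
    if b < THROAT:
        return None                     # down the throat: other galaxy
    bend = 0.5 * THROAT**2 / b          # toy lens: deflection ~ 1/b
    return background(theta * (b - bend) / b, phi * (b - bend) / b)

for py in range(HEIGHT):
    print("".join("O" if (c := trace_pixel(px, py)) is None else ".#"[c]
                  for px in range(WIDTH)))

Rays whose impact parameter falls inside the toy throat are painted 'O' (in the film they would show the other galaxy); the stripes of the fake background warp visibly around it, a cartoon of the lensing that Double Negative computed properly.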

The results provided some fascinating insights into the way a wormhole might appear in the universe. But it also threw up some challenges for the film makers.

One problem was that Nolan chose to use footage of the view through a short wormhole, which produced fascinatingly distorted images of the distant galaxy. However, the footage of travelling through such a wormhole was too short. “The trip was quick and not terribly interesting, visually — not at all what Nolan wanted for his movie,” say Thorne and co.

But the journey through a longer wormhole was like travelling through a tunnel and very similar to things seen in other movies. “None of the clips, for any choice of parameters, had the compelling freshness that Nolan sought,” they admit.

In particular, when travelling through a wormhole, the object at the end becomes larger, scaling up from its centre and growing in size until it fills the frame. That turns out to be hard to process visually. “Because there is no parallax or other relative motion in the frame, to the audience it looks like the camera is zooming into the center of the wormhole,” say Thorne and co.

But camera zoom was utterly unlike the impression the film-makers wanted to portray, which was the sense of travelling through a shortcut from one part of the universe to another. “To foster that understanding, Nolan asked the visual effects team to convey a sense of travel through an exotic environment, one that was thematically linked to the exterior appearance of the wormhole but also incorporated elements of passing landscapes and the sense of a rapidly approaching destination,” they say.

So for the final cut, they asked visual effects artists to add some animation that gave this sense of motion. “The end result was a sequence of shots that told a story comprehensible by a general audience while resembling the wormhole’s interior,” they say.

In other words, they had to fudge it. Nevertheless, the remarkable attention to detail is a testament to the scientific commitment of the director and his team. And Thorne is adamant that the entire process of creating the footage will be an inspiration to students of film-making and of general relativity.

Of course, whether wormholes really do look like any of this is hard to say. The current thinking is that the laws of physics probably forbid the creation of wormholes like the one in Interstellar.

However, there are several ideas that leave open the possibility that wormholes might exist. The first is that wormholes may exist on the quantum scale, so a sufficiently advanced technology could enlarge them in some way.

The second is that our universe may be embedded in a larger multidimensional cosmos called a brane. That opens the possibility of travelling into other dimensions and then back into our own.

But the possibility that wormholes could exist in these scenarios reflects our ignorance of the physics involved rather than any important insight. Nevertheless, there’s no harm in a little speculation!

Ref: arxiv.org/abs/1502.03809 : Visualizing Interstellar’s Wormhole

Physics’s pangolin (AEON)

Trying to resolve the stubborn paradoxes of their field, physicists craft ever more mind-boggling visions of reality

by Margaret Wertheim

Illustration by Claire Scully

Margaret Wertheim is an Australian-born science writer and director of the Institute For Figuring in Los Angeles. Her latest book is Physics on the Fringe (2011).

Theoretical physics is beset by a paradox that remains as mysterious today as it was a century ago: at the subatomic level things are simultaneously particles and waves. Like the duck-rabbit illusion first described in 1899 by the Polish-born American psychologist Joseph Jastrow, subatomic reality appears to us as two different categories of being.

But there is another paradox in play. Physics itself is riven by the competing frameworks of quantum theory and general relativity, whose differing descriptions of our world eerily mirror the wave-particle tension. When it comes to the very big and the extremely small, physical reality appears to be not one thing, but two. Where quantum theory describes the subatomic realm as a domain of individual quanta, all jitterbug and jumps, general relativity depicts happenings on the cosmological scale as a stately waltz of smooth flowing space-time. General relativity is like Strauss — deep, dignified and graceful. Quantum theory, like jazz, is disconnected, syncopated, and dazzlingly modern.

Physicists are deeply aware of the schizophrenic nature of their science and long to find a synthesis, or unification. Such is the goal of a so-called ‘theory of everything’. However, to non-physicists, these competing lines of thought, and the paradoxes they entrain, can seem not just bewildering but absurd. In my experience as a science writer, no other scientific discipline elicits such contradictory responses.

In string cosmology, the totality of existing universes exceeds the number of particles in our universe by more than 400 orders of magnitude

This schism was brought home to me starkly some months ago when, in the course of a fortnight, I happened to participate in two public discussion panels, one with a cosmologist at Caltech, Pasadena, the other with a leading literary studies scholar from the University of Southern California. On the panel with the cosmologist, a researcher whose work I admire, the discussion turned to time, about which he had written a recent, and splendid, book. Like philosophers, physicists have struggled with the concept of time for centuries, but now, he told us, they had locked it down mathematically and were on the verge of a final state of understanding. In my Caltech friend’s view, physics is a progression towards an ever more accurate and encompassing Truth. My literary theory panellist was having none of this. A Lewis Carroll scholar, he had joined me for a discussion about mathematics in relation to literature, art and science. For him, maths was a delightful form of play, a ludic formalism to be admired and enjoyed; but any claims physicists might make about truth in their work were, in his view, ‘nonsense’. This mathematically based science, he said, was just ‘another kind of storytelling’.

On the one hand, then, physics is taken to be a march toward an ultimate understanding of reality; on the other, it is seen as no different in status to the understandings handed down to us by myth, religion and, no less, literary studies. Because I spend my time about equally in the realms of the sciences and arts, I encounter a lot of this dualism. Depending on whom I am with, I find myself engaging in two entirely different kinds of conversation. Can we all be talking about the same subject?

Many physicists are Platonists, at least when they talk to outsiders about their field. They believe that the mathematical relationships they discover in the world about us represent some kind of transcendent truth existing independently from, and perhaps a priori to, the physical world. In this way of seeing, the universe came into being according to a mathematical plan, what the British physicist Paul Davies has called ‘a cosmic blueprint’. Discovering this ‘plan’ is a goal for many theoretical physicists and the schism in the foundation of their framework is thus intensely frustrating. It’s as if the cosmic architect has designed a fiendish puzzle in which two apparently incompatible parts must be fitted together. Both are necessary, for both theories make predictions that have been verified to a dozen or so decimal places, and it is on the basis of these theories that we have built such marvels as microchips, lasers, and GPS satellites.

Quite apart from the physical tensions that exist between them, relativity and quantum theory each pose philosophical problems. Are space and time fundamental qualities of the universe, as general relativity suggests, or are they byproducts of something even more basic, something that might arise from a quantum process? Looking at quantum mechanics, huge debates swirl around the simplest situations. Does the universe split into multiple copies of itself every time an electron changes orbit in an atom, or every time a photon of light passes through a slit? Some say yes, others say absolutely not.

Theoretical physicists can’t even agree on what the celebrated waves of quantum theory mean. What is doing the ‘waving’? Are the waves physically real, or are they just mathematical representations of probability distributions? Are the ‘particles’ guided by the ‘waves’? And, if so, how? The dilemma posed by wave-particle duality is the tip of an epistemological iceberg on which many ships have been broken and wrecked.

Undeterred, some theoretical physicists are resorting to increasingly bold measures in their attempts to resolve these dilemmas. Take the ‘many-worlds’ interpretation of quantum theory, which proposes that every time a subatomic action takes place the universe splits into multiple, slightly different, copies of itself, with each new ‘world’ representing one of the possible outcomes.

When this idea was first proposed in 1957 by the American physicist Hugh Everett, it was considered an almost lunatic-fringe position. Even 20 years later, when I was a physics student, many of my professors thought it was a kind of madness to go down this path. Yet in recent years the many-worlds position has become mainstream. The idea of a quasi-infinite, ever-proliferating array of universes has been given further credence as a result of being taken up by string theorists, who argue that every mathematically possible version of the string theory equations corresponds to an actually existing universe, and estimate that there are 10 to the power of 500 different possibilities. To put this in perspective: physicists believe that in our universe there are approximately 10 to the power of 80 subatomic particles. In string cosmology, the totality of existing universes exceeds the number of particles in our universe by more than 400 orders of magnitude.
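The 'more than 400 orders of magnitude' is just arithmetic on those two exponents:

$$ \frac{10^{500}}{10^{80}} = 10^{500-80} = 10^{420}. $$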

Nothing in our experience compares to this unimaginably vast number. Every universe that can be mathematically imagined within the string parameters — including ones in which you exist with a prehensile tail, to use an example given by the American string theorist Brian Greene — is said to be manifest somewhere in a vast supra-spatial array ‘beyond’ the space-time bubble of our own universe.

What is so epistemologically daring here is that the equations are taken to be the fundamental reality. The fact that the mathematics allows for gazillions of variations is seen to be evidence for gazillions of actual worlds.

Perhaps what we are encountering here is not so much the edge of reality, but the limits of the physicists’ category system

This kind of reification of equations is precisely what strikes some humanities scholars as childishly naive. At the very least, it raises serious questions about the relationship between our mathematical models of reality, and reality itself. While it is true that in the history of physics many important discoveries have emerged from revelations within equations — Paul Dirac’s formulation for antimatter being perhaps the most famous example — one does not need to be a cultural relativist to feel sceptical about the idea that the only way forward now is to accept an infinite cosmic ‘landscape’ of universes that embrace every conceivable version of world history, including those in which the Middle Ages never ended or Hitler won.

In the 30 years since I was a student, physicists’ interpretations of their field have increasingly tended toward literalism, while the humanities have tilted towards postmodernism. Thus a kind of stalemate has ensued. Neither side seems inclined to contemplate more nuanced views. It is hard to see ways out of this tunnel, but in the work of the late British anthropologist Mary Douglas I believe we can find a tool for thinking about some of these questions.

On the surface, Douglas’s great book Purity and Danger (1966) would seem to have nothing to do with physics; it is an inquiry into the nature of dirt and cleanliness in cultures across the globe. Douglas studied taboo rituals that deal with the unclean, but her book ends with a far-reaching thesis about human language and the limits of all language systems. Given that physics is couched in the language-system of mathematics, her argument is worth considering here.

In a nutshell, Douglas notes that all languages parse the world into categories; in English, for instance, we call some things ‘mammals’ and other things ‘lizards’ and have no trouble recognising the two separate groups. Yet there are some things that do not fit neatly into either category: the pangolin, or scaly anteater, for example. Though pangolins are warm-blooded like mammals and bear live young, they have armoured bodies like some kind of bizarre lizard. Such definitional monstrosities are not just a feature of English. Douglas notes that all category systems contain liminal confusions, and she proposes that such ambiguity is the essence of what is seen to be impure or unclean.

Whatever doesn’t parse neatly in a given linguistic system can become a source of anxiety to the culture that speaks this language, calling forth special ritual acts whose function, Douglas argues, is actually to acknowledge the limits of language itself. In the Lele culture of the Congo, for example, this epistemological confrontation takes place around a special cult of the pangolin, whose initiates ritualistically eat the abominable animal, thereby sacralising it and processing its ‘dirt’ for the entire society.

‘Powers are attributed to any structure of ideas,’ Douglas writes. We all tend to think that our categories of understanding are necessarily real. ‘The yearning for rigidity is in us all,’ she continues. ‘It is part of our human condition to long for hard lines and clear concepts’. Yet when we have them, she says, ‘we have to either face the fact that some realities elude them, or else blind ourselves to the inadequacy of the concepts’. It is not just the Lele who cannot parse the pangolin: biologists are still arguing about where it belongs on the genetic tree of life.

As Douglas sees it, cultures themselves can be categorised in terms of how well they deal with linguistic ambiguity. Some cultures accept the limits of their own language, and of language itself, by understanding that there will always be things that cannot be cleanly parsed. Others become obsessed with ever-finer levels of categorisation as they try to rid their system of every pangolin-like ‘duck-rabbit’ anomaly. For such societies, Douglas argues, a kind of neurosis ensues, as the project of categorisation takes ever more energy and mental effort. If we take this analysis seriously, then, in Douglas’ terms, might it be that particle-waves are our pangolins? Perhaps what we are encountering here is not so much the edge of reality, but the limits of the physicists’ category system.

In its modern incarnation, physics is grounded in the language of mathematics. It is a so-called ‘hard’ science, a term meant to imply that physics is unfuzzy — unlike, say, biology whose classification systems have always been disputed. Based in mathematics, the classifications of physicists are supposed to have a rigour that other sciences lack, and a good deal of the near-mystical discourse that surrounds the subject hinges on ideas about where the mathematics ‘comes from’.

According to Galileo Galilei and other instigators of what came to be known as the Scientific Revolution, nature was ‘a book’ that had been written by God, who had used the language of mathematics because it was seen to be Platonically transcendent and timeless. While modern physics is no longer formally tied to Christian faith, its long association with religion lingers in the many references that physicists continue to make about ‘the mind of God’, and many contemporary proponents of a ‘theory of everything’ remain Platonists at heart.

It’s a startling thought, in an age when we can read the speed of our cars from our digitised dashboards, that somebody had to discover ‘velocity’

In order to articulate a more nuanced conception of what physics is, we need to offer an alternative to Platonism. We need to explain how the mathematics ‘arises’ in the world, in ways other than assuming that it was put there by some kind of transcendent being or process. To approach this question dispassionately, it is necessary to abandon the beautiful but loaded metaphor of the cosmic book — and all its authorial resonances — and focus, not on the creation of the world, but on the creation of physics as a science.

When we say that ‘mathematics is the language of physics’, we mean that physicists consciously comb the world for patterns that are mathematically describable; these patterns are our ‘laws of nature’. Since mathematical patterns proceed from numbers, much of the physicist’s task involves finding ways to extract numbers from physical phenomena. In the 16th and 17th centuries, philosophical discussion referred to this as the process of ‘quantification’; today we call it measurement. One way of thinking about modern physics is as an ever more sophisticated process of quantification that multiplies and diversifies the ways we extract numbers from the world, thus giving us the raw material for our quest for patterns or ‘laws’. This is no trivial task. Indeed, the history of physics has turned on the question of what can be measured and how.

Stop for a moment and take a look around you. What do you think can be quantified? What colours and forms present themselves to your eye? Is the room bright or dark? Does the air feel hot or cold? Are birds singing? What other sounds do you hear? What textures do you feel? What odours do you smell? Which, if any, of these qualities of experience might be measured?

In the early 14th century, a group of scholarly monks known as the calculatores at the University of Oxford began to think about this problem. One of their interests was motion, and they were the first to recognise the qualities we now refer to as ‘velocity’ and ‘acceleration’ — the former being the rate at which a body changes position, the latter, the rate at which the velocity itself changes. It’s a startling thought, in an age when we can read the speed of our cars from our digitised dashboards, that somebody had to discover ‘velocity’.
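In the modern notation that eventually grew out of this work, the two qualities the calculatores identified are the first and second time-derivatives of position:

$$ v = \frac{dx}{dt}, \qquad a = \frac{dv}{dt} = \frac{d^{2}x}{dt^{2}}. $$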

Yet despite the calculatores’ advances, the science of kinematics made barely any progress until Galileo and his contemporaries took up the baton in the late-16th century. In the intervening time, the process of quantification had to be extracted from a burden of dreams in which it became, frankly, bogged down. For along with motion, the calculatores were also interested in qualities such as sin and grace and they tried to find ways to quantify these as well. Between the calculatores and Galileo, students of quantification had to work out what they were going to exclude from the project. To put it bluntly, in order for the science of physics to get underway, the vision had to be narrowed.

How, exactly, this narrowing was to be achieved was articulated by the 17th-century French mathematician and philosopher René Descartes. What could a mathematically based science describe? Descartes’s answer was that the new natural philosophers must restrict themselves to studying matter in motion through space and time. Maths, he said, could describe the extended realm — or res extensa. Thoughts, feelings, emotions and moral consequences, he located in the ‘realm of thought’, or res cogitans, declaring them inaccessible to quantification, and thus beyond the purview of science. In making this distinction, Descartes did not divide mind from body (that had been done by the Greeks), he merely clarified the subject matter for a new physical science.

So what else apart from motion could be quantified? To a large degree, progress in physics has been made by slowly extending the range of answers. Take colour. At first blush, redness would seem to be an ineffable and irreducible quale. In the late 19th century, however, physicists discovered that each colour in the rainbow, when diffracted through a prism, corresponds to a different wavelength of light. Red light has a wavelength of around 700 nanometres, violet light around 400 nanometres. Colour can be correlated with numbers — both the wavelength and frequency of an electromagnetic wave. Here we have one half of our duality: the wave.
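The correlation is a fixed relation between wavelength $\lambda$, frequency $f$ and the speed of light $c$. For the red light just mentioned:

$$ f = \frac{c}{\lambda} \approx \frac{3\times10^{8}\ \mathrm{m/s}}{700\times10^{-9}\ \mathrm{m}} \approx 4.3\times10^{14}\ \mathrm{Hz}. $$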

The discovery of electromagnetic waves was in fact one of the great triumphs of the quantification project. In the 1820s, Michael Faraday noticed that, if he sprinkled iron filings around a magnet, the fragments would spontaneously assemble into a pattern of lines that, he conjectured, were caused by a ‘magnetic field’. Physicists today accept fields as a primary aspect of nature but at the start of the Industrial Revolution, when philosophical mechanism was at its peak, Faraday’s peers scoffed. Invisible fields smacked of magic. Yet, later in the 19th century, James Clerk Maxwell showed that magnetic and electric fields were linked by a precise set of equations — today known as Maxwell’s Laws — that enabled him to predict the existence of radio waves. The quantification of these hitherto unsuspected aspects of our world — these hidden invisible ‘fields’ — has led to the whole gamut of modern telecommunications on which so much of modern life is now staged.
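In modern vector notation (SI units), the 'precise set of equations' reads

$$ \nabla\cdot\mathbf{E}=\frac{\rho}{\varepsilon_{0}},\qquad \nabla\cdot\mathbf{B}=0,\qquad \nabla\times\mathbf{E}=-\frac{\partial\mathbf{B}}{\partial t},\qquad \nabla\times\mathbf{B}=\mu_{0}\mathbf{J}+\mu_{0}\varepsilon_{0}\frac{\partial\mathbf{E}}{\partial t}, $$

and combining them in empty space yields waves travelling at $c = 1/\sqrt{\mu_{0}\varepsilon_{0}}$, which is how Maxwell could predict radio waves before anyone had produced one.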

Turning to the other side of our duality – the particle – with a burgeoning array of electrical and magnetic equipment, physicists in the late 19th and early 20th centuries began to probe matter. They discovered that atoms were composed of parts holding positive and negative charge. The negative electrons were found to revolve around a positive nucleus in pairs, with each member of the pair in a slightly different state, or ‘spin’. Spin turns out to be a fundamental quality of the subatomic realm. Matter particles, such as electrons, have a spin value of one half. Particles of light, or photons, have a spin value of one. In short, one of the qualities that distinguishes ‘matter’ from ‘energy’ is the spin value of its particles.

We have seen how light acts like a wave, yet experiments over the past century have shown that under many conditions it behaves instead like a stream of particles. In the photoelectric effect (the explanation of which won Albert Einstein his Nobel Prize in 1921), individual photons knock electrons out of their atomic orbits. In Thomas Young’s famous double-slit experiment of 1805, light behaves simultaneously like waves and particles. Here, a stream of detectably separate photons is mysteriously guided by a wave whose effect becomes manifest over a long period of time. What is the source of this wave and how does it influence billions of isolated photons separated by great stretches of time and space? The late Nobel laureate Richard Feynman — a pioneer of quantum field theory — stated in 1965 that the double-slit experiment lay at ‘the heart of quantum mechanics’. Indeed, physicists have been debating how to interpret its proof of light’s duality for the past 200 years.
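The wave half of that duality is captured by the textbook fringe condition: for slits separated by $d$, bright fringes appear at angles $\theta_m$ satisfying

$$ d\,\sin\theta_{m} = m\lambda, \qquad m = 0, \pm 1, \pm 2, \ldots $$

while the particle half shows up in the fact that, at low intensity, the pattern assembles itself one photon-dot at a time.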

Just as waves of light sometimes behave like particles of matter, particles of matter can sometimes behave like waves. In many situations, electrons are clearly particles: we fire them from electron guns inside the cathode-ray tubes of old-fashioned TV sets and each electron that hits the screen causes a tiny phosphor to glow. Yet, in orbiting around atoms, electrons behave like three-dimensional waves. Electron microscopes put the wave-quality of these particles to work; here, in effect, they act like short wavelengths of light.
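The wavelength at work in an electron microscope is the de Broglie wavelength of a particle of momentum $p$,

$$ \lambda = \frac{h}{p}, $$

so a 100 eV electron, for instance, has $\lambda \approx 1.2\ \text{Å}$, far shorter than visible light, which is what buys the microscope its resolution.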

Physics is not just another story about the world: it is a qualitatively different kind of story to those told in the humanities, in myths and religions

Wave-particle duality is a core feature of our world. Or rather, we should say, it is a core feature of our mathematical descriptions of our world. The duck-rabbits are everywhere, colonising the imagery of physicists like, well, rabbits. But what is critical to note here is that however ambiguous our images, the universe itself remains whole and is manifestly not fracturing into schizophrenic shards. It is this tantalising wholeness in the thing itself that drives physicists onward, like an eternally beckoning light that seems so teasingly near yet is always out of reach.

Instrumentally speaking, the project of quantification has led physicists to powerful insights and practical gain: the computer on which you are reading this article would not exist if physicists hadn’t discovered the equations that describe the band-gaps in semiconducting materials. Microchips, plasma screens and cellphones are all byproducts of quantification and, every decade, physicists identify new qualities of our world that are amenable to measurement, leading to new technological possibilities. In this sense, physics is not just another story about the world: it is a qualitatively different kind of story to those told in the humanities, in myths and religions. No language other than maths is capable of expressing interactions between particle spin and electromagnetic field strength. The physicists, with their equations, have shown us new dimensions of our world.

That said, we should be wary of claims about ultimate truth. While quantification, as a project, is far from complete, it is an open question as to what it might ultimately embrace. Let us look again at the colour red. Red is not just an electromagnetic phenomenon, it is also a perceptual and contextual phenomenon. Stare for a minute at a green square then look away: you will see an afterimage of a red square. No red light has been presented to your eyes, yet your brain will perceive a vivid red shape. As Goethe argued in the late-18th century, and Edwin Land (who invented Polaroid film in 1932) echoed, colour cannot be reduced to purely prismatic effects. It exists as much in our minds as in the external world. To put this into a personal context, no understanding of the electromagnetic spectrum will help me to understand why certain shades of yellow make me nauseous, while electric orange fills me with joy.

Descartes was no fool; by parsing reality into the res extensa and res cogitans he captured something critical about human experience. You do not need to be a hard-core dualist to imagine that subjective experience might not be amenable to mathematical law. For Douglas, ‘the attempt to force experience into logical categories of non-contradiction’ is the ‘final paradox’ of an obsessive search for purity. ‘But experience is not amenable [to this narrowing],’ she insists, and ‘those who make the attempt find themselves led into contradictions.’

Quintessentially, the qualities that are amenable to quantification are those that are shared. All electrons are essentially the same: given a set of physical circumstances, every electron will behave like any other. But humans are not like this. It is our individuality that makes us so infuriatingly human, and when science attempts to reduce us to the status of electrons it is no wonder that professors of literature scoff.

Douglas’s point about attempting to corral experience into logical categories of non-contradiction has obvious application to physics, particularly to recent work on the interface between quantum theory and relativity. One of the most mysterious findings of quantum science is that two or more subatomic particles can be ‘entangled’. Once particles are entangled, what we do to one immediately affects the other, even if the particles are hundreds of kilometres apart. Yet this contradicts a basic premise of special relativity, which states that no signal can travel faster than the speed of light. Entanglement suggests that either quantum theory or special relativity, or both, will have to be rethought.

More challenging still, consider what might happen if we tried to send two entangled photons to two separate satellites orbiting in space, as a team of Chinese physicists, working with the entanglement theorist Anton Zeilinger, is currently hoping to do. Here the situation is compounded by the fact that what happens in near-Earth orbit is affected by both special and general relativity. The details are complex, but suffice it to say that special relativity suggests that the motion of the satellites will cause time to appear to slow down, while the effect of the weaker gravitational field in space should cause time to speed up. Given this, it is impossible to say which of the photons would be received first at which satellite. To an observer on the ground, both photons should appear to arrive at the same time. Yet to an observer on satellite one, the photon at satellite two should appear to arrive first, while to an observer on satellite two the photon at satellite one should appear to arrive first. We are in a mire of contradiction and no one knows what would in fact happen here. If the Chinese experiment goes ahead, we might find that some radical new physics is required.

To say that every possible version of their equations must be materially manifest strikes me as a kind of berserk literalism

You will notice that the ambiguity in these examples focuses on the issue of time — as do many paradoxes relating to relativity and quantum theory. Time indeed is a huge conundrum throughout physics, and paradoxes surround it at many levels of being. In Time Reborn: From the Crisis in Physics to the Future of the Universe (2013) the American physicist Lee Smolin argues that for 400 years physicists have been thinking about time in ways that are fundamentally at odds with human experience and therefore wrong. In order to extricate ourselves from some of the deepest paradoxes in physics, he says, its very foundations must be reconceived. In an op-ed in New Scientist in April this year, Smolin wrote:
The idea that nature consists fundamentally of atoms with immutable properties moving through unchanging space, guided by timeless laws, underlies a metaphysical view in which time is absent or diminished. This view has been the basis for centuries of progress in science, but its usefulness for fundamental physics and cosmology has come to an end.

In order to resolve contradictions between how physicists describe time and how we experience time, Smolin says physicists must abandon the notion of time as an unchanging ideal and embrace an evolutionary concept of natural laws.

This is radical stuff, and Smolin is well-known for his contrarian views — he has been an outspoken critic of string theory, for example. But at the heart of his book is a worthy idea: Smolin is against the reflexive reification of equations. As our mathematical descriptions of time are so starkly in conflict with our lived experience of time, it is our descriptions that will have to change, he says.

To put this into Douglas’s terms, the powers that have been attributed to physicists’ structure of ideas have been overreaching. ‘Attempts to force experience into logical categories of non-contradiction’ have, she would say, inevitably failed. From the contemplation of wave-particle pangolins we have been led to the limits of the linguistic system of physicists. Like Smolin, I have long believed that the ‘block’ conception of time that physics proposes is inadequate, and I applaud this thrilling, if also at times highly speculative, book. Yet, if we can fix the current system by reinventing its axioms, then (assuming that Douglas is correct) even the new system will contain its own pangolins.

In the early days of quantum mechanics, Niels Bohr liked to say that we might never know what ‘reality’ is. Bohr used John Wheeler’s coinage, calling the universe ‘a great smoky dragon’, and claiming that all we could do with our science was to create ever more predictive models. Bohr’s positivism has gone out of fashion among theoretical physicists, replaced by an increasingly hard-core Platonism. To say, as some string theorists do, that every possible version of their equations must be materially manifest strikes me as a kind of berserk literalism, reminiscent of the old Ptolemaics who used to think that every mathematical epicycle in their descriptive apparatus must represent a physically manifest cosmic gear.

We are veering here towards Douglas’s view of neurosis. Will we accept, at some point, that there are limits to the quantification project, just as there are to all taxonomic schemes? Or will we be drawn into ever more complex and expensive quests — CERN mark two, Hubble, the sequel — as we try to root out every lingering paradox? In Douglas’s view, ambiguity is an inherent feature of language that we must face up to, at some point, or drive ourselves to distraction.

3 June 2013

Physics and Hollywood (Folha de S.Paulo)

HENRIQUE GOMES

22/02/2015, 03:15

SUMMARY "Interstellar" is part of a wave of films guided by science. In it, the quest for survival takes humans to the vicinity of a black hole, a premise for speculations tied to the research of Stephen Hawking, the subject of "The Theory of Everything", which, like Christopher Nolan's science fiction film, is competing today in several Oscar categories.

*

In 2014 there was a boom of Hollywood films taking science seriously. "The Theory of Everything" and "The Imitation Game" deal with the lives of major 20th-century scientists: Stephen Hawking and Alan Turing, respectively. A third feature, the science fiction film "Interstellar", breaks new ground not only by adhering faithfully to what is known about spacetime but by putting that knowledge to work in the service of the narrative.

I am not talking here about the inclusion of sound effects in space. That has been done before and does not significantly change a script. The makers of "Interstellar" did not settle for filling in the technical paperwork just to fend off the resident pedants. They put Homeric effort into countless meetings with the renowned physicist Kip Thorne (who also appears in "The Theory of Everything") and into black hole simulations, and they effectively rewrote the script to conform to physical guidelines.

The end result loses nothing, at least when it comes to firing the imagination and producing fantastic effects, to scientific catastrophes such as "Star Trek Into Darkness" (2013) and "Prometheus" (2012). In "Star Trek", for example, Isaac Newton and even Galileo would be horrified to see a ship go into free fall toward the Earth while its crew, simultaneously, falls freely relative to the ship. As those who have been aboard free-falling ships well know, crews float, they do not fall. (Stones of different weights fall at the same speed from the Tower of Pisa and from other towers.)

"Interstellar" goes beyond the rules of Hollywood science fiction. Carl Sagan said that "science is not only compatible with spirituality; it is a profound source of spirituality". "Interstellar" proves what we scientists have known for a long time: the phrase applies equally to the human enchantment with the unknown.

Matthew McConaughey as Cooper in "Interstellar" (publicity image)

"Interstellar" and "The Theory of Everything" have some themes in common.

The first is degeneration: of planet Earth in one, of a neuromuscular system in the other. Escape from the deterioration of planet Earth is sought through interstellar exploration, led by the character Cooper (Matthew McConaughey). Escape from that of the human body, through the tireless mind of Stephen Hawking, played by the excellent Eddie Redmayne.

The second theme in common is, precisely, an important part of Hawking's work.

STARS

Stephen Hawking was born in Oxford in 1942. At 21, already in the first year of his doctorate, he received a diagnosis of ALS (amyotrophic lateral sclerosis), a degenerative disease that attacks the nerves' communication with the muscles but leaves other brain functions intact. Determined to continue his studies, one of the first problems Hawking devoted himself to was the question of what happens when a star is so heavy that it cannot support its own weight.

The star's collapse concentrates all of its mass at a single point, where the theory ceases to make sense. Anticipating applications in science fiction films, physicists called this point a singularity. At a certain distance from this singular point, the pull of all that concentrated mass is strong enough that not even light can escape. A flashlight switched on at that radius, or deeper in, cannot be seen by anyone at a greater distance; nothing escapes this sphere, which is called a black hole (for obvious reasons).
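For concreteness (a standard result, not spelled out in the article): that critical distance is the Schwarzschild radius, which for a non-rotating mass $M$ is

$$ r_s = \frac{2GM}{c^{2}}, $$

about 3 km for one solar mass, and growing in direct proportion to the mass.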

At the time Hawking was a student, there was a single known solution of the equations of Einstein's general relativity (which describe how concentrations of matter and energy distort the geometry of spacetime) that represented a black hole, discovered by the German Schwarzschild. A group of Russian physicists argued that this solution was artificial, born of an arrangement of particles collapsing in perfect synchronization so as to arrive together at the centre, thus forming a point of infinite density: the singularity.

Hawking and Roger Penrose, a mathematician at Oxford, demonstrated that this was in fact a generic feature of Einstein's equations, and more: that the universe would have begun in what came to be called the "cosmological singularity", at which the notion of time ceases to have meaning. As Hawking says in the film, "it would be the beginning of time itself".

There is no consensus in modern theoretical physics about what actually happens to someone who approaches a singularity inside a black hole. The greatest obstacle to our understanding is that, at small distances from the singularity, we need to take quantum effects into account, and, as the well-schooled Jane Hawking remarks in "The Theory of Everything", quantum theory and general relativity are written in completely different languages. Not that one needs to get that close to the singularity to know that the effects would be drastic.

The criticism of “Interstellar” I heard most from amateur (and not-so-amateur) physicists is that (spoiler alert) Cooper would be torn to pieces on entering Gargantua, a giant black hole. “Torn to pieces” may be the wrong phrase: “spaghettified” is the technical term.

TIDES

What would kill you falling into a black hole is not the absolute strength of gravity. Just like stones thrown by Italian heretics from the tops of towers, different parts of your body fall with the same acceleration, even if that acceleration is itself enormous. This conclusion holds as long as the force of gravity is roughly constant, nearly the same at your feet and at your head. Although that condition is satisfied at the Earth’s surface, the force of gravity is obviously not constant. It decays with distance, and the effects of this variation, tiny even at the scale of the tower of Pisa, can be observed in much larger bodies. The most familiar example for us earthlings is the effect of the tides on our planet. The Moon pulls hardest on the near face of the Earth, and the oceans swell and subside in step with that pull. Although the Sun’s absolute gravitational force on the Earth is greater, the Moon is so much closer to us that the larger gradient of the force is the lunar one, which is why we feel the Moon’s tidal effects more than the Sun’s.
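
A quick back-of-the-envelope check of that last claim, using standard textbook values rather than figures from the article, shows both numbers at once: the Sun’s pull on the Earth is far stronger in absolute terms, yet the Moon’s tidal gradient 2GM/r^3 wins.

    # Rough comparison of solar vs. lunar gravity at Earth (textbook values).
    G = 6.674e-11                          # gravitational constant, m^3 kg^-1 s^-2
    M_sun, r_sun = 1.989e30, 1.496e11      # kg; Earth-Sun distance, m
    M_moon, r_moon = 7.35e22, 3.844e8      # kg; Earth-Moon distance, m

    def pull(M, r):                        # absolute acceleration, m/s^2
        return G * M / r**2

    def tide(M, r):                        # tidal gradient 2GM/r^3, 1/s^2
        return 2 * G * M / r**3

    print(pull(M_sun, r_sun) / pull(M_moon, r_moon))   # ~180: the Sun pulls harder
    print(tide(M_moon, r_moon) / tide(M_sun, r_sun))   # ~2.2: the Moon's tide is bigger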

For the same reason, on entering a black hole we would be subject to an immense gravitational force, but, still far from the central singularity, we would not necessarily feel any tidal force. This absence of dramatic effects at that stage of the fall is aptly known in the community as “no drama”, and up to that point Cooper’s entry into Gargantua would be exactly that: no drama. But not afterwards. Approaching a singularity, even before quantum effects need to be included, the gravitational force can differ so much from feet to head that Cooper would be stretched out, hence the “spaghettification”.
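
A minimal sketch of why size matters here, using the Newtonian tidal estimate 2GML/r^3 evaluated at the horizon (an approximation, not Thorne’s full relativistic calculation; the 10^8-solar-mass figure for Gargantua is the one Kip Thorne quotes for the film):

    # Head-to-feet tidal acceleration at the horizon scales as 1/M^2:
    # big black holes are gentle at the edge, small ones are shredders.
    G, c = 6.674e-11, 2.998e8
    M_sun = 1.989e30
    L = 2.0                                # height of an astronaut, m

    def tidal_at_horizon(M):
        r_s = 2 * G * M / c**2             # Schwarzschild radius, m
        return 2 * G * M * L / r_s**3      # m/s^2 stretching the body

    print(tidal_at_horizon(10 * M_sun))    # ~2e8 m/s^2: spaghetti at the horizon
    print(tidal_at_horizon(1e8 * M_sun))   # ~2e-6 m/s^2: no drama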

There was no room, of course, for this (macaronic?) explanation in “Interstellar”. Even so, one of the film’s most astonishing scenes involves precisely the tides on the planet Miller, which orbits Gargantua. In the film, enormous tidal waves strike the protagonists every hour, rather inconveniently. To get the tidal effect approximately right, the physicist Kip Thorne calculated the size of the black hole, its rotation and the planet’s orbit. The film’s staggering images are the fruit of calculation.

Even if it would be unrealistic to expect the same care from future productions, perhaps the fact that some of these black hole simulations were news even to the scientific community will encourage the producer/amateur-physicist types out there (a rather thin demographic) to follow the example.

But back to Cooper’s fate. We have already seen that he would survive entering the black hole unscathed: no drama, up to that point. But what about the dreaded “spaghettification”? Many commentators, such as the popular Neil deGrasse Tyson, argued that we simply do not know what happens inside a black hole. Past that frontier, the script would acquire diplomatic immunity from the laws of physics, becoming fertile ground for bolder speculation, not to say a no-man’s-land.

Well, with apologies to Tyson, that is not exactly true. We believe general relativity should work very well right up until quantum effects become important (for a black hole of Gargantua’s size, only very near the centre). On top of that, in Schwarzschild’s solution the approach to the singularity is inevitable. Just as we cannot stop time, we could not keep our distance from the centre: we would draw inexorably closer and closer to the singularity, looming larger before us, as inevitable as the future. In that case, Cooper would be turned into spaghetti before the trumpets of quantum mechanics could sound his (possible) salvation. Luckily for the whole human race in the film, that is not the case.

SPINNING TOPS

In 1963, 47 years after the black hole was discovered in the trenches of the Great War, the New Zealand mathematical physicist Roy Kerr, in considerably more comfortable circumstances, generalised Schwarzschild’s solution, discovering a solution of Einstein’s theory that corresponds to a rotating black hole, spinning like a top.

Later, Hawking and collaborators showed that every black hole settles down into the Kerr form; fittingly, Gargantua is one of these, spinning at very high speed. But when tops like these spin they drag spacetime along with them, and an unavoidable kind of centrifugal force arises (the force we feel in a car when taking a tight curve), growing as the centre of the black hole draws near. At a certain distance from the centre, the tug-of-war between the gravitational pull and the centrifugal push balances out, and the singularity stops being inevitable.
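
For reference, the Kerr geometry has two horizons, at radii (a standard result, quoted here in the usual notation rather than taken from the article):

    r_\pm = \frac{GM}{c^2}\left(1 \pm \sqrt{1 - a_*^2}\right), \qquad a_* = \frac{Jc}{GM^2}

where J is the spin and the dimensionless a_* runs from 0 (Schwarzschild) to 1 (maximal rotation). Between the outer horizon r_+ and the inner one r_-, falling inward is as unavoidable as the passage of time; inside r_- it no longer is, which is the balance point described above.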

From that point on we genuinely do not know what happens, and Cooper is free to do whatever the screenwriters invent. Not that entering a fourth dimension, seeing time as just another direction of space and all the rest are entirely without grounding, but from there on we enter the realm of scientific speculation. At least we entered it with a clear conscience.

Carl Sagan, in the excellent “Cosmos”, guides us once more: “We will not be afraid to speculate. But we will be careful to distinguish speculation from fact. The cosmos is full beyond measure of elegant truths, of exquisite interrelationships, of the awesome machinery of nature”. The universe is stranger (and more fascinating) than fiction. It is high time we explored a fiction that is scientific in more than name.

HENRIQUE GOMES, 34, holds a doctorate in physics from the University of Nottingham (UK) and is a researcher at the Perimeter Institute for Theoretical Physics (Canada).

How The Nature of Information Could Resolve One of The Great Paradoxes Of Cosmology (The Physics Arxiv Blog)

Feb 17, 2015

Stephen Hawking described it as the most spectacular failure of any physical theory in history. Can a new theory of information rescue cosmologists?

One of the biggest puzzles in science is the cosmological constant paradox. This arises when physicists attempt to calculate the energy density of the universe from first principles. Using quantum mechanics, the number they come up with is 10^94 g/cm^3.

And yet the observed energy density, calculated from the density of mass in the cosmos and the way the universe is expanding, is about 10^-27 g/cm^3. In other words, our best theory of the universe misses the mark by 120 orders of magnitude.
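
The quantum-mechanical figure is essentially the Planck density, and the mismatch can be checked in a few lines (a sketch, assuming that is indeed what the 10^94 g/cm^3 estimate refers to):

    import math

    G, c, hbar = 6.674e-11, 2.998e8, 1.0546e-34   # SI units

    rho_planck = c**5 / (hbar * G**2)      # Planck density, kg/m^3
    rho_planck_cgs = rho_planck * 1e-3     # -> g/cm^3, comes out ~5e93
    rho_observed = 1e-27                   # g/cm^3, as quoted above

    print(math.log10(rho_planck_cgs / rho_observed))   # ~120.7 orders of magnitude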

That’s left cosmologists somewhat red-faced. Indeed, Stephen Hawking has famously described this as the most spectacular failure of any physical theory in history. This huge discrepancy is all the more puzzling because quantum mechanics makes such accurate predictions in other circumstances. Just why it goes so badly wrong here is unknown.

Today, Chris Fields, an independent researcher formerly with New Mexico State University in Las Cruces, puts forward a simple explanation. His idea is that the discrepancy arises because large objects, such as planets and stars, behave classically rather than demonstrating quantum properties. And he’s provided some simple calculations to make his case.

One of the key properties of quantum objects is that they can exist in a superposition of states until they are observed. When that happens, these many possibilities “collapse” and become one specific outcome, a process known as quantum decoherence.

For example, a photon can be in a superposition of states that allow it to be in several places at the same time. However, as soon as the photon is observed the superposition decoheres and the photon appears in one place.

This process of decoherence must apply to everything that has a specific position, says Fields. Even to large objects such as stars, whose position is known with respect to the cosmic microwave background, the echo of the big bang which fills the universe.

In fact, Fields argues that it is the interaction between the cosmic microwave background and all large objects in the universe that causes them to decohere giving them specific positions which astronomers observe.

But there is an important consequence from having a specific position — there must be some information associated with this location in 3D space. If a location is unknown, then the amount of information must be small. But if it is known with precision, the information content is much higher.

And given that there are some 10^25 stars in the universe, that’s a lot of information. Fields calculates that encoding the location of each star to within 10 cubic kilometres requires some 10^93 bits.

That immediately leads to an entirely new way of determining the energy density of the cosmos. Back in the 1960s, the physicist Rolf Landauer suggested that every bit of information had an energy associated with it, an idea that has gained considerable traction since then.

So Fields uses Landauer’s principle to calculate the energy density associated with the locations of all the stars in the universe. This turns out to be about 10^-30 g/cm^3, very similar to the observed energy density of the universe.
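
Fields’s bottom line can be reproduced with Landauer’s bound of kT ln 2 per bit (a sketch under stated assumptions: his 10^93 bits, the CMB temperature for T, and a Hubble-radius sphere for the volume; the post does not spell out these inputs):

    import math

    k_B, c = 1.381e-23, 2.998e8      # Boltzmann constant (J/K); speed of light (m/s)
    T_cmb = 2.725                    # CMB temperature, K
    n_bits = 1e93                    # Fields's estimate for 10^25 stellar positions
    R = 4.4e26                       # radius of the observable universe, m

    E = n_bits * k_B * T_cmb * math.log(2)     # Landauer energy, J
    mass_g = E / c**2 * 1e3                    # equivalent mass, grams
    vol_cm3 = (4 / 3) * math.pi * R**3 * 1e6   # volume, cm^3

    print(mass_g / vol_cm3)                    # ~8e-31 g/cm^3, i.e. ~10^-30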

But here’s the thing. That calculation requires the position of each star to be encoded only to within 10 km^3. Fields also asks how much information is required to encode the position of stars to the much higher resolution associated with the Planck length. “Encoding 10^25 stellar positions at [the Planck length] would incur a free-energy cost ∼ 10^117 larger than that found here,” he says.

That difference is remarkably similar to the 120 orders of magnitude discrepancy between the observed energy density and the one calculated using quantum mechanics. Indeed, Fields suggests the discrepancy arises from assuming that classical information must be encoded at the Planck scale. “It seems reasonable to suggest that the discrepancy between these numbers may be due to the assumption that encoding classical information at [the Planck scale] can be considered physically meaningful.”

That’s a fascinating result that raises important questions about the nature of reality. First, there is the hint in Fields’ ideas that information provides the ghostly bedrock on which the laws of physics are based. That’s an idea that has gained traction among other physicists too.

Then there is the role of energy. One important question is where this energy might have come from in the first place. The process of decoherence seems to create it from nothing.

Cosmologists generally overlook violations of the principle of conservation of energy. After all, the big bang itself is the biggest offender. So don’t expect much hand wringing over this. But Fields’ approach also implies that a purely quantum universe would have an energy density of zero, since nothing would have localised position. That’s bizarre.

Beyond this is the even deeper question of how the universe came to be classical at all, given that cosmologists would have us believe that the big bang was a quantum process. Fields suggests that it is the interaction between the cosmic microwave background and the rest of the universe that causes the quantum nature of the universe to decohere and become classical.

Perhaps. What is all too clear is that cosmology still harbours fundamental and fascinating problems, not least the role that information plays in reality.

Ref: arxiv.org/abs/1502.03424: Is Dark Energy An Artifact Of Decoherence?