A new paper explores how the opinions of an electorate may be reflected in a mathematical model ‘inspired by models of simple magnetic systems’
Date: October 8, 2021
Source: University at Buffalo
Summary: A study leverages concepts from physics to model how campaign strategies influence the opinions of an electorate in a two-party system.
A study in the journal Physica A leverages concepts from physics to model how campaign strategies influence the opinions of an electorate in a two-party system.
Researchers created a numerical model that describes how external influences, modeled as a random field, shift the views of potential voters as they interact with each other in different political environments.
The model accounts for the behavior of conformists (people whose views align with the views of the majority in a social network); contrarians (people whose views oppose the views of the majority); and inflexibles (people who will not change their opinions).
“The interplay between these behaviors allows us to create electorates with diverse behaviors interacting in environments with different levels of dominance by political parties,” says first author Mukesh Tiwari, PhD, associate professor at the Dhirubhai Ambani Institute of Information and Communication Technology.
“We are able to model the behavior and conflicts of democracies, and capture different types of behavior that we see in elections,” says senior author Surajit Sen, PhD, professor of physics in the University at Buffalo College of Arts and Sciences.
Sen and Tiwari conducted the study with Xiguang Yang, a former UB physics student. Jacob Neiheisel, PhD, associate professor of political science at UB, provided feedback to the team, but was not an author of the research. The study was published online in Physica A in July and will appear in the journal’s Nov. 15 volume.
The model described in the paper has broad similarities to the random field Ising model, and “is inspired by models of simple magnetic systems,” Sen says.
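A minimal sketch of this kind of dynamics (a hypothetical illustration in Python, not the authors' actual model or code): agents on a ring hold opinions of +1 or -1, conformists align with their neighbors, contrarians anti-align, inflexibles never flip, and a random external field stands in for campaign influence, echoing the random field Ising model mentioned above.

```python
import random

def step(opinions, kinds, field_strength, rng):
    """One sweep of a toy random-field opinion model.

    opinions: list of +1/-1 views, agents arranged on a ring.
    kinds: 'conformist', 'contrarian', or 'inflexible' per agent.
    field_strength: amplitude of the random external field (the campaign).
    """
    n = len(opinions)
    for i in rng.sample(range(n), n):  # update agents in random order
        if kinds[i] == 'inflexible':
            continue  # inflexibles never change their opinion
        neighbor_sum = opinions[(i - 1) % n] + opinions[(i + 1) % n]
        h = rng.uniform(-field_strength, field_strength)  # campaign "field"
        # Conformists prefer alignment with neighbors; contrarians oppose it.
        local = neighbor_sum if kinds[i] == 'conformist' else -neighbor_sum
        if local + h > 0:
            opinions[i] = 1
        elif local + h < 0:
            opinions[i] = -1
    return opinions
```

In a sketch like this, a "short-duration high-impact campaign" corresponds to running a few sweeps with a large `field_strength`, while a "long-term campaign" corresponds to many sweeps with a weak field.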
The team used this model to explore a variety of scenarios involving different types of political environments and electorates.
Among key findings, as the authors write in the abstract: “In an electorate with only conformist agents, short-duration high-impact campaigns are highly effective. … In electorates with both conformist and contrarian agents and varying level(s) of dominance due to local factors, short-term campaigns are effective only in the case of fragile dominance of a single party. Strong local dominance is relatively difficult to influence and long-term campaigns with strategies aimed to impact local level politics are seen to be more effective.”
“I think it’s exciting that physicists are thinking about social dynamics. I love the big tent,” Neiheisel says, noting that one advantage of modeling is that it could enable researchers to explore how opinions might change over many election cycles — the type of longitudinal data that’s very difficult to collect.
Mathematical modeling has some limitations: “The real world is messy, and I think we should embrace that to the extent that we can, and models don’t capture all of this messiness,” Neiheisel says.
But Neiheisel was excited when the physicists approached him to talk about the new paper. He says the model provides “an interesting window” into processes associated with opinion dynamics and campaign effects, accurately capturing a number of effects in a “neat way.”
“The complex dynamics of strongly interacting, nonlinear and disordered systems have been a topic of interest for a long time,” Tiwari says. “There is a lot of merit in studying social systems through mathematical and computational models. These models provide insight into short- and long-term behavior. However, such endeavors can only be successful when social scientists and physicists come together to collaborate.”
Mukesh Tiwari, Xiguang Yang, Surajit Sen. Modeling the nonlinear effects of opinion kinematics in elections: A simple Ising model with random field based study. Physica A: Statistical Mechanics and its Applications, 2021; 582: 126287 DOI: 10.1016/j.physa.2021.126287
Researchers Syukuro Manabe, Klaus Hasselmann and Giorgio Parisi will share the prize of 10 million Swedish kronor
This year's Nobel Prize in Physics was devoted to the study of complex systems, among them those that make it possible to understand the climate changes affecting our planet. The choice puts a definitive stamp of consensus on climate science.
The researchers Syukuro Manabe, of the United States, and Klaus Hasselmann, of Germany, were honored specifically for modeling Earth's climate and making predictions about global warming. The other half of the prize went to Giorgio Parisi, of Italy, who revealed hidden patterns in complex disordered materials, from the atomic scale to the planetary, an essential contribution to the theory of complex systems, with relevance also to the study of climate.
"Many people think that physics deals with simple phenomena, such as the perfectly elliptical orbit of the Earth around the Sun or atoms in crystalline structures," said Thors Hans Hansson, a member of the Nobel selection committee, at the press conference announcing the choice.
"But physics is much more than that. One of the basic tasks of physics is to use fundamental theories of matter to explain complex phenomena and processes, such as the behavior of materials or how the Earth's climate develops. This demands a deep intuition for which structures and which progressions are essential, as well as the mathematical ingenuity to develop the models and theories that describe them, qualities at which this year's laureates are formidable."
"I think it is urgent that we make very strong decisions and move at a strong pace, because we are in a situation where we could have positive feedback, and that could accelerate the rise in temperature," said Giorgio Parisi, one of the winners, at the announcement press conference. "It is clear that, for the sake of future generations, we have to act now, and very quickly."
HOW THE NOBEL WINNER IS CHOSEN
The traditional Nobel Prize began with the death of the Swedish chemist Alfred Nobel (1833-1896), the inventor of dynamite. In 1895, in his final will, Nobel stipulated that his fortune should go toward establishing a prize, a provision his family contested. The first prize was awarded only in 1901.
The process of choosing the winner of the physics prize begins the year before the award. In September, the Nobel Committee for Physics sends out invitations (around 3,000) for nominations of names deserving the honor. Responses must be submitted by January 31.
Nominations may be submitted by members of the Royal Swedish Academy of Sciences; members of the Nobel Committee for Physics; Nobel laureates in physics; professors of physics at universities and institutes of technology in Sweden, Denmark, Finland, Iceland and Norway, and at the Karolinska Institute in Stockholm; holders of equivalent posts at at least six (but usually hundreds of) other universities chosen by the Academy of Sciences so as to ensure an appropriate distribution across continents and fields of knowledge; and other scientists the Academy deems fit to receive the invitations.
Self-nominations are not accepted.
A process of analyzing the hundreds of nominated names then begins, with consultation of specialists and the preparation of reports to narrow the field. Finally, in October, the Academy decides by majority vote who will receive the recognition.
In 2019, for instance, the prize went to James Peebles, Michel Mayor and Didier Queloz: Peebles helped explain how the universe evolved after the Big Bang, and Mayor and Queloz discovered an exoplanet (a planet outside the Solar System) orbiting a Sun-like star.
Laser research was honored in 2018, with the prize going to Arthur Ashkin, Donna Strickland and Gérard Mourou.
Going back further, the prize has been held by Max Planck (1918), for laying the foundations of quantum physics, and Albert Einstein (1921), for his discovery of the law of the photoelectric effect. Niels Bohr (1922), for his contributions to the understanding of atomic structure, and Paul Dirac and Erwin Schrödinger (1933), for developing new versions of quantum theory, were also honored.
Two new books on quantum theory could not, at first glance, seem more different. The first, Something Deeply Hidden, is by Sean Carroll, a physicist at the California Institute of Technology, who writes, “As far as we currently know, quantum mechanics isn’t just an approximation of the truth; it is the truth.” The second, Einstein’s Unfinished Revolution, is by Lee Smolin of the Perimeter Institute for Theoretical Physics in Ontario, who insists that “the conceptual problems and raging disagreements that have bedeviled quantum mechanics since its inception are unsolved and unsolvable, for the simple reason that the theory is wrong.”
Given this contrast, one might expect Carroll and Smolin to emphasize very different things in their books. Yet the books mirror each other, down to chapters that present the same quantum demonstrations and the same quantum parables. Carroll and Smolin both agree on the facts of quantum theory, and both gesture toward the same historical signposts. Both consider themselves realists, in the tradition of Albert Einstein. They want to finish his work of unifying physical theory, making it offer one coherent description of the entire world, without ad hoc exceptions to cover experimental findings that don’t fit. By the end, both suggest that the completion of this project might force us to abandon the idea of three-dimensional space as a fundamental structure of the universe.
But with Carroll claiming quantum mechanics as literally true and Smolin claiming it as literally false, there must be some underlying disagreement. And of course there is. Traditional quantum theory describes things like electrons as smeary waves whose measurable properties only become definite in the act of measurement. Sean Carroll is a supporter of the “Many Worlds” interpretation of this theory, which claims that the multiple measurement possibilities all simultaneously exist. Some proponents of Many Worlds describe the existence of a “multiverse” that contains many parallel universes, but Carroll prefers to describe a single, radically enlarged universe that contains all the possible outcomes running alongside each other as separate “worlds.” But the trouble, says Lee Smolin, is that in the real world as we observe it, these multiple possibilities never appear — each measurement has a single outcome. Smolin takes this fact as evidence that quantum theory must be wrong, and argues that any theory that supersedes quantum mechanics must do away with these multiple possibilities.
So how can such similar books, informed by the same evidence and drawing upon the same history, reach such divergent conclusions? Well, anyone who cares about politics knows that this type of informed disagreement happens all the time, especially, as with Carroll and Smolin, when the disagreements go well beyond questions that experiments could possibly resolve.
But there is another problem here. The question that both physicists gloss over is that of just how much we should expect to get out of our best physical theories. This question pokes through the foundation of quantum mechanics like rusted rebar, often luring scientists into arguments over parables meant to illuminate the obscure.
With this in mind, let’s try a parable of our own, a cartoon of the quantum predicament. In the tradition of such parables, it’s a story about knowing and not knowing.
We fade in on a scientist interviewing for a job. Let’s give this scientist a name, Bobby Alice, that telegraphs his helplessness to our didactic whims. During the part of the interview where the Reality Industries rep asks him if he has any questions, none of them are answered, except the one about his starting salary. This number is high enough to convince Bobby the job is right for him.
Knowing so little about Reality Industries, everything Bobby sees on his first day comes as a surprise, starting with the campus’s extensive security apparatus of long gated driveways, high tree-lined fences, and all the other standard X-Files elements. Most striking of all is his assigned building, a structure whose paradoxical design merits a special section of the morning orientation. After Bobby is given his project details (irrelevant for us), black-suited Mr. Smith–types tell him the bad news: So long as he works at Reality Industries, he may visit only the building’s fourth floor. This, they assure him, is standard, for all employees but the top executives. Each project team has its own floor, and the teams are never allowed to intermix.
The instructors follow this with what they claim is the good news. Yes, they admit, this tightly tiered approach led to worker distress in the old days, back on the old campus, where the building designs were brutalist and the depression rates were high. But the new building is designed to subvert such pressures. The trainers lead Bobby up to the fourth floor, up to his assignment, through a construction unlike any research facility he has ever seen. The walls are translucent and glow on all sides. So do the floor and ceiling. He is guided to look up, where he can see dark footprints roving about, shadows from the project team on the next floor. “The goal here,” his guide remarks, “is to encourage a sort of cultural continuity, even if we can’t all communicate.”
Over the next weeks, Bobby Alice becomes accustomed to the silent figures floating above him. Eventually, he comes to enjoy the fourth floor’s communal tracking of their fifth-floor counterparts, complete with invented names, invented personalities, invented purposes. He makes peace with the possibility that he is himself a fantasy figure for the third floor.
Then, one day, strange lights appear in a corner of the ceiling.
Naturally phlegmatic, Bobby Alice simply takes notes. But others on the fourth floor are noticeably less calm. The lights seem not to follow any known standard of the physics of footfalls, with lights of different colors blinking on and off seemingly at random, yet still giving the impression not merely of a constructed display but of some solid fixture in the fifth-floor commons. Some team members, formerly of the same anti-philosophical bent as most hires, now spend their coffee breaks discussing increasingly esoteric metaphysics. Productivity declines.
Meanwhile, Bobby has set up a camera to record data. As a work-related extracurricular, he is able in the following weeks to develop a general mathematical description that captures an unexpected order in the flashing lights. This description does not predict exactly which lights will blink when, but, by telling a story about what’s going on between the frames captured by the camera, he can predict what sorts of patterns are allowed, how often, and in what order.
Does this solve the mystery? Apparently it does. Conspiratorial voices on the fourth floor go quiet. The “Alice formalism” immediately finds other applications, and Reality Industries gives Dr. Alice a raise. They give him everything he could want — everything except access to the fifth floor.
In time, Bobby Alice becomes a fourth-floor legend. Yet as the years pass — and pass with the corner lights as an apparently permanent fixture — new employees occasionally massage the Alice formalism to unexpected ends. One worker discovers that he can rid the lights of their randomness if he imagines them as the reflections from a tank of iridescent fish, with the illusion of randomness arising in part because it’s a 3-D projection on a 2-D ceiling, and in part because the fish swim funny. The Alice formalism offers a series of color maps showing the different possible light patterns that might appear at any given moment, and another prominent interpreter argues, with supposed sincerity (although it’s hard to tell), that actually not one but all of the maps occur at once — each in parallel branching universes generated by that spooky alien light source up on the fifth floor.
As the interpretations proliferate, Reality Industries management occasionally finds these side quests to be a drain on corporate resources. But during the Alice decades, the fourth floor has somehow become the company’s most productive. Why? Who knows. Why fight it?
The history of quantum mechanics, being a matter of record, obviously has more twists than any illustrative cartoon can capture. Readers interested in that history are encouraged to read Adam Becker’s recent retelling, What Is Real?, which was reviewed in these pages (“Make Physics Real Again,” Winter 2019). But the above sketch is one attempt to capture the unusual flavor of this history.
Like the fourth-floor scientists in our story who, sight unseen, invented personas for all their fifth-floor counterparts, nineteenth-century physicists are often caricatured as having oversold their grasp on nature’s secrets. But longstanding puzzles — puzzles involving chemical spectra and atomic structure rather than blinking ceiling lights — led twentieth-century pioneers like Niels Bohr, Wolfgang Pauli, and Werner Heisenberg to invent a new style of physical theory. As with the formalism of Bobby Alice, mature quantum theories in this tradition were abstract, offering probabilistic predictions for the outcomes of real-world measurements, while remaining agnostic about what it all meant, about what fundamental reality undergirded the description.
From the very beginning, a counter-tradition associated with names like Albert Einstein, Louis de Broglie, and Erwin Schrödinger insisted that quantum models must ultimately capture something (but probably not everything) about the real stuff moving around us. This tradition gave us visions of subatomic entities as lumps of matter vibrating in space, with the sorts of orbital visualizations one first sees in high school chemistry.
But once the various quantum ideas were codified and physicists realized that they worked remarkably well, most research efforts turned away from philosophical agonizing and toward applications. The second generation of quantum theorists, unburdened by revolutionary angst, replaced every part of classical physics with a quantum version. As Max Planck famously wrote, “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die.” Since this inherited framework works well enough to get new researchers started, the question of what it all means is usually left alone.
Of course, this question is exactly what most non-experts want answered. For past generations, books with titles like The Tao of Physics and Quantum Reality met this demand, with discussions that wildly mixed conventions of scientific reportage with wisdom literature. Even once quantum theories themselves became familiar, interpretations of them were still new enough to be exciting.
Today, even this thrill is gone. We are now in the part of the story where no one can remember what it was like not to have the blinking lights on the ceiling. Despite the origins of quantum theory as an empirical framework — a container flexible enough to wrap around whatever surprises experiments might uncover — its success has led today’s theorists to regard it as fundamental, a base upon which further speculations might be built.
Regaining that old feeling of disorientation now requires some extra steps.
As interlopers in an ongoing turf war, modern explainers of quantum theory must reckon both with arguments like Niels Bohr’s, which emphasize the theory’s limits on knowledge, and with criticisms like Albert Einstein’s, which demand that the theory represent the real world. Sean Carroll’s Something Deeply Hidden pitches itself to both camps. The title stems from an Einstein anecdote. As “a child of four or five years,” Einstein was fascinated by his father’s compass. He concluded, “Something deeply hidden had to be behind things.” Carroll agrees with this, but argues that the world at its roots is quantum. We only need courage to apply that old Einsteinian realism to our quantum universe.
Carroll is a prolific popularizer — alongside his books, his blog, and his Twitter account, he has also recorded three courses of lectures for general audiences, and for the last year has released a weekly podcast. His new book is appealingly didactic, providing a sustained defense of the Many Worlds interpretation of quantum mechanics, first offered by Hugh Everett III as a graduate student in the 1950s. Carroll maintains that Many Worlds is just quantum mechanics, and he works hard to convince us that supporters aren’t merely perverse. In the early days of electrical research, followers of James Clerk Maxwell were called Maxwellians, but today all physicists are Maxwellians. If Carroll’s project pans out, someday we’ll all be Everettians.
Standard applications of quantum theory follow a standard logic. A physical system is prepared in some initial condition, and modeled using a mathematical representation called a “wave function.” Then the system changes in time, and these changes, governed by the Schrödinger equation, are tracked in the system’s wave function. But when we interpret the wave function in order to generate a prediction of what we will observe, we get only probabilities of possible experimental outcomes.
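To make the recipe concrete, here is a toy version for a single two-level system, with a simple real rotation standing in for Schrödinger evolution (an illustrative sketch with invented helper names, not a standard library API):

```python
import math

def born_probabilities(psi):
    """Born rule: the probability of each outcome is |amplitude|^2."""
    return [abs(a) ** 2 for a in psi]

def rotate(psi, theta):
    """A simple unitary (a rotation) standing in for the Schrodinger equation."""
    a, b = psi
    c, s = math.cos(theta), math.sin(theta)
    return [c * a - s * b, s * a + c * b]

# The standard quantum recipe in miniature:
psi = [1.0, 0.0]                 # 1. prepare the system in a definite state
psi = rotate(psi, math.pi / 4)   # 2. evolve it; it becomes a superposition
probs = born_probabilities(psi)  # 3. interpret: only probabilities come out
# probs == [0.5, 0.5] (up to floating point): equal odds for either outcome
```

The point of the sketch is the last step: the formalism never hands back a single definite outcome, only the odds of each, which is exactly the feature Carroll finds inadequate.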
Carroll insists that this quantum recipe isn’t good enough. It may be sufficient if we care only to predict the likelihood of various outcomes for a given experiment, but it gives us no sense of what the world is like. “Quantum mechanics, in the form in which it is currently presented in physics textbooks,” he writes, “represents an oracle, not a true understanding.”
Most of the quantum mysteries live in the process of measurement. Questions of exactly how measurements force determinate outcomes, and of exactly what we sweep under the rug with that bland word “measurement,” are known collectively in quantum lore as the “measurement problem.” Quantum interpretations are distinguished by how they solve this problem. Usually, solutions involve rejecting some key element of common belief. In the Many Worlds interpretation, the key belief we are asked to reject is that of one single world, with one single future.
The version of the Many Worlds solution given to us in Something Deeply Hidden sidesteps the history of the theory in favor of a logical reconstruction. What Carroll enunciates here is something like a quantum minimalism: “There is only one wave function, which describes the entire system we care about, all the way up to the ‘wave function of the universe’ if we’re talking about the whole shebang.”
Putting this another way, Carroll is a realist about the quantum wave function, and suggests that this mathematical object simply is the deep-down thing, while everything else, from particles to planets to people, is merely a downstream effect. (Sorry, people!) The world of our experience, in this picture, is just a tiny sliver of the real one, where all possible outcomes — all outcomes for which the usual quantum recipe assigns a non-zero probability — continue to exist, buried somewhere out of view in the universal wave function. Hence the “Many Worlds” moniker. What we experience as a single world, chock-full of foreclosed opportunities, Many Worlders understand as but one swirl of mist foaming off an ever-breaking wave.
The position of Many Worlds may not yet be common, but neither is it new. Carroll, for his part, is familiar enough with it to be blasé, presenting it in the breezy tone of a man with all the answers. The virtue of his presentation is that whether or not you agree with him, he gives you plenty to consider, including expert glosses on ongoing debates in cosmology and field theory. But Something Deeply Hidden still fails where it matters. “If we train ourselves to discard our classical prejudices, and take the lessons of quantum mechanics at face value,” Carroll writes near the end, “we may eventually learn how to extract our universe from the wave function.”
But shouldn’t it be the other way around? Why should we have to work so hard to “extract our universe from the wave function,” when the wave function itself is an invention of physicists, not the inerrant revelation of some transcendental truth? Interpretations of quantum theory live or die on how well they are able to explain its success, and the most damning criticism of the Many Worlds interpretation is that it’s hard to see how it improves on the standard idea that probabilities in quantum theory are just a way to quantify our expectations about various measurement outcomes.
Carroll argues that, in Many Worlds, probabilities arise from self-locating uncertainty: “You know everything there is to know about the universe, except where you are within it.” During a measurement, “a single world splits into two, and there are now two people where I used to be just one.” “For a brief while, then, there are two copies of you, and those two copies are precisely identical. Each of them lives on a distinct branch of the wave function, but neither of them knows which one it is on.” The job of the physicist is then to calculate the chance that he has ended up on one branch or another — which produces the probabilities of the various measurement outcomes.
If, alongside Carroll, you convince yourself that it is reasonable to suppose that these worlds exist outside our imaginations, you still might conclude, as he does, that “at the end of the day it doesn’t really change how we should go through our lives.” This conclusion comes in a chapter called “The Human Side,” where Carroll also dismisses the possibility that humans might have a role in branching the wave function, or indeed that we have any ultimate agency: “While you might be personally unsure what choice you will eventually make, the outcome is encoded in your brain.” These views are rewarmed arguments from his previous book, The Big Picture, which I reviewed in these pages (“Pop Goes the Physics,” Spring 2017) and won’t revisit here.
Although this book is unlikely to turn doubters of Many Worlds into converts, it is a credit to Carroll that he leaves one with the impression that the doctrine is probably consistent, whether or not it is true. But internal consistency has little power against an idea that feels unacceptable. For doctrines like Many Worlds, with key claims that are in principle unobservable, some of us will always want a way out.
Lee Smolin is one such seeker for whom Many Worlds realism — or “magical realism,” as he likes to call it — is not real enough. In his new book, Einstein’s Unfinished Revolution, Smolin assures us that “however weird the quantum world may be, it need not threaten anyone’s belief in commonsense realism. It is possible to be a realist while living in the quantum universe.” But if you expect “commonsense realism” by the end of his book, prepare for a surprise.
Smolin is less congenial than Carroll, with a brooding vision of his fellow scientists less as fellow travelers and more as members of an “orthodoxy of the unreal,” as he stirringly puts it. Smolin is best known for his role as doomsayer about string theory — his 2006 book The Trouble with Physics functioned as an entertaining jeremiad. But while his books all court drama and are never boring, that often comes at the expense of argumentative care.
Einstein’s Unfinished Revolution can be summarized briefly. Smolin states early on that quantum theory is wrong: It gives probabilities for many and various measurement outcomes, whereas the world of our observation is solid and singular. Nevertheless, quantum theory can still teach us important lessons about nature. For instance, Smolin takes at face value the claim that entangled particles far apart in the universe can influence each other instantaneously, unconstrained by the speed of light. This ability of quantum entities to be correlated while separated in space is technically called “nonlocality,” which Smolin enshrines as a fundamental principle. And while he takes inspiration from an existing nonlocal quantum theory, he rejects it for violating other favorite physical principles. Instead, he elects to redo physics from scratch, proposing partial theories that would allow his favored ideals to survive.
This is, of course, an insane act of hubris. But no red line separates the crackpot from the visionary in theoretical physics. Because Smolin presents himself as a man up against the status quo, his books are as much autobiography as popular science, with personality bleeding into intellectual commitments. Smolin’s last popular book, Time Reborn (2013), showed him changing his mind about the nature of time after doing bedtime with his son. This time around, Smolin tells us in the preface about how he came to view the universe as nonlocal:
I vividly recall that when I understood the proof of the theorem, I went outside in the warm afternoon and sat on the steps of the college library, stunned. I pulled out a notebook and immediately wrote a poem to a girl I had a crush on, in which I told her that each time we touched there were electrons in our hands which from then on would be entangled with each other. I no longer recall who she was or what she made of my poem, or if I even showed it to her. But my obsession with penetrating the mystery of nonlocal entanglement, which began that day, has never since left me.
The book never seriously questions whether the arguments for nonlocality should convince us; Smolin’s experience of conviction must stand in for our own. These personal detours are fascinating, but do little to convince skeptics.
Once you start turning the pages of Einstein’s Unfinished Revolution, ideas fly by fast. First, Smolin gives us a tour of the quantum fundamentals — entanglement, nonlocality, and all that. Then he provides a thoughtful overview of solutions to the measurement problem, particularly those of David Bohm, whose complex legacy he lingers over admiringly. But by the end, Smolin abandons the plodding corporate truth of the scientist for the hope of a private perfection.
Many physicists have never heard of Bohm’s theory, and some who have still conclude that it’s worthless. Bohm attempted to salvage something like the old classical determinism, offering a way to understand measurement outcomes as caused by the motion of particles, which in turn are guided by waves. This conceptual simplicity comes at the cost of brazen nonlocality, and an explicit dualism of particles and waves. Einstein called the theory a “physical fairy-tale for children”; Robert Oppenheimer declared about Bohm that “we must agree to ignore him.”
Bohm’s theory is important to Smolin mainly as a prototype, to demonstrate that it’s possible to situate quantum mechanics within a single world — unlike Many Worlds, which Smolin seems to dislike less for physical than for ethical reasons: “It seems to me that the Many Worlds Interpretation offers a profound challenge to our moral thinking because it erases the distinction between the possible and the actual.” In his survey, Smolin sniffs each interpretation as he passes it, looking for a whiff of the real quantum story, which will preserve our single universe while also maintaining the virtues of all the partial successes.
When Smolin finally explains his own idiosyncratic efforts, his methods — at least in the version he has dramatized here — resemble some wild descendant of Cartesian rationalism. From his survey, Smolin lists the principles he would expect from an acceptable alternative to quantum theory. He then reports back to us on the incomplete models he has found that will support these principles.
Smolin’s tour leads us all over the place, from a review of Leibniz’s Monadology (“shockingly modern”), to a new law of physics he proposes (the “principle of precedence”), to a solution to the measurement problem involving nonlocal interactions among all similar systems everywhere in the universe. Smolin concludes with the grand claim that “the universe consists of nothing but views of itself, each from an event in its history.” Fine. Maybe there’s more to these ideas than a casual reader might glean, but after a few pages of sentences like, “An event is something that happens,” hope wanes.
For all their differences, Carroll and Smolin similarly insist that, once the basic rules governing quantum systems are properly understood, the rest should fall into place. “Once we understand what’s going on for two particles, the generalization to 10^88 particles is just math,” Carroll assures us. Smolin is far less certain that physics is on the right track, but he, too, believes that progress will come with theoretical breakthroughs. “I have no better answer than to face the blank notebook,” Smolin writes. This was the path of Bohr, Einstein, Bohm and others. “Ask yourself which of the fundamental principles of the present canon must survive the coming revolution. That’s the first page. Then turn again to a blank page and start thinking.”
Physicists are always tempted to suppose that successful predictions prove that a theory describes how the world really is. And why not? Denying that quantum theory captures something essential about the character of those entities outside our heads that we label with words like “atoms” and “molecules” and “photons” seems far more perverse, as an interpretive strategy, than any of the mainstream interpretations we’ve already discussed. Yet one can admit that something is captured by quantum theory without jumping immediately to the assertion that everything must flow from it. An invented language doesn’t need to be universal to be useful, and it’s smart to keep on honing tools for thinking that have historically worked well.
As an old mentor of mine, John P. Ralston, wrote in his book How to Understand Quantum Mechanics, “We don’t know what nature is, and it is not clear whether quantum theory fully describes it. However, it’s not the worst thing. It has not failed yet.” This seems like the right attitude to take. Quantum theory is a fabulously rich subject, but the fact that it has not failed yet does not allow us to generalize its results indefinitely.
There is value in the exercises that Carroll and Smolin perform, in their attempts to imagine principled and orderly universes, to see just how far one can get with a straitjacketed imagination. But by assuming that everything is captured by the current version of quantum theory, Carroll risks credulity, foreclosing genuinely new possibilities. And by assuming that everything is up for grabs, Smolin risks paranoia, ignoring what is already understood.
Perhaps the agnostics among us are right to settle in as permanent occupants of Reality Industries’ fourth floor. We can accept that scientists have a role in creating stories that make sense, while also appreciating the possibility that the world might not be made of these stories. To the big, unresolved questions — questions about where randomness enters in the measurement process, or about how much of the world our physical theories might capture — we can offer only a laconic who knows? The world is filled with flashing lights, and we should try to find some order in them. Scientific success often involves inventing a language that makes the strange sensible, warping intuitions along the way. And while this process has allowed us to make progress, we should never let our intuitions get so strong that we stop scanning the ceiling for unexpected dazzlements.
David Kordahl is a graduate student in physics at Arizona State University. David Kordahl, “Inventing the Universe,” The New Atlantis, Number 61, Winter 2020, pp. 114-124.
Marcella Duarte, contributing to Tilt – Jan. 5, 2021, 5:02 pm
It felt like 2020 would never end, but, technically, it went by faster than usual. And this year will be quicker still. The reason? The Earth has been “spinning” strangely fast lately. Because of that, we may need to move our clocks forward, though you won’t even notice.
Last year saw the shortest day ever recorded since measurements began, some 50 years ago. On July 19, 2020, the planet completed its rotation 1.4602 milliseconds faster than the usual 86,400 seconds (24 hours).
The previous shortest day on record occurred in 2005, and that mark was beaten 28 times in 2020. This year should be the fastest in history, because the days of 2021 are expected to be, on average, 0.05 millisecond shorter than normal.
These tiny changes in the length of the day were only discovered after the development of ultra-precise atomic clocks in the 1960s. Initially, scientists noticed that the Earth’s rotation speed (the spin around its own axis that produces day and night) was slowing down year after year.
Since the 1970s, it has been necessary to “add” 27 seconds to International Atomic Time to keep our timekeeping synchronized with the slower planet. This is the so-called “leap second.”
These corrections always happen at the end of a half-year, on December 31 or June 30. They ensure that the Sun is always exactly at its highest point in the sky at noon.
The last time this occurred was at New Year’s 2016, when clocks around the world paused for one second to “wait” for the Earth.
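The bookkeeping behind leap seconds is a running table of TAI−UTC offsets maintained by the IERS. A minimal sketch of that lookup, using an abbreviated (hypothetical subset of the real) table; the full list is published in IERS Bulletin C:

```python
import datetime

# Abbreviated leap-second table: date an offset took effect -> cumulative
# TAI-UTC difference in whole seconds. (Subset for illustration; the full
# table is maintained by the IERS.)
LEAP_TABLE = [
    (datetime.date(1972, 1, 1), 10),
    (datetime.date(2009, 1, 1), 34),
    (datetime.date(2012, 7, 1), 35),
    (datetime.date(2015, 7, 1), 36),
    (datetime.date(2017, 1, 1), 37),  # after the New Year 2016 leap second
]

def tai_minus_utc(d: datetime.date) -> int:
    """Return TAI-UTC in whole seconds on date d (abbreviated table)."""
    offset = 0
    for start, secs in LEAP_TABLE:
        if d >= start:
            offset = secs
    return offset

print(tai_minus_utc(datetime.date(2021, 1, 1)))  # 37
```

The 27 seconds added since the 1970s show up here as the difference between the current offset (37 s) and the initial 10 s baseline set in 1972.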
But recently the opposite has been happening: the rotation is speeding up. We may need to make time “jump” to catch up with the planet’s motion. It would be the first time in history that a second was deleted from the world’s clocks.
There is an international debate about the need for this adjustment and about the future of timekeeping. Scientists believe that, over the course of 2021, atomic clocks will accumulate a lag of 19 milliseconds.
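A back-of-envelope check ties the two figures together, assuming (as reported for 2021) an average daily shortfall of about 0.05 ms:

```python
# If the average 2021 day runs ~0.05 ms short of 86,400 s, how far ahead of
# the atomic clocks does the Earth get over the year? (0.05 ms/day is an
# assumed average, taken from the reporting on 2021 predictions.)
ms_short_per_day = 0.05   # assumed average daily shortfall, in milliseconds
days_in_year = 365

lag_ms = ms_short_per_day * days_in_year
print(f"Accumulated lag over 2021: {lag_ms:.2f} ms")  # ~18 ms, close to the ~19 ms cited
```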
If the adjustments were not made, it would take hundreds of years for an ordinary person to notice the difference. But satellite navigation and communication systems, which rely on the positions of the Earth, the Sun and the stars to work, could be affected much sooner.
Our “timekeepers” are the officials of the International Earth Rotation and Reference Systems Service (IERS) in Paris, France. They monitor the Earth’s rotation and the 260 atomic clocks spread around the world, and announce when it is necessary to add, or possibly delete, a second.
Tampering with time can have consequences. When a leap second was added in 2012, major technology platforms and software of the day, including Linux, Mozilla, Java, Reddit, Foursquare, Yelp and LinkedIn, reported failures.
The Earth’s rotation speed varies constantly, depending on many factors, such as the complex motion of its molten core, the oceans and the atmosphere, as well as gravitational interactions with other celestial bodies such as the Moon. Global warming, and the resulting melting of the polar ice caps and mountain glaciers, has also been speeding up the spin.
That is why no two days last exactly the same. Last Sunday (the 3rd) lasted “only” 23 hours, 59 minutes and 59.9998927 seconds. Monday (the 4th) was lazier, at just over 24 hours.
There’s something mysterious coming up from the frozen ground in Antarctica, and it could break physics as we know it.
Physicists don’t know what it is exactly. But they do know it’s some sort of cosmic ray — a high-energy particle that’s blasted its way through space, into the Earth, and back out again. But the particles physicists know about — the collection of particles that make up what scientists call the Standard Model (SM) of particle physics — shouldn’t be able to do that. Sure, there are low-energy neutrinos that can pierce through miles upon miles of rock unaffected. But high-energy neutrinos, as well as other high-energy particles, have “large cross-sections.” That means that they’ll almost always crash into something soon after zipping into the Earth and never make it out the other side.
And yet, since March 2016, researchers have been puzzling over two events in Antarctica where cosmic rays did burst out from the Earth, and were detected by NASA’s Antarctic Impulsive Transient Antenna (ANITA) — a balloon-borne antenna drifting over the southern continent.
ANITA is designed to hunt cosmic rays from outer space, so the high-energy neutrino community was buzzing with excitement when the instrument detected particles that seemed to be blasting up from Earth instead of zooming down from space. Because cosmic rays shouldn’t do that, scientists began to wonder whether these mysterious beams are made of particles never seen before.
Since then, physicists have proposed all sorts of explanations for these “upward going” cosmic rays, from sterile neutrinos (neutrinos that rarely ever bang into matter) to “atypical dark matter distributions inside the Earth,” referencing the mysterious form of matter that doesn’t interact with light.
All the explanations were intriguing, and suggested that ANITA might have detected a particle not accounted for in the Standard Model. But none of the explanations demonstrated conclusively that something more ordinary couldn’t have caused the signal at ANITA.
A new paper uploaded today (Sept. 26) to the preprint server arXiv changes that. In it, a team of astrophysicists from Penn State University showed that there have been more upward-going high-energy particles than those detected during the two ANITA events. Three times, they wrote, IceCube (another, larger neutrino observatory in Antarctica) detected similar particles, though no one had yet connected those events to the mystery at ANITA. And, combining the IceCube and ANITA data sets, the Penn State researchers calculated that, whatever particle is bursting up from the Earth, it has much less than a 1-in-3.5 million chance of being part of the Standard Model. (In technical, statistical terms, their results had confidences of 5.8 and 7.0 sigma, depending on which of their calculations you’re looking at.)
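For readers curious how a sigma level maps onto a probability like “1 in 3.5 million”: it is the one-sided tail of a Gaussian distribution, which the standard library’s `math.erfc` computes directly. A minimal sketch, applied to the 5-sigma discovery threshold and the paper’s quoted significances:

```python
import math

def one_sided_p(sigma: float) -> float:
    """One-sided Gaussian tail probability for a given sigma level."""
    return math.erfc(sigma / math.sqrt(2)) / 2

print(one_sided_p(5.0))  # ~2.9e-7, i.e. about 1 in 3.5 million
print(one_sided_p(5.8))  # smaller still
print(one_sided_p(7.0))  # smaller by several more orders of magnitude
```

The "1-in-3.5-million" figure in the text is the conventional 5-sigma threshold; the quoted 5.8- and 7.0-sigma results correspond to far smaller chance probabilities.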
Derek Fox, the lead author on the new paper, said that he first came across the ANITA events in May 2018, in one of the earlier papers attempting to explain them.
“I was like, ‘Well this model doesn’t make much sense,'” Fox told Live Science, “but the [ANITA] result is very intriguing, so I started checking up on it. I started talking to my office neighbor Steinn Sigurdsson [the second author on the paper, who is also at Penn State] about whether maybe we could gin up some more plausible explanations than the papers that have been published to date.”
Fox, Sigurdsson and their colleagues started looking for similar events in data collected by other detectors. When they came across possible upward-going events in IceCube data, he said, he realized that he might have come across something really game-changing for physics.
“That’s what really got me going, and looking at the ANITA events with the utmost seriousness,” he said, later adding, “This is what physicists live for. Breaking models, setting new constraints [on reality], learning things about the universe we didn’t know.”
As Live Science has previously reported, experimental high-energy particle physics has been at a standstill for the last several years. When the 17-mile (27-kilometer), $10 billion Large Hadron Collider (LHC) was completed on the border between France and Switzerland in 2009, scientists thought it would unlock the mysteries of supersymmetry, the theoretical class of particles that scientists suspect might exist beyond current physics but have never detected. According to supersymmetry, every existing particle in the Standard Model has a supersymmetric partner. Researchers suspect these partners exist because the masses of known particles are out of whack, not symmetric with one another.
“Even though the SM works very well in explaining a plethora of phenomena, it still has many handicaps,” said Seyda Ipek, a particle physicist at UC Irvine, who was not involved in the current research. “For example, it cannot account for the existence of dark matter, [explain mathematical weirdness in] neutrino masses, or the matter-antimatter asymmetry of the universe.”
Instead, the LHC confirmed the Higgs boson, the final undetected part of the Standard Model, in 2012. And then it stopped detecting anything else that important or interesting. Researchers began to question whether any existing physics experiment could ever detect a supersymmetric particle.
“We need new ideas,” Jessie Shelton, a theoretical physicist at the University of Illinois at Urbana-Champaign, told Live Science in May, around the same time that Fox first became interested in the ANITA data.
Now, several scientists not involved in the Penn State paper told Live Science that it offers solid (if incomplete) evidence that something new has really arrived.
“It was clear from the start that if the ANITA anomalous events are due to particles that had propagated through thousands of kilometers of Earth, then those particles were very likely not SM particles,” said Mauricio Bustamante, an astrophysicist at the Niels Bohr Institute at the University of Copenhagen, who was not an author on the new paper.
“The paper that appeared today is the first systematic calculation of how unlikely it is that these events were due to SM neutrinos,” he added. “Their result strongly disfavors a SM explanation.”
“I think it’s very compelling,” said Bill Louis, a neutrino physicist at Los Alamos National Laboratory who was not involved in the paper and has been following research into the ANITA events for several months.
If Standard Model particles created these anomalies, those particles should have been neutrinos. Researchers know that both because of the particles they decayed into, and because no other Standard Model particle would have even a one-in-a-million chance of making it through the Earth.
But neutrinos of this energy, Louis said, just shouldn’t make it through the Earth often enough for ANITA or IceCube to detect them. That’s not how they work. Neutrino detectors like ANITA and IceCube don’t detect neutrinos directly; instead, they detect the particles that neutrinos decay into after smashing into Earth’s atmosphere or the Antarctic ice. And there are other events that can generate those particles and trigger the detectors. This paper strongly suggests that those events must have been supersymmetric, Louis said, though he added that more data are necessary.
Louis said that at this stage he thinks that level of specificity is “a bit of a stretch.”
The authors make a strong statistical case that no conventional particle would be likely to travel through the Earth in this way, he said, but there isn’t yet enough data to be certain. And there’s certainly not enough that they could definitively figure out what particle made the trip.
Fox didn’t dispute that.
“As an observer, there’s no way that I can know that this is a stau,” he said. “From my perspective, I go trawling around trying to discover new things about the universe, I come upon some really bizarre phenomenon, and then with my colleagues, we do a little literature search to see if anybody has ever thought that this might happen. And then if we find papers in the literature, including one from 14 years ago that predict something just like this phenomenon, then that gets really high weight from me.”
He and his colleagues did find a long chain of papers from theorists predicting that stau sleptons might turn up like this in neutrino observatories. And because those papers were written before the ANITA anomaly, Fox said, that suggests strongly to him that those theorists were onto something.
But there remains a lot of uncertainty on that front, he said. Right now, researchers just know that whatever this particle is, it interacts very weakly with other particles, or else it would have never survived the trip through the planet’s dense mass.
Every physicist who spoke with Live Science agreed that researchers need to collect more data to verify that ANITA and IceCube have cracked supersymmetry. It’s possible, Fox said, that when IceCube researchers dig into their data archives they’ll find more, similar events that had previously gone unnoticed. Louis and Bustamante both said that NASA should run more ANITA flights to see if similar upward-going particles turn up.
“For us to be certain that these events are not due to unknown unknowns — say, unmapped properties of the Antarctic ice — we would like other instruments to also detect these sort of events,” Bustamante said.
Over the long-term, if these results are confirmed and the details of what particle is causing them are nailed down, several researchers said that the ANITA anomaly might unlock even more new physics at the LHC.
“Any observation of a non-SM particle would be a game changer, because it would tell us which path we should take after the SM,” Ipek said. “The type of [supersymmetric] particle they claim to have produced the signals of, sleptons, are very hard to produce and detect at LHC.”
“So, it is very interesting if they can be observed by other types of experiments. Of course, if this is true, then we will expect a ladder of other [supersymmetric] particles to be observed at the LHC, which would be a complementary test of the claims.”
In other words, the ANITA anomalies could offer scientists the key information necessary to properly tune the LHC to unlock more of supersymmetry. Those experiments might even turn up an explanation for dark matter.
Right now, Fox said, he’s just hungry for more data.
Three different studies, done by different teams of scientists, proved something really extraordinary. But when new research connected these three discoveries, something shocking was realized, something hiding in plain sight.
Human emotion literally shapes the world around us. Not just our perception of the world, but reality itself.
In the first experiment, human DNA, isolated in a sealed container, was placed near a test subject. Scientists gave the donor emotional stimuli and, fascinatingly enough, the emotions affected the donor’s DNA in the other room.
In the presence of negative emotions the DNA tightened. In the presence of positive emotions the coils of the DNA relaxed.
The scientists concluded that “Human emotion produces effects which defy conventional laws of physics.”
In the second, similar but unrelated experiment, a different group of scientists extracted leukocytes (white blood cells) from donors and placed them into chambers so they could measure electrical changes.
In this experiment, the donor was placed in one room and subjected to “emotional stimulation” consisting of video clips, which generated different emotions in the donor.
The DNA was placed in a different room in the same building. Both the donor and his DNA were monitored and as the donor exhibited emotional peaks or valleys (measured by electrical responses), the DNA exhibited the IDENTICAL RESPONSES AT THE EXACT SAME TIME.
There was no lag time, no transmission time. The DNA peaks and valleys EXACTLY MATCHED the peaks and valleys of the donor in time.
The scientists wanted to see how far away they could separate the donor from his DNA and still get this effect. They stopped testing after they separated the DNA and the donor by 50 miles and STILL had the SAME result. No lag time; no transmission time.
The DNA and the donor had the same identical responses in time. The conclusion was that the donor and the DNA can communicate beyond space and time.
The third experiment proved something pretty shocking!
Scientists observed the effect of DNA on our physical world.
Light photons, which make up the world around us, were observed inside a vacuum. Their natural locations were completely random.
Human DNA was then inserted into the vacuum. Shockingly the photons were no longer acting random. They precisely followed the geometry of the DNA.
Scientists who were studying this described the photons as behaving “surprisingly and counter-intuitively.” They went on to say: “We are forced to accept the possibility of some new field of energy!”
They concluded that human DNA literally shapes the behavior of the light photons that make up the world around us!
So when new research was done, and all three of these scientific claims were connected, scientists were shocked.
They came to a stunning realization: if our emotions affect our DNA, and our DNA shapes the world around us, then our emotions physically change the world around us.
And not just that, we are connected to our DNA beyond space and time.
We create our reality by choosing it with our feelings.
Science has already proven some pretty MINDBLOWING facts about The Universe we live in. All we have to do is connect the dots.
“I cannot define the real problem, therefore I suspect there’s no real problem, but I’m not sure there’s no real problem.”
The American physicist Richard Feynman said this about the notorious puzzles and paradoxes of quantum mechanics, the theory physicists use to describe the tiniest objects in the Universe. But he might as well have been talking about the equally knotty problem of consciousness.
Some scientists think we already understand what consciousness is, or that it is a mere illusion. But many others feel we have not grasped where consciousness comes from at all.
The perennial puzzle of consciousness has even led some researchers to invoke quantum physics to explain it. That notion has always been met with skepticism, which is not surprising: it does not sound wise to explain one mystery with another. But such ideas are not obviously absurd, and neither are they arbitrary.
For one thing, the mind seemed, to the great discomfort of physicists, to force its way into early quantum theory. What’s more, quantum computers are predicted to be capable of accomplishing things ordinary computers cannot, which reminds us of how our brains can achieve things that are still beyond artificial intelligence. “Quantum consciousness” is widely derided as mystical woo, but it just will not go away.
What is going on in our brains? (Credit: Mehau Kulyk/Science Photo Library)
Quantum mechanics is the best theory we have for describing the world at the nuts-and-bolts level of atoms and subatomic particles. Perhaps the most renowned of its mysteries is the fact that the outcome of a quantum experiment can change depending on whether or not we choose to measure some property of the particles involved.
When this “observer effect” was first noticed by the early pioneers of quantum theory, they were deeply troubled. It seemed to undermine the basic assumption behind all science: that there is an objective world out there, irrespective of us. If the way the world behaves depends on how – or if – we look at it, what can “reality” really mean?
Some of those researchers felt forced to conclude that objectivity was an illusion, and that consciousness has to be allowed an active role in quantum theory. To others, that did not make sense. Surely, Albert Einstein once complained, the Moon does not exist only when we look at it!
Today some physicists suspect that, whether or not consciousness influences quantum mechanics, it might in fact arise because of it. They think that quantum theory might be needed to fully understand how the brain works.
Might it be that, just as quantum objects can apparently be in two places at once, so a quantum brain can hold onto two mutually-exclusive ideas at the same time?
These ideas are speculative, and it may turn out that quantum physics has no fundamental role either for or in the workings of the mind. But if nothing else, these possibilities show just how strangely quantum theory forces us to think.
The famous double-slit experiment (Credit: Victor de Schwanberg/Science Photo Library)
The most famous intrusion of the mind into quantum mechanics comes in the “double-slit experiment”. Imagine shining a beam of light at a screen that contains two closely-spaced parallel slits. Some of the light passes through the slits, whereupon it strikes another screen.
Light can be thought of as a kind of wave, and when waves emerge from two slits like this they can interfere with each other. If their peaks coincide, they reinforce each other, whereas if a peak and a trough coincide, they cancel out. This wave interference is called diffraction, and it produces a series of alternating bright and dark stripes on the back screen, where the light waves are either reinforced or cancelled out.
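The fringe pattern described above can be sketched numerically: in the far-field limit, the brightness at each point on the screen goes as cos² of the phase difference between the two paths. The wavelength, slit spacing and screen distance below are illustrative assumptions, not values from the article:

```python
import math

# Illustrative (assumed) parameters for a two-slit setup
wavelength = 500e-9   # green light, metres
slit_gap   = 50e-6    # distance between the two slits, metres
distance   = 1.0      # slits-to-screen distance, metres

def intensity(x: float) -> float:
    """Relative brightness at position x on the screen (peak = 1)."""
    phase = math.pi * slit_gap * x / (wavelength * distance)
    return math.cos(phase) ** 2

fringe = wavelength * distance / slit_gap   # spacing between bright bands
print(intensity(0.0))          # bright: the two paths are equal in length
print(intensity(fringe / 2))   # dark: the paths differ by half a wavelength
```

Moving half a fringe spacing along the screen takes you from a peak (paths in step, waves reinforce) to a dark band (paths half a wavelength out of step, waves cancel), which is exactly the alternating stripe pattern the text describes.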
This interference pattern was recognized as a characteristic of wave behaviour over 200 years ago, well before quantum theory existed.
The double-slit experiment can also be performed with quantum particles like electrons, tiny charged particles that are components of atoms. In a counter-intuitive twist, these particles can behave like waves. That means they can undergo diffraction when a stream of them passes through the two slits, producing an interference pattern.
Now suppose that the quantum particles are sent through the slits one by one, and their arrival at the screen is likewise seen one by one. Now there is apparently nothing for each particle to interfere with along its route – yet nevertheless the pattern of particle impacts that builds up over time reveals interference bands.
The implication seems to be that each particle passes simultaneously through both slits and interferes with itself. This combination of “both paths at once” is known as a superposition state.
But here is the really odd thing.
The double-slit experiment (Credit: GIPhotoStock/Science Photo Library)
If we place a detector inside or just behind one slit, we can find out whether any given particle goes through it or not. In that case, however, the interference vanishes. Simply by observing a particle’s path – even if that observation should not disturb the particle’s motion – we change the outcome.
The physicist Pascual Jordan, who worked with quantum guru Niels Bohr in Copenhagen in the 1920s, put it like this: “observations not only disturb what has to be measured, they produce it… We compel [a quantum particle] to assume a definite position.” In other words, Jordan said, “we ourselves produce the results of measurements.”
If that is so, objective reality seems to go out of the window.
And it gets even stranger.
Particles can be in two states (Credit: Victor de Schwanberg/Science Photo Library)
If nature seems to be changing its behaviour depending on whether we “look” or not, we could try to trick it into showing its hand. To do so, we could measure which path a particle took through the double slits, but only after it has passed through them. By then, it ought to have “decided” whether to take one path or both.
An experiment for doing this was proposed in the 1970s by the American physicist John Wheeler, and this “delayed choice” experiment was performed in the following decade. It uses clever techniques to make measurements on the paths of quantum particles (generally, particles of light, called photons) after they should have chosen whether to take one path or a superposition of two.
It turns out that, just as Bohr confidently predicted, it makes no difference whether we delay the measurement or not. As long as we measure the photon’s path before its arrival at a detector is finally registered, we lose all interference.
It is as if nature “knows” not just if we are looking, but if we are planning to look.
Eugene Wigner (Credit: Emilio Segre Visual Archives/American Institute of Physics/Science Photo Library)
Whenever, in these experiments, we discover the path of a quantum particle, its cloud of possible routes “collapses” into a single well-defined state. What’s more, the delayed-choice experiment implies that the sheer act of noticing, rather than any physical disturbance caused by measuring, can cause the collapse. But does this mean that true collapse has only happened when the result of a measurement impinges on our consciousness?
That possibility was admitted in the 1930s by the Hungarian physicist Eugene Wigner. “It follows that the quantum description of objects is influenced by impressions entering my consciousness,” he wrote. “Solipsism may be logically consistent with present quantum mechanics.”
Wheeler even entertained the thought that the presence of living beings, which are capable of “noticing”, has transformed what was previously a multitude of possible quantum pasts into one concrete history. In this sense, Wheeler said, we become participants in the evolution of the Universe since its very beginning. In his words, we live in a “participatory universe.”
To this day, physicists do not agree on the best way to interpret these quantum experiments, and to some extent what you make of them is (at the moment) up to you. But one way or another, it is hard to avoid the implication that consciousness and quantum mechanics are somehow linked.
Beginning in the 1980s, the British physicist Roger Penrose suggested that the link might work in the other direction. Whether or not consciousness can affect quantum mechanics, he said, perhaps quantum mechanics is involved in consciousness.
Physicist and mathematician Roger Penrose (Credit: Max Alexander/Science Photo Library)
What if, Penrose asked, there are molecular structures in our brains that are able to alter their state in response to a single quantum event? Could these structures not then adopt a superposition state, just like the particles in the double-slit experiment? And might those quantum superpositions then show up in the ways neurons are triggered to communicate via electrical signals?
Maybe, says Penrose, our ability to sustain seemingly incompatible mental states is no quirk of perception, but a real quantum effect.
After all, the human brain seems able to handle cognitive processes that still far exceed the capabilities of digital computers. Perhaps we can even carry out computational tasks that are impossible on ordinary computers, which use classical digital logic.
Penrose first proposed that quantum effects feature in human cognition in his 1989 book The Emperor’s New Mind. The idea is called Orch-OR, which is short for “orchestrated objective reduction”. The phrase “objective reduction” means that, as Penrose believes, the collapse of quantum interference and superposition is a real, physical process, like the bursting of a bubble.
Orch-OR draws on Penrose’s suggestion that gravity is responsible for the fact that everyday objects, such as chairs and planets, do not display quantum effects. Penrose believes that quantum superpositions become impossible for objects much larger than atoms, because their gravitational effects would then force two incompatible versions of space-time to coexist.
Penrose developed this idea further with American physician Stuart Hameroff. In his 1994 book Shadows of the Mind, he suggested that the structures involved in this quantum cognition might be protein strands called microtubules. These are found in most of our cells, including the neurons in our brains. Penrose and Hameroff argue that vibrations of microtubules can adopt a quantum superposition.
But there is no evidence that such a thing is remotely feasible.
Microtubules inside a cell (Credit: Dennis Kunkel Microscopy/Science Photo Library)
It has been suggested that the idea of quantum superpositions in microtubules is supported by experiments described in 2013, but in fact those studies made no mention of quantum effects.
Besides, most researchers think that the Orch-OR idea was ruled out by a study published in 2000. Physicist Max Tegmark calculated that quantum superpositions of the molecules involved in neural signaling could not survive for even a fraction of the time needed for such a signal to get anywhere.
Other researchers have found evidence for quantum effects in living beings
Quantum effects such as superposition are easily destroyed, because of a process called decoherence. This is caused by the interactions of a quantum object with its surrounding environment, through which the “quantumness” leaks away.
Decoherence is expected to be extremely rapid in warm and wet environments like living cells.
Nerve signals are electrical pulses, caused by the passage of electrically-charged atoms across the walls of nerve cells. If one of these atoms was in a superposition and then collided with a neuron, Tegmark showed that the superposition should decay in less than one billion billionth of a second. It takes at least ten thousand trillion times as long for a neuron to discharge a signal.
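Taken at face value, the figures in the article make the mismatch easy to see. A minimal sketch using the article's own order-of-magnitude numbers (not Tegmark's actual calculation):

```python
# Translating the article's words into numbers (an order-of-magnitude
# reading, not Tegmark's actual calculation):
decoherence_bound = 1e-18   # "one billion billionth of a second"
slowdown_factor = 1e16      # "ten thousand trillion times as long"

neural_signal_time = decoherence_bound * slowdown_factor
print(f"implied neural signalling timescale: {neural_signal_time:.0e} s")
# ~1e-02 s, i.e. about ten milliseconds: a plausible neural timescale,
# some sixteen orders of magnitude longer than the superposition survives.
```

In other words, on these numbers any superposition would be long gone before a nerve signal had travelled anywhere at all.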
As a result, ideas about quantum effects in the brain are viewed with great skepticism.
And yet the idea that the brain might employ quantum tricks shows no sign of going away. For there is now another, quite different argument for it.
Could phosphorus sustain a quantum state? (Credit: Phil Degginger/Science Photo Library)
In a study published in 2015, physicist Matthew Fisher of the University of California at Santa Barbara argued that the brain might contain molecules capable of sustaining more robust quantum superpositions. Specifically, he thinks that the nuclei of phosphorus atoms may have this ability.
Phosphorus atoms are everywhere in living cells. They often take the form of phosphate ions, in which one phosphorus atom joins up with four oxygen atoms.
Such ions are the basic unit of energy within cells. Much of the cell’s energy is stored in molecules called ATP, which contain a string of three phosphate groups joined to an organic molecule. When one of the phosphates is cut free, energy is released for the cell to use.
Cells have molecular machinery for assembling phosphate ions into groups and cleaving them off again. Fisher suggested a scheme in which two phosphate ions might be placed in a special kind of superposition called an “entangled state”.
Phosphorus spins could resist decoherence for a day or so, even in living cells
The phosphorus nuclei have a quantum property called spin, which makes them rather like little magnets with poles pointing in particular directions. In an entangled state, the spin of one phosphorus nucleus depends on that of the other.
Put another way, entangled states are really superposition states involving more than one quantum particle.
Fisher says that the quantum-mechanical behaviour of these nuclear spins could plausibly resist decoherence on human timescales. He agrees with Tegmark that quantum vibrations, like those postulated by Penrose and Hameroff, will be strongly affected by their surroundings “and will decohere almost immediately”. But nuclear spins do not interact very strongly with their surroundings.
All the same, quantum behaviour in the phosphorus nuclear spins would have to be “protected” from decoherence.
Quantum particles can have different spins (Credit: Richard Kail/Science Photo Library)
This might happen, Fisher says, if the phosphorus atoms are incorporated into larger objects called “Posner molecules”. These are clusters of six phosphate ions, combined with nine calcium ions. There is some evidence that they can exist in living cells, though this is currently far from conclusive.
I decided… to explore how on earth the lithium ion could have such a dramatic effect in treating mental conditions
In Posner molecules, Fisher argues, phosphorus spins could resist decoherence for a day or so, even in living cells. That means they could influence how the brain works.
The idea is that Posner molecules can be swallowed up by neurons. Once inside, the Posner molecules could trigger the firing of a signal to another neuron, by falling apart and releasing their calcium ions.
Because of entanglement in Posner molecules, two such signals might thus in turn become entangled: a kind of quantum superposition of a “thought”, you might say. “If quantum processing with nuclear spins is in fact present in the brain, it would be an extremely common occurrence, happening pretty much all the time,” Fisher says.
He first got this idea when he started thinking about mental illness.
A capsule of lithium carbonate (Credit: Custom Medical Stock Photo/Science Photo Library)
“My entry into the biochemistry of the brain started when I decided three or four years ago to explore how on earth the lithium ion could have such a dramatic effect in treating mental conditions,” Fisher says.
At this point, Fisher’s proposal is no more than an intriguing idea
Lithium drugs are widely used for treating bipolar disorder. They work, but nobody really knows how.
“I wasn’t looking for a quantum explanation,” Fisher says. But then he came across a paper reporting that lithium drugs had different effects on the behaviour of rats, depending on what form – or “isotope” – of lithium was used.
On the face of it, that was extremely puzzling. In chemical terms, different isotopes behave almost identically, so if the lithium worked like a conventional drug the isotopes should all have had the same effect.
Nerve cells are linked at synapses (Credit: Sebastian Kaulitzki/Science Photo Library)
But Fisher realised that the nuclei of the atoms of different lithium isotopes can have different spins. This quantum property might affect the way lithium drugs act. For example, if lithium substitutes for calcium in Posner molecules, the lithium spins might “feel” and influence those of phosphorus atoms, and so interfere with their entanglement.
We do not even know what consciousness is
If this is true, it would help to explain why lithium can treat bipolar disorder.
At this point, Fisher’s proposal is no more than an intriguing idea. But there are several ways in which its plausibility can be tested, starting with the idea that phosphorus spins in Posner molecules can keep their quantum coherence for long periods. That is what Fisher aims to do next.
All the same, he is wary of being associated with the earlier ideas about “quantum consciousness”, which he sees as highly speculative at best.
Consciousness is a profound mystery (Credit: Sciepro/Science Photo Library)
Physicists are not terribly comfortable with finding themselves inside their theories. Most hope that consciousness and the brain can be kept out of quantum theory, and perhaps vice versa. After all, we do not even know what consciousness is, let alone have a theory to describe it.
We all know what red is like, but we have no way to communicate the sensation
As a result, physicists are often embarrassed to even mention the words “quantum” and “consciousness” in the same sentence.
But setting that aside, the idea has a long history. Ever since the “observer effect” and the mind first insinuated themselves into quantum theory in the early days, it has been devilishly hard to kick them out. A few researchers think we might never manage to do so.
In 2016, Adrian Kent of the University of Cambridge in the UK, one of the most respected “quantum philosophers”, speculated that consciousness might alter the behaviour of quantum systems in subtle but detectable ways.
We do not understand how thoughts work (Credit: Andrzej Wojcicki/Science Photo Library)
Kent is very cautious about this idea. “There is no compelling reason of principle to believe that quantum theory is the right theory in which to try to formulate a theory of consciousness, or that the problems of quantum theory must have anything to do with the problem of consciousness,” he admits.
Every line of thought on the relationship of consciousness to physics runs into deep trouble
But he says that it is hard to see how a description of consciousness based purely on pre-quantum physics can account for all the features it seems to have.
One particularly puzzling question is how our conscious minds can experience unique sensations, such as the colour red or the smell of frying bacon. With the exception of people with visual impairments, we all know what red is like, but we have no way to communicate the sensation and there is nothing in physics that tells us what it should be like.
Sensations like this are called “qualia”. We perceive them as unified properties of the outside world, but in fact they are products of our consciousness – and that is hard to explain. Indeed, in 1995 philosopher David Chalmers dubbed it “the hard problem” of consciousness.
How does our consciousness work? (Credit: Victor Habbick Visions/Science Photo Library)
“Every line of thought on the relationship of consciousness to physics runs into deep trouble,” says Kent.
This has prompted him to suggest that “we could make some progress on understanding the problem of the evolution of consciousness if we supposed that consciousness alters (albeit perhaps very slightly and subtly) quantum probabilities.”
“Quantum consciousness” is widely derided as mystical woo, but it just will not go away
In other words, the mind could genuinely affect the outcomes of measurements.
It does not, in this view, exactly determine “what is real”. But it might affect the chance that each of the possible actualities permitted by quantum mechanics is the one we do in fact observe, in a way that quantum theory itself cannot predict. Kent says that we might look for such effects experimentally.
He even bravely estimates the chances of finding them. “I would give credence of perhaps 15% that something specifically to do with consciousness causes deviations from quantum theory, with perhaps 3% credence that this will be experimentally detectable within the next 50 years,” he says.
If that happens, it would transform our ideas about both physics and the mind. That seems a chance worth exploring.
Physicists are putting themselves out of a job, using artificial intelligence to run a complex experiment. The experiment created an extremely cold gas trapped in a laser beam, known as a Bose-Einstein condensate, replicating the experiment that won the 2001 Nobel Prize.
The experiment, featuring the small red glow of a BEC trapped in infrared laser beams. Credit: Stuart Hay, ANU
Physicists are putting themselves out of a job, using artificial intelligence to run a complex experiment.
The experiment, developed by physicists from The Australian National University (ANU) and UNSW ADFA, created an extremely cold gas trapped in a laser beam, known as a Bose-Einstein condensate, replicating the experiment that won the 2001 Nobel Prize.
“I didn’t expect the machine could learn to do the experiment itself, from scratch, in under an hour,” said co-lead researcher Paul Wigley from the ANU Research School of Physics and Engineering.
“A simple computer program would have taken longer than the age of the Universe to run through all the combinations and work this out.”
Bose-Einstein condensates are some of the coldest places in the Universe, far colder than outer space, typically less than a billionth of a degree above absolute zero.
They could be used for mineral exploration or navigation systems as they are extremely sensitive to external disturbances, which allows them to make very precise measurements such as tiny changes in the Earth’s magnetic field or gravity.
The artificial intelligence system’s ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA.
“You could make a working device to measure gravity that you could take in the back of a car, and the artificial intelligence would recalibrate and fix itself no matter what,” he said.
“It’s cheaper than taking a physicist everywhere with you.”
The team cooled the gas to around 1 microkelvin, and then handed control of the three laser beams over to the artificial intelligence to cool the trapped gas down to nanokelvin.
Researchers were surprised by the methods the system came up with to ramp down the power of the lasers.
“It did things a person wouldn’t guess, such as changing one laser’s power up and down, and compensating with another,” said Mr Wigley.
“It may be able to come up with complicated ways humans haven’t thought of to get experiments colder and make measurements more precise.”
The new technique will lead to bigger and better experiments, said Dr Hush.
“Next we plan to employ the artificial intelligence to build an even larger Bose-Einstein condensate faster than we’ve ever seen before,” he said.
The research is published in the Nature group journal Scientific Reports.
P. B. Wigley, P. J. Everitt, A. van den Hengel, J. W. Bastian, M. A. Sooriyabandara, G. D. McDonald, K. S. Hardman, C. D. Quinlivan, P. Manju, C. C. N. Kuhn, I. R. Petersen, A. N. Luiten, J. J. Hope, N. P. Robins, M. R. Hush. Fast machine-learning online optimization of ultra-cold-atom experiments. Scientific Reports, 2016; 6: 25890 DOI: 10.1038/srep25890
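The “online optimization” loop the paper describes (propose settings, run the experiment, observe the result, propose again) can be caricatured in a few lines. This is a toy sketch only: the real work used a Gaussian-process machine learner and real laser ramps, whereas here a simple stochastic hill-climber tunes a made-up quadratic cost standing in for the measured temperature. All names and values below are illustrative assumptions.

```python
import random

def run_experiment(params):
    """Hypothetical stand-in for one experimental run: returns a 'temperature'."""
    target = [0.3, 0.6, 0.9]  # unknown optimal ramp settings (toy values)
    return sum((p - t) ** 2 for p, t in zip(params, target))

def online_optimise(n_runs=500, seed=1):
    """Propose settings, observe cost, keep improvements, narrow the search."""
    rng = random.Random(seed)
    best = [rng.random() for _ in range(3)]   # three laser-ramp parameters
    best_cost = run_experiment(best)
    step = 0.5
    for _ in range(n_runs):
        # perturb the current best settings, clipped to the allowed range
        trial = [min(1.0, max(0.0, p + rng.uniform(-step, step))) for p in best]
        cost = run_experiment(trial)
        if cost < best_cost:                  # keep settings that cool the gas further
            best, best_cost = trial, cost
        step *= 0.99                          # narrow the search as the run proceeds
    return best, best_cost

best, cost = online_optimise()
print(best, cost)
```

The real learner is far cleverer than this (it builds a statistical model of the cost surface), but the outer loop, in which the algorithm rather than a physicist decides the next experimental run, has the same shape.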
The Large Hadron Collider uses superconducting magnets to smash sub-atomic particles together at enormous energies. CERN
A small mammal has sabotaged the world’s most powerful scientific instrument.
The Large Hadron Collider, a 17-mile superconducting machine designed to smash protons together at close to the speed of light, went offline overnight. Engineers investigating the mishap found the charred remains of a furry creature near a gnawed-through power cable.
A small mammal, possibly a weasel, gnawed through a power cable at the Large Hadron Collider. Ashley Buttle/Flickr
“We had electrical problems, and we are pretty sure this was caused by a small animal,” says Arnaud Marsollier, head of press for CERN, the organization that runs the $7 billion particle collider in Switzerland. Although they had not conducted a thorough analysis of the remains, Marsollier says they believe the creature was “a weasel, probably.” (Update: An official briefing document from CERN indicates the creature may have been a marten.)
The shutdown comes as the LHC was preparing to collect new data on the Higgs Boson, a fundamental particle it discovered in 2012. The Higgs is believed to endow other particles with mass, and it is considered to be a cornerstone of the modern theory of particle physics.
Researchers have seen some hints in recent data that other, yet-undiscovered particles might also be generated inside the LHC. If those other particles exist, they could revolutionize researchers’ understanding of everything from the laws of gravity to quantum mechanics.
Unfortunately, Marsollier says, scientists will have to wait while workers bring the machine back online. Repairs will take a few days, but getting the machine fully ready to smash might take another week or two. “It may be mid-May,” he says.
These sorts of mishaps are not unheard of, says Marsollier. The LHC is located outside of Geneva. “We are in the countryside, and of course we have wild animals everywhere.” There have been previous incidents, including one in 2009, when a bird is believed to have dropped a baguette onto critical electrical systems.
Nor are the problems exclusive to the LHC: In 2006, raccoons conducted a “coordinated” attack on a particle accelerator in Illinois.
It is unclear whether the animals are trying to stop humanity from unlocking the secrets of the universe.
An illustration of what the craft might look like – Divulgação
NEW YORK, USA – British physicist Stephen Hawking and Russian billionaire Yuri Milner announced on Tuesday a US$100 million project to send a spacecraft to the star system nearest Earth, Alpha Centauri, which lies 4.37 light-years away. One of the main goals is to find habitable planets outside our solar system.
The idea behind the “Breakthrough Starshot” project, whose board comprises Milner and Hawking along with Facebook CEO Mark Zuckerberg, is to send a tiny craft, or “nanocraft”, on a 20-year journey, reaching, they say, one fifth of the speed of light. The programme will test the know-how and technologies the project requires.
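The quoted figures are easy to sanity-check: at one fifth of the speed of light, a 4.37-light-year trip takes a little over two decades (simple arithmetic, ignoring acceleration time and relativistic effects):

```python
# Consistency check of the numbers in the article (simple arithmetic only):
distance_ly = 4.37   # distance to Alpha Centauri, in light-years
speed_c = 0.2        # one fifth of the speed of light

travel_years = distance_ly / speed_c
print(f"{travel_years:.1f} years")  # ~21.9 years, consistent with the quoted ~20-year journey
```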
From left to right: investor Yuri Milner, Stephen Hawking, and physicists Freeman Dyson and Avi Loeb – LUCAS JACKSON / REUTERS
The programme envisions an automated craft weighing little more than a sheet of paper, propelled by a light sail not much larger than a child’s kite but only a few hundred atoms thick. Whereas an ordinary sail is pushed by the wind, a light sail for use in space is pushed by the radiation emitted by the Sun.
The initial idea is to use thousands of such craft, which would get a “push” from an Earth-based laser emitting still more radiation to aid propulsion. The project faces many challenges, among them combining many emitters into one “great laser cannon”, building the sails with nanotechnology, and packing all of the craft’s components into a tiny silicon package.
“Human history is made of great leaps. Today we are preparing the next great leap, to the stars,” said Yuri Milner in London. Hawking added: “The Earth is a wonderful place, but it might not last forever. Sooner or later, we must look to the stars. This project is an important first step on that journey.”
If the existence of a new particle is confirmed, experts believe it could open a door to an “unknown and unexplored” world (Reuters)
The Large Hadron Collider (LHC) – a gigantic particle accelerator straddling the border between France and Switzerland – has stirred strong emotions among theoretical physicists, a community that is usually very cautious about new discoveries.
The reason: small “bumps” detected by the Large Hadron Collider. These bumps, seen in the data produced by colliding accelerated protons, could signal the existence of a new, unknown particle six times heavier than the Higgs boson (the so-called “God particle”).
And that, for theoretical physicist Gian Giudice, would mean “a door to an unknown and unexplored world”.
“It is not the confirmation of an established theory,” the researcher, who also works at the European Organization for Nuclear Research (CERN), told New Scientist magazine.
The scientists’ excitement began when, in December 2015, the two experiments working independently at the LHC recorded the same data after running the collider at close to full capacity (twice the energy needed to detect the Higgs boson).
The recorded data cannot be explained by what is known today of the laws of physics.
Since the announcement of these new data, around 280 papers have been published trying to explain what the signal might be – and none of them has ruled out the theory that it is a new particle.
Some scientists suggest the particle could be a heavy cousin of the Higgs boson, which was discovered in 2012 and explains why matter has mass.
Others have hypothesised that the Higgs boson is made of smaller particles. And there is also a group who think these “bumps” could come from a graviton, the particle charged with transmitting the force of gravity.
If it really is a graviton, the discovery would be a milestone, because until now it has not been possible to reconcile gravity with the Standard Model of particle physics.
For the experts, the fact that no one has managed to refute what the physicists detected is a sign that we may be close to discovering something extraordinary.
“If it proves true, it will be a ten on the Richter scale of particle physics,” specialist John Ellis of King’s College London told the British newspaper The Guardian. He is also a former head of the theory division at CERN. “It would be the tip of an iceberg of new forms of matter.”
Even with all of Ellis’s excitement, scientists do not want to get ahead of themselves.
This new particle would be six times heavier than the Higgs boson (AFP)
When the announcement was first made, some thought it was all just an unfortunate coincidence arising from the way the LHC works.
Two beams of protons are accelerated to nearly the speed of light. They travel in opposite directions and collide at four points, creating different patterns in the data.
It is these differences – bumps or perturbations in the statistics – that reveal the presence of particles.
But we are talking about billions of perturbations recorded in each experiment, which makes a statistical fluke likely.
However, the fact that both experiments detected the same bump is what makes scientists pay closer attention.
The Large Hadron Collider returns to operation this week
Moreover, scientists from the CMS and ATLAS experiments recently presented new evidence after refining and recalibrating their results.
And neither team could attribute the detected anomaly to a statistical error.
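Why a shared bump matters can be illustrated with a toy significance calculation (made-up counts, not the actual LHC analysis): with a large Poisson-distributed background, an excess that looks dramatic can still amount to only a modest number of standard deviations, which is exactly why one experiment’s bump is treated with suspicion until another sees it too.

```python
import math
from statistics import NormalDist

# Toy illustration of bump significance (invented counts, not LHC data):
# how unlikely is it to see n_obs or more events from background alone?

def poisson_tail(n_obs, background):
    """P(N >= n_obs) for a Poisson-distributed background."""
    # sum the terms of P(N < n_obs) in log space for numerical stability
    log_terms = [k * math.log(background) - background - math.lgamma(k + 1)
                 for k in range(n_obs)]
    cdf = sum(math.exp(lt) for lt in log_terms)   # P(N < n_obs)
    return 1.0 - cdf

p = poisson_tail(130, 100.0)            # made-up counts: 130 seen, 100 expected
z = NormalDist().inv_cdf(1.0 - p)       # convert the p-value to a significance in sigma
print(f"p = {p:.4f}, significance ~ {z:.1f} sigma")
```

A roughly three-sigma fluctuation like this toy one happens by chance often enough that particle physicists demand five sigma before claiming a discovery.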
That is good news for the experts who believe this discovery is the beginning of something very big.
The bad news is that neither experiment has managed to explain what this mysterious particle is. More experiments are needed before the event can qualify as a “discovery”.
The good news is that the end of the story is not far off.
This week, the Large Hadron Collider will emerge from its hibernation period and go back to firing protons in opposite directions.
One hypothesis is that the new particle is related to gravity (Thinkstock)
Over the coming months the collider will deliver twice as much data as scientists have so far.
And it is estimated that by August they may know what this promising new particle is.
New study describes what could be the 18th known form of ice
February 12, 2016
University of Nebraska-Lincoln
A research team has predicted a new molecular form of ice with a record-low density. If the ice can be synthesized, it would become the 18th known crystalline form of water and the first discovered in the US since before World War II.
This illustration shows the ice’s molecular configuration. Credit: Courtesy photo/Yingying Huang and Chongqin Zhu
Amid the season known for transforming Nebraska into an outdoor ice rink, a University of Nebraska-Lincoln-led research team has predicted a new molecular form of the slippery stuff that even Mother Nature has never borne.
The proposed ice, which the researchers describe in a Feb. 12, 2016 study in the journal Science Advances, would be about 25 percent less dense than a record-low form synthesized by a European team in 2014.
If the ice can be synthesized, it would become the 18th known crystalline form of water — and the first discovered in the United States since before World War II.
“We performed a lot of calculations (focused on) whether this is not just a low-density ice, but perhaps the lowest-density ice to date,” said Xiao Cheng Zeng, an Ameritas University Professor of chemistry who co-authored the study. “A lot of people are interested in predicting a new ice structure beyond the state of the art.”
This newest finding represents the latest in a long line of ice-related research from Zeng, who previously discovered a two-dimensional “Nebraska Ice” that contracts rather than expands when frozen under certain conditions.
Zeng’s newest study, which was co-led by Dalian University of Technology’s Jijun Zhao, used a computational algorithm and molecular simulation to determine the ranges of extreme pressure and temperature under which water would freeze into the predicted configuration. That configuration takes the form of a clathrate — essentially a series of water molecules that form an interlocking cage-like structure.
It was long believed that these cages could maintain their structural integrity only when housing “guest molecules” such as methane, which fills an abundance of natural clathrates found on the ocean floor and in permafrost. Like the European team before them, however, Zeng and his colleagues have calculated that their clathrate would retain its stability even after its guest molecules have been evicted.
Actually synthesizing the clathrate will take some effort. Based on the team’s calculations, the new ice will form only when water molecules are placed inside an enclosed space that is subjected to ultra-high, outwardly expanding pressure.
Just how much? At minus-10 Fahrenheit, the enclosure would need to be surrounded by expansion pressure about four times greater than what is found at the Pacific Ocean’s deepest trench. At minus-460 Fahrenheit, that pressure would need to be even greater — roughly the same amount experienced by a person shouldering 300 jumbo jets at sea level.
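For readers who prefer SI units, the quoted conditions convert directly; the trench pressure below is an assumed round figure (roughly 110 MPa at the Challenger Deep), not a number taken from the study:

```python
# Converting the article's figures to SI units (trench pressure is an
# assumed round value, ~110 MPa at the Challenger Deep):
def fahrenheit_to_kelvin(f):
    return (f - 32.0) * 5.0 / 9.0 + 273.15

t1 = fahrenheit_to_kelvin(-10)    # ~249.8 K
t2 = fahrenheit_to_kelvin(-460)   # ~0 K: minus-460 F is essentially absolute zero
trench_pressure_mpa = 110.0       # assumed deepest-trench pressure
required_tension_mpa = 4 * trench_pressure_mpa   # "about four times greater"
print(t1, t2, required_tension_mpa)
```

Note the pressure here is outwardly expanding, i.e. a negative pressure of several hundred megapascals, which is what makes the clathrate so hard to synthesize.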
The guest molecules would then need to be extracted via a vacuuming process pioneered by the European team, which Zeng credited with inspiring his own group to conduct the new study.
Yet Zeng said the wonders of ordinary ice — the type that has covered Earth for billions of years — have also motivated his team’s research.
“Water and ice are forever interesting because they have such relevance to human beings and life,” Zeng said. “If you think about it, the low density of natural ice protects the water below it; if it were denser, water would freeze from the bottom up, and no living species could survive. So Mother Nature’s combination is just so perfect.”
If confirmed, the new form of ice will be called “Ice XVII,” a naming quirk that resulted from scientists terming the first two identified forms “Ice I.”
Zeng and Zhao co-authored the Science Advances study with UNL postdoctoral researcher Chongqin Zhu; Yingying Huang, a visiting research fellow from the Dalian University of Technology; and researchers from the Chinese Academy of Sciences and the University of Science and Technology of China.
The team’s research was funded in part by the National Science Foundation and conducted with the assistance of UNL’s Holland Computing Center.
Y. Huang, C. Zhu, L. Wang, X. Cao, Y. Su, X. Jiang, S. Meng, J. Zhao, X. C. Zeng. A new phase diagram of water under negative pressure: The rise of the lowest-density clathrate s-III. Science Advances, 2016; 2 (2): e1501010 DOI: 10.1126/sciadv.1501010
Pole dancing water molecules: Researchers have seen this remarkable phenomenon on the surface of an important technological material
Date: December 21, 2015
Source: Vienna University of Technology
Summary: From pole dancing to square dance: Water molecules on perovskite surfaces show interesting patterns of motion. Surface scientists have now managed to image the dance of the atoms.
This is a visualization of the dance of the atoms on a crystal surface. Credit: TU Wien
Perovskites are materials used in batteries, fuel cells, and electronic components, and occur in nature as minerals. Despite their important role in technology, little is known about the reactivity of their surfaces. Professor Ulrike Diebold’s team at TU Wien (Vienna) has answered a long-standing question using scanning tunnelling microscopes and computer simulations: How do water molecules behave when they attach to a perovskite surface? Normally only the outermost atoms at the surface influence this behaviour, but on perovskites the deeper layers are important, too. The results have been published in the journal Nature Materials.
Perovskite dissociates water molecules
“We studied strontium ruthenate — a typical perovskite material,” says Ulrike Diebold. It has a crystalline structure containing oxygen, strontium and ruthenium. When the crystal is broken apart, the outermost layer consists of only strontium and oxygen atoms; the ruthenium is located underneath, surrounded by oxygen atoms.
A water molecule that lands on this surface splits into two parts: A hydrogen atom is stripped off the molecule and attaches to an oxygen atom on the crystal’s surface. This process is known as dissociation. However, although they are physically separated, the pieces continue to interact through a weak “hydrogen bond.”
It is this interaction that causes a strange effect: The OH group cannot move freely, and circles the hydrogen atom like a dancer spinning on a pole. Although this is the first observation of such behaviour, it was not entirely unexpected: “This effect was predicted a few years ago based on theoretical calculations, and we have finally confirmed it with our experiments” said Diebold.
Dancing requires space
When more water is put onto the surface, the stage becomes too crowded and the spinning stops. “The OH group can only move freely in a circle if none of the neighbouring spaces are occupied,” explains Florian Mittendorfer, who performed the calculations together with PhD student Wernfried Mayr-Schmölzer. At first, when two water molecules sit in neighbouring sites, the spinning OH groups collide and get stuck together, forming pairs. Then, as the amount of water is increased, the pairs stick together and form long chains. Eventually, a water molecule can no longer find the pair of free sites it needs to split up, and attaches instead as a complete molecule.
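The “dancing requires space” rule has a simple statistical consequence that is easy to illustrate: if surface sites were occupied independently at coverage theta, an adsorbate could spin only when both of its neighbours were empty, which happens with probability (1 - theta)**2. A toy Monte Carlo on a ring of sites (an illustration of the crowding effect, not the paper’s model):

```python
import random

# Toy check of the "dancing requires space" rule: sites on a ring are
# occupied independently with probability theta; an occupied site can
# 'spin' only when both neighbours are empty, which should happen with
# probability (1 - theta)**2. (Illustration only, not the paper's model.)

def free_spinner_fraction(theta, n_sites=200_000, seed=7):
    rng = random.Random(seed)
    occ = [rng.random() < theta for _ in range(n_sites)]
    occupied = spinning = 0
    for i in range(n_sites):
        if occ[i]:
            occupied += 1
            # ring geometry: occ[-1] wraps around to the last site
            if not occ[i - 1] and not occ[(i + 1) % n_sites]:
                spinning += 1
    return spinning / occupied

theta = 0.2
print(free_spinner_fraction(theta), (1 - theta) ** 2)  # both ~0.64
```

Even this crude picture reproduces the qualitative trend in the experiments: as coverage rises, the fraction of freely spinning OH groups falls off quickly.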
The new methods that have been developed and applied by the TU Wien research team have made significant advances in surface research. Whereas researchers were previously reliant on indirect measurements, they can now — with the necessary expertise — directly map and observe the behaviour of individual atoms on the surface. This opens up new possibilities for modern materials research, for example for developing and improving catalysts.
Daniel Halwidl, Bernhard Stöger, Wernfried Mayr-Schmölzer, Jiri Pavelec, David Fobes, Jin Peng, Zhiqiang Mao, Gareth S. Parkinson, Michael Schmid, Florian Mittendorfer, Josef Redinger, Ulrike Diebold. Adsorption of water at the SrO surface of ruthenates. Nature Materials, 2015; DOI: 10.1038/nmat4512
Climate scientists are tiring of governance that does not lead to action. But democracy must not be weakened in the fight against global warming, warns Nico Stehr.
Illustration by David Parkins
There are many threats to democracy in the modern era. Not least is the risk posed by the widespread public feeling that politicians are not listening. Such discontent can be seen in the political far right: the Tea Party movement in the United States, the UK Independence Party, the Pegida (Patriotic Europeans Against the Islamization of the West) demonstrators in Germany, and the National Front in France.
More surprisingly, a similar impatience with the political elite is now also present in the scientific community. Researchers are increasingly concerned that no one is listening to their diagnosis of the dangers of human-induced climate change and its long-lasting consequences, despite the robust scientific consensus. As governments continue to fail to take appropriate political action, democracy begins to look to some like an inconvenient form of governance. There is a tendency to want to take decisions out of the hands of politicians and the public, and, given the ‘exceptional circumstances’, put the decisions into the hands of scientists themselves.
This scientific disenchantment with democracy has slipped under the radar of many social scientists and commentators. Attention is urgently needed: the solution to the intractable ‘wicked problem’ of global warming is to enhance democracy, not jettison it.
Voices of discontent
Democratic nations seem to have failed us in the climate arena so far. The past decade’s climate summits in Copenhagen, Cancun, Durban and Warsaw were political washouts. Expectations for the next meeting in Paris this December are low.
Academics increasingly point to democracy as a reason for failure. NASA climate researcher James Hansen was quoted in 2009 in The Guardian as saying: “the democratic process doesn’t quite seem to be working”1. In a special issue of the journal Environmental Politics in 2010, political scientist Mark Beeson argued2 that forms of ‘good’ authoritarianism “may become not only justifiable, but essential for the survival of humanity in anything approaching a civilised form”. The title of an opinion piece published earlier this year in The Conversation, an online magazine funded by universities, sums up the issue: ‘Hidden crisis of liberal democracy creates climate change paralysis’ (see go.nature.com/pqgysr).
The depiction of contemporary democracies as ill-equipped to deal with climate change comes from a range of considerations. These include a deep-seated pessimism about the psychological make-up of humans; the disinclination of people to mobilize on issues that seem far removed; and the presumed lack of intellectual competence of people to grasp complex issues. On top of these there is the presumed scientific illiteracy of most politicians and the electorate; the inability of governments locked into short-term voting cycles to address long-term problems; the influence of vested interests on political agendas; the addiction to fossil fuels; and the feeling among the climate-science community that its message falls on the deaf ears of politicians.
“It is dangerous to blindly believe that science and scientists alone can tell us what to do.”
Such views can be heard from the highest ranks of climate science. Hans Joachim Schellnhuber, founding director of the Potsdam Institute for Climate Impact Research and chair of the German Advisory Council on Global Change, said of the inaction in a 2011 interview with German newspaper Der Spiegel: “comfort and ignorance are the biggest flaws of human character. This is a potentially deadly mix”.
What, then, is the alternative? The solution hinted at by many people leans towards a technocracy, in which decisions are made by those with technical knowledge. This can be seen in a shift in the statements of some co-authors of Intergovernmental Panel on Climate Change reports, who are moving away from a purely advisory role towards policy prescription (see, for example, ref. 3).
We must be careful what we wish for. Nations that have followed the path of ‘authoritarian modernization’, such as China and Russia, cannot claim to have a record of environmental accomplishments. In the past two or three years, China’s system has made it a global leader in renewables (it accounts for more than one-quarter of the planet’s investment in such energies4). Despite this, it is struggling to meet ambitious environmental targets and will continue to lead the world for some time in greenhouse-gas emissions. As Chinese citizens become wealthier and more educated, they will surely push for more democratic inclusion in environmental policymaking.
Broad-based support for environmental concerns and subsequent regulations came about in open democratic argument on the value of nature for humanity. Democracies learn from mistakes; autocracies lack flexibility and adaptability5. Democratic nations have forged the most effective international agreements, such as the Montreal Protocol against ozone-depleting substances.
Impatient scientists often privilege hegemonic players such as world powers, states, transnational organizations, and multinational corporations. They tend to prefer sweeping policies of global mitigation over messier approaches of local adaptation; for them, global knowledge triumphs over local know-how. But societal trends are going in the opposite direction. The ability of large institutions to impose their will on citizens is declining. People are mobilizing around local concerns and efforts6.
The pessimistic assessment of the ability of democratic governance to cope with and control exceptional circumstances is linked to an optimistic assessment of the potential of large-scale social and economic planning. The uncertainties of social, political and economic events are treated as minor obstacles that can be overcome easily by implementing policies that experts prescribe. But humanity’s capacity to plan ahead effectively is limited. The centralized social and economic planning concept, widely discussed decades ago, has rightly fallen into disrepute7.
The argument for an authoritarian political approach concentrates on a single effect that governance ought to achieve: a reduction of greenhouse-gas emissions. By focusing on that goal, rather than on the economic and social conditions that go hand-in-hand with it, climate policies are reduced to scientific or technical issues. But these are not the sole considerations. Environmental concerns are tightly entangled with other political, economic and cultural issues that both broaden the questions at hand and open up different ways of approaching them. Scientific knowledge is neither immediately performative nor persuasive.
There is but one political system that is able to rationally and legitimately cope with the divergent political interests affected by climate change and that is democracy. Only a democratic system can sensitively attend to the conflicts within and among nations and communities, decide between different policies, and generally advance the aspirations of different segments of the population. The ultimate and urgent challenge is that of enhancing democracy, for example by reducing social inequality8.
If not, the threat to civilization will be much more than just changes to our physical environment. The erosion of democracy is an unnecessary suppression of social complexity and rights.
The philosopher Friedrich Hayek, who led the debate against social and economic planning in the mid-twentieth century9, noted a paradox that applies today. As science advances, it tends to strengthen the idea that we should “aim at more deliberate and comprehensive control of all human activities”. Hayek pessimistically added: “It is for this reason that those intoxicated by the advance of knowledge so often become the enemies of freedom”10. We should heed his warning. It is dangerous to blindly believe that science and scientists alone can tell us what to do.
Nature 525, 449–450 (24 September 2015) doi:10.1038/525449a
Posted on September 24, 2015 by …and Then There’s Physics
I thought I would briefly discuss this Nature comment called Climate policy: Democracy is not an inconvenience. I initially read it and tweeted it, thinking “yes, democracy is important and not an inconvenience”. I then read it again and thought, “hold on, is this a massive strawman?”
The main premise seems to be based on:
Researchers are increasingly concerned that no one is listening to their diagnosis of the dangers of human-induced climate change and its long-lasting consequences, despite the robust scientific consensus. As governments continue to fail to take appropriate political action, democracy begins to look to some like an inconvenient form of governance. There is a tendency to want to take decisions out of the hands of politicians and the public, and, given the ‘exceptional circumstances’, put the decisions into the hands of scientists themselves.
Really? I realise that there are extreme elements everywhere, but I don’t think I’ve seen any scientists actually argue that we should subvert democracy. I’ve certainly seen people suggest that our democracies are not suited to solving this type of global problem, but this – as far as I can tell – is typically said in the context of democracy being the worst form of government, apart from all others. Also, it is often in reference to the influence of the media, vested interests, and short-term political thinking, rather than an argument against democracy itself.
In fact, what I think most scientists are frustrated with (me, certainly) is a sense that we have all this evidence, it is very strong, and yet it appears to be largely being ignored or dismissed. I think most scientists recognise that the evidence alone doesn’t tell us what should be done, and that there are other important factors that will – and should – influence decision making. The argument is more to do with robust, evidence-based policy-making, not an implicit suggestion that we should undermine democracy and put the decisions into the hands of scientists themselves. Not only would putting decision making into the hands of scientists be an exceptionally poor idea (I should know, I am one and work with many others), but I’d also like to see an example of someone making this argument, because I really can’t think of one.
Maybe the most ironic thing about this article is that it almost seems to be doing what it is criticising others for apparently doing. It is essentially trying to delegitimise some by suggesting that their concerns are an attempt to subvert democracy. Well, as far as I’m concerned, free speech and the right to criticise policy makers is a fundamental part of our modern democracies. Suggesting that something that is fundamentally democratic is an attempt to undermine democracy, seems rather confused; maybe intentionally. Of course, we live in democracies where such arguments are allowed, even if they don’t make much sense.
Team selects 61 plates from the Observatório Nacional documenting the 1919 eclipse. The observation in Sobral (CE) helped confirm Albert Einstein's conclusions
A team from the Observatório Nacional (ON/MCTI) has surveyed the photographic plates produced by the expedition that observed the total solar eclipse in the city of Sobral (CE) in 1919 and contributed to the confirmation of Albert Einstein's theory of general relativity.
Composed of astronomer Carlos H. Veiga, librarians Katia T. dos Santos and M. Luiza Dias, and analyst Renaldo N. da S. Junior, the group examined 900 photographic plates from the Observatory's library collection. For their scientific importance, the 61 plates from the observations of the famous eclipse were selected; they still faithfully preserve the image of the new moon perfectly covering the Sun, recorded on a very special day for science.
From the second half of the nineteenth century onward, photographic images were recorded on glass plates. This medium, coated with an emulsion of light-sensitive silver salts, was used not only to record everyday life but also by the astronomical community, up to the last decade of the twentieth century, to observe celestial bodies. Because glass has a low coefficient of thermal expansion, the plates guaranteed the precision and reliability of astronomical measurements over time.
Photography brought a great advance to astronomy and to the development of astrophysics, serving as a detector and allowing observational data on distant structures, such as galaxies, to be compared over time. In 1873 a systematic programme of observations of sunspot activity, eclipses and the solar corona was begun.
A morning that changed science
On the morning of May 29, 1919, a celestial phenomenon would, for a few minutes, turn day into night in a quiet town in the Brazilian Northeast. Those few minutes had to be used to the fullest: they were the opportunity to experimentally confirm a new scientific prediction made by a theory conceived by Einstein (1879-1955), the German-born physicist: general relativity, which can be understood as a theory that explains gravitational phenomena.
Sobral, the town in the state of Ceará, would be the stage for confirming an effect predicted by general relativity: the deflection of light, in which a beam of light (in this case, coming from a star) should have its path bent (or deviated) when passing close to a strong gravitational field (here, the one generated by the Sun).
This deflection makes the observed star appear in a position slightly different from its true position. The astronomers' goal was to measure the small angle between these two positions.
On that day a total solar eclipse would occur. Calculations predicted that there should be at least one star in the background sky whose light would pass close to the solar limb. With this configuration and good weather conditions, there would be a good chance of confirming the new theory.
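The effect the expedition set out to measure can be checked on the back of an envelope. General relativity predicts a deflection of 4GM/(c²R) for a ray grazing the solar limb, twice the Newtonian value; the sketch below (rounded constant values, my own illustration rather than the expedition's analysis) recovers the famous prediction:

```python
import math

# Physical constants (SI units, rounded)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius, m

# General relativity predicts a deflection of 4GM/(c^2 R) for light
# grazing the solar limb -- twice the Newtonian value of 2GM/(c^2 R).
deflection_rad = 4 * G * M_sun / (c**2 * R_sun)
deflection_arcsec = math.degrees(deflection_rad) * 3600

print(f"GR prediction:        {deflection_arcsec:.2f} arcseconds")
print(f"Newtonian prediction: {deflection_arcsec / 2:.2f} arcseconds")
```

The observers' task was thus to decide, from star positions measured on the plates, between roughly 0.87 arcseconds (Newtonian) and 1.75 arcseconds (Einstein).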
Read more and see other images of the historic event, along with bibliographic references.
Scalable 3-D silicon chip architecture based on single atom quantum bits provides a blueprint to build operational quantum computers
October 30, 2015
University of New South Wales
Researchers have designed a full-scale architecture for a quantum computer in silicon. The new concept provides a pathway for building an operational quantum computer with error correction.
This picture shows, from left to right, Dr Matthew House, Sam Hile (seated), Scientia Professor Sven Rogge and Scientia Professor Michelle Simmons of the ARC Centre of Excellence for Quantum Computation and Communication Technology at UNSW. Credit: Deb Smith, UNSW Australia
Australian scientists have designed a 3D silicon chip architecture based on single atom quantum bits, which is compatible with atomic-scale fabrication techniques — providing a blueprint to build a large-scale quantum computer.
Scientists and engineers from the Australian Research Council Centre of Excellence for Quantum Computation and Communication Technology (CQC2T), headquartered at the University of New South Wales (UNSW), are leading the world in the race to develop a scalable quantum computer in silicon — a material well-understood and favoured by the trillion-dollar computing and microelectronics industry.
Teams led by UNSW researchers have already demonstrated a unique fabrication strategy for realising atomic-scale devices and have developed the world’s most efficient quantum bits in silicon using either the electron or nuclear spins of single phosphorus atoms. Quantum bits — or qubits — are the fundamental data components of quantum computers.
One of the final hurdles to scaling up to an operational quantum computer is the architecture. Here it is necessary to figure out how to precisely control multiple qubits in parallel, across an array of many thousands of qubits, and constantly correct for ‘quantum’ errors in calculations.
Now, the CQC2T collaboration, involving theoretical and experimental researchers from the University of Melbourne and UNSW, has designed such a device. In a study published today in Science Advances, the CQC2T team describes a new silicon architecture, which uses atomic-scale qubits aligned to control lines — which are essentially very narrow wires — inside a 3D design.
“We have demonstrated we can build devices in silicon at the atomic-scale and have been working towards a full-scale architecture where we can perform error correction protocols — providing a practical system that can be scaled up to larger numbers of qubits,” says UNSW Scientia Professor Michelle Simmons, study co-author and Director of the CQC2T.
“The great thing about this work, and architecture, is that it gives us an endpoint. We now know exactly what we need to do in the international race to get there.”
In the team’s conceptual design, they have moved from a one-dimensional array of qubits, positioned along a single line, to a two-dimensional array, positioned on a plane that is far more tolerant to errors. This qubit layer is “sandwiched” in a three-dimensional architecture, between two layers of wires arranged in a grid.
By applying voltages to a sub-set of these wires, multiple qubits can be controlled in parallel, performing a series of operations using far fewer controls. Importantly, with their design, they can perform the 2D surface code error correction protocols in which any computational errors that creep into the calculation can be corrected faster than they occur.
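The economy of the crossed control lines can be seen in a toy sketch (my own illustration of crossbar-style addressing, not the paper's actual protocol; the function name and coordinate scheme are invented): n row wires and m column wires are enough to reach any of the n × m qubits at their intersections.

```python
def addressed_qubits(active_rows, active_cols):
    """Toy crossbar addressing: a qubit is selected when both the
    top-layer (row) wire and bottom-layer (column) wire crossing it
    are energised, so n + m control signals reach n * m sites."""
    return {(r, c) for r in active_rows for c in active_cols}

# Energising 2 row wires and 3 column wires addresses 6 qubits in parallel
selected = addressed_qubits({0, 1}, {2, 5, 7})
print(sorted(selected))  # [(0, 2), (0, 5), (0, 7), (1, 2), (1, 5), (1, 7)]
```

The scaling benefit is that control wiring grows linearly with the array's side length while the number of addressable qubits grows quadratically.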
“Our Australian team has developed the world’s best qubits in silicon,” says University of Melbourne Professor Lloyd Hollenberg, Deputy Director of the CQC2T who led the work with colleague Dr Charles Hill. “However, to scale up to a full operational quantum computer we need more than just many of these qubits — we need to be able to control and arrange them in such a way that we can correct errors quantum mechanically.”
“In our work, we’ve developed a blueprint that is unique to our system of qubits in silicon, for building a full-scale quantum computer.”
In their paper, the team proposes a strategy to build the device, which leverages the CQC2T’s internationally unique capability of atomic-scale device fabrication. They have also modelled the required voltages applied to the grid wires, needed to address individual qubits, and make the processor work.
“This architecture gives us the dense packing and parallel operation essential for scaling up the size of the quantum processor,” says Scientia Professor Sven Rogge, Head of the UNSW School of Physics. “Ultimately, the structure is scalable to millions of qubits, required for a full-scale quantum processor.”
In classical computers, data is rendered as binary bits, which are always in one of two states: 0 or 1. However, a qubit can exist in both of these states at once, a condition known as a superposition. A qubit operation exploits this quantum weirdness by allowing many computations to be performed in parallel (a two-qubit system performs the operation on 4 values, a three-qubit system on 8, and so on).
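The bookkeeping behind that growth can be sketched classically (a naive state-vector simulation for illustration only; the function name is mine):

```python
import math

def hadamard_all(n):
    """State vector of n qubits, each initialised to |0>, after a
    Hadamard gate on every qubit: the uniform superposition over
    all 2**n basis states."""
    h = [1 / math.sqrt(2), 1 / math.sqrt(2)]  # amplitudes of H|0>
    state = [1.0]                             # zero-qubit state
    for _ in range(n):
        # Tensor product: every existing amplitude splits across |0> and |1>
        state = [a * b for a in state for b in h]
    return state

for n in (1, 2, 3):
    print(f"{n} qubit(s): {len(hadamard_all(n))} amplitudes tracked")
```

The memory needed to track the state doubles with every added qubit, which is precisely why classical simulation breaks down at scale and why operating on all amplitudes at once is so powerful.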
As a result, quantum computers will far exceed today’s most powerful supercomputers, and offer enormous advantages for a range of complex problems, such as rapidly scouring vast databases, modelling financial markets, optimising huge metropolitan transport networks, and modelling complex biological molecules.
Dean Radin, author of The Conscious Universe: The Scientific Truth of Psychic Phenomena (HarperSanFrancisco, 1997), says that “psi researchers have resolved a century of skeptical doubts through thousands of replicated laboratory studies” (289) regarding the reality of psychic phenomena such as ESP (extrasensory perception) and PK (psychokinesis). Of course, Radin also considers meta-analysis the most widely accepted method of measuring replication in science (51). Few scientists would agree with either of these claims. In any case, most American adults (about 75%, according to a 2005 Gallup poll) believe in at least one paranormal phenomenon. Forty-one percent believe in ESP. Fifty-five percent believe in the power of the mind to heal the body. One doesn’t need to be psychic to know that the majority of believers in psi have come to their beliefs through experience or anecdotes, rather than through studying the scientific evidence Radin puts forth in his book.
Radin doesn’t claim that the scientific evidence is going to make more believers. He realizes that the kind of evidence psi researchers have put forth hasn’t persuaded most scientists that there is anything of value in parapsychology. He thinks there is “a general uneasiness about parapsychology” and that because of the “insular nature of scientific disciplines, the vast majority of psi experiments are unknown to most scientists.” He also dismisses critics as skeptics who’ve conducted “superficial reviews.” Anyone familiar with the entire body of research, he says, would recognize he is correct and would see that there are “fantastic theoretical implications” (129) to psi research. Nevertheless, in 2005 the Nobel Committee once again passed over the psi scientists when handing out awards to those who have made significant contributions to our scientific knowledge.
The evidence Radin presents, however, is little more than a hodgepodge of occult statistics. Unable to find a single person who can correctly guess a three-letter word or move a pencil an inch without trickery, psi researchers have resorted to complex statistical analyses of data. In well-designed studies, whenever some statistical formula says the data are unlikely to be due to chance, they attribute the outcome to psi. A well-designed study is one that carefully controls for such things as cheating, sensory leakage (unintentional transfer of information by non-psychic means), inadequate randomization, and other factors that might lead to an artifact (something that looks like it’s due to psi when it’s actually due to something else).
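The statistical pattern at issue is easy to reproduce. In the hypothetical sketch below (my numbers, not Radin's), a 51% hit rate against a 50% chance baseline — an effect no individual participant could ever notice — becomes astronomically "significant" once a million pooled trials are fed into a normal approximation:

```python
import math

def z_score(hits, trials, p0=0.5):
    """Normal-approximation z statistic for an observed hit rate
    against the chance rate p0."""
    observed = hits / trials
    se = math.sqrt(p0 * (1 - p0) / trials)  # standard error under chance
    return (observed - p0) / se

# A 51% hit rate where chance predicts 50%, pooled over a million trials
z = z_score(510_000, 1_000_000)
p = math.erfc(z / math.sqrt(2)) / 2   # one-sided p-value
print(f"z = {z:.1f}, one-sided p ~ {p:.1e}")
```

The point is not that the arithmetic is wrong, but that a microscopic systematic bias (recording errors, optional stopping, selective reporting) would leave exactly the same statistical signature as psi.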
The result of the enormous body of data Radin cites is statistical evidence (for what it’s worth) that indicates (however tentatively) that some very weak psi effects are present (so weak that not a single individual who participates in a successful study has any inkling of possessing psychic power). Nevertheless, Radin thinks it is appropriate to speculate about the enormous implications of psi for biology, psychology, sociology, philosophy, religion, medicine, technology, warfare, police work, business, and politics. Never mind that nobody has any idea as to how psi might work. That is a minor detail to someone who can write with a straight face (apparently) that:
“lots of independent, simple glimpses of the future may one day innocently crash the future” (297). It’s not clear what it means to “crash the future,” but it doesn’t sound good.
No, it certainly doesn’t sound good. But, as somebody once said, “the future will be better tomorrow.”
According to Radin, we may look forward to a future with “psychic garage-door openers” and the ability to “push atoms around” with our minds (292). Radin is not the least bit put off by the criticism that all the other sciences have led us away from superstition and magical thinking, while parapsychology tries to lead us into those pre-scientific modes. Radin notes that “the concept that mind is primary over matter is deeply rooted in Eastern philosophy and ancient beliefs about magic.” However, instead of saying that it is now time to move forward, he rebuffs “Western science” for rejecting such beliefs as “mere superstition.” Magical thinking, he says, “lies close beneath the veneer of the sophisticated modern mind” (293). He even claims that “the fundamental issues [of consciousness] remain as mysterious today as they did five thousand years ago.” We may not have arrived at a final theory of the mind, but a lot of the mystery has evaporated with the progress made in the neurosciences over the past century. None of our advancing knowledge of the mind, however, has been due to contributions from parapsychologists. (Cf. Blackmore 2001).
Radin doesn’t grasp the fact that the concept of mind can be an illusion without being a “meaningless illusion” (294). He seems to have read David Chalmers, but I suggest he and his followers read Daniel Dennett. I’d begin with Sweet Dreams (2005). Consciousness is not “a complete mystery,” as Radin claims (294). The best that Radin can come up with as evidence that psi research has something to offer consciousness studies is the claim that “information can be obtained in ways that bypass the ordinary sensory system altogether” (295). Let’s ignore the fact that this claim begs the question. What neuroscience has uncovered is just how interesting and complex this “ordinary sensory system” turns out to be.
Radin would have us believe that magical thinking is essential to our psychological well being (293). If he’s right, we’ll one day be able to solve all social problems by “mass-mind healings.” And religious claims will get new meaning as people come to understand the psychic forces behind miracles and talking to the dead. According to Radin, when a medium today talks to a spirit “perhaps he is in contact with someone who is alive in the past. From the ‘departed’ person’s perspective, she may find herself communicating with someone from the future, although it is not clear that she would know that” (295). Yes, I don’t think that would be clear, either.
In medicine, Radin expects distant mental healing (which he argues has been scientifically established) to expand to something that “might be called techno-shamanism” (296). He describes this new development as “an exotic, yet rigorously schooled combination of ancient magical principles and future technologies” (296). He expects psi to join magnetic resonance imaging and blood tests as common stock in the world of medicine. “This would translate into huge savings and improved quality of life for millions of people” (192) as “untold billions of dollars in medical costs could be saved” (193).
Then, of course, there will be the very useful developments that include the ability to telepathically “call a friend in a distant spacecraft, or someone in a deeply submerged submarine” (296). On the other hand, the use of psychic power by the military and by police investigators will depend, Radin says, on “the mood of the times.” If what is popular on television is an indicator of the mood of the times, I predict that there will be full employment for psychic detectives and remote viewers in the future.
Radin looks forward to the day when psi technology “might allow thought control of prosthetics for paraplegics” and “mind-melding techniques to provide people with vast, computer-enhanced memories, lightning-fast mathematical capabilities, and supersensitive perceptions” (197). He even suggests we employ remote viewer Joe McMoneagle to reveal future technological devices he “has sensed in his remote-viewing sessions” (100).
Radin considers a few other benefits that will come from our increased ability to use psi powers: “to guide archeological digs and treasure-hunting expeditions, enhance gambling profits, and provide insight into historical events” (202). However, he does not consider some of the obvious problems and benefits that would occur should psychic ability become common. Imagine the difficulties for the junior high teacher in a room full of adolescents trained in PK. Teachers and parents would be spending most of their psychic energy controlling the hormones of their charges. The female garment and beauty industries would be destroyed as many attractive females would be driven to try to make themselves look ugly to avoid having their clothes being constantly removed by psychic perverts and pranksters.
Ben Radford has noted the potential for “gross and unethical violations of privacy,” as people would be peeping into each other’s minds. On the other hand, infidelity and all forms of deception might die out, since nobody could deceive anyone about anything if we were all psychic. Magic would become pointless and “professions that involve deception would be worthless” (Radford 2000). There wouldn’t be any need for undercover work or spies. Every child molester would be identified immediately. No double agent could ever get away with it. There wouldn’t be any more lotteries, since everybody could predict the winning numbers. We wouldn’t need trials of accused persons and the polygraph would be a thing of the past.
Hurricanes, tsunamis, earthquakes, floods, and other signs of intelligent design will become things of the past as billions of humans unite to focus their thoughts on predicting and controlling the forces of nature. We won’t need to build elaborate systems to turn away errant asteroids or comets heading for our planet: billions of us will unite to will the objects on their merry way toward some other oblivion. It is unlikely that human nature will change as we become more psychically able, so warfare will continue but will be significantly changed. Weapons won’t be needed because we’ll be able to rearrange our enemies’ atoms and turn them into mush from the comfort of our living rooms. (Who knows? It might only take a few folks with super psi powers to find Osama bin Laden and turn him into a puddle of irradiated meat.) Disease and old age will become things of the past as we learn to use our thoughts to kill cancer cells and control our DNA.
Space travel will become trivial and heavy lifting will be eliminated as we will be able to teleport anything anywhere at any time through global consciousness. We’ll be able to transport all the benefits of earthly consciousness to every planet in the universe. There are many other likely effects of global psychic ability that Radin has overlooked but this is understandable given his heavy workload as Senior Scientist at IONS (The Institute of Noetic Sciences) and as a blogger.
Radin notes only one problem should psi ability become common: we’ll all be dipping into the future and we might “crash the future,” whatever that means. The bright side of crashing the future will be the realization of “true freedom” as we will no longer be doomed to our predestined fate. We will all have the power “to create the future as we wish, rather than blindly follow a predetermined course through our ignorance” (297). That should make even the most cynical Islamic fundamentalist or doomsday Christian take heed. This psi stuff could be dangerous to one’s delusions even as it tickles one’s funny bone and stimulates one’s imagination to aspire to the power of gods and demons.
****** ****** ******
update: Radin has a follow-up book out called Entangled Minds: Extrasensory Experiences in a Quantum Reality. Like The Conscious Universe, this one lays out the scientific evidence for psi as seen from the eyes of a true believer. As noted above, in The Conscious Universe, Radin uses statistics and meta-analysis to prove that psychic phenomena really do exist even if those who have the experiences in the labs are unaware of them. Statistical data show that the world has gone psychic, according to the latest generation of parapsychologists. You may be unconscious of it, but your mind is affecting random number generators all over the world as you read this. The old psychic stuff—thinking about Aunt Hildie moments before she calls to tell you to bugger off—is now demonstrated to be true by statistical methods that were validated in 1937 by Burton Camp and meta-validated by Radin 60 years later when he asserted that meta-analysis was the replication parapsychologists had been looking for. The only difference is that now when you think of Aunt Hildie it might be moments before she calls her car mechanic and that, too, may be linked to activity in your mind that you are unaware of.
Radin’s second book sees entanglement as a key to understanding extrasensory phenomena. Entanglement is a concept from quantum physics that refers to connections between subatomic particles that persist regardless of being separated by various distances. He notes that some physicists have speculated that the entire universe might be entangled and that the Eastern mystics of old might have been on to something cosmic. His speculations are rather wild but his assertions are rather modest. For example: “I believe that entanglement suggests a scenario that may ultimately lead to a vastly improved understanding of psi” (p. 14) and “I propose that the fabric of reality is comprised [sic] of ‘entangled threads’ that are consistent with the core of psi experience” (p. 19). Skeptics might suggest that studying self-deception and wishful thinking would lead to a vastly improved understanding of psi research and that being consistent with a model is a minimal, necessary condition for taking any model seriously, but hardly sufficient to warrant much faith.
Readers of The Conscious Universe will be pleased to know that Radin has outdone himself on the meta-analysis front. In his second book, he provides a meta-meta-analysis of over 1,000 studies on dream psi, ganzfeld psi, staring, distant intention, dice PK, and RNG PK. He concludes that the odds against chance of getting these results are 10^104 to 1 (p. 276). As Radin says, “there can be little doubt that something interesting is going on” (p. 275). Yes, but I’m afraid it may be going on only in some entangled minds.
Radin predicts that some day “psi research will be taught in universities with the same aplomb as today’s elementary economics and biology” (p. 295). Perhaps psi research will be taught in the same classroom as intelligent design, though this seems unlikely as parapsychology attempts to reduce all supernatural and paranormal phenomena to physics. Maybe they could both be taught in the same curriculum: things that explain everything but illuminate nothing.
note: If the reader wants to see a more complete review of Radin’s work, please read my reviews of his books. Links are given below.
Physical scientists aren’t trained for all the political and moral issues.
Oct 2 2015 – 10:00am
By: Joel N. Shurkin, Contributor
(Inside Science) — The notion that Earth’s climate is changing—and that the threat to the world is serious—goes back to the 1980s, when a consensus began to form among climate scientists as temperatures began to rise noticeably. Thirty years later, that consensus is solid, yet climate change and the disruption it may cause remain divisive political issues, and millions of people remain unconvinced.
A new book argues that social scientists should play a greater role in helping natural scientists convince people of the reality of climate change and drive policy.
Climate Change and Society consists of 13 essays on why the debate needs the voices of social scientists, including political scientists, psychologists, anthropologists, and sociologists. It is edited by Riley E. Dunlap, professor of sociology at Oklahoma State University in Stillwater, and Robert J. Brulle, professor of sociology and environmental science at Drexel University in Philadelphia.
Brulle said the physical scientists tend to frame climate change “as a technocratic and managerial problem.”
“Contrast that to the Pope,” he said.
Pope Francis sees it as a “political, moral issue that won’t be settled by a group of experts sitting in a room,” said Brulle, who emphasized that it will be settled through a political process. Sociologists agree.
Sheila Jasanoff, the Pforzheimer professor of science and technology studies at the Harvard Kennedy School in Cambridge, Massachusetts, also agrees, though she did not participate in the book.
She said that understanding how people behave differently depending on their belief system is important.
“Denial is a somewhat mystical thing in people’s heads,” Jasanoff said. “One can bring tools of sociology of knowledge and belief—or social studies—to understand how commitments to particular statements of nature are linked with understanding how you would feel compelled to behave if nature were that way.”
Parts of the world where climate change is considered a result of the colonial past may resist taking drastic action at the behest of the former colonial rulers. Jasanoff said that governments will have to convince these groups that climate change is a present danger and attention must be paid.
Some who agree there is a threat are reluctant to advocate for drastic economic changes because they believe the world will be rescued by innovation and technology, Jasanoff said. Even among industrialized countries, views about the potential of technology differ.
Understanding these attitudes is what social scientists do, the book’s authors maintain.
“One of the most pressing contributions our field can make is to legitimate big questions, especially the ability of the current global economic system to take the steps needed to avoid catastrophic climate change,” the book’s editors wrote.
The issue also is deeply embedded in the social science of economics and in the problem of “have” and “have-not” societies in consumerism and the economy.
For example, Bangladesh sits at sea level, and if the seas rise enough, nearly the entire country could disappear beneath the waters. Hurricane Katrina brought hints of the consequences of that reality to New Orleans, a city that now sits below sea level. The heaviest burden of the storm’s effects fell on the poor neighborhoods, Brulle said.
“The people of Bangladesh will suffer more than the people on the Upper East Side of Manhattan,” Brulle said. He said they have to be treated differently, which is not something many physical scientists studying the processes behind sea level rise have to factor into their research.
“Those of us engaged in the climate fight need valuable insight from political scientists and sociologists and psychologists and economists just as surely as from physicists,” agreed Bill McKibben, an environmentalist and author who is a scholar in residence at Middlebury College in Vermont. “It’s very clear carbon is warming the planet; it’s very unclear what mix of prods and preferences might nudge us to use much less.”
Joel Shurkin is a freelance writer in Baltimore. He is a former science writer at the Philadelphia Inquirer and was part of the team that won a Pulitzer Prize for covering Three Mile Island. He has nine published books and is working on a tenth. He has taught journalism at Stanford University, the University of California at Santa Cruz, and the University of Alaska Fairbanks. He tweets at @shurkin.
I was struck this morning by the similarity between two twentieth-century passages about entropy. The first is from W.H. Auden’s poem “As I Walked Out One Evening,” and the second from Philip K. Dick’s Do Androids Dream of Electric Sheep? If I were a betting man, I’d put money on PKD having read Auden. The cupboard and the teacup, especially, drew my attention, but it is also worth noting that the passage in PKD immediately precedes J.R. Isidore’s vision of the “tomb world,” a variation on Auden’s “land of the dead.”
Whether or not the passage in PKD is an explicit allusion or homage to Auden, I find it interesting that PKD’s passage, which several times mentions the irradiated dust of nuclear fallout, so closely resembles Auden’s pre-nuclear poem. The psychological issue, in each case, is not humanity’s ability to destroy itself (despite the post-apocalyptic setting of Androids) but the problem of being, as Carl Sagan puts it, “a way for the cosmos to know itself.” How do we live with our knowledge of geologic or cosmological time–scales on which all of human history occupies a mere blip–and, simultaneously, assert the meaningfulness of individual lives? More after the break, but first, the passages:
W.H. Auden, from “As I Walked Out One Evening” (1940):
But all the clocks in the city
Began to whirr and chime:
‘O let not Time deceive you,
You cannot conquer Time.
‘In the burrows of the Nightmare
Where Justice naked is,
Time watches from the shadow
And coughs when you would kiss.
‘In headaches and in worry
Vaguely life leaks away,
And Time will have his fancy
To-morrow or to-day.
‘Into many a green valley
Drifts the appalling snow;
Time breaks the threaded dances
And the diver’s brilliant bow.
‘O plunge your hands in water,
Plunge them in up to the wrist;
Stare, stare in the basin
And wonder what you’ve missed.
‘The glacier knocks in the cupboard,
The desert sighs in the bed,
And the crack in the tea-cup opens
A lane to the land of the dead.
‘Where the beggars raffle the banknotes
And the Giant is enchanting to Jack,
And the Lily-white Boy is a Roarer,
And Jill goes down on her back.
Philip K. Dick, from Do Androids Dream of Electric Sheep? (1968):
“he saw the dust and the ruin of the apartment as it lay spreading out everywhere–he heard the kipple coming, the final disorder of all forms, the absence which would win out. It grew around him as he stood holding the empty ceramic cup; the cupboards of the kitchen creaked and split and he felt the floor beneath his feet give.
Reaching out, he touched the wall. His hand broke the surface; gray particles trickled and hurried down, fragments of plaster resembling the radioactive dust outside. He seated himself at the table and, like rotten, hollow tubes the legs of the chair bent; standing quickly, he set down the cup and tried to reform the chair, tried to press it back into its right shape. The chair came apart in his hands, the screws which had previously connected its several sections ripping out and hanging loose. He saw, on the table, the ceramic cup crack; webs of fine lines grew like the shadows of a vine, and then a chip dropped from the edge of the cup, exposing the rough, unglazed interior.”
Nietzsche frequently and disparately writes about this problem in terms of “eternal recurrence”: the natural cycles of life and death that repeat themselves across long stretches of time dwarf the appearance of any individual member of a single species on one planet. In The Birth of Tragedy (an early work that Nietzsche distances himself from, but still a valuable touchstone in his thought), Nietzsche frames this as a problem of identification. We identify with our individual selves, but those selves are also part of the large natural cycles whose inevitable continuation will destroy the individual. We can attempt to identify with the cycle itself as a claim to immortality. As Sagan says, “Some part of our being knows this is where we came from. We long to return, and we can, because the cosmos is also within us. We’re made of star stuff.”
On the other hand, identifying with the cosmos as a whole diminishes the significance of our own disappearance within the natural cycle. As homo sapiens sapiens we may be part of the terran biosphere in the solar system (itself a secondary star system formed from the stuff of previous supernovas), but as Carl or Friedrich or Wystan or Dick, our individual deaths, like our lives, are not interchangeable. Hannah Arendt, in The Human Condition (1958), refers to this quality as “uniqueness”: “In man, otherness, which he shares with everything that is, and distinctness, which he shares with everything alive, become uniqueness, and human plurality is the paradoxical plurality of unique beings.” We act together, speak together, and, in the process, we forge identities that are irreducible to our membership in a class of objects or a biological species. We exercise what Nietzsche calls the “principle of individuation”: we create individual selves that will never be repeated in the eternal recurrence of natural cycles.
Taking this a step farther, our potential identification with the cosmos as a whole is only possible because we have individual consciousnesses that can identify/form identities. Nietzsche argues that simply disavowing our individual selves in favor of universal being/becoming prevents the cosmos from knowing or being known. The individual (what he calls Apollonian) may be a temporary, fleeting form, but for us to experience our place within the universal (what he calls Dionysian), we must hold our individual selves in tension with those larger processes.
The highest forms of art are born, Nietzsche argues, when Apollo and Dionysus are locked in conflict. We are individuals who will die, and our unique lives will be gone. We are also part of, constitutive of, and coextensive with the dynamic unfolding of the universe as a whole. A few billion years from now, the sun will die and take the Earth (and Mercury and Venus) with it, but even that will not be the end of our story. The productive problem we face is finding meaning that can emerge from both biography and cosmology and their vast differences in scale.
Arendt has some very interesting things to say about entropy and the apparently miraculous rescue of human life and worldliness from the seemingly inevitable destruction of natural cycles. I am tempted to end with her, but, for this post, I want to give Auden the final word. His poem begins with lovers declaring that they will love forever, and the entropic wisdom of the city’s chiming clocks interrupts those declarations. The meaning of that interruption, however, is not a simple rejection of subjective folly in favor of a more objective, longer view. It leaves the lovers (and the listeners who are left long after the lovers leave) with a peculiar form of responsibility:
photo credit: Pieter Kuiper via Wikimedia Commons. A comparison of double slit interference patterns with different widths. Similar patterns produced by atoms have confirmed the dominant model of quantum mechanics
Physicists have succeeded in confirming one of the theoretical aspects of quantum physics: Subatomic objects switch between particle and wave states when observed, while remaining in a dual state beforehand.
This raises the question of what determines when a photon or electron will behave like a wave or a particle. How, anthropomorphizing madly, do these things “decide” which they will be at a particular time?
The dominant model of quantum mechanics holds that it is when a measurement is taken that the “decision” takes place. Erwin Schrödinger came up with his famous thought experiment using a cat to ridicule this idea. Physicists think that quantum behavior breaks down on a large scale, so Schrödinger’s cat would not really be both alive and dead—however, in the world of the very small, strange theories like this seem to be the only way to explain what we see.
In 1978, John Wheeler proposed a series of thought experiments to make sense of what happens when a photon has to either behave in a wave-like or particle-like manner. At the time, it was considered doubtful that these could ever be implemented in practice, but in 2007 such an experiment was achieved.
“A photon is in a sense quite simple,” Truscott told IFLScience. “An atom has significant mass and couples to magnetic and electric fields, so it is much more in tune with its environment. It is more of a classical particle in a sense, so this was a test of whether a more classical particle would behave in the same way.”
Truscott’s experiment involved creating a Bose-Einstein condensate of around a hundred helium atoms. He conducted the experiment first with this condensate, but says the possibility that atoms were influencing each other made it important to repeat it after ejecting all but one. The atom was passed through a “grating” made of two laser beams, which can scatter an atom in much the same way that a solid grating scatters light. Such gratings have been shown to cause atoms either to pass through one arm, like a particle, or through both, like a wave.
A random number generator was then used to determine whether a second grating would appear further along the atom’s path. Crucially, the number was only generated after the atom had passed the first grate.
The second grating, when applied, caused an interference pattern in the measurement of the atom further along the path. Without the second grating, the atom had no such pattern.
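The logic of the two outcomes can be sketched with a toy two-arm interferometer model (a schematic illustration only, not Truscott’s actual apparatus; the equal-amplitude superposition and single output port are assumptions for the sketch):

```python
import cmath
import math

def detector_prob(phase, second_grating):
    """Toy two-arm interferometer model of the delayed-choice setup.
    After the first grating, the atom is an equal superposition of the
    two arms, with a relative phase accumulated along one path."""
    a_upper = 1 / math.sqrt(2)
    a_lower = cmath.exp(1j * phase) / math.sqrt(2)
    if second_grating:
        # The second grating recombines the arms: the amplitudes
        # interfere, giving a cos^2(phase / 2) fringe at this port.
        return abs((a_upper + a_lower) / math.sqrt(2)) ** 2
    # No second grating: which-path behaviour, no phase dependence.
    return abs(a_upper) ** 2

# The choice of second_grating can be made (by a random number) only
# after the atom has passed the first grating, yet the statistics differ:
fringe = detector_prob(math.pi, True)   # near 0: destructive interference
flat = detector_prob(math.pi, False)    # 0.5, independent of phase
```

With the second grating the detection probability sweeps a fringe as the phase varies; without it the probability is flat, which is the observable difference between “wave” and “particle” behaviour that the delayed choice exploits.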
An optical version of Wheeler’s delayed choice experiment (left) and an atomic version as used by Truscott (right). Credit: Manning et al.
Truscott says that there are two possible explanations for the behavior observed. Either, as most physicists think, the atom decided whether it was a wave or a particle when measured, or “a future event (the method of detection) causes the photon to decide its past.”
In the bizarre world of quantum mechanics, events rippling back in time may not seem that much stranger than things like “spooky action at a distance” or even something being a wave and a particle at the same time. However, Truscott said, “this experiment can’t prove that that is the wrong interpretation, but it seems wrong, and given what we know from elsewhere, it is much more likely that only when we measure the atoms do their observable properties come into reality.”
The ‘holographic principle,’ the idea that a universe with gravity can be described by a quantum field theory in fewer dimensions, has been used for years as a mathematical tool in strange curved spaces. New results suggest that the holographic principle also holds in flat spaces. Our own universe could in fact be two dimensional and only appear three dimensional — just like a hologram.
Is our universe a hologram? Credit: TU Wien
At first glance, there is not the slightest doubt: to us, the universe looks three dimensional. But one of the most fruitful theories of theoretical physics in the last two decades is challenging this assumption. The “holographic principle” asserts that a mathematical description of the universe actually requires one fewer dimension than it seems. What we perceive as three dimensional may just be the image of two dimensional processes on a huge cosmic horizon.
Up until now, this principle has only been studied in exotic spaces with negative curvature. This is interesting from a theoretical point of view, but such spaces are quite different from the space in our own universe. Results obtained by scientists at TU Wien (Vienna) now suggest that the holographic principle even holds in a flat spacetime.
The Holographic Principle
Everybody knows holograms from credit cards or banknotes. They are two dimensional, but to us they appear three dimensional. Our universe could behave quite similarly: “In 1997, the physicist Juan Maldacena proposed the idea that there is a correspondence between gravitational theories in curved anti-de Sitter spaces on the one hand and quantum field theories in spaces with one fewer dimension on the other,” says Daniel Grumiller (TU Wien).
Gravitational phenomena are described in a theory with three spatial dimensions, the behaviour of quantum particles is calculated in a theory with just two spatial dimensions — and the results of both calculations can be mapped onto each other. Such a correspondence is quite surprising. It is like finding out that equations from an astronomy textbook can also be used to repair a CD-player. But this method has proven to be very successful. More than ten thousand scientific papers about Maldacena’s “AdS-CFT-correspondence” have been published to date.
Correspondence Even in Flat Spaces
For theoretical physics, this is extremely important, but it does not seem to have much to do with our own universe. Apparently, we do not live in such an anti-de Sitter space. These spaces have quite peculiar properties: they are negatively curved, and any object thrown away on a straight line will eventually return. “Our universe, in contrast, is quite flat — and on astronomic distances, it has positive curvature,” says Daniel Grumiller.
However, Grumiller has suspected for quite some time that a correspondence principle could also hold true for our real universe. To test this hypothesis, gravitational theories have to be constructed which do not require exotic anti-de Sitter spaces, but live in a flat space. For three years, he and his team at TU Wien (Vienna) have been working on that, in cooperation with the University of Edinburgh, Harvard, IISER Pune, MIT and the University of Kyoto. Now Grumiller and colleagues from India and Japan have published an article in the journal Physical Review Letters, confirming the validity of the correspondence principle in a flat universe.
Calculated Twice, Same Result
“If quantum gravity in a flat space allows for a holographic description by a standard quantum theory, then there must be physical quantities which can be calculated in both theories — and the results must agree,” says Grumiller. One key feature of quantum mechanics in particular, quantum entanglement, has to appear in the gravitational theory.
When quantum particles are entangled, they cannot be described individually. They form a single quantum object, even if they are located far apart. There is a measure for the amount of entanglement in a quantum system, called “entropy of entanglement.” Together with Arjun Bagchi, Rudranil Basu and Max Riegler, Daniel Grumiller managed to show that this entropy of entanglement takes the same value in flat quantum gravity and in a lower-dimensional quantum field theory.
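For reference, the quantity being compared on the two sides is the standard von Neumann entropy of a subsystem’s reduced density matrix (a textbook definition, not notation taken from the paper itself):

```latex
S_A \;=\; -\operatorname{Tr}\!\left(\rho_A \ln \rho_A\right),
\qquad
\rho_A \;=\; \operatorname{Tr}_B\, |\psi\rangle\langle\psi| ,
```

where $B$ is everything outside the subsystem $A$. The holographic check is that this $S_A$, computed once in the flat-space gravitational theory and once in the lower-dimensional field theory, comes out the same.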
“This calculation affirms our assumption that the holographic principle can also be realized in flat spaces. It is evidence for the validity of this correspondence in our universe,” says Max Riegler (TU Wien). “The fact that we can even talk about quantum information and entropy of entanglement in a theory of gravity is astounding in itself, and would hardly have been imaginable only a few years back. That we are now able to use this as a tool to test the validity of the holographic principle, and that this test works out, is quite remarkable,” says Daniel Grumiller.
This, however, does not yet prove that we are indeed living in a hologram — but apparently there is growing evidence for the validity of the correspondence principle in our own universe.
Arjun Bagchi, Rudranil Basu, Daniel Grumiller, Max Riegler. Entanglement Entropy in Galilean Conformal Field Theories and Flat Holography. Physical Review Letters, 2015; 114 (11) DOI: 10.1103/PhysRevLett.114.111602
A demon lives behind my left eye. As a migraine sufferer, I have developed a very personal relationship with my pain and its perceived causes. On a bad day, with a crippling sensitivity to light, nausea, and the feeling that the blood flowing to my brain has slowed to a crawl and is the poisoned consistency of pancake batter, I feel the presence of this demon keenly.
On the first day of the Q2 Symposium, however, which I was delighted to attend recently, the demon was in a tricksy mood, rather than out for blood: this was a vestibular migraine. The symptoms of this particular neurological condition are dizziness, loss of balance, and sensitivity to motion. Basically, when the demon manifests in this way, I feel constantly as though I am falling: falling over, falling out of place. The Q Symposium, hosted by James Der Derian and the marvellous team at the University of Sydney’s Centre for International Security Studies, was intended, over the course of two days and a series of presentations, interventions, and media engagements, to unsettle, to make participants think differently about space/time and security, thinking through quantum rather than classical theory, but I do not think that this is what the organisers had in mind.
At the Q Station, located in Sydney where the Q Symposium was held, my pain and my present aligned: I felt out of place, I felt I was falling out of place. I did not expect to like the Q Station. It is the former quarantine station used by the colonial administration to isolate immigrants they suspected of carrying infectious diseases. Its location, on the North Head of Sydney and now within the Sydney Harbour National Park, was chosen for strategic reasons – it is secluded, easy to manage, a passageway point on the journey through to the inner harbour – but it has a much longer historical relationship with healing and disease. The North Head is a site of Aboriginal cultural significance; the space was used by the spiritual leaders (koradgee) of the Guringai peoples for healing and burial ceremonies.
So I did not expect to like it, as such an overt symbol of the colonisation of Aboriginal lands, but it disarmed me. It is a place of great natural beauty, and it has been revived with respect, I felt, for the rich spiritual heritage of the space that extended long prior to the establishment of the Quarantine Station in 1835. When we Q2 Symposium participants were welcomed to country and invited to participate in a smoking ceremony to protect us as we passed through the space, we were reminded of this history and thus reminded – gently, respectfully (perhaps more respectfully than we deserved) – that this is not ‘our’ place. We were out of place.
We were all out of place at the Q2 Symposium. That is the point. Positioning us thus was deliberate; we were to see whether voluntary quarantine would produce new interactions and new insights, guided by the Q Vision, to see how quantum theory ‘responds to global events like natural and unnatural disasters, regime change and diplomatic negotiations that phase-shift with media interventions from states to sub-states, local to global, public to private, organised to chaotic, virtual to real and back again, often in a single news cycle’. It was two days of rich intellectual exploration and conversation, and – as is the case when these experiments work – beautiful connections began to develop between those conversations and the people conversing, conversations about peace, security, and innovation, big conversations about space, and time.
I felt out of place. Mine is not the language of quantum theory. I learned so much from listening to my fellow participants, but I was insecure; as the migraine took hold on the first day, I was not only physically but intellectually feeling as though I was continually falling out of the moment, struggling to maintain the connections between what I was hearing and what I thought I knew.
This principle states the impossibility of simultaneously specifying the precise position and momentum of any particle. In other words, physicists cannot measure the position of a particle, for example, without causing a disturbance in the velocity of that particle. Knowledge about position and velocity are said to be complementary, that is, they cannot be precise at the same time.
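In the standard textbook form (added here for reference, not a formula used at the Symposium), the principle reads:

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2} ,
```

where $\Delta x$ and $\Delta p$ are the standard deviations of position and momentum, and $\hbar$ is the reduced Planck constant: tightening one spread necessarily loosens the other.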
I do not know anything about quantum theory – I found it hard to follow even the beginner’s guides provided by the eloquent speakers at the Symposium – but I know a lot about uncertainty. I also feel that I know something about entanglement, perhaps not as it is conceived of within quantum physics, but perhaps that is the point of events such as the Q Symposium: to encourage us to allow the unfamiliar to flow through and around us until the stream snags, to produce an idea or at least a moment of alternative cognition.
My moment of alternative cognition was caused by foetal microchimerism, a connection that flashed for me while I was listening to a physicist talk about entanglement. Scientists have shown that during gestation, foetal cells migrate into the body of the mother and can be found in the brain, spleen, liver, and elsewhere decades later. There are (possibly) parts of my son in my brain, literally as well as simply metaphorically (as the latter was already clear). I am entangled with him in ways that I cannot comprehend. Listening to the speakers discuss entanglement, all I could think was, This is what entanglement means to me, it is in my body.
Perhaps I am not proposing entanglement as Schrödinger does, as ‘the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought’. Perhaps I am just using the concept of entanglement to denote the inextricable, inexplicable, relationality that I have with my son, my family, my community, humanity. It is this entanglement that undoes me, to use Judith Butler’s most eloquent phrase, in the face of grief, violence, and injustice. Perhaps this is the value of the quantum: to make connections that are not possible within the confines of classical thought.
I am not a scientist. I am a messy body out of place, my ‘self’ apparently composed of bodies out of place. My world is not reducible. My uncertainty is vast. All of these things make me insecure, challenge how I move through professional time and space as I navigate the academy. But when I return home from my time in quarantine and joyfully reconnect with my family, I am grounded by how I perceive my entanglement. It is love, not science, that makes me a better scholar.
I was inspired by what I heard, witnessed, discussed at the Q2 Symposium. I was – and remain – inspired by the vision of the organisers, the refusal to be bound by classical logics in any field that turns into a drive, a desire to push our exploration of security, peace, and war in new directions. We need new directions; our classical ideas have failed us, and failed humanity, a point made by Colin Wight during his remarks on the final panel at the Symposium. Too often we continue to act as though the world is our laboratory; we have ‘all these theories yet the bodies keep piling up…‘.
But if this is the case, I must ask: do we need a quantum turn to get us to a space within which we can admit entanglement, admit uncertainty, admit that we are out of place? We are never (only) our ‘selves’: we are always both wave and particle and all that is in between and it is our being entangled that renders us human. We know this from philosophy, from art and the humanities. Can we not learn this from art? Must we turn to science (again)? I felt diminished by the asking of these questions, insecure, but I did not feel that these questions were out of place.
For physicist Heitor Scalambrini Costa, allegations of bribes in the construction of the plant, along with technical objections concerning the obsolescence of its outdated equipment, are serious matters that must be investigated urgently.
Despite all the movement on the international stage around the problems and risks of nuclear installations, which intensified after the Fukushima disaster (March 11, 2011), the position of the authorities at the Ministry of Mines and Energy, of the nuclear-sector lobbyists, and of the contractors and equipment suppliers is surprising: all keep insisting on installing four more nuclear plants in the country by 2030, two of them in Brazil’s Northeast, in addition to the construction of Angra 3, which has already been approved.
In the case of Angra 3, the estimated cost of the project was R$ 7.2 billion in 2008; it jumped to R$ 10.4 billion by the end of 2010; in July 2013, according to Eletronuclear, it exceeded R$ 13 billion; and by 2018, the year of its scheduled completion, it should reach R$ 14.9 billion. Obviously, the doubling of this nuclear plant’s construction costs has a decisive impact on the average price of electricity sold in the country.
The history of the nuclear industry in Brazil shows that it always has been, and remains, an industry highly dependent on public subsidies. The financing conditions for Angra 3 are undoubtedly perverse, with hidden government subsidies to be disguised later in electricity bills. And those who will pay that bill are we, the consumers, who already pay some of the highest electricity rates in the world.
With Operation Lava Jato (Car Wash), launched in March 2014 to investigate a vast money-laundering and embezzlement scheme involving Petrobras, the country’s major contractors, and numerous politicians, the real, far-from-republican interests behind the decision to build large energy projects, such as the Belo Monte hydroelectric dam and the Angra 3 nuclear plant, began to be laid bare.
Ever since the decision to build it under the troubled Brazil-Germany nuclear agreement, the Angra 3 plant has been surrounded by mystery, controversy, uncertainty, and the lack of transparency common in the Brazilian nuclear sector.
The plant’s civil works were awarded to Construtora Andrade Gutierrez under a contract signed on June 16, 1983 (Figueiredo administration, 1979-1985). In April 1986, the works were halted for lack of funds, high costs, and doubts about the suitability and risks of this energy source. Even so, the contractor received payments of approximately US$ 20 million per year for decades.
After 23 years at a standstill, work on Angra 3 resumed in 2009 (Lula administration, 2003-2010). The Lula government chose not to hold new tenders and revalidated the bid won by Andrade Gutierrez in 1983. Although it held no new tenders, Eletronuclear negotiated updated prices with all suppliers and service providers. The project and its equipment became considerably more expensive: in dollars, its value jumped from US$ 1.8 billion to approximately US$ 3.3 billion.
Faced with the decision to keep the contract with Andrade Gutierrez, competing contractors, especially Camargo Corrêa, tried in vain to persuade the government to reverse its decision, arguing that a technological revolution in the intervening period had cut the cost of civil works for nuclear plants by up to 40%. The full bench of the Federal Court of Accounts (Tribunal de Contas da União), assessing the matter in September 2008, likewise did not block the revalidation of the contracts. It did find, however, that Angra 3 showed “signs of serious irregularity,” though without recommending that the project be halted.
The civil-works contract was not the only one taken out of the freezer by the Lula government. For the supply of imported goods and services, the chosen manufacturer was Areva, the company formed by the merger of Germany’s Siemens KWU and France’s Framatome. Strictly speaking, Areva never even signed the contract: it was chosen because it inherited the original agreement from KWU.
Já os contratos da montagem foram assinados em 2 de setembro de 2014 com os seguintes consórcios: consórcio ANGRA 3, para a realização dos serviços de montagens eletromecânicas dos sistemas associados ao circuito primário da usina (sistemas associados ao circuito de geração de vapor por fonte nuclear),constituído pela empresas Construtora Queiroz Galvão S.A., EBE – Empresa Brasileira de Engenharia S.A. e Techint Engenharia S.A. E consórcio UNA 3, para a execução das montagens associadas aos sistemas convencionais da usina, constituído pelas empresas Construtora Andrade Gutierrez S.A., Construtora Norberto Odebrecht S.A., Construções e Comércio Camargo Corrêa S.A. e UTC Engenharia S.A.
O atual planejamento da Eletronuclear prevê a entrada em operação de Angra 3 em maio de 2018. Mas esta meta deverá ser revista depois de a obra ser praticamente paralisada no final de abril de 2014, devido à alegação de dívidas não pagas a empreiteira (governo Dilma, 2011-2014).
Depois de todos estes percalços, para uma obra tão polêmica, tomamos conhecimento das denúncias feitas por um dos executivos da empreiteira Camargo Correa, que passou a colaborar com as investigações da Operação Lava Jato e relatou aos procuradores, durante negociações para o acordo de delação premiada, uma suposta propina para o ex-ministro das Minas e Energia, Edson Lobão, na contratação da Camargo Correa para a execução de obras da usina de Angra 3.
Caso se confirmem tais acusações ficará claro para a sociedade brasileira que os reais interesses pela construção de Angra 3 e de mais 4 usinas nucleares tiveram como principal motivação as altas somas que autoridades públicas receberam como suborno. É bom lembrar que neste caso o ministro Lobão tinha poder de comando sobre a empresa pública responsável pela obra, a Eletronuclear ― subsidiária da Eletrobrás.
A partir deste episódio não podemos mais ignorar as objeções técnicas, como as denúncias com relação à obsolescência dos equipamentos tecnologicamente defasados (comprometendo o seu funcionamento e aumentando o risco de um desastre nuclear). Nem as denúncias de que o custo desta obra poderia encarecer durante a sua construção ― o que,de fato, já aconteceu.Tampouco o questionamento sobre o empréstimo realizado pela Caixa Econômica Federal, para a construção de Angra 3.
A expectativa é que todas as denúncias sejam investigadas e apuradas as responsabilidades. O fato em si é gravíssimo, e suficiente para a interrupção das atividades nucleares no país, em particular a construção de Angra 3, com o congelamento de novas instalações. Não se pode admitir que a decisão de construir centrais nucleares no país tenha sido feita em um mero balcão de negócios.
Heitor Scalambrini Costa holds a degree in Physics from the Universidade de Campinas/SP, a master's in Nuclear Science and Technology from the Universidade Federal de Pernambuco, and a doctorate in Energetics from the Université d'Aix-Marseille III (Droit, Écon. et Sciences, 1992). He is currently an associate professor at the Universidade Federal de Pernambuco.
I just came across Massimo Pigliucci’s interesting review of Mangabeira Unger and Lee Smolin’s book The Singular Universe and the Reality of Time. There are more than a few Whiteheadian themes explored throughout the review, including Unger and Smolin’s (U&S) view that time should be read as an abstraction from events and that the “laws” of the universe are better conceptualized as habits or contingent causal connections secured by the ongoingness of those events rather than as eternal, abstract formalisms. (This entangling of laws with phenomena, of events with time, is one of the ways we can think towards an ecological metaphysics.)
But what I am particularly interested in is the short discussion on Platonism and mathematical realism. I sometimes think of mathematical realism as the view that numbers, and thus the abstract formalisms they create, are real, mind-independent entities, and that, given this view, mathematical equations are discovered (i.e., they actually exist in the world) rather than created (i.e., humans made them up to fill this or that pragmatic need). The review makes it clear, though, that this definition doesn’t push things far enough for the mathematical realist. Instead, the mathematical realist argues for not just the mind-independent existence of numbers but also their nature-independence—math as independent not just of all knowers but of all natural phenomena, past, present, or future.
U&S present an alternative to mathematical realisms of this variety that I find compelling and more consistent with the view that laws are habits and that time is an abstraction from events. Here’s the reviewer’s take on U&S’s argument (the review starts with a quote from U&S and then unpacks it a bit):
“The third idea is the selective realism of mathematics. (We use realism here in the sense of relation to the one real natural world, in opposition to what is often described as mathematical Platonism: a belief in the real existence, apart from nature, of mathematical entities.) Now dominant conceptions of what the most basic natural science is and can become have been formed in the context of beliefs about mathematics and of its relation to both science and nature. The laws of nature, the discerning of which has been the supreme object of science, are supposed to be written in the language of mathematics.” (p. xii)
But they are not, because there are no “laws” and because mathematics is a human (very useful) invention, not a mysterious sixth sense capable of probing a deeper reality beyond the empirical. This needs some unpacking, of course. Let me start with mathematics, then move to the issue of natural laws.
I was myself, until recently, intrigued by mathematical Platonism. It is a compelling idea, which makes sense of the “unreasonable effectiveness of mathematics,” as Eugene Wigner famously put it. It is a position shared by a good number of mathematicians and philosophers of mathematics. It is based on the strong gut feeling that mathematicians have that they don’t invent mathematical formalisms, they “discover” them, in a way analogous to what empirical scientists do with features of the outside world. It is also supported by an argument analogous to the defense of realism about scientific theories advanced by Hilary Putnam: it would be nothing short of miraculous, it is suggested, if mathematics were the arbitrary creation of the human mind, and yet time and again it turns out to be spectacularly helpful to scientists.
But there are, of course, equally (more?) powerful counterarguments, which are in part discussed by Unger in the first part of the book. To begin with, the whole thing smells a bit too uncomfortably of mysticism: where, exactly, is this realm of mathematical objects? What is its ontological status? Moreover, and relatedly, how is it that human beings have somehow developed the uncanny ability to access such a realm? We know how we can access, however imperfectly and indirectly, the physical world: we evolved a battery of sensorial capabilities to navigate that world in order to survive and reproduce, and science has been a continuous quest for expanding the power of our senses by way of more and more sophisticated instrumentation, to gain access to more and more (and increasingly less relevant to our biological fitness!) aspects of the world.
Indeed, it is precisely this analogy with science that powerfully hints at an alternative, naturalistic interpretation of the (un)reasonable effectiveness of mathematics. Math too started out as a way to do useful things in the world, mostly to count (arithmetic) and to measure the world and divide it into manageable chunks (geometry). Mathematicians then developed their own (conceptual, as opposed to empirical) tools to understand more and more sophisticated and less immediate aspects of the world, in the process eventually abstracting entirely from that world in pursuit of internally generated questions (what we today call “pure” mathematics).
U&S do not by any means deny the power and effectiveness of mathematics. But they also remind us that precisely what makes it so useful and general — its abstraction from the particularities of the world, and specifically its inability to deal with temporal asymmetries (mathematical equations in fundamental physics are time-symmetric, and asymmetries have to be imported as externally imposed background conditions) — also makes it subordinate to empirical science when it comes to understanding the one real world.
This empiricist reading of mathematics offers a refreshing respite from the resurgence of a certain Idealism in some continental circles (perhaps most interestingly spearheaded by Quentin Meillassoux). I’ve heard it mentioned a few times now that the various factions squaring off within continental philosophy’s avant garde can be roughly approximated as a renewed encounter between Kantian finitude and Hegelian absolutism. It’s probably too stark a binary, but there’s a sense in which the stakes of these arguments really do center on the ontological status of mathematics in the natural world. It’s not a direct focus of my own research interests, really, but it’s a fascinating set of questions nonetheless.