Stuart Hameroff has faced three decades of criticism for his quantum consciousness theory, but new studies show the idea may not be as fringe as once believed.
By Darren Orf – Published: Dec 18, 2024 5:13 PM EST
For nearly his entire life, Dr. Stuart Hameroff has been fascinated with the bedeviling question of consciousness. But instead of studying neurology or another field commonly associated with the inner workings of the brain, it was Hameroff's familiarity with anesthetics, a family of drugs that famously induces the opposite of consciousness, that fueled his curiosity.
“I thought about neurology, psychology, and neurosurgery, but none of those . . . seemed to be dealing with the problem of consciousness,” says Hameroff, a now-retired professor of anesthesiology from the University of Arizona. Hameroff recalls a particularly eye-opening moment when he first arrived at the university and met the chairman of the anesthesia department. “He says ‘hey, if you want to understand consciousness, figure out how anesthesia works because we don’t have a clue.’”
Hameroff's work in anesthesia suggested that unconsciousness occurs through some effect on microtubules, and he wondered whether these structures might somehow play a role in forming consciousness. So instead of using the neuron, the brain's nerve cell, as the "base unit" of consciousness, Hameroff's ideas delved deeper and looked at the billions of individual tubulins inside microtubules themselves. He quickly became obsessed.
Found in a cell's cytoskeleton—the structure that helps a cell keep its shape and undergo mitosis—microtubules are made up of tubulin proteins and can be found in cells throughout the body. Hameroff describes the overall shape of a microtubule as a "hollow ear of corn" where the kernels represent the alpha- and beta-tubulin proteins. Hameroff first found out about these structures in medical school in the 1970s, learning how microtubules separate duplicated chromosomes during cell division. If the microtubule spindles don't pull this dance off perfectly (a failure known as missegregation), you get cancerous cells or other forms of maldevelopment.
Wikimedia/National Institutes of Health. In a eukaryotic cell, the cytoskeleton provides structure and support. In this image, microtubules, which are part of the cytoskeleton, are shown in green. These narrow, tube-like structures help support the shape of the cell. Scientists like Stuart Hameroff also believe these polymers could hold the secrets to consciousness.
While Hameroff knew that anesthetics impacted these structures, he couldn’t explain how microtubules might produce consciousness. “How would all that information processing explain consciousness? How could it explain envy, greed, pain, love, joy, emotion, the color green,” Hameroff says. “I had no idea.”
That is, until he had a chance encounter with an influential book by Nobel Prize laureate Sir Roger Penrose, Ph.D.
Within the pages of 1989’s The Emperor’s New Mind, Penrose argued that consciousness is actually quantum in nature—not computational as many theories of the mind had so far put forth. However, the famous physicist didn’t have any biological mechanism for the possible collapse of the quantum wave function—when a multi-state quantum superposition collapses to a definitive classical state—that induces conscious experiences.
“Damn straight, Roger. It’s freaking microtubules,” Hameroff remembers saying. Soon after, Hameroff struck up a partnership with Penrose, and together they set off to create one of the most fascinating—and controversial—ideas in the field of consciousness study. This idea became known as Orchestrated Objective Reduction theory, or Orch OR, and it states that microtubules in neurons cause the quantum wave function to collapse, a process known as objective reduction, which gives rise to consciousness.
Hameroff readily admits that since its inception in the mid-'90s, it has become a popular pastime in the field to bash his idea. But in recent years, a growing body of research has reported some evidence that quantum processes are possible in the brain. And while this in itself isn't confirmation of the Orch OR theory Hameroff and Penrose came up with, it's leading some scientists to reconsider the possibility that consciousness could be quantum in nature. Not only would this be a huge breakthrough in the understanding of human consciousness, it would mean that purely algorithmic—or computer-based—artificial intelligence could never truly be conscious.
● ● ●
In 1989, Roger Penrose was already a superstar in the world of mathematics and physics. By this time, he was years removed from his groundbreaking work describing black hole formation (which eventually earned him the Nobel Prize in Physics in 2020), as well as his discovery of mathematical tilings, known as Penrose tilings, that are crucial to the study of quasicrystals—structures that are ordered but not periodic. With the publication of The Emperor's New Mind, Penrose dove headfirst into the theoretical realm of human consciousness.
In the book, Penrose leveraged Kurt Gödel's incompleteness theorems to argue (in very simplified terms) that because the human mind can step outside any formal system to make new discoveries, consciousness must be non-algorithmic. In place of computation, Penrose argues that human consciousness is fundamentally quantum in nature, and in The Emperor's New Mind, he lays out his case over hundreds of pages, detailing how the collapse of the wave function creates a moment of consciousness. However, similar to Hameroff's dilemma, Penrose admits in the closing pages that profound pieces of this quantum consciousness puzzle were still unknown:
I hold also to the hope that it is through science and mathematics that some profound advances in the understanding of mind must eventually come to light. There is an apparent dilemma here, but I have tried to show that there is a genuine way out.
When Hameroff first read the book in 1991, he believed he knew what Penrose was missing.
Hameroff dashed off a letter that included some of his research and offered to visit Penrose at Oxford during one of his conferences in England. Penrose agreed, and the two soon began probing the non-algorithmic problem of human consciousness. While the duo developed their quantum consciousness theory, Hameroff also brought together minds from across disciplines—including philosophy, neuroscience, cognitive science, math, and physics—to explore ideas surrounding consciousness in the form of a biannual Science of Consciousness Conference.
The University of Arizona Center for Consciousness Studies. Dr. Stuart Hameroff (left) and Sir Roger Penrose (right) giving a lecture on consciousness and the physics of the brain at the Sanford Consortium for Regenerative Medicine in La Jolla, California, January 2020.
And from its very inception, the conference broke new ground. In 1994, philosopher David Chalmers described how neuroscience was well-suited for figuring out how the brain controlled physical processes, but the “hard problem” was figuring out why humans (and all other living things) had subjective experiences.
Roughly two years after Chalmers gave this famous talk in a hospital auditorium in Tucson, Penrose and Hameroff revealed their own possible answer to this famous hard problem.
It wasn’t well-received.
● ● ●
Penrose and Hameroff revealed their Orchestrated Objective Reduction theory in the April 1996 issue of Mathematics and Computers in Simulation. It detailed how microtubules orchestrate consciousness from “objective reduction,” which describes (with complicated physics) Penrose’s thoughts on quantum gravity interaction and how the collapse of the wave function produces consciousness.
The idea has since faced nearly 30 years of criticism.
Famous theoretical physicist Stephen Hawking once wrote that Penrose fell for a kind of Holmesian fallacy, stating that "his argument seemed to be that consciousness is a mystery and quantum gravity is another mystery so they must be related." Another main criticism is that the brain's warm and noisy environment is ill-suited for the existence of any kind of quantum interaction. Read any scientific literature about quantum computers, and lab conditions are always extra pristine and kept near absolute zero (−273.15 degrees Celsius).
“You know how long I’ve been hearing the brain is warm and noisy?” Hameroff says, dismissing the criticism of the brain as too warm and wet for quantum processes to flourish. “I think our theory is sound from the physics, biology, and anesthesia standpoint.”
In a 2022 interview with New Scientist, Penrose admitted that the original Orch OR theory was “rough around the edges,” but maintains all these decades later that consciousness lies beyond computation and perhaps even beyond our current understanding of quantum mechanics. “People used to say it is completely crazy,” Penrose told New Scientist, “but I think people take it seriously now.”
A lot of that slow acceptance comes from a steady tide of research showing that biological systems contain evidence of quantum interactions. Since the publication of Orch OR, scientists have found evidence of quantum mechanics at work during photosynthesis, for example, and just this year, a study from researchers at Howard University detailed quantum effects involving microtubules. This research doesn’t prove Orch OR directly; that’d be like discovering water on an exoplanet and declaring it’s home to intelligent life—not an impossibility, but very far from a certainty. The findings at least have some critics reconsidering the role quantum mechanics plays, if not in consciousness, then at least the inner workings of the brain more broadly.
However, the rise of quantum biology in the past few decades also coincided with the explosion of AI and large language models (LLMs), which has brought new urgency to the question of consciousness—both human and artificial. Hameroff believes that an influx of money for consciousness research involving AI has only biased the field further into the “consciousness is a computation” camp.
“People have thrown in the towel on the ‘hard problem’ in my view and sold out to AI,” Hameroff says. “These LLMs . . . haven’t reached their limit yet but that doesn’t mean they’ll be conscious.”
● ● ●
As the years—and eventually decades—passed, Hameroff relentlessly defended Orch OR in scientific papers, at consciousness conferences, and perhaps most energetically on his X (formerly Twitter) feed, where he regularly participates in microtubule-related debates. But when asked if he likes the arguments, he answers pretty bluntly.
“Apparently I do because I keep doing it,” Hameroff says. “I’ve always been the contrarian but it’s not on purpose—I just follow my nose.”
And that scientific sense has led Hameroff to explore potentially profound implications when you consider that consciousness doesn't necessarily rely on the brain or even neurons. Earlier this year, Hameroff, along with colleagues at the University of Arizona and Japan's National Institute for Materials Science, co-authored a non-peer-reviewed article asking whether consciousness could predate life itself.
"It never made sense to me that life started and evolved for millions of years without genes—why would organisms develop cognitive machinery? What's their motivation?" Hameroff says, admitting that the theory traipses beyond the typical confines of science. "It's kind of spiritual—my spiritual friends like this a lot."
Hameroff admits that some of his ideas are “out there,” and even stops himself short when describing some ideas involving UFOs, saying “I’m already out on enough limbs.” While most of his ideas may have taken up residence in the fringes of mainstream science, it’s a place where he seems comfortable—at least for now. “I don’t think everybody’s going to agree . . . but I think [Orch OR] is going to be considered seriously,” Hameroff says.
Hameroff retired from his decades-long career as an anesthesiologist at the University of Arizona, and now he has even more time to dedicate to his lifelong fascination.
“I had a great career, and now I have another great career,” he says. “Plus I don’t have to get up so damn early.”
By Darren Orf – Contributing Editor. Darren lives in Portland, has a cat, and writes/edits about sci-fi and how our world works. You can find his previous stuff at Gizmodo and Paste if you look hard enough.
My life among the elementary particles has made me question whether reality exists at all.
By Vijay Balasubramanian
August 19, 2024
I remember the day when, at the age of 7, I realized that I wanted to figure out how reality worked. My mother and father had just taken us shopping at a market in Calcutta. On the way back home, we passed through a dimly lit arcade where a sidewalk bookseller was displaying his collection of slim volumes. I spotted an enigmatic cover with a man looking through a microscope; the words "Famous Scientists" were emblazoned on it, and when I asked my parents to get it for me, they agreed. As I read the chapters, I learned about discoveries by Antonie van Leeuwenhoek of the world of microscopic life, by Marie Curie about radioactivity, by Albert Einstein about relativity, and I thought, "My God, I could do this, too!" By the time I was 8, I was convinced that everything could be explained, and that I, personally, was going to do it.
Decades have passed, and I am now a theoretical physicist. My job is to work out how all of reality works, and I take that mission seriously, working on subjects ranging from the quantum theory of gravity to theoretical neuroscience. But I must confess to an increasing sense of uncertainty, even bafflement. I am no longer sure that working out what is “real” is possible, or that the reality that my 7-year-old self conceived of even exists, rather than being simply unknown. Perhaps reality is genuinely unknowable: Things exist and there is a truth about them, but we have no way of finding it out. Or perhaps the things we call “real” are called into being by their descriptions but do not independently exist.
The theories and concepts we build are like ladders we use to reach the truth.
I am steeped in the cultural traditions of physics, a field that is my calling and trade, and in the philosophies of India with which I was raised. As a physicist, I remain committed to a system of thought which posits that: (1) things we observe are definitely real, (2) the details may be unknown, (3) bounded resources may slow progress, but (4) physical inquiry can lead us to the real truth, as long as we have time and proceed with patience. On the other hand, I am also acutely aware of philosophical traditions to the effect that: (1) there may be a reality, but (2) measurements from the world are inherently misleading and partial, so that (3) the real may be formally indescribable, and that (4) we may not have a systematic way to reach the fundamentally real and true.
The idea that the real may be unknowable is very old. Consider the creation hymn in the Rig Veda, composed around 1500 to 1000 B.C., called the “Nasadiya Sukta.” This verse addresses fundamental questions of cosmology and the origin of the universe. In a beautiful translation by Juan Mascaró, it asks:
Who knows the truth? Who can tell whence and how arose this universe? The gods are later than its beginning: Who therefore knows whence comes this creation? Only he who sees in the highest heaven: He only knows whence came this universe and whether it was made or un-created. He only knows, or perhaps he knows not.
The poet who wrote this verse points out the fundamental problem of epistemology: We don’t know some things and may not even have any way of determining what we don’t know. Some questions may be intrinsically unanswerable. Or the answers may be contradictory. The “Isha Upanishad,” a Sanskrit text from the first millennium B.C., attempts to describe a reality that escapes common sense: “It moves and it moves not, it is far and it is near, it is inside and it is outside.”
A second problem is that perception fundamentally limits our ability to apprehend reality. A prosaic example is the perception of color. Eagles, turtles, bees, and shrimp sense more and different colors than we humans do; in effect, they see different worlds. Different perceptual realities can create different cognitive or conceptual realities.
Jorge Luis Borges pushed this idea to the limit in his story “Funes the Memorious,” about a man who acquires a sort of infinite perceptual capacity. Borges writes: “A circle drawn on a blackboard, a right triangle … are forms that we can fully grasp; … [Funes] could do the same with the stormy mane of a pony … with the changing fire and its innumerable ashes.” Funes’ superpower sounds wondrous, but there is a catch. Borges writes that Funes was “almost incapable of ideas of a general, Platonic sort. Not only was it difficult for him to comprehend that the generic symbol dog embraces so many unlike individuals of diverse size and form; it bothered him that the dog … (seen from the side) should have the same name as the dog … (seen from the front).” The precision of Funes’ perception of reality prevents him from thinking in the coarse-grained categories that we associate with thought and cognition—categories which, necessarily rough, texture our imagined reality.
The arbitrariness of categories was the subject of another Borges story, “The Analytical Language of John Wilkins,” in which Wilkins imagined dividing animals into those belonging to the Emperor, those that are crazy-acting, those painted with the finest brush made of camel hair, those which have just broken a vase, those that from a long way off look like flies, and other oddly specific groupings.
SKETCHY: Students and the public are often told the world consists of real particles called quarks and leptons. Yet these are only concepts that approximate a certain sketch of the structure of the world. Image by Fouad A. Saad / Shutterstock.
The philosopher Michel Foucault, in his book The Order of Things, drew inspiration from Borges’ stories to reflect on the nature of categorization. He suggested that the categories and concepts that we define control our bounded cognition and, in their intrinsic arbitrariness, structure the realities that reside in our minds.
Foucault’s analysis resonates with me because it reminds me of categories in physics. For example, we routinely tell our students and the public that the world consists of particles called quarks and leptons, along with subatomic force fields. Yet these are concepts that reify a certain approximate sketch of the structure of the world. Physicists once thought that these categories were fundamental and real, but we now understand them as necessarily inexact because they ignore the finer details that our instruments have just not been able to measure.
If our categories determine the reality we perceive, can having an idea call a reality into being? This question is a version of the “simulation hypothesis,” whereby all of reality as we know it is simply a simulation in some computational engine, or perhaps a version of the idealism of Plato, where things that we can conceive in the world echo imperfectly an ideal that is the true reality.
Consider, for example, Mymosh the Self-begotten, the tragic hero of a story by the Polish writer Stanislaw Lem in his volume The Cyberiad. Mymosh, a sentient machine self-organized by accident from a cosmic garbage heap, conjures up entire worlds and peoples just by imagining them. Are those people real, or are they all in his head? In fact, is there a difference? After all, Mymosh's imagination is a physical process—electrical impulses in his brain. So perhaps the people he imagines are real in some sense.
Some of these philosophical conundrums have concrete avatars in theoretical physics. Consider the notion of “duality” between physical theories. In this context, a “theory” means a mathematical description of a hypothetical universe, which we develop as a stepping-stone to understanding the actual universe in which we live. Two theories are said to be “dual,” or equivalent, if every observable in one matches some observable in the other. In other words, the two theories are different representations of the same physical system. Often in these dualities the elementary variables, or particles, of one theory become the collective variables, or lumps of particles, of the other, and vice versa. Dual theories scramble some of the most basic categories in physics, such as the difference between “bosonic” particles (any number of which can be in the same place at the same time) and “fermionic” particles (no two of which can be in the same place at the same time). These two kinds of particles have entirely different physical properties, so you would think that they could not be equivalent. But through dualities, it turns out that lumps of bosons can act like fermions, and vice versa. So, what’s the reality here?
Even more dramatic are dualities involving the force of gravity. On one side, we have theories of matter and all the forces except gravity; on the other, we have theories of matter and forces including gravity. These theories look very different. They are couched in terms of different forms of matter, different types of forces, and even different numbers of spacetime dimensions. Yet they describe precisely the same fundamental physics. So, what is “real” here? If one theory says the force of gravity operates and the other says it doesn’t, what do we conclude about the reality of gravity? Perhaps we can use my sketch to visualize the situation—we are able to tell stories about the corners of this diagram of possible worlds, where simplifications and approximations suffice, but the categories and concepts that we have been capable of, at least to date, fail to describe the interior where reality is actually located.
Sketch by Vijay Balasubramanian
Quantum mechanics makes things even more confusing. Quantum-mechanical states of a system can be combined, or superposed, in seemingly contradictory ways. So, the spin of an electron can be in a superposition of pointing up and down—an idea that might seem akin to suggesting that, say, a cat can be in a superposition of alive and dead. Does that mean these objects are in both states or neither state? Some theories suggest that measuring a cat (to continue with this metaphor) could cause it to collapse into a state of aliveness or deadness; others, evoking something like the many-worlds theory, suggest that the combined superposition continues through time. This is a casse-tête, a head breaker.
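In the standard textbook notation (a generic example, not tied to any one interpretation), such a spin superposition is written

$$|\psi\rangle = \alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,$$

and the Born rule assigns probability $|\alpha|^2$ to finding the spin up and $|\beta|^2$ to finding it down. The interpretations part ways over what, if anything, happens to the term that goes unobserved.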
Where does this leave me? Perhaps we can reconcile all these ideas by following Ludwig Wittgenstein, who proposed in his Tractatus Logico-Philosophicus, possibly referencing previous ideas of Søren Kierkegaard, that the theories and concepts we build are like ladders or nets we use to reach the truth, but we must throw them away upon getting there. I myself am trying to find my way by working in multiple fields, both physics and neuroscience, studying both the world and the mind that perceives it, because I believe that the quest to understand the reality of the universe must contend with the truncations imposed by the perceptual and cognitive limitations of the mind.
Should we bother seeking truths about the world in light of the doubts I have set out? I am hardly the first to ask. Socrates, according to Plato, remarked to Meno: “I would contend … that we will be better [people], braver and less idle, if we believe that one must search for the things one does not know, rather than if we believe that it is not possible to find out what we do not know and that we must not look for it.”
I am with Socrates on this one—his attitude is wise and pragmatic. If there is a reality and a truth about it, we will be better off and more likely to find it by searching, rather than assuming that it is not there. And even if the search, and the ladders we use to climb obstacles, do not lead us to the truth, we will enjoy the journey.
Lead image by Tasnuva Elahi; with photos by Vijay Balasubramanian
Not even the most advanced physics can reveal everything we want to know about the history and future of the cosmos, or about ourselves.
What good are the laws of physics if we cannot solve the equations that describe them?
That was the question that occurred to me while reading an article in The Guardian by Andrew Pontzen, a cosmologist at University College London who spends his days running computer simulations of black holes, stars, galaxies, and the birth and growth of the universe. His point was that he, and all the rest of us, are doomed to fail.
"Even if we imagine that humanity will eventually discover a 'theory of everything' that encompasses every individual particle and force, the explanatory value of that theory for the universe as a whole will probably be marginal," Pontzen wrote.
No matter how well we think we know the basic laws of physics and the ever-growing list of elementary particles, there is not enough computing power in the universe to keep track of them all. And we can never know enough to predict with confidence what happens when all those particles collide or otherwise interact. An extra decimal place in an estimate of a particle's position or velocity, say, can ripple through history and change the outcome billions of years later, through the so-called "butterfly effect" of chaos theory.
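A toy iteration makes the point concrete. The sketch below is a generic logistic-map demonstration, a stand-in for chaotic dynamics rather than anything from Pontzen's simulations; it runs the same deterministic rule on two starting values that differ by one part in a trillion:

```python
def logistic(x, r=4.0):
    """One step of the logistic map, a standard textbook example of chaos."""
    return r * x * (1.0 - x)

# Two trajectories whose starting values differ by one part in a trillion.
x, y = 0.400000000000, 0.400000000001
for step in range(1, 61):
    x, y = logistic(x), logistic(y)
    if step % 15 == 0:
        print(f"step {step:2d}: difference = {abs(x - y):.3e}")

# The gap grows roughly exponentially; after a few dozen steps the two runs
# are completely decorrelated, even though the rule itself is exact and
# deterministic -- the "butterfly effect" in miniature.
```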
Consider something as simple as, say, the Earth's orbit around the sun, Pontzen says. Left to its own devices, our world, or its crispy fossil, would stay in the same orbit forever. But over the vastness of cosmic time, gravitational nudges from the other planets of the solar system can alter its course. Depending on how precisely we characterize those nudges, and the material being nudged, gravitational calculations can produce wildly divergent predictions about where Earth and its siblings will be hundreds of millions of years from now.
As a result, in practice we can predict neither the future nor the past. Cosmologists like Pontzen can hedge their bets by zooming out and considering the big picture: large agglomerations of material, such as gas clouds, or systems whose collective behavior is predictable and does not depend on individual variations. We can boil pasta without monitoring every water molecule.
But there is a risk in assuming too much order. Take an anthill, Pontzen suggests. The movements of any single ant look random. But viewed as a whole, the colony seems to seethe with purpose and organization. It is tempting to see a collective consciousness at work, Pontzen writes, but "they are just lone ants" following simple rules. "The sophistication emerges from the large number of individuals following those rules," he notes, quoting the Princeton physicist Philip W. Anderson: "More is different."
In cosmology, a plausible account of the universe's history has been assembled from simple assumptions about things we know nothing about (dark matter and dark energy), which nonetheless make up 95 percent of the universe. This "dark side" of the universe supposedly interacts with the 5 percent of known matter, the atoms, only through gravity. After the Big Bang, the story goes, pools of dark matter formed and pulled in atomic matter, which condensed into clouds, which heated up and turned into stars and galaxies. As the universe expanded, the dark energy permeating it expanded too, and began pushing the galaxies apart ever faster.
But this narrative falters right at the start, in the first few hundred million years, when stars, galaxies, and black holes were forming in a messy, poorly understood process that researchers call "gastrophysics."
Its mechanics are astonishingly hard to predict, involving magnetic fields, the nature and composition of the first stars, and other unknown effects. "Certainly nobody can do that right now, starting simply from the reliable laws of physics, regardless of how much computing power is on offer," Pontzen said by email.
Recent data from the James Webb Space Telescope, revealing galaxies and black holes that seem too massive and too early in the universe's history to be explained by the "standard model" of cosmology, appear to compound the problem. Is that enough to send cosmologists back to their drawing boards?
Pontzen is not convinced that the time has come for cosmologists to abandon their hard-won model of the universe. Cosmic history is simply too complex to simulate in detail. Our sun alone, he points out, contains some 10^57 atoms, and there are trillions upon trillions of such stars out there.
Half a century ago, astronomers discovered that the universe, with its stars and galaxies, was awash in microwave radiation left over from the Big Bang. Mapping that radiation allowed them to build a picture of the infant cosmos as it existed just 380,000 years after the beginning of time.
In principle, all of history could be encoded there in the subtle swirls of primordial energy. In practice, it is impossible to read the unfolding of time in those microwaves well enough to discern the rise and fall of the dinosaurs, the dawn of the atomic age, or the appearance of a question-mark shape in the sky billions of years later. Nearly 14 billion years of quantum uncertainty, accidents, and cosmic debris stand between then and now.
At last count, physicists had identified some 17 types of elementary particles making up the physical universe, and at least four ways they interact: through gravity, electromagnetism, and the so-called strong and weak nuclear forces.
The cosmic wager Western science has taken on is to show that these four forces, and perhaps others yet to be discovered, acting on a vast collection of atoms and their constituents, are enough to explain the stars, the rainbow, the flowers, ourselves, and indeed the existence of the universe as a whole. It is an enormous intellectual and philosophical mountain to climb.
In fact, for all our faith in materialism, Pontzen says, we may never know whether we have succeeded. "Our origins are written in the sky," he said, "and we are only just learning to read them."
A new paper explores how the opinions of an electorate may be reflected in a mathematical model ‘inspired by models of simple magnetic systems’
Date: October 8, 2021
Source: University at Buffalo
Summary: A study leverages concepts from physics to model how campaign strategies influence the opinions of an electorate in a two-party system.
A study in the journal Physica A leverages concepts from physics to model how campaign strategies influence the opinions of an electorate in a two-party system.
Researchers created a numerical model that describes how external influences, modeled as a random field, shift the views of potential voters as they interact with each other in different political environments.
The model accounts for the behavior of conformists (people whose views align with the views of the majority in a social network); contrarians (people whose views oppose the views of the majority); and inflexibles (people who will not change their opinions).
“The interplay between these behaviors allows us to create electorates with diverse behaviors interacting in environments with different levels of dominance by political parties,” says first author Mukesh Tiwari, PhD, associate professor at the Dhirubhai Ambani Institute of Information and Communication Technology.
“We are able to model the behavior and conflicts of democracies, and capture different types of behavior that we see in elections,” says senior author Surajit Sen, PhD, professor of physics in the University at Buffalo College of Arts and Sciences.
Sen and Tiwari conducted the study with Xiguang Yang, a former UB physics student. Jacob Neiheisel, PhD, associate professor of political science at UB, provided feedback to the team, but was not an author of the research. The study was published online in Physica A in July and will appear in the journal’s Nov. 15 volume.
The model described in the paper has broad similarities to the random field Ising model, and “is inspired by models of simple magnetic systems,” Sen says.
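To give a feel for what a model in this family looks like, here is a minimal sketch of a random-field Ising-style opinion dynamic. The agent types follow the paper's description, but the update rule, parameter names, and values are illustrative assumptions rather than the authors' actual code:

```python
import random

def simulate(n=1000, steps=50_000, p_contrarian=0.10, p_inflexible=0.05,
             field_strength=0.5, seed=0):
    """Toy random-field Ising-style opinion model (a sketch, not the paper's code).

    Each agent holds opinion +1 or -1. At each step a random agent looks at a
    few randomly chosen peers (a fully mixed population, for simplicity) plus
    an external "campaign" field, then updates: conformists align with the
    combined signal, contrarians do the opposite, inflexibles never change.
    """
    rng = random.Random(seed)
    opinions = [rng.choice((-1, 1)) for _ in range(n)]
    kinds = [("inflexible" if rng.random() < p_inflexible else
              "contrarian" if rng.random() < p_contrarian else
              "conformist") for _ in range(n)]
    # Random external field: local influences of varying direction and impact.
    field = [field_strength * rng.choice((-1, 1)) for _ in range(n)]

    for _ in range(steps):
        i = rng.randrange(n)
        if kinds[i] == "inflexible":
            continue
        peers = [opinions[rng.randrange(n)] for _ in range(4)]
        signal = sum(peers) + field[i]
        if signal != 0:
            majority = 1 if signal > 0 else -1
            opinions[i] = majority if kinds[i] == "conformist" else -majority

    return sum(opinions) / n  # net "vote share" between -1 and +1

if __name__ == "__main__":
    print(simulate())
```

In a sketch like this, a short, high-impact campaign would correspond to temporarily strengthening the field term for some fraction of agents; how far the net opinion moves then depends on how strongly the existing local majorities resist it.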
The team used this model to explore a variety of scenarios involving different types of political environments and electorates.
Among key findings, as the authors write in the abstract: “In an electorate with only conformist agents, short-duration high-impact campaigns are highly effective. … In electorates with both conformist and contrarian agents and varying level(s) of dominance due to local factors, short-term campaigns are effective only in the case of fragile dominance of a single party. Strong local dominance is relatively difficult to influence and long-term campaigns with strategies aimed to impact local level politics are seen to be more effective.”
“I think it’s exciting that physicists are thinking about social dynamics. I love the big tent,” Neiheisel says, noting that one advantage of modeling is that it could enable researchers to explore how opinions might change over many election cycles — the type of longitudinal data that’s very difficult to collect.
Mathematical modeling has some limitations: “The real world is messy, and I think we should embrace that to the extent that we can, and models don’t capture all of this messiness,” Neiheisel says.
But Neiheisel was excited when the physicists approached him to talk about the new paper. He says the model provides “an interesting window” into processes associated with opinion dynamics and campaign effects, accurately capturing a number of effects in a “neat way.”
“The complex dynamics of strongly interacting, nonlinear and disordered systems have been a topic of interest for a long time,” Tiwari says. “There is a lot of merit in studying social systems through mathematical and computational models. These models provide insight into short- and long-term behavior. However, such endeavors can only be successful when social scientists and physicists come together to collaborate.”
Journal Reference:
Mukesh Tiwari, Xiguang Yang, Surajit Sen. Modeling the nonlinear effects of opinion kinematics in elections: A simple Ising model with random field based study. Physica A: Statistical Mechanics and its Applications, 2021; 582: 126287 DOI: 10.1016/j.physa.2021.126287
Researchers Syukuro Manabe, Klaus Hasselmann, and Giorgio Parisi will share the prize of 10 million Swedish kronor
This year's Nobel Prize in Physics was dedicated to the study of complex systems, among them those that make it possible to understand the climate changes affecting our planet. The choice puts a definitive stamp of consensus on climate science.
The researchers Syukuro Manabe, of the United States, and Klaus Hasselmann, of Germany, were honored specifically for modeling Earth's climate and making predictions about global warming. The other half of the prize went to Giorgio Parisi, of Italy, who revealed hidden patterns in complex disordered materials, from atomic to planetary scales, an essential contribution to the theory of complex systems that is also relevant to the study of climate.
"Many people think that physics deals with simple phenomena, such as the Earth's perfectly elliptical orbit around the Sun or atoms in crystalline structures," said Thors Hans Hansson, a member of the Nobel committee, at the press conference announcing the choice.
"But physics is much more than that. One of the basic tasks of physics is to use fundamental theories of matter to explain complex phenomena and processes, such as the behavior of materials and how the Earth's climate develops. That demands deep intuition for which structures and which progressions are essential, as well as mathematical ingenuity to develop the models and theories that describe them, things at which this year's laureates excel."
"I think it is urgent that we take very strong decisions and move at a strong pace, because we are in a situation where we can have positive feedback, and that can accelerate the rise in temperature," said Giorgio Parisi, one of the winners, at the press conference announcing the prize. "It is clear that for future generations we have to act now, and very quickly."
HOW THE NOBEL WINNER IS CHOSEN
The Nobel prizes trace back to the death of the Swedish chemist Alfred Nobel (1833-1896), the inventor of dynamite. In 1895, in his last will, Nobel stipulated that his fortune should go toward establishing a prize, a decision his family contested. The first prize was not awarded until 1901.
The process of choosing the physics laureate begins the year before the award. In September, the Nobel Committee for Physics sends out invitations (about 3,000) requesting nominations of names deserving the honor. Responses must be submitted by January 31.
Nominations may be made by members of the Royal Swedish Academy of Sciences; members of the Nobel Committee for Physics; previous Nobel laureates in physics; professors of physics at universities and institutes of technology in Sweden, Denmark, Finland, Iceland, and Norway, and at the Karolinska Institute in Stockholm; holders of comparable chairs at at least six other universities (but normally at hundreds of them) selected by the Academy of Sciences so as to ensure an appropriate distribution across continents and fields of knowledge; and other scientists the Academy deems fit to invite.
Self-nominations are not accepted.
A review of the hundreds of nominated names then begins, with consultation of experts and the preparation of reports to narrow the field. Finally, in October, the Academy decides by majority vote who will receive the recognition.
In 2019, the prize went to James Peebles, Michel Mayor, and Didier Queloz: Peebles helped explain how the universe evolved after the Big Bang, and Mayor and Queloz discovered an exoplanet (a planet outside the Solar System) orbiting a Sun-like star.
Laser research was honored in 2018, with the prize going to Arthur Ashkin, Donna Strickland, and Gérard Mourou.
Going back further, the prize has been held by Max Planck (1918), for laying the foundations of quantum physics, and Albert Einstein (1921), for the discovery of the law of the photoelectric effect. Niels Bohr (1922), for his contributions to the understanding of atomic structure, and Paul Dirac and Erwin Schrödinger (1933), for developing new versions of quantum theory, were also honored.
Two new books on quantum theory could not, at first glance, seem more different. The first, Something Deeply Hidden, is by Sean Carroll, a physicist at the California Institute of Technology, who writes, “As far as we currently know, quantum mechanics isn’t just an approximation of the truth; it is the truth.” The second, Einstein’s Unfinished Revolution, is by Lee Smolin of the Perimeter Institute for Theoretical Physics in Ontario, who insists that “the conceptual problems and raging disagreements that have bedeviled quantum mechanics since its inception are unsolved and unsolvable, for the simple reason that the theory is wrong.”
Given this contrast, one might expect Carroll and Smolin to emphasize very different things in their books. Yet the books mirror each other, down to chapters that present the same quantum demonstrations and the same quantum parables. Carroll and Smolin both agree on the facts of quantum theory, and both gesture toward the same historical signposts. Both consider themselves realists, in the tradition of Albert Einstein. They want to finish his work of unifying physical theory, making it offer one coherent description of the entire world, without ad hoc exceptions to cover experimental findings that don’t fit. By the end, both suggest that the completion of this project might force us to abandon the idea of three-dimensional space as a fundamental structure of the universe.
But with Carroll claiming quantum mechanics as literally true and Smolin claiming it as literally false, there must be some underlying disagreement. And of course there is. Traditional quantum theory describes things like electrons as smeary waves whose measurable properties only become definite in the act of measurement. Sean Carroll is a supporter of the “Many Worlds” interpretation of this theory, which claims that the multiple measurement possibilities all simultaneously exist. Some proponents of Many Worlds describe the existence of a “multiverse” that contains many parallel universes, but Carroll prefers to describe a single, radically enlarged universe that contains all the possible outcomes running alongside each other as separate “worlds.” But the trouble, says Lee Smolin, is that in the real world as we observe it, these multiple possibilities never appear — each measurement has a single outcome. Smolin takes this fact as evidence that quantum theory must be wrong, and argues that any theory that supersedes quantum mechanics must do away with these multiple possibilities.
So how can such similar books, informed by the same evidence and drawing upon the same history, reach such divergent conclusions? Well, anyone who cares about politics knows that this type of informed disagreement happens all the time, especially, as with Carroll and Smolin, when the disagreements go well beyond questions that experiments could possibly resolve.
But there is another problem here. The question that both physicists gloss over is that of just how much we should expect to get out of our best physical theories. This question pokes through the foundation of quantum mechanics like rusted rebar, often luring scientists into arguments over parables meant to illuminate the obscure.
With this in mind, let’s try a parable of our own, a cartoon of the quantum predicament. In the tradition of such parables, it’s a story about knowing and not knowing.
We fade in on a scientist interviewing for a job. Let’s give this scientist a name, Bobby Alice, that telegraphs his helplessness to our didactic whims. During the part of the interview where the Reality Industries rep asks him if he has any questions, none of them are answered, except the one about his starting salary. This number is high enough to convince Bobby the job is right for him.
Knowing so little about Reality Industries, everything Bobby sees on his first day comes as a surprise, starting with the campus’s extensive security apparatus of long gated driveways, high tree-lined fences, and all the other standard X-Files elements. Most striking of all is his assigned building, a structure whose paradoxical design merits a special section of the morning orientation. After Bobby is given his project details (irrelevant for us), black-suited Mr. Smith–types tell him the bad news: So long as he works at Reality Industries, he may visit only the building’s fourth floor. This, they assure him, is standard, for all employees but the top executives. Each project team has its own floor, and the teams are never allowed to intermix.
The instructors follow this with what they claim is the good news. Yes, they admit, this tightly tiered approach led to worker distress in the old days, back on the old campus, where the building designs were brutalist and the depression rates were high. But the new building is designed to subvert such pressures. The trainers lead Bobby up to the fourth floor, up to his assignment, through a construction unlike any research facility he has ever seen. The walls are translucent and glow on all sides. So do the floor and ceiling. He is guided to look up, where he can see dark footprints roving about, shadows from the project team on the next floor. “The goal here,” his guide remarks, “is to encourage a sort of cultural continuity, even if we can’t all communicate.”
Over the next weeks, Bobby Alice becomes accustomed to the silent figures floating above him. Eventually, he comes to enjoy the fourth floor’s communal tracking of their fifth-floor counterparts, complete with invented names, invented personalities, invented purposes. He makes peace with the possibility that he is himself a fantasy figure for the third floor.
Then, one day, strange lights appear in a corner of the ceiling.
Naturally phlegmatic, Bobby Alice simply takes notes. But others on the fourth floor are noticeably less calm. The lights seem not to follow any known standard of the physics of footfalls, with lights of different colors blinking on and off seemingly at random, yet still giving the impression not merely of a constructed display but of some solid fixture in the fifth-floor commons. Some team members, formerly of the same anti-philosophical bent as most hires, now spend their coffee breaks discussing increasingly esoteric metaphysics. Productivity declines.
Meanwhile, Bobby has set up a camera to record data. As a work-related extracurricular, he is able in the following weeks to develop a general mathematical description that captures an unexpected order in the flashing lights. This description does not predict exactly which lights will blink when, but, by telling a story about what’s going on between the frames captured by the camera, he can predict what sorts of patterns are allowed, how often, and in what order.
Does this solve the mystery? Apparently it does. Conspiratorial voices on the fourth floor go quiet. The “Alice formalism” immediately finds other applications, and Reality Industries gives Dr. Alice a raise. They give him everything he could want — everything except access to the fifth floor.
In time, Bobby Alice becomes a fourth-floor legend. Yet as the years pass — and pass with the corner lights as an apparently permanent fixture — new employees occasionally massage the Alice formalism to unexpected ends. One worker discovers that he can rid the lights of their randomness if he imagines them as the reflections from a tank of iridescent fish, with the illusion of randomness arising in part because it’s a 3-D projection on a 2-D ceiling, and in part because the fish swim funny. The Alice formalism offers a series of color maps showing the different possible light patterns that might appear at any given moment, and another prominent interpreter argues, with supposed sincerity (although it’s hard to tell), that actually not one but all of the maps occur at once — each in parallel branching universes generated by that spooky alien light source up on the fifth floor.
As the interpretations proliferate, Reality Industries management occasionally finds these side quests to be a drain on corporate resources. But during the Alice decades, the fourth floor has somehow become the company’s most productive. Why? Who knows. Why fight it?
The history of quantum mechanics, being a matter of record, obviously has more twists than any illustrative cartoon can capture. Readers interested in that history are encouraged to read Adam Becker’s recent retelling, What Is Real?, which was reviewed in these pages (“Make Physics Real Again,” Winter 2019). But the above sketch is one attempt to capture the unusual flavor of this history.
Like the fourth-floor scientists in our story who, sight unseen, invented personas for all their fifth-floor counterparts, nineteenth-century physicists are often caricatured as having oversold their grasp on nature’s secrets. But longstanding puzzles — puzzles involving chemical spectra and atomic structure rather than blinking ceiling lights — led twentieth-century pioneers like Niels Bohr, Wolfgang Pauli, and Werner Heisenberg to invent a new style of physical theory. As with the formalism of Bobby Alice, mature quantum theories in this tradition were abstract, offering probabilistic predictions for the outcomes of real-world measurements, while remaining agnostic about what it all meant, about what fundamental reality undergirded the description.
From the very beginning, a counter-tradition associated with names like Albert Einstein, Louis de Broglie, and Erwin Schrödinger insisted that quantum models must ultimately capture something (but probably not everything) about the real stuff moving around us. This tradition gave us visions of subatomic entities as lumps of matter vibrating in space, with the sorts of orbital visualizations one first sees in high school chemistry.
But once the various quantum ideas were codified and physicists realized that they worked remarkably well, most research efforts turned away from philosophical agonizing and toward applications. The second generation of quantum theorists, unburdened by revolutionary angst, replaced every part of classical physics with a quantum version. As Max Planck famously wrote, “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die.” Since this inherited framework works well enough to get new researchers started, the question of what it all means is usually left alone.
Of course, this question is exactly what most non-experts want answered. For past generations, books with titles like The Tao of Physics and Quantum Reality met this demand, with discussions that wildly mixed conventions of scientific reportage with wisdom literature. Even once quantum theories themselves became familiar, interpretations of them were still new enough to be exciting.
Today, even this thrill is gone. We are now in the part of the story where no one can remember what it was like not to have the blinking lights on the ceiling. Despite the origins of quantum theory as an empirical framework — a container flexible enough to wrap around whatever surprises experiments might uncover — its success has led today’s theorists to regard it as fundamental, a base upon which further speculations might be built.
Regaining that old feeling of disorientation now requires some extra steps.
As interlopers in an ongoing turf war, modern explainers of quantum theory must reckon both with arguments like Niels Bohr’s, which emphasize the theory’s limits on knowledge, and with criticisms like Albert Einstein’s, which demand that the theory represent the real world. Sean Carroll’s Something Deeply Hidden pitches itself to both camps. The title stems from an Einstein anecdote. As “a child of four or five years,” Einstein was fascinated by his father’s compass. He concluded, “Something deeply hidden had to be behind things.” Carroll agrees with this, but argues that the world at its roots is quantum. We only need courage to apply that old Einsteinian realism to our quantum universe.
Carroll is a prolific popularizer — alongside his books, his blog, and his Twitter account, he has also recorded three courses of lectures for general audiences, and for the last year has released a weekly podcast. His new book is appealingly didactic, providing a sustained defense of the Many Worlds interpretation of quantum mechanics, first offered by Hugh Everett III as a graduate student in the 1950s. Carroll maintains that Many Worlds is just quantum mechanics, and he works hard to convince us that supporters aren’t merely perverse. In the early days of electrical research, followers of James Clerk Maxwell were called Maxwellians, but today all physicists are Maxwellians. If Carroll’s project pans out, someday we’ll all be Everettians.
Standard applications of quantum theory follow a standard logic. A physical system is prepared in some initial condition, and modeled using a mathematical representation called a “wave function.” Then the system changes in time, and these changes, governed by the Schrödinger equation, are tracked in the system’s wave function. But when we interpret the wave function in order to generate a prediction of what we will observe, we get only probabilities of possible experimental outcomes.
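To make the recipe concrete, here is a minimal toy illustration (my own sketch in NumPy, not an example from the book): prepare a wave function for a single spin, evolve it under the Schrödinger equation with a simple Hamiltonian, and read off measurement probabilities with the Born rule.

```python
import numpy as np

# Wave function of a spin-1/2 system: a 2-component complex vector.
psi0 = np.array([1.0, 0.0], dtype=complex)           # prepared "spin up"

# Toy Hamiltonian (the Pauli-x matrix), in units where hbar = 1.
H = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)

# Schrodinger evolution: psi(t) = exp(-i H t) psi(0), via eigendecomposition.
t = 0.6
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
psi_t = U @ psi0

# Interpreting the wave function: the recipe yields only probabilities.
p_up, p_down = np.abs(psi_t) ** 2
print(f"P(up) = {p_up:.3f}, P(down) = {p_down:.3f}")  # the two always sum to 1
```

The code tells you how likely each outcome is, and nothing more; which outcome actually occurs, and what the wave function "is" in between, is exactly what the interpretations fight over.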
Carroll insists that this quantum recipe isn’t good enough. It may be sufficient if we care only to predict the likelihood of various outcomes for a given experiment, but it gives us no sense of what the world is like. “Quantum mechanics, in the form in which it is currently presented in physics textbooks,” he writes, “represents an oracle, not a true understanding.”
Most of the quantum mysteries live in the process of measurement. Questions of exactly how measurements force determinate outcomes, and of exactly what we sweep under the rug with that bland word “measurement,” are known collectively in quantum lore as the “measurement problem.” Quantum interpretations are distinguished by how they solve this problem. Usually, solutions involve rejecting some key element of common belief. In the Many Worlds interpretation, the key belief we are asked to reject is that of one single world, with one single future.
The version of the Many Worlds solution given to us in Something Deeply Hidden sidesteps the history of the theory in favor of a logical reconstruction. What Carroll enunciates here is something like a quantum minimalism: “There is only one wave function, which describes the entire system we care about, all the way up to the ‘wave function of the universe’ if we’re talking about the whole shebang.”
Putting this another way, Carroll is a realist about the quantum wave function, and suggests that this mathematical object simply is the deep-down thing, while everything else, from particles to planets to people, is merely its downstream effect. (Sorry, people!) The world of our experience, in this picture, is just a tiny sliver of the real one, where all possible outcomes — all outcomes for which the usual quantum recipe assigns a non-zero probability — continue to exist, buried somewhere out of view in the universal wave function. Hence the "Many Worlds" moniker. What we experience as a single world, chock-full of foreclosed opportunities, Many Worlders understand as but one swirl of mist foaming off an ever-breaking wave.
The position of Many Worlds may not yet be common, but neither is it new. Carroll, for his part, is familiar enough with it to be blasé, presenting it in the breezy tone of a man with all the answers. The virtue of his presentation is that whether or not you agree with him, he gives you plenty to consider, including expert glosses on ongoing debates in cosmology and field theory. But Something Deeply Hidden still fails where it matters. “If we train ourselves to discard our classical prejudices, and take the lessons of quantum mechanics at face value,” Carroll writes near the end, “we may eventually learn how to extract our universe from the wave function.”
But shouldn’t it be the other way around? Why should we have to work so hard to “extract our universe from the wave function,” when the wave function itself is an invention of physicists, not the inerrant revelation of some transcendental truth? Interpretations of quantum theory live or die on how well they are able to explain its success, and the most damning criticism of the Many Worlds interpretation is that it’s hard to see how it improves on the standard idea that probabilities in quantum theory are just a way to quantify our expectations about various measurement outcomes.
Carroll argues that, in Many Worlds, probabilities arise from self-locating uncertainty: “You know everything there is to know about the universe, except where you are within it.” During a measurement, “a single world splits into two, and there are now two people where I used to be just one.” “For a brief while, then, there are two copies of you, and those two copies are precisely identical. Each of them lives on a distinct branch of the wave function, but neither of them knows which one it is on.” The job of the physicist is then to calculate the chance that he has ended up on one branch or another — which produces the probabilities of the various measurement outcomes.
If, alongside Carroll, you convince yourself that it is reasonable to suppose that these worlds exist outside our imaginations, you still might conclude, as he does, that “at the end of the day it doesn’t really change how we should go through our lives.” This conclusion comes in a chapter called “The Human Side,” where Carroll also dismisses the possibility that humans might have a role in branching the wave function, or indeed that we have any ultimate agency: “While you might be personally unsure what choice you will eventually make, the outcome is encoded in your brain.” These views are rewarmed arguments from his previous book, The Big Picture, which I reviewed in these pages (“Pop Goes the Physics,” Spring 2017) and won’t revisit here.
Although this book is unlikely to turn doubters of Many Worlds into converts, it is a credit to Carroll that he leaves one with the impression that the doctrine is probably consistent, whether or not it is true. But internal consistency has little power against an idea that feels unacceptable. For doctrines like Many Worlds, with key claims that are in principle unobservable, some of us will always want a way out.
Lee Smolin is one such seeker for whom Many Worlds realism — or “magical realism,” as he likes to call it — is not real enough. In his new book, Einstein’s Unfinished Revolution, Smolin assures us that “however weird the quantum world may be, it need not threaten anyone’s belief in commonsense realism. It is possible to be a realist while living in the quantum universe.” But if you expect “commonsense realism” by the end of his book, prepare for a surprise.
Smolin is less congenial than Carroll, with a brooding vision of his fellow scientists less as fellow travelers and more as members of an “orthodoxy of the unreal,” as Smolin stirringly puts it. Smolin is best known for his role as doomsayer about string theory — his 2006 book The Trouble with Physics functioned as an entertaining jeremiad. But while his books all court drama and are never boring, that often comes at the expense of argumentative care.
Einstein’s Unfinished Revolution can be summarized briefly. Smolin states early on that quantum theory is wrong: It gives probabilities for many and various measurement outcomes, whereas the world of our observation is solid and singular. Nevertheless, quantum theory can still teach us important lessons about nature. For instance, Smolin takes at face value the claim that entangled particles far apart in the universe can communicate information to each other instantaneously, unbounded by the speed of light. This ability of quantum entities to be correlated while separated in space is technically called “nonlocality,” which Smolin enshrines as a fundamental principle. And while he takes inspiration from an existing nonlocal quantum theory, he rejects it for violating other favorite physical principles. Instead, he elects to redo physics from scratch, proposing partial theories that would allow his favored ideals to survive.
This is, of course, an insane act of hubris. But no red line separates the crackpot from the visionary in theoretical physics. Because Smolin presents himself as a man up against the status quo, his books are as much autobiography as popular science, with personality bleeding into intellectual commitments. Smolin’s last popular book, Time Reborn (2013), showed him changing his mind about the nature of time after doing bedtime with his son. This time around, Smolin tells us in the preface about how he came to view the universe as nonlocal:
I vividly recall that when I understood the proof of the theorem, I went outside in the warm afternoon and sat on the steps of the college library, stunned. I pulled out a notebook and immediately wrote a poem to a girl I had a crush on, in which I told her that each time we touched there were electrons in our hands which from then on would be entangled with each other. I no longer recall who she was or what she made of my poem, or if I even showed it to her. But my obsession with penetrating the mystery of nonlocal entanglement, which began that day, has never since left me.
The book never seriously questions whether the arguments for nonlocality should convince us; Smolin’s experience of conviction must stand in for our own. These personal detours are fascinating, but do little to convince skeptics.
Once you start turning the pages of Einstein’s Unfinished Revolution, ideas fly by fast. First, Smolin gives us a tour of the quantum fundamentals — entanglement, nonlocality, and all that. Then he provides a thoughtful overview of solutions to the measurement problem, particularly those of David Bohm, whose complex legacy he lingers over admiringly. But by the end, Smolin abandons the plodding corporate truth of the scientist for the hope of a private perfection.
Many physicists have never heard of Bohm’s theory, and some who have still conclude that it’s worthless. Bohm attempted to salvage something like the old classical determinism, offering a way to understand measurement outcomes as caused by the motion of particles, which in turn are guided by waves. This conceptual simplicity comes at the cost of brazen nonlocality, and an explicit dualism of particles and waves. Einstein called the theory a “physical fairy-tale for children”; Robert Oppenheimer declared about Bohm that “we must agree to ignore him.”
Bohm’s theory is important to Smolin mainly as a prototype, to demonstrate that it’s possible to situate quantum mechanics within a single world — unlike Many Worlds, which Smolin seems to dislike less for physical than for ethical reasons: “It seems to me that the Many Worlds Interpretation offers a profound challenge to our moral thinking because it erases the distinction between the possible and the actual.” In his survey, Smolin sniffs each interpretation as he passes it, looking for a whiff of the real quantum story, which will preserve our single universe while also maintaining the virtues of all the partial successes.
When Smolin finally explains his own idiosyncratic efforts, his methods — at least in the version he has dramatized here — resemble some wild descendant of Cartesian rationalism. From his survey, Smolin lists the principles he would expect from an acceptable alternative to quantum theory. He then reports back to us on the incomplete models he has found that will support these principles.
Smolin’s tour leads us all over the place, from a review of Leibniz’s Monadology (“shockingly modern”), to a new law of physics he proposes (the “principle of precedence”), to a solution to the measurement problem involving nonlocal interactions among all similar systems everywhere in the universe. Smolin concludes with the grand claim that “the universe consists of nothing but views of itself, each from an event in its history.” Fine. Maybe there’s more to these ideas than a casual reader might glean, but after a few pages of sentences like, “An event is something that happens,” hope wanes.
For all their differences, Carroll and Smolin similarly insist that, once the basic rules governing quantum systems are properly understood, the rest should fall into place. “Once we understand what’s going on for two particles, the generalization to 10⁸⁸ particles is just math,” Carroll assures us. Smolin is far less certain that physics is on the right track, but he, too, believes that progress will come with theoretical breakthroughs. “I have no better answer than to face the blank notebook,” Smolin writes. This was the path of Bohr, Einstein, Bohm and others. “Ask yourself which of the fundamental principles of the present canon must survive the coming revolution. That’s the first page. Then turn again to a blank page and start thinking.”
Physicists are always tempted to suppose that successful predictions prove that a theory describes how the world really is. And why not? Denying that quantum theory captures something essential about the character of those entities outside our heads that we label with words like “atoms” and “molecules” and “photons” seems far more perverse, as an interpretive strategy, than any of the mainstream interpretations we’ve already discussed. Yet one can admit that something is captured by quantum theory without jumping immediately to the assertion that everything must flow from it. An invented language doesn’t need to be universal to be useful, and it’s smart to keep on honing tools for thinking that have historically worked well.
As an old mentor of mine, John P. Ralston, wrote in his book How to Understand Quantum Mechanics, “We don’t know what nature is, and it is not clear whether quantum theory fully describes it. However, it’s not the worst thing. It has not failed yet.” This seems like the right attitude to take. Quantum theory is a fabulously rich subject, but the fact that it has not failed yet does not allow us to generalize its results indefinitely.
There is value in the exercises that Carroll and Smolin perform, in their attempts to imagine principled and orderly universes, to see just how far one can get with a straitjacketed imagination. But by assuming that everything is captured by the current version of quantum theory, Carroll risks credulity, foreclosing genuinely new possibilities. And by assuming that everything is up for grabs, Smolin risks paranoia, ignoring what is already understood.
Perhaps the agnostics among us are right to settle in as permanent occupants of Reality Industries’ fourth floor. We can accept that scientists have a role in creating stories that make sense, while also appreciating the possibility that the world might not be made of these stories. To the big, unresolved questions — questions about where randomness enters in the measurement process, or about how much of the world our physical theories might capture — we can offer only a laconic who knows? The world is filled with flashing lights, and we should try to find some order in them. Scientific success often involves inventing a language that makes the strange sensible, warping intuitions along the way. And while this process has allowed us to make progress, we should never let our intuitions get so strong that we stop scanning the ceiling for unexpected dazzlements.
David Kordahl is a graduate student in physics at Arizona State University. David Kordahl, “Inventing the Universe,” The New Atlantis, Number 61, Winter 2020, pp. 114-124.
Marcella Duarte, contributing to Tilt – 05/01/2021, 5:02 PM
It felt like 2020 would never end, but, technically, it went by faster than usual. And this year will be even quicker. The reason? The Earth has been “spinning” unusually fast lately. Because of that, we may need to move our clocks forward, but you won’t even notice.
Last year saw the shortest day ever recorded since measurements began, 50 years ago. On July 19, 2020, the planet completed its rotation 1.4602 milliseconds faster than the usual 86,400 seconds (24 hours).
The previous record for the shortest day had been set in 2005, and it was beaten 28 times in 2020. And this year should be the fastest on record, because the days of 2021 are expected to be, on average, 0.5 millisecond shorter than normal.
These small changes in the length of the day were only discovered after the development of ultra-precise atomic clocks in the 1960s. At first, it was found that the speed of the Earth’s rotation, the spin around its own axis that produces day and night, was slowing down year after year.
Since the 1970s, 27 seconds have had to be “added” to International Atomic Time to keep our timekeeping in sync with the slower planet. This is the so-called “leap second”.
These corrections always happen at the end of a half-year, on December 31 or June 30. This keeps clock time aligned so that the Sun is always at the middle of the sky at noon.
The last time this happened was on New Year’s Eve of 2016, when clocks around the world paused for one second to “wait” for the Earth.
But recently the opposite has been happening: the rotation is speeding up. And we may need to “jump” time forward to “catch up” with the planet’s motion. It would be the first time in history that a second was deleted from international clocks.
There is an international debate about whether this adjustment is necessary and about the future of timekeeping. Scientists believe that, over the course of 2021, atomic clocks will accumulate a lag of 19 milliseconds.
If the adjustments are not made, it would take hundreds of years for an ordinary person to notice the difference. But satellite navigation and communication systems, which rely on the positions of the Earth, the Sun, and the stars to work, could be affected much sooner.
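A back-of-the-envelope way to see how these daily offsets turn into leap-second decisions is sketched below in Python. The per-day offset is a made-up illustrative value, not IERS data, and the 0.9-second trigger is the conventional threshold used for scheduling leap seconds; the point is only to show how small daily differences accumulate into a drift between atomic time and Earth-rotation time.

```python
# Minimal sketch: how small daily length-of-day offsets accumulate into a
# drift between atomic time (UTC) and Earth-rotation time (UT1).
# The daily offset below is an illustrative value, not real IERS data.

LEAP_THRESHOLD_S = 0.9  # conventional trigger for scheduling a leap second

def accumulate_drift(daily_offsets_ms):
    """Given per-day (day length - 86,400 s) values in milliseconds,
    return the running UT1 - UTC drift, in seconds, day by day."""
    drift_s = 0.0
    history = []
    for offset_ms in daily_offsets_ms:
        # A shorter day (negative offset) means the Earth runs ahead of the
        # clocks, so the UT1 - UTC drift grows.
        drift_s -= offset_ms / 1000.0
        history.append(drift_s)
    return history

if __name__ == "__main__":
    # Pretend every day of the year is 0.05 ms shorter than 86,400 s.
    drift = accumulate_drift([-0.05] * 365)
    print(f"Accumulated drift after one year: {drift[-1] * 1000:.1f} ms")
    if abs(drift[-1]) >= LEAP_THRESHOLD_S:
        print("A (negative) leap second would need to be scheduled.")
    else:
        print("No leap second needed yet.")
```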
Our “timekeepers” are the officials of the International Earth Rotation and Reference Systems Service (IERS) in Paris, France. They monitor the Earth’s rotation and the 260 atomic clocks spread around the world, and they announce when a second needs to be added, or possibly deleted.
Tampering with time can have consequences. When a leap second was added in 2012, major technology platforms of the time, such as Linux, Mozilla, Java, Reddit, Foursquare, Yelp, and LinkedIn, reported failures.
The Earth’s rotation speed varies constantly, depending on several factors, such as the complex motion of its molten core, the oceans, and the atmosphere, as well as gravitational interactions with other celestial bodies, such as the Moon. Global warming, and the resulting melting of polar ice caps and mountain ice, has also been speeding up the motion.
That is why days never have exactly the same length. Last Sunday (the 3rd) lasted “only” 23 hours, 59 minutes, and 59.9998927 seconds. Monday (the 4th), by contrast, was lazier, at just over 24 hours.
There’s something mysterious coming up from the frozen ground in Antarctica, and it could break physics as we know it.
Physicists don’t know what it is exactly. But they do know it’s some sort of cosmic ray — a high-energy particle that’s blasted its way through space, into the Earth, and back out again. But the particles physicists know about — the collection of particles that make up what scientists call the Standard Model (SM) of particle physics — shouldn’t be able to do that. Sure, there are low-energy neutrinos that can pierce through miles upon miles of rock unaffected. But high-energy neutrinos, as well as other high-energy particles, have “large cross-sections.” That means that they’ll almost always crash into something soon after zipping into the Earth and never make it out the other side.
And yet, since March 2016, researchers have been puzzling over two events in Antarctica where cosmic rays did burst out from the Earth, and were detected by NASA’s Antarctic Impulsive Transient Antenna (ANITA) — a balloon-borne antenna drifting over the southern continent.
ANITA is designed to hunt cosmic rays from outer space, so the high-energy neutrino community was buzzing with excitement when the instrument detected particles that seemed to be blasting up from Earth instead of zooming down from space. Because cosmic rays shouldn’t do that, scientists began to wonder whether these mysterious beams are made of particles never seen before.
Since then, physicists have proposed all sorts of explanations for these “upward going” cosmic rays, from sterile neutrinos (neutrinos that rarely ever bang into matter) to “atypical dark matter distributions inside the Earth,” referencing the mysterious form of matter that doesn’t interact with light.
All the explanations were intriguing, and suggested that ANITA might have detected a particle not accounted for in the Standard Model. But none of the explanations demonstrated conclusively that something more ordinary couldn’t have caused the signal at ANITA.
A new paper uploaded today (Sept. 26) to the preprint server arXiv changes that. In it, a team of astrophysicists from Penn State University showed that there have been more upward-going high-energy particles than those detected during the two ANITA events. Three times, they wrote, IceCube (another, larger neutrino observatory in Antarctica) detected similar particles, though no one had yet connected those events to the mystery at ANITA. And, combining the IceCube and ANITA data sets, the Penn State researchers calculated that, whatever particle is bursting up from the Earth, it has much less than a 1-in-3.5 million chance of being part of the Standard Model. (In technical, statistical terms, their results had confidences of 5.8 and 7.0 sigma, depending on which of their calculations you’re looking at.)
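For readers wondering how a sigma figure maps onto odds like “1 in 3.5 million,” the conversion is the standard one-sided tail of a normal distribution; a minimal Python sketch of that conversion (not a recomputation of the paper’s analysis) follows.

```python
# Convert a significance quoted in sigma into a one-sided tail probability,
# the usual particle-physics convention (5 sigma is roughly 1 in 3.5 million).
from scipy.stats import norm

def sigma_to_probability(sigma):
    """One-sided p-value for an excess of the given number of standard deviations."""
    return norm.sf(sigma)

for sigma in (3.0, 5.0, 5.8, 7.0):
    p = sigma_to_probability(sigma)
    print(f"{sigma:.1f} sigma  ->  p = {p:.2e}  (about 1 in {1 / p:,.0f})")
```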
Breaking physics
Derek Fox, the lead author on the new paper, said that he first came across the ANITA events in May 2018, in one of the earlier papers attempting to explain them.
“I was like, ‘Well this model doesn’t make much sense,'” Fox told Live Science, “but the [ANITA] result is very intriguing, so I started checking up on it. I started talking to my office neighbor Steinn Sigurdsson [the second author on the paper, who is also at Penn State] about whether maybe we could gin up some more plausible explanations than the papers that have been published to date.”
Fox, Sigurdsson and their colleagues started looking for similar events in data collected by other detectors. When they came across possible upward-going events in IceCube data, he said, he realized that he might have come across something really game-changing for physics.
The surface facility for the IceCube experiment, which is located under nearly 1 mile (1.6 kilometers) of ice in Antarctica. (Image credit: Courtesy of IceCube Neutrino Observatory)
“That’s what really got me going, and looking at the ANITA events with the utmost seriousness,” he said, later adding, “This is what physicists live for. Breaking models, setting new constraints [on reality], learning things about the universe we didn’t know.”
As Live Science has previously reported, experimental, high-energy particle physics has been at a standstill for the last several years. When the 17-mile (27 kilometers), $10 billion Large Hadron Collider (LHC) was completed on the border between France and Switzerland in 2009, scientists thought it would unlock the mysteries of supersymmetry — the mysterious, theoretical class of particles that scientists suspect might exist outside of current physics, but had never detected. According to supersymmetry, every existing particle in the Standard Model has a supersymmetric partner. Researchers suspect these partners exist because the masses of known particles are out of whack — not symmetric with one another.
“Even though the SM works very well in explaining a plethora of phenomena, it still has many handicaps,” said Seyda Ipek, a particle physicist at UC Irvine, who was not involved in the current research. “For example, it cannot account for the existence of dark matter, [explain mathematical weirdness in] neutrino masses, or the matter-antimatter asymmetry of the universe.”
Instead, the LHC confirmed the Higgs boson, the final undetected part of the Standard Model, in 2012. And then it stopped detecting anything else that important or interesting. Researchers began to question whether any existing physics experiment could ever detect a supersymmetric particle.
“We need new ideas,” Jessie Shelton, a theoretical physicist at the University of Illinois at Urbana-Champaign, told Live Science in May, around the same time that Fox first became interested in the ANITA data.
Now, several scientists not involved in the Penn State paper told Live Science that it offers solid (if incomplete) evidence that something new has really arrived.
“It was clear from the start that if the ANITA anomalous events are due to particles that had propagated through thousands of kilometers of Earth, then those particles were very likely not SM particles,” said Mauricio Bustamante, an astrophysicist at the Niels Bohr Institute at the University of Copenhagen, who was not an author on the new paper.
“The paper that appeared today is the first systematic calculation of how unlikely it is that these events were due to SM neutrinos,” he added. “Their result strongly disfavors a SM explanation.”
“I think it’s very compelling,” said Bill Louis, a neutrino physicist at Los Alamos National Laboratory who was not involved in the paper and has been following research into the ANITA events for several months.
If Standard Model particles created these anomalies, they would have to have been neutrinos. Researchers know that both because of the particles they decayed into, and because no other Standard Model particle would have even a remote, one-in-a-million chance of making it through the Earth.
But neutrinos of this energy, Louis said, just shouldn’t make it through the Earth often enough for ANITA or IceCube to detect. It’s not how they work. Neutrino detectors like ANITA and IceCube don’t detect neutrinos directly. Instead, they detect the particles that neutrinos decay into after smashing into Earth’s atmosphere or Antarctic ice. And there are other events that can generate those particles, triggering the detectors. The new paper strongly suggests that those events involved supersymmetric particles (the authors’ favored candidate is a kind called a stau slepton), though more data is necessary.
Louis said that at this stage he thinks that level of specificity is “a bit of a stretch.”
The authors make a strong statistical case that no conventional particle would be likely to travel through the Earth in this way, he said, but there isn’t yet enough data to be certain. And there’s certainly not enough that they could definitively figure out what particle made the trip.
Fox didn’t dispute that.
“As an observer, there’s no way that I can know that this is a stau,” he said. “From my perspective, I go trawling around trying to discover new things about the universe, I come upon some really bizarre phenomenon, and then with my colleagues, we do a little literature search to see if anybody has ever thought that this might happen. And then if we find papers in the literature, including one from 14 years ago that predict something just like this phenomenon, then that gets really high weight from me.”
He and his colleagues did find a long chain of papers from theorists predicting that stau sleptons might turn up like this in neutrino observatories. And because those papers were written before the ANITA anomaly, Fox said, that suggests strongly to him that those theorists were onto something.
But there remains a lot of uncertainty on that front, he said. Right now, researchers just know that whatever this particle is, it interacts very weakly with other particles, or else it would have never survived the trip through the planet’s dense mass.
What’s next
Every physicist who spoke with Live Science agreed that researchers need to collect more data to verify that ANITA and IceCube have cracked supersymmetry. It’s possible, Fox said, that when IceCube researchers dig into their data archives they’ll find more, similar events that had previously gone unnoticed. Louis and Bustamante both said that NASA should run more ANITA flights to see if similar upward-going particles turn up.
“For us to be certain that these events are not due to unknown unknowns — say, unmapped properties of the Antarctic ice — we would like other instruments to also detect these sort of events,” Bustamante said.
A team prepares ANITA for flight over the Antarctic ice. (Image credit: NASA)
Over the long-term, if these results are confirmed and the details of what particle is causing them are nailed down, several researchers said that the ANITA anomaly might unlock even more new physics at the LHC.
“Any observation of a non-SM particle would be a game changer, because it would tell us which path we should take after the SM,” Ipek said. “The type of [supersymmetric] particle they claim to have produced the signals of, sleptons, are very hard to produce and detect at LHC.”
“So, it is very interesting if they can be observed by other types of experiments. Of course, if this is true, then we will expect a ladder of other [supersymmetric] particles to be observed at the LHC, which would be a complementary test of the claims.”
In other words, the ANITA anomalies could offer scientists the key information necessary to properly tune the LHC to unlock more of supersymmetry. Those experiments might even turn up an explanation for dark matter.
Right now, Fox said, he’s just hungry for more data.
Three different studies, done by different teams of scientists, proved something really extraordinary. But when new research connected these three discoveries, something shocking was realized, something that had been hiding in plain sight.
Human emotion literally shapes the world around us. Not just our perception of the world, but reality itself.
In the first experiment, human DNA, isolated in a sealed container, was placed near a test subject. Scientists gave the donor emotional stimuli and, fascinatingly enough, the emotions affected the DNA in the other room.
In the presence of negative emotions the DNA tightened. In the presence of positive emotions the coils of the DNA relaxed.
The scientists concluded that “Human emotion produces effects which defy conventional laws of physics.”
In the second, similar but unrelated experiment, a different group of scientists extracted leukocytes (white blood cells) from donors and placed them into chambers so they could measure electrical changes.
In this experiment, the donor was placed in one room and subjected to “emotional stimulation” consisting of video clips, which generated different emotions in the donor.
The DNA was placed in a different room in the same building. Both the donor and his DNA were monitored and as the donor exhibited emotional peaks or valleys (measured by electrical responses), the DNA exhibited the IDENTICAL RESPONSES AT THE EXACT SAME TIME.
There was no lag time, no transmission time. The DNA peaks and valleys EXACTLY MATCHED the peaks and valleys of the donor in time.
The scientists wanted to see how far away they could separate the donor from his DNA and still get this effect. They stopped testing after they separated the DNA and the donor by 50 miles and STILL had the SAME result. No lag time; no transmission time.
The DNA and the donor had the same identical responses in time. The conclusion was that the donor and the DNA can communicate beyond space and time.
The third experiment proved something pretty shocking!
Scientists observed the effect of DNA on our physical world.
Light photons, which make up the world around us, were observed inside a vacuum. Their natural locations were completely random.
Human DNA was then inserted into the vacuum. Shockingly, the photons were no longer acting randomly. They precisely followed the geometry of the DNA.
Scientists who were studying this described the photons as behaving “surprisingly and counter-intuitively.” They went on to say that “we are forced to accept the possibility of some new field of energy!”
They concluded that human DNA literally shapes the behavior of the light photons that make up the world around us!
So when new research was done, and all three of these scientific claims were connected, scientists were shocked.
They came to a stunning realization: if our emotions affect our DNA, and our DNA shapes the world around us, then our emotions physically change the world around us.
And not just that, we are connected to our DNA beyond space and time.
We create our reality by choosing it with our feelings.
Science has already proven some pretty MINDBLOWING facts about The Universe we live in. All we have to do is connect the dots.
“I cannot define the real problem, therefore I suspect there’s no real problem, but I’m not sure there’s no real problem.”
The American physicist Richard Feynman said this about the notorious puzzles and paradoxes of quantum mechanics, the theory physicists use to describe the tiniest objects in the Universe. But he might as well have been talking about the equally knotty problem of consciousness.
Some scientists think we already understand what consciousness is, or that it is a mere illusion. But many others feel we have not grasped where consciousness comes from at all.
The perennial puzzle of consciousness has even led some researchers to invoke quantum physics to explain it. That notion has always been met with skepticism, which is not surprising: it does not sound wise to explain one mystery with another. But such ideas are not obviously absurd, and neither are they arbitrary.
For one thing, the mind seemed, to the great discomfort of physicists, to force its way into early quantum theory. What’s more, quantum computers are predicted to be capable of accomplishing things ordinary computers cannot, which reminds us of how our brains can achieve things that are still beyond artificial intelligence. “Quantum consciousness” is widely derided as mystical woo, but it just will not go away.
What is going on in our brains? (Credit: Mehau Kulyk/Science Photo Library)
Quantum mechanics is the best theory we have for describing the world at the nuts-and-bolts level of atoms and subatomic particles. Perhaps the most renowned of its mysteries is the fact that the outcome of a quantum experiment can change depending on whether or not we choose to measure some property of the particles involved.
When this “observer effect” was first noticed by the early pioneers of quantum theory, they were deeply troubled. It seemed to undermine the basic assumption behind all science: that there is an objective world out there, irrespective of us. If the way the world behaves depends on how – or if – we look at it, what can “reality” really mean?
The most famous intrusion of the mind into quantum mechanics comes in the “double-slit experiment”
Some of those researchers felt forced to conclude that objectivity was an illusion, and that consciousness has to be allowed an active role in quantum theory. To others, that did not make sense. Surely, Albert Einstein once complained, the Moon does not exist only when we look at it!
Today some physicists suspect that, whether or not consciousness influences quantum mechanics, it might in fact arise because of it. They think that quantum theory might be needed to fully understand how the brain works.
Might it be that, just as quantum objects can apparently be in two places at once, so a quantum brain can hold onto two mutually-exclusive ideas at the same time?
These ideas are speculative, and it may turn out that quantum physics has no fundamental role either for or in the workings of the mind. But if nothing else, these possibilities show just how strangely quantum theory forces us to think.
The famous double-slit experiment (Credit: Victor de Schwanberg/Science Photo Library)
The most famous intrusion of the mind into quantum mechanics comes in the “double-slit experiment”. Imagine shining a beam of light at a screen that contains two closely-spaced parallel slits. Some of the light passes through the slits, whereupon it strikes another screen.
Light can be thought of as a kind of wave, and when waves emerge from two slits like this they can interfere with each other. If their peaks coincide, they reinforce each other, whereas if a peak and a trough coincide, they cancel out. This wave interference is called diffraction, and it produces a series of alternating bright and dark stripes on the back screen, where the light waves are either reinforced or cancelled out.
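To make the geometry concrete, here is a small Python sketch of the idealized two-slit intensity pattern; the wavelength, slit spacing, and screen distance are arbitrary illustration values, not taken from any particular experiment.

```python
# Idealized two-slit interference: two equal waves meeting on a distant screen.
# All numbers are arbitrary illustration values.
import numpy as np

wavelength = 600e-9       # metres (visible light)
slit_separation = 50e-6   # metres between the two slits
screen_distance = 1.0     # metres from slits to screen

x = np.linspace(-0.02, 0.02, 2001)                   # positions on the screen (m)
path_difference = slit_separation * x / screen_distance
phase = 2 * np.pi * path_difference / wavelength

# Peaks reinforce (bright) when the phase difference is a multiple of 2*pi,
# and cancel (dark) when it is an odd multiple of pi.
intensity = np.cos(phase / 2) ** 2

n_bright = np.sum((intensity[1:-1] > intensity[:-2]) & (intensity[1:-1] > intensity[2:]))
print(f"Bright fringes across this stretch of screen: {n_bright}")
```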
The implication seems to be that each particle passes simultaneously through both slits
This experiment was understood to be a characteristic of wave behaviour over 200 years ago, well before quantum theory existed.
The double slit experiment can also be performed with quantum particles like electrons: tiny charged particles that are components of atoms. In a counter-intuitive twist, these particles can behave like waves. That means they can undergo diffraction when a stream of them passes through the two slits, producing an interference pattern.
Now suppose that the quantum particles are sent through the slits one by one, and their arrival at the screen is likewise seen one by one. Now there is apparently nothing for each particle to interfere with along its route – yet nevertheless the pattern of particle impacts that builds up over time reveals interference bands.
The implication seems to be that each particle passes simultaneously through both slits and interferes with itself. This combination of “both paths at once” is known as a superposition state.
But here is the really odd thing.
The double-slit experiment (Credit: GIPhotoStock/Science Photo Library)
If we place a detector inside or just behind one slit, we can find out whether any given particle goes through it or not. In that case, however, the interference vanishes. Simply by observing a particle’s path – even if that observation should not disturb the particle’s motion – we change the outcome.
The physicist Pascual Jordan, who worked with quantum guru Niels Bohr in Copenhagen in the 1920s, put it like this: “observations not only disturb what has to be measured, they produce it… We compel [a quantum particle] to assume a definite position.” In other words, Jordan said, “we ourselves produce the results of measurements.”
If that is so, objective reality seems to go out of the window.
And it gets even stranger.
Particles can be in two states (Credit: Victor de Schwanberg/Science Photo Library)
If nature seems to be changing its behaviour depending on whether we “look” or not, we could try to trick it into showing its hand. To do so, we could measure which path a particle took through the double slits, but only after it has passed through them. By then, it ought to have “decided” whether to take one path or both.
The sheer act of noticing, rather than any physical disturbance caused by measuring, can cause the collapse
An experiment for doing this was proposed in the 1970s by the American physicist John Wheeler, and this “delayed choice” experiment was performed in the following decade. It uses clever techniques to make measurements on the paths of quantum particles (generally, particles of light, called photons) after they should have chosen whether to take one path or a superposition of two.
It turns out that, just as Bohr confidently predicted, it makes no difference whether we delay the measurement or not. As long as we measure the photon’s path before its arrival at a detector is finally registered, we lose all interference.
It is as if nature “knows” not just if we are looking, but if we are planning to look.
Eugene Wigner (Credit: Emilio Segre Visual Archives/American Institute of Physics/Science Photo Library)
Whenever, in these experiments, we discover the path of a quantum particle, its cloud of possible routes “collapses” into a single well-defined state. What’s more, the delayed-choice experiment implies that the sheer act of noticing, rather than any physical disturbance caused by measuring, can cause the collapse. But does this mean that true collapse has only happened when the result of a measurement impinges on our consciousness?
It is hard to avoid the implication that consciousness and quantum mechanics are somehow linked
That possibility was admitted in the 1930s by the Hungarian physicist Eugene Wigner. “It follows that the quantum description of objects is influenced by impressions entering my consciousness,” he wrote. “Solipsism may be logically consistent with present quantum mechanics.”
Wheeler even entertained the thought that the presence of living beings, which are capable of “noticing”, has transformed what was previously a multitude of possible quantum pasts into one concrete history. In this sense, Wheeler said, we become participants in the evolution of the Universe since its very beginning. In his words, we live in a “participatory universe.”
To this day, physicists do not agree on the best way to interpret these quantum experiments, and to some extent what you make of them is (at the moment) up to you. But one way or another, it is hard to avoid the implication that consciousness and quantum mechanics are somehow linked.
Beginning in the 1980s, the British physicist Roger Penrose suggested that the link might work in the other direction. Whether or not consciousness can affect quantum mechanics, he said, perhaps quantum mechanics is involved in consciousness.
Physicist and mathematician Roger Penrose (Credit: Max Alexander/Science Photo Library)
What if, Penrose asked, there are molecular structures in our brains that are able to alter their state in response to a single quantum event? Could not these structures then adopt a superposition state, just like the particles in the double slit experiment? And might those quantum superpositions then show up in the ways neurons are triggered to communicate via electrical signals?
Maybe, says Penrose, our ability to sustain seemingly incompatible mental states is no quirk of perception, but a real quantum effect.
Perhaps quantum mechanics is involved in consciousness
After all, the human brain seems able to handle cognitive processes that still far exceed the capabilities of digital computers. Perhaps we can even carry out computational tasks that are impossible on ordinary computers, which use classical digital logic.
Penrose first proposed that quantum effects feature in human cognition in his 1989 book The Emperor’s New Mind. The idea is called Orch-OR, which is short for “orchestrated objective reduction”. The phrase “objective reduction” means that, as Penrose believes, the collapse of quantum interference and superposition is a real, physical process, like the bursting of a bubble.
Orch-OR draws on Penrose’s suggestion that gravity is responsible for the fact that everyday objects, such as chairs and planets, do not display quantum effects. Penrose believes that quantum superpositions become impossible for objects much larger than atoms, because their gravitational effects would then force two incompatible versions of space-time to coexist.
Penrose developed this idea further with American physician Stuart Hameroff. In his 1994 book Shadows of the Mind, he suggested that the structures involved in this quantum cognition might be protein strands called microtubules. These are found in most of our cells, including the neurons in our brains. Penrose and Hameroff argue that vibrations of microtubules can adopt a quantum superposition.
But there is no evidence that such a thing is remotely feasible.
Microtubules inside a cell (Credit: Dennis Kunkel Microscopy/Science Photo Library)
It has been suggested that the idea of quantum superpositions in microtubules is supported by experiments described in 2013, but in fact those studies made no mention of quantum effects.
Besides, most researchers think that the Orch-OR idea was ruled out by a study published in 2000. Physicist Max Tegmark calculated that quantum superpositions of the molecules involved in neural signaling could not survive for even a fraction of the time needed for such a signal to get anywhere.
Other researchers have found evidence for quantum effects in living beings
Quantum effects such as superposition are easily destroyed, because of a process called decoherence. This is caused by the interactions of a quantum object with its surrounding environment, through which the “quantumness” leaks away.
Decoherence is expected to be extremely rapid in warm and wet environments like living cells.
Nerve signals are electrical pulses, caused by the passage of electrically-charged atoms across the walls of nerve cells. If one of these atoms was in a superposition and then collided with a neuron, Tegmark showed that the superposition should decay in less than one billion billionth of a second. It takes at least ten thousand trillion times as long for a neuron to discharge a signal.
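Taking the article’s own figures at face value, the gap between those two timescales is easy to lay out as arithmetic; the sketch below simply restates the numbers quoted in the preceding paragraph.

```python
# Compare the two timescales quoted above.
decoherence_time_s = 1e-18   # "less than one billion billionth of a second"
ratio = 1e16                 # "ten thousand trillion times as long"
neural_signal_time_s = decoherence_time_s * ratio

print(f"Superposition decay time: {decoherence_time_s:.0e} s")
print(f"Neural signalling time:   {neural_signal_time_s:.0e} s "
      f"({ratio:.0e} times longer)")
```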
As a result, ideas about quantum effects in the brain are viewed with great skepticism.
And yet the idea that the brain might employ quantum tricks shows no sign of going away. For there is now another, quite different argument for it.
Could phosphorus sustain a quantum state? (Credit: Phil Degginger/Science Photo Library)
In a study published in 2015, physicist Matthew Fisher of the University of California at Santa Barbara argued that the brain might contain molecules capable of sustaining more robust quantum superpositions. Specifically, he thinks that the nuclei of phosphorus atoms may have this ability.
Phosphorus atoms are everywhere in living cells. They often take the form of phosphate ions, in which one phosphorus atom joins up with four oxygen atoms.
Such ions are the basic unit of energy within cells. Much of the cell’s energy is stored in molecules called ATP, which contain a string of three phosphate groups joined to an organic molecule. When one of the phosphates is cut free, energy is released for the cell to use.
Cells have molecular machinery for assembling phosphate ions into groups and cleaving them off again. Fisher suggested a scheme in which two phosphate ions might be placed in a special kind of superposition called an “entangled state”.
Phosphorus spins could resist decoherence for a day or so, even in living cells
The phosphorus nuclei have a quantum property called spin, which makes them rather like little magnets with poles pointing in particular directions. In an entangled state, the spin of one phosphorus nucleus depends on that of the other.
Put another way, entangled states are really superposition states involving more than one quantum particle.
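As a flavour of what “entangled” means operationally, here is a toy Python simulation of a generic two-spin singlet, in which each individual measurement outcome is random but the two outcomes are always opposite. It is an illustration of entanglement in general, not a model of phosphorus nuclei or Posner molecules.

```python
# Toy illustration of a two-spin singlet: each outcome is 50/50 on its own,
# but the two spins are always found pointing opposite ways.
# Generic example only; not a model of phosphorus nuclei or Posner molecules.
import random

def measure_singlet_pair():
    """Measure both spins of a singlet along the same axis."""
    spin_a = random.choice(["up", "down"])        # random on its own...
    spin_b = "down" if spin_a == "up" else "up"   # ...but perfectly anti-correlated
    return spin_a, spin_b

for _ in range(5):
    a, b = measure_singlet_pair()
    print(f"spin A: {a:<5}  spin B: {b}")
```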
Fisher says that the quantum-mechanical behaviour of these nuclear spins could plausibly resist decoherence on human timescales. He agrees with Tegmark that quantum vibrations, like those postulated by Penrose and Hameroff, will be strongly affected by their surroundings “and will decohere almost immediately”. But nuclear spins do not interact very strongly with their surroundings.
All the same, quantum behaviour in the phosphorus nuclear spins would have to be “protected” from decoherence.
Quantum particles can have different spins (Credit: Richard Kail/Science Photo Library)
This might happen, Fisher says, if the phosphorus atoms are incorporated into larger objects called “Posner molecules”. These are clusters of six phosphate ions, combined with nine calcium ions. There is some evidence that they can exist in living cells, though this is currently far from conclusive.
I decided… to explore how on earth the lithium ion could have such a dramatic effect in treating mental conditions
In Posner molecules, Fisher argues, phosphorus spins could resist decoherence for a day or so, even in living cells. That means they could influence how the brain works.
The idea is that Posner molecules can be swallowed up by neurons. Once inside, the Posner molecules could trigger the firing of a signal to another neuron, by falling apart and releasing their calcium ions.
Because of entanglement in Posner molecules, two such signals might thus in turn become entangled: a kind of quantum superposition of a “thought”, you might say. “If quantum processing with nuclear spins is in fact present in the brain, it would be an extremely common occurrence, happening pretty much all the time,” Fisher says.
He first got this idea when he started thinking about mental illness.
A capsule of lithium carbonate (Credit: Custom Medical Stock Photo/Science Photo Library)
“My entry into the biochemistry of the brain started when I decided three or four years ago to explore how on earth the lithium ion could have such a dramatic effect in treating mental conditions,” Fisher says.
At this point, Fisher’s proposal is no more than an intriguing idea
Lithium drugs are widely used for treating bipolar disorder. They work, but nobody really knows how.
“I wasn’t looking for a quantum explanation,” Fisher says. But then he came across a paper reporting that lithium drugs had different effects on the behaviour of rats, depending on what form – or “isotope” – of lithium was used.
On the face of it, that was extremely puzzling. In chemical terms, different isotopes behave almost identically, so if the lithium worked like a conventional drug the isotopes should all have had the same effect.
Nerve cells are linked at synapses (Credit: Sebastian Kaulitzki/Science Photo Library)
But Fisher realised that the nuclei of the atoms of different lithium isotopes can have different spins. This quantum property might affect the way lithium drugs act. For example, if lithium substitutes for calcium in Posner molecules, the lithium spins might “feel” and influence those of phosphorus atoms, and so interfere with their entanglement.
We do not even know what consciousness is
If this is true, it would help to explain why lithium can treat bipolar disorder.
At this point, Fisher’s proposal is no more than an intriguing idea. But there are several ways in which its plausibility can be tested, starting with the idea that phosphorus spins in Posner molecules can keep their quantum coherence for long periods. That is what Fisher aims to do next.
All the same, he is wary of being associated with the earlier ideas about “quantum consciousness”, which he sees as highly speculative at best.
Consciousness is a profound mystery (Credit: Sciepro/Science Photo Library)
Physicists are not terribly comfortable with finding themselves inside their theories. Most hope that consciousness and the brain can be kept out of quantum theory, and perhaps vice versa. After all, we do not even know what consciousness is, let alone have a theory to describe it.
We all know what red is like, but we have no way to communicate the sensation
As a result, physicists are often embarrassed to even mention the words “quantum” and “consciousness” in the same sentence.
But setting that aside, the idea has a long history. Ever since the “observer effect” and the mind first insinuated themselves into quantum theory in the early days, it has been devilishly hard to kick them out. A few researchers think we might never manage to do so.
In 2016, Adrian Kent of the University of Cambridge in the UK, one of the most respected “quantum philosophers”, speculated that consciousness might alter the behaviour of quantum systems in subtle but detectable ways.
We do not understand how thoughts work (Credit: Andrzej Wojcicki/Science Photo Library)
Kent is very cautious about this idea. “There is no compelling reason of principle to believe that quantum theory is the right theory in which to try to formulate a theory of consciousness, or that the problems of quantum theory must have anything to do with the problem of consciousness,” he admits.
Every line of thought on the relationship of consciousness to physics runs into deep trouble
But he says that it is hard to see how a description of consciousness based purely on pre-quantum physics can account for all the features it seems to have.
One particularly puzzling question is how our conscious minds can experience unique sensations, such as the colour red or the smell of frying bacon. With the exception of people with visual impairments, we all know what red is like, but we have no way to communicate the sensation and there is nothing in physics that tells us what it should be like.
Sensations like this are called “qualia”. We perceive them as unified properties of the outside world, but in fact they are products of our consciousness – and that is hard to explain. Indeed, in 1995 philosopher David Chalmers dubbed it “the hard problem” of consciousness.
How does our consciousness work? (Credit: Victor Habbick Visions/Science Photo Library)
“Every line of thought on the relationship of consciousness to physics runs into deep trouble,” says Kent.
This has prompted him to suggest that “we could make some progress on understanding the problem of the evolution of consciousness if we supposed that consciousness alters (albeit perhaps very slightly and subtly) quantum probabilities.”
“Quantum consciousness” is widely derided as mystical woo, but it just will not go away
In other words, the mind could genuinely affect the outcomes of measurements.
It does not, in this view, exactly determine “what is real”. But it might affect the chance that each of the possible actualities permitted by quantum mechanics is the one we do in fact observe, in a way that quantum theory itself cannot predict. Kent says that we might look for such effects experimentally.
He even bravely estimates the chances of finding them. “I would give credence of perhaps 15% that something specifically to do with consciousness causes deviations from quantum theory, with perhaps 3% credence that this will be experimentally detectable within the next 50 years,” he says.
If that happens, it would transform our ideas about both physics and the mind. That seems a chance worth exploring.
The experiment, featuring the small red glow of a BEC trapped in infrared laser beams. Credit: Stuart Hay, ANU
Physicists are putting themselves out of a job, using artificial intelligence to run a complex experiment.
The experiment, developed by physicists from The Australian National University (ANU) and UNSW ADFA, created an extremely cold gas trapped in a laser beam, known as a Bose-Einstein condensate, replicating the experiment that won the 2001 Nobel Prize.
“I didn’t expect the machine could learn to do the experiment itself, from scratch, in under an hour,” said co-lead researcher Paul Wigley from the ANU Research School of Physics and Engineering.
“A simple computer program would have taken longer than the age of the Universe to run through all the combinations and work this out.”
Bose-Einstein condensates are some of the coldest places in the Universe, far colder than outer space, typically less than a billionth of a degree above absolute zero.
They could be used for mineral exploration or navigation systems as they are extremely sensitive to external disturbances, which allows them to make very precise measurements such as tiny changes in the Earth’s magnetic field or gravity.
The artificial intelligence system’s ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA.
“You could make a working device to measure gravity that you could take in the back of a car, and the artificial intelligence would recalibrate and fix itself no matter what,” he said.
“It’s cheaper than taking a physicist everywhere with you.”
The team cooled the gas to around 1 microkelvin, and then handed control of the three laser beams over to the artificial intelligence to cool the trapped gas down to nanokelvin.
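The paper describes a machine-learning online optimisation of the experiment; the sketch below is not the authors’ method, only a hypothetical illustration of the general “propose, run, score, keep the best” loop, with run_bec_experiment as a made-up stand-in for the apparatus.

```python
# Hypothetical illustration of an online-optimisation loop for an experiment.
# run_bec_experiment() is a made-up stand-in for the real apparatus; a real
# score would come from imaging the atom cloud after each run.
import random

def run_bec_experiment(ramp):
    # Pretend the ideal laser-power ramp is a smooth decay; the score is the
    # negative squared error against it (higher is better).
    target = [1.0, 0.6, 0.35, 0.2, 0.1]
    return -sum((p - t) ** 2 for p, t in zip(ramp, target))

def propose_ramp(ramp, scale=0.1):
    """Randomly perturb the best ramp found so far (simple hill climbing)."""
    return [max(0.0, p + random.gauss(0.0, scale)) for p in ramp]

best_ramp = [1.0, 0.9, 0.8, 0.7, 0.6]       # initial guess for the power ramp
best_score = run_bec_experiment(best_ramp)

for trial in range(200):                    # each trial = one experimental run
    candidate = propose_ramp(best_ramp)
    score = run_bec_experiment(candidate)
    if score > best_score:
        best_ramp, best_score = candidate, score

print("Best ramp found:", [round(p, 2) for p in best_ramp])
```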
Researchers were surprised by the methods the system came up with to ramp down the power of the lasers.
“It did things a person wouldn’t guess, such as changing one laser’s power up and down, and compensating with another,” said Mr Wigley.
“It may be able to come up with complicated ways humans haven’t thought of to get experiments colder and make measurements more precise.”
The new technique will lead to bigger and better experiments, said Dr Hush.
“Next we plan to employ the artificial intelligence to build an even larger Bose-Einstein condensate faster than we’ve seen ever before,” he said.
The research is published in the Nature group journal Scientific Reports.
Journal Reference:
P. B. Wigley, P. J. Everitt, A. van den Hengel, J. W. Bastian, M. A. Sooriyabandara, G. D. McDonald, K. S. Hardman, C. D. Quinlivan, P. Manju, C. C. N. Kuhn, I. R. Petersen, A. N. Luiten, J. J. Hope, N. P. Robins, M. R. Hush. Fast machine-learning online optimization of ultra-cold-atom experiments. Scientific Reports, 2016; 6: 25890 DOI: 10.1038/srep25890
The Large Hadron Collider uses superconducting magnets to smash sub-atomic particles together at enormous energies. CERN
A small mammal has sabotaged the world’s most powerful scientific instrument.
The Large Hadron Collider, a 17-mile superconducting machine designed to smash protons together at close to the speed of light, went offline overnight. Engineers investigating the mishap found the charred remains of a furry creature near a gnawed-through power cable.
A small mammal, possibly a weasel, gnawed through a power cable at the Large Hadron Collider. Ashley Buttle/Flickr
“We had electrical problems, and we are pretty sure this was caused by a small animal,” says Arnaud Marsollier, head of press for CERN, the organization that runs the $7 billion particle collider in Switzerland. Although they had not conducted a thorough analysis of the remains, Marsollier says they believe the creature was “a weasel, probably.” (Update: An official briefing document from CERN indicates the creature may have been a marten.)
The shutdown comes as the LHC was preparing to collect new data on the Higgs Boson, a fundamental particle it discovered in 2012. The Higgs is believed to endow other particles with mass, and it is considered to be a cornerstone of the modern theory of particle physics.
Researchers have seen some hints in recent data that other, yet-undiscovered particles might also be generated inside the LHC. If those other particles exist, they could revolutionize researchers’ understanding of everything from the laws of gravity to quantum mechanics.
Unfortunately, Marsollier says, scientists will have to wait while workers bring the machine back online. Repairs will take a few days, but getting the machine fully ready to smash might take another week or two. “It may be mid-May,” he says.
These sorts of mishaps are not unheard of, says Marsollier. The LHC is located outside of Geneva. “We are in the countryside, and of course we have wild animals everywhere.” There have been previous incidents, including one in 2009, when a bird is believed to have dropped a baguette onto critical electrical systems.
Nor are the problems exclusive to the LHC: In 2006, raccoons conducted a “coordinated” attack on a particle accelerator in Illinois.
It is unclear whether the animals are trying to stop humanity from unlocking the secrets of the universe.
An illustration of what the craft might look like – Publicity image
NEW YORK, USA – British physicist Stephen Hawking and Russian billionaire Yuri Milner announced on Tuesday a US$ 100 million project to send a spacecraft to the star system closest to Earth, Alpha Centauri, which lies 4.37 light-years away. One of the main goals is to find habitable planets outside our solar system.
The idea of the “Breakthrough Starshot” project, whose board is made up of Milner and Hawking along with Facebook CEO Mark Zuckerberg, is to send a tiny craft, or “nanocraft,” on a 20-year voyage, reaching, according to them, one fifth of the speed of light. The program will test the know-how and technologies needed for the project.
From left to right: investor Yuri Milner, Stephen Hawking, and physicists Freeman Dyson and Avi Loeb – LUCAS JACKSON / REUTERS
The program calls for the creation of an automated craft weighing little more than a sheet of paper, propelled by a solar sail not much bigger than a child’s kite but only a few hundred atoms thick. While an ordinary sail is pushed by the wind, a solar sail for use in space is pushed by the radiation emitted by the Sun.
The initial idea is to use thousands of such craft, each given a “push” by a laser array on Earth, which would emit still more radiation to help with propulsion. The project faces many challenges, among them combining multiple emitters into one “giant laser cannon,” building the sails with nanotechnology, and packing all of the craft’s components into a tiny silicon package.
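A quick sanity check on the announced numbers, ignoring acceleration time and relativistic corrections, shows where the roughly 20-year figure comes from.

```python
# Travel time to Alpha Centauri at one fifth of the speed of light,
# ignoring the acceleration phase and relativistic corrections.
distance_ly = 4.37       # light-years to Alpha Centauri
speed_fraction_c = 0.2   # one fifth of the speed of light

print(f"One-way travel time: about {distance_ly / speed_fraction_c:.1f} years")
```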
“Human history is made of great leaps. Today we are preparing for the next great leap, to the stars,” Yuri Milner said in London. Hawking, for his part, said that “Earth is a wonderful place, but it might not last forever. Sooner or later, we must look to the stars. This project is an important first step on that journey.”
If the existence of a new particle is confirmed, experts believe it could open a door to an “unknown and unexplored” world (Reuters)
The Large Hadron Collider (LHC), a gigantic particle accelerator on the border between France and Switzerland, has stirred strong emotions among theoretical physicists, a community that is usually very cautious when it comes to new discoveries.
The reason: small “bumps” detected by the Large Hadron Collider. These bumps, which show up in the data produced by the proton collisions, could signal the existence of a new and unknown particle six times heavier than the Higgs boson (the so-called “God particle”).
And that, for theoretical physicist Gian Giudice, would mean “a door to an unknown and unexplored world.”
“It is not the confirmation of an already established theory,” the researcher, who works at the European Organization for Nuclear Research (CERN), told New Scientist magazine.
The scientists’ excitement began when, in December 2015, the two experiments working independently at the LHC recorded the same data after running the collider at close to full capacity (twice the energy needed to detect the Higgs boson).
The recorded data cannot be explained by what is known of the laws of physics to date.
After these new data were announced, around 280 papers were published trying to explain what the signal might be, and none of them ruled out the theory that it is a new particle.
Some scientists suggest the particle could be a heavy cousin of the Higgs boson, which was discovered in 2012 and explains why matter has mass.
Others put forward the hypothesis that the Higgs boson is made of smaller particles. And there is also the group who think these “bumps” could come from a graviton, the particle responsible for transmitting the force of gravity.
If it really is a graviton, the discovery will be a milestone, because until now it has not been possible to reconcile gravity with the Standard Model of particle physics.
Extraordinário?
Para os especialistas, o fato de que ninguém conseguiu refutar o que os físicos detectaram é um sinal de que podemos estar perto de descobrir algo extraordinário.
“Se isso se provar verdadeiro, será uma (nota) dez na escala Richter dos físicos de partículas”, disse ao jornal britânico The Guardian o especialista John Ellis, do King’s College de Londres. Ele também já foi chefe do departamento de teoria da Organização Europeia para a Investigação Nuclear. “Seria a ponta de um iceberg de novas formas de matéria.”
Mesmo com toda a animação de Ellis, os cientistas não querem se precipitar.
Image captionEsta nova partícula seria seis vezes maior que o Bóson de Higgs (AFP)
Quando o anúncio foi feito pela primeira vez, alguns pensaram que tudo não passava de uma terrível coincidência que aconteceu devido à forma como o LHC funciona.
Duas máquinas de raios de prótons são aceleradas chegando quase à velocidade da luz. Elas vão em direções diferentes e se chocam em quatro pontos, criando padrões de dados diferentes.
Essas diferenças, batidas ou perturbações na estatística são o que permitem demonstrar a presença de partículas.
Mas estamos falando de bilhões de perturbações registradas a cada experimento, o que torna provável um erro estatístico.
Porém, o fato de que os dois laboratórios tenham detectado a mesma batida é o que faz com que os cientistas prestem mais atenção ao tema.
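As a minimal illustration of why that matters (my own sketch, not from the article; the bin count is an arbitrary assumption), scanning many independent mass bins makes a lone 3-sigma bump fairly likely somewhere by chance, whereas the same bump appearing in two independent experiments is far harder to explain away:

```python
# Illustrative "look-elsewhere" estimate (my own sketch, not from the article).
from scipy.stats import norm

p_single = norm.sf(3.0)     # chance that one bin fluctuates >= 3 sigma upward
n_bins = 1000               # assumed number of independent mass bins scanned

# At least one >= 3 sigma bump somewhere, in a single experiment:
p_one_experiment = 1 - (1 - p_single) ** n_bins
# The same bin showing >= 3 sigma in two independent experiments:
p_two_experiments = 1 - (1 - p_single ** 2) ** n_bins

print(f"one experiment, bump somewhere: ~{p_one_experiment:.2f}")
print(f"two experiments, same bin:      ~{p_two_experiments:.1e}")
```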
Good news
The Large Hadron Collider returns to operation this week
In addition, scientists at the CMS and ATLAS experiments recently presented new evidence after refining and recalibrating their results.
And neither team was able to attribute the detected anomaly to a possible statistical error.
This is good news for the experts who believe the discovery is the beginning of something very big.
The bad news is that neither laboratory has been able to explain what this mysterious particle is. More experiments are needed before the event can be classified as a “discovery.”
The good news is that we will not have to wait long to see how the story ends.
This week, the Large Hadron Collider will come out of its hibernation period and go back to firing protons in opposite directions.
One hypothesis is that this new particle is related to gravity (Thinkstock)
Over the coming months the collider will deliver twice as much data as scientists have gathered so far.
And it is estimated that by August they may know what this new and promising particle is.
New study describes what could be the 18th known form of ice
Date:
February 12, 2016
Source:
University of Nebraska-Lincoln
Summary:
A research team has predicted a new molecular form of ice with a record-low density. If the ice can be synthesized, it would become the 18th known crystalline form of water and the first discovered in the US since before World War II.
This illustration shows the ice’s molecular configuration. Credit: Courtesy photo/Yingying Huang and Chongqin Zhu
Amid the season known for transforming Nebraska into an outdoor ice rink, a University of Nebraska-Lincoln-led research team has predicted a new molecular form of the slippery stuff that even Mother Nature has never borne.
The proposed ice, which the researchers describe in a Feb. 12, 2016 study in the journal Science Advances, would be about 25 percent less dense than a record-low form synthesized by a European team in 2014.
If the ice can be synthesized, it would become the 18th known crystalline form of water — and the first discovered in the United States since before World War II.
“We performed a lot of calculations (focused on) whether this is not just a low-density ice, but perhaps the lowest-density ice to date,” said Xiao Cheng Zeng, an Ameritas University Professor of chemistry who co-authored the study. “A lot of people are interested in predicting a new ice structure beyond the state of the art.”
This newest finding represents the latest in a long line of ice-related research from Zeng, who previously discovered a two-dimensional “Nebraska Ice” that contracts rather than expands when frozen under certain conditions.
Zeng’s newest study, which was co-led by Dalian University of Technology’s Jijun Zhao, used a computational algorithm and molecular simulation to determine the ranges of extreme pressure and temperature under which water would freeze into the predicted configuration. That configuration takes the form of a clathrate — essentially a series of water molecules that form an interlocking cage-like structure.
It was long believed that these cages could maintain their structural integrity only when housing “guest molecules” such as methane, which fills an abundance of natural clathrates found on the ocean floor and in permafrost. Like the European team before them, however, Zeng and his colleagues have calculated that their clathrate would retain its stability even after its guest molecules have been evicted.
Actually synthesizing the clathrate will take some effort. Based on the team’s calculations, the new ice will form only when water molecules are placed inside an enclosed space that is subjected to ultra-high, outwardly expanding pressure.
Just how much? At minus-10 Fahrenheit, the enclosure would need to be surrounded by expansion pressure about four times greater than what is found at the Pacific Ocean’s deepest trench. At minus-460, that pressure would need to be even greater — roughly the same amount experienced by a person shouldering 300 jumbo jets at sea level.
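For readers who prefer SI units, here is a quick back-of-envelope conversion of the figures quoted above (my own arithmetic; the ~110 MPa trench pressure is an approximate assumption, not a number from the paper):

```python
# Unit conversions for the values quoted above (illustrative only).
def f_to_c(temp_f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (temp_f - 32.0) * 5.0 / 9.0

TRENCH_PRESSURE_MPA = 110.0   # roughly the pressure at the ocean's deepest trench (assumed)

print(f"-10 F  ~ {f_to_c(-10):.0f} C")                       # about -23 C
print(f"-460 F ~ {f_to_c(-460):.0f} C (near absolute zero)")  # about -273 C
print(f"~4x trench pressure ~ {4 * TRENCH_PRESSURE_MPA:.0f} MPa of tension (negative pressure)")
```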
The guest molecules would then need to be extracted via a vacuuming process pioneered by the European team, which Zeng credited with inspiring his own group to conduct the new study.
Yet Zeng said the wonders of ordinary ice — the type that has covered Earth for billions of years — have also motivated his team’s research.
“Water and ice are forever interesting because they have such relevance to human beings and life,” Zeng said. “If you think about it, the low density of natural ice protects the water below it; if it were denser, water would freeze from the bottom up, and no living species could survive. So Mother Nature’s combination is just so perfect.”
If confirmed, the new form of ice will be called “Ice XVII,” a naming quirk that resulted from scientists terming the first two identified forms “Ice I.”
Zeng and Zhao co-authored the Science Advances study with UNL postdoctoral researcher Chongqin Zhu; Yingying Huang, a visiting research fellow from the Dalian University of Technology; and researchers from the Chinese Academy of Sciences and the University of Science and Technology of China.
The team’s research was funded in part by the National Science Foundation and conducted with the assistance of UNL’s Holland Computing Center.
Journal Reference:
Y. Huang, C. Zhu, L. Wang, X. Cao, Y. Su, X. Jiang, S. Meng, J. Zhao, X. C. Zeng. A new phase diagram of water under negative pressure: The rise of the lowest-density clathrate s-III. Science Advances, 2016; 2 (2): e1501010 DOI: 10.1126/sciadv.1501010
Pole dancing water molecules: Researchers have seen this remarkable phenomenon on the surface of an important technological material
Date: December 21, 2015
Source: Vienna University of Technology
Summary: From pole dancing to square dance: Water molecules on perovskite surfaces show interesting patterns of motion. Surface scientists have now managed to image the dance of the atoms.
This is a visualization of the dance of the atoms on a crystal surface. Credit: TU Wien
Perovskites are materials used in batteries, fuel cells, and electronic components, and occur in nature as minerals. Despite their important role in technology, little is known about the reactivity of their surfaces. Professor Ulrike Diebold’s team at TU Wien (Vienna) has answered a long-standing question using scanning tunnelling microscopes and computer simulations: How do water molecules behave when they attach to a perovskite surface? Normally only the outermost atoms at the surface influence this behaviour, but on perovskites the deeper layers are important, too. The results have been published in the journal Nature Materials.
Perovskite dissociates water molecules
“We studied strontium ruthenate — a typical perovskite material,” says Ulrike Diebold. It has a crystalline structure containing oxygen, strontium and ruthenium. When the crystal is broken apart, the outermost layer consists of only strontium and oxygen atoms; the ruthenium is located underneath, surrounded by oxygen atoms.
A water molecule that lands on this surface splits into two parts: A hydrogen atom is stripped off the molecule and attaches to an oxygen atom on the crystal’s surface. This process is known as dissociation. However, although they are physically separated, the pieces continue to interact through a weak “hydrogen bond.”
It is this interaction that causes a strange effect: The OH group cannot move freely, and circles the hydrogen atom like a dancer spinning on a pole. Although this is the first observation of such behaviour, it was not entirely unexpected: “This effect was predicted a few years ago based on theoretical calculations, and we have finally confirmed it with our experiments” said Diebold.
Dancing requires space
When more water is put onto the surface, the stage becomes too crowded and spinning stops. “The OH group can only move freely in a circle if none of the neighbouring spaces are occupied,” explains Florian Mittendorfer, who performed the calculations together with PhD student Wernfried Mayr-Schmölzer. At first, when two water molecules are in neighbouring sites, the spinning OH groups collide and get stuck together, forming pairs. Then, as the amount of water is increased, the pairs stick together and form long chains. Eventually, a water molecule cannot find the pair of sites it needs to split up, and attaches instead as a complete molecule.
The new methods that have been developed and applied by the TU Wien research team have made significant advances in surface research. Whereas researchers were previously reliant on indirect measurements, they can now — with the necessary expertise — directly map and observe the behaviour of individual atoms on the surface. This opens up new possibilities for modern materials research, for example for developing and improving catalysts.
Daniel Halwidl, Bernhard Stöger, Wernfried Mayr-Schmölzer, Jiri Pavelec, David Fobes, Jin Peng, Zhiqiang Mao, Gareth S. Parkinson, Michael Schmid, Florian Mittendorfer, Josef Redinger, Ulrike Diebold. Adsorption of water at the SrO surface of ruthenates. Nature Materials, 2015; DOI: 10.1038/nmat4512
Climate scientists are tiring of governance that does not lead to action. But democracy must not be weakened in the fight against global warming, warns Nico Stehr.
Illustration by David Parkins
There are many threats to democracy in the modern era. Not least is the risk posed by the widespread public feeling that politicians are not listening. Such discontent can be seen in the political far right: the Tea Party movement in the United States, the UK Independence Party, the Pegida (Patriotic Europeans Against the Islamization of the West) demonstrators in Germany, and the National Front in France.
More surprisingly, a similar impatience with the political elite is now also present in the scientific community. Researchers are increasingly concerned that no one is listening to their diagnosis of the dangers of human-induced climate change and its long-lasting consequences, despite the robust scientific consensus. As governments continue to fail to take appropriate political action, democracy begins to look to some like an inconvenient form of governance. There is a tendency to want to take decisions out of the hands of politicians and the public, and, given the ‘exceptional circumstances’, put the decisions into the hands of scientists themselves.
This scientific disenchantment with democracy has slipped under the radar of many social scientists and commentators. Attention is urgently needed: the solution to the intractable ‘wicked problem’ of global warming is to enhance democracy, not jettison it.
Voices of discontent
Democratic nations seem to have failed us in the climate arena so far. The past decade’s climate summits in Copenhagen, Cancun, Durban and Warsaw were political washouts. Expectations for the next meeting in Paris this December are low.
Academics increasingly point to democracy as a reason for failure. NASA climate researcher James Hansen was quoted in 2009 in The Guardian as saying: “the democratic process doesn’t quite seem to be working” (ref. 1). In a special issue of the journal Environmental Politics in 2010, political scientist Mark Beeson argued (ref. 2) that forms of ‘good’ authoritarianism “may become not only justifiable, but essential for the survival of humanity in anything approaching a civilised form”. The title of an opinion piece published earlier this year in The Conversation, an online magazine funded by universities, sums up the issue: ‘Hidden crisis of liberal democracy creates climate change paralysis’ (see go.nature.com/pqgysr).
The depiction of contemporary democracies as ill-equipped to deal with climate change comes from a range of considerations. These include a deep-seated pessimism about the psychological make-up of humans; the disinclination of people to mobilize on issues that seem far removed; and the presumed lack of intellectual competence of people to grasp complex issues. On top of these there is the presumed scientific illiteracy of most politicians and the electorate; the inability of governments locked into short-term voting cycles to address long-term problems; the influence of vested interests on political agendas; the addiction to fossil fuels; and the feeling among the climate-science community that its message falls on the deaf ears of politicians.
“It is dangerous to blindly believe that science and scientists alone can tell us what to do.”
Such views can be heard from the highest ranks of climate science. Hans Joachim Schellnhuber, founding director of the Potsdam Institute for Climate Impact Research and chair of the German Advisory Council on Global Change, said of the inaction in a 2011 interview with German newspaper Der Spiegel: “comfort and ignorance are the biggest flaws of human character. This is a potentially deadly mix”.
What, then, is the alternative? The solution hinted at by many people leans towards a technocracy, in which decisions are made by those with technical knowledge. This can be seen in a shift in the statements of some co-authors of Intergovernmental Panel on Climate Change reports, who are moving away from a purely advisory role towards policy prescription (see, for example, ref. 3).
We must be careful what we wish for. Nations that have followed the path of ‘authoritarian modernization’, such as China and Russia, cannot claim to have a record of environmental accomplishments. In the past two or three years, China’s system has made it a global leader in renewables (it accounts for more than one-quarter of the planet’s investment in such energies; ref. 4). Despite this, it is struggling to meet ambitious environmental targets and will continue to lead the world for some time in greenhouse-gas emissions. As Chinese citizens become wealthier and more educated, they will surely push for more democratic inclusion in environmental policymaking.
Broad-based support for environmental concerns and subsequent regulations came about in open democratic argument on the value of nature for humanity. Democracies learn from mistakes; autocracies lack flexibility and adaptability (ref. 5). Democratic nations have forged the most effective international agreements, such as the Montreal Protocol against ozone-depleting substances.
Global stage
Impatient scientists often privilege hegemonic players such as world powers, states, transnational organizations, and multinational corporations. They tend to prefer sweeping policies of global mitigation over messier approaches of local adaptation; for them, global knowledge triumphs over local know-how. But societal trends are going in the opposite direction. The ability of large institutions to impose their will on citizens is declining. People are mobilizing around local concerns and efforts (ref. 6).
The pessimistic assessment of the ability of democratic governance to cope with and control exceptional circumstances is linked to an optimistic assessment of the potential of large-scale social and economic planning. The uncertainties of social, political and economic events are treated as minor obstacles that can be overcome easily by implementing policies that experts prescribe. But humanity’s capacity to plan ahead effectively is limited. The centralized social and economic planning concept, widely discussed decades ago, has rightly fallen into disrepute (ref. 7).
The argument for an authoritarian political approach concentrates on a single effect that governance ought to achieve: a reduction of greenhouse-gas emissions. By focusing on that goal, rather than on the economic and social conditions that go hand-in-hand with it, climate policies are reduced to scientific or technical issues. But these are not the sole considerations. Environmental concerns are tightly entangled with other political, economic and cultural issues that both broaden the questions at hand and open up different ways of approaching them. Scientific knowledge is neither immediately performative nor persuasive.
Enhance engagement
There is but one political system that is able to rationally and legitimately cope with the divergent political interests affected by climate change and that is democracy. Only a democratic system can sensitively attend to the conflicts within and among nations and communities, decide between different policies, and generally advance the aspirations of different segments of the population. The ultimate and urgent challenge is that of enhancing democracy, for example by reducing social inequality (ref. 8).
If not, the threat to civilization will be much more than just changes to our physical environment. The erosion of democracy is an unnecessary suppression of social complexity and rights.
The philosopher Friedrich Hayek, who led the debate against social and economic planning in the mid-twentieth century (ref. 9), noted a paradox that applies today. As science advances, it tends to strengthen the idea that we should “aim at more deliberate and comprehensive control of all human activities”. Hayek pessimistically added: “It is for this reason that those intoxicated by the advance of knowledge so often become the enemies of freedom” (ref. 10). We should heed his warning. It is dangerous to blindly believe that science and scientists alone can tell us what to do.
Nature 525, 449–450 (24 September 2015) doi:10.1038/525449a
Rosanvallon, P. The Society of Equals (Harvard Univ. Press, 2013).
Hayek, F. A. Nature 148, 580–584 (1941).
Hayek, F. A. The Constitution of Liberty (Routledge, 1960).
Nico Stehr is a sociologist and founding director of the European Center for Sustainability Research at Zeppelin University in Friedrichshafen, Germany.
Posted on September 24, 2015 by …and Then There’s Physics
I thought I would briefly discuss this Nature comment called Climate policy: Democracy is not an inconvenience. I initially read it and tweeted it, thinking “yes, democracy is important and not an inconvenience”. I then read it again and thought, “hold on, is this a massive strawman?”
The main premise seems to be based on:
Researchers are increasingly concerned that no one is listening to their diagnosis of the dangers of human-induced climate change and its long-lasting consequences, despite the robust scientific consensus. As governments continue to fail to take appropriate political action, democracy begins to look to some like an inconvenient form of governance. There is a tendency to want to take decisions out of the hands of politicians and the public, and, given the ‘exceptional circumstances’, put the decisions into the hands of scientists themselves.
Really? I realise that there are extreme elements everywhere, but I don’t think I’ve seen any scientists actually argue that we should subvert democracy. I’ve certainly seen people suggest that our democracies are not suited to solving this type of global problem, but this – as far as I can tell – is typically said in the context of democracy being the worst form of government, apart from all others. Also, it is often in reference to the influence of the media, vested interests, and short-term political thinking, rather than an argument against democracy itself.
In fact, what I think most scientists are frustrated with (me, certainly) is a sense that we have all this evidence, it is very strong, and yet it appears to be largely being ignored or dismissed. I think most scientists recognise that the evidence alone doesn’t tell us what should be done, and that there are other important factors that will – and should – influence decision making. The argument is more to do with robust, evidence-based policy-making, not an implicit suggestion that we should undermine democracy and put the decisions into the hands of scientists themselves. Not only would putting decision making into the hands of scientists be an exceptionally poor idea (I should know, I am one and work with many others), but I’d also like to see an example of someone making this argument, because I really can’t think of one.
Maybe the most ironic thing about this article is that it almost seems to be doing what it is criticising others for apparently doing. It is essentially trying to delegitimise some by suggesting that their concerns are an attempt to subvert democracy. Well, as far as I’m concerned, free speech and the right to criticise policy makers is a fundamental part of our modern democracies. Suggesting that something that is fundamentally democratic is an attempt to undermine democracy seems rather confused; maybe intentionally so. Of course, we live in democracies where such arguments are allowed, even if they don’t make much sense.
A team has selected 61 plates from the Observatório Nacional documenting the 1919 eclipse. Observations made in Sobral, Ceará, helped demonstrate Albert Einstein’s conclusions
A team from the Observatório Nacional (ON/MCTI) has surveyed the photographic plates produced by the expedition that observed the total solar eclipse in the city of Sobral, Ceará, in 1919 and contributed to confirming Albert Einstein’s theory of general relativity.
Made up of astronomer Carlos H. Veiga, librarians Katia T. dos Santos and M. Luiza Dias, and analyst Renaldo N. da S. Junior, the group examined 900 photographic plates from the Observatory’s library collection. Because of their scientific importance, the 61 plates from the observations of the famous eclipse were selected; they still faithfully preserve the image of the new moon perfectly covering the Sun, recorded on a very special day for science.
From the second half of the nineteenth century onward, photographic images were recorded on glass plates. This medium, coated with an emulsion of light-sensitive silver salts, was used not only to record everyday life but also by the astronomical community, until the last decade of the twentieth century, to observe celestial bodies. Because glass has a low coefficient of thermal expansion, the plates guaranteed the precision and reliability of astronomical measurements over time.
Photography brought a great advance for astronomy and for the development of astrophysics, taking on the role of a detector and allowing observational data on large structures, such as galaxies, to be compared across long intervals of time. In 1873 a systematic program was begun to observe sunspot activity, eclipses, and the solar corona.
A morning that changed science
On the morning of May 29, 1919, a celestial phenomenon would, for a few minutes, turn day into night in a quiet town in the Brazilian Northeast. Those few minutes had to be used to the fullest. It was the opportunity to confirm experimentally a new scientific claim predicted by a theory conceived by Einstein (1879-1955), the German-born physicist: general relativity, which can be understood as a theory that explains gravitational phenomena.
Sobral, the town in Ceará, would be the stage that helped confirm an effect predicted by general relativity: the deflection of light, in which a beam of light (in this case, coming from a star) should have its path bent (or deviated) when passing close to a strong gravitational field (here, generated by the Sun).
This deflection of light makes the observed star appear to be in a position different from its real one. The astronomers’ goal was to measure the small angle formed by these two positions.
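For context, the general-relativistic prediction for that angle, for a ray grazing the solar limb, is the textbook formula below (not quoted in the article); inserting the Sun’s mass and radius gives roughly 1.75 arcseconds, twice the value expected from Newtonian gravity alone.

```latex
\delta \;=\; \frac{4 G M_\odot}{c^{2} R_\odot}
\;\approx\; \frac{4\,(6.67\times10^{-11})\,(1.99\times10^{30})}
                 {(3.00\times10^{8})^{2}\,(6.96\times10^{8})}
\;\approx\; 8.5\times10^{-6}\ \text{rad} \;\approx\; 1.75''
```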
On that day, a total solar eclipse would take place. Calculations predicted that there should be at least one star in the background sky whose light would pass close to the solar limb. With that configuration and good weather conditions, there would be a good chance of confirming the new theory.
Scalable 3-D silicon chip architecture based on single atom quantum bits provides a blueprint to build operational quantum computers
Date:
October 30, 2015
Source:
University of New South Wales
Summary:
Researchers have designed a full-scale architecture for a quantum computer in silicon. The new concept provides a pathway for building an operational quantum computer with error correction.
This picture shows from left to right Dr Matthew House, Sam Hile (seated), Scientia Professor Sven Rogge and Scientia Professor Michelle Simmons of the ARC Centre of Excellence for Quantum Computation and Communication Technology at UNSW. Credit: Deb Smith, UNSW Australia
Australian scientists have designed a 3D silicon chip architecture based on single atom quantum bits, which is compatible with atomic-scale fabrication techniques — providing a blueprint to build a large-scale quantum computer.
Scientists and engineers from the Australian Research Council Centre of Excellence for Quantum Computation and Communication Technology (CQC2T), headquartered at the University of New South Wales (UNSW), are leading the world in the race to develop a scalable quantum computer in silicon — a material well-understood and favoured by the trillion-dollar computing and microelectronics industry.
Teams led by UNSW researchers have already demonstrated a unique fabrication strategy for realising atomic-scale devices and have developed the world’s most efficient quantum bits in silicon using either the electron or nuclear spins of single phosphorus atoms. Quantum bits — or qubits — are the fundamental data components of quantum computers.
One of the final hurdles to scaling up to an operational quantum computer is the architecture. Here it is necessary to figure out how to precisely control multiple qubits in parallel, across an array of many thousands of qubits, and constantly correct for ‘quantum’ errors in calculations.
Now, the CQC2T collaboration, involving theoretical and experimental researchers from the University of Melbourne and UNSW, has designed such a device. In a study published today in Science Advances, the CQC2T team describes a new silicon architecture, which uses atomic-scale qubits aligned to control lines — which are essentially very narrow wires — inside a 3D design.
“We have demonstrated we can build devices in silicon at the atomic-scale and have been working towards a full-scale architecture where we can perform error correction protocols — providing a practical system that can be scaled up to larger numbers of qubits,” says UNSW Scientia Professor Michelle Simmons, study co-author and Director of the CQC2T.
“The great thing about this work, and architecture, is that it gives us an endpoint. We now know exactly what we need to do in the international race to get there.”
In the team’s conceptual design, they have moved from a one-dimensional array of qubits, positioned along a single line, to a two-dimensional array, positioned on a plane that is far more tolerant to errors. This qubit layer is “sandwiched” in a three-dimensional architecture, between two layers of wires arranged in a grid.
By applying voltages to a sub-set of these wires, multiple qubits can be controlled in parallel, performing a series of operations using far fewer controls. Importantly, with their design, they can perform the 2D surface code error correction protocols in which any computational errors that creep into the calculation can be corrected faster than they occur.
“Our Australian team has developed the world’s best qubits in silicon,” says University of Melbourne Professor Lloyd Hollenberg, Deputy Director of the CQC2T who led the work with colleague Dr Charles Hill. “However, to scale up to a full operational quantum computer we need more than just many of these qubits — we need to be able to control and arrange them in such a way that we can correct errors quantum mechanically.”
“In our work, we’ve developed a blueprint that is unique to our system of qubits in silicon, for building a full-scale quantum computer.”
In their paper, the team proposes a strategy to build the device, which leverages the CQC2T’s internationally unique capability of atomic-scale device fabrication. They have also modelled the required voltages applied to the grid wires, needed to address individual qubits, and make the processor work.
“This architecture gives us the dense packing and parallel operation essential for scaling up the size of the quantum processor,” says Scientia Professor Sven Rogge, Head of the UNSW School of Physics. “Ultimately, the structure is scalable to millions of qubits, required for a full-scale quantum processor.”
Background
In classical computers, data is rendered as binary bits, which are always in one of two states: 0 or 1. However, a qubit can exist in both of these states at once, a condition known as a superposition. A qubit operation exploits this quantum weirdness by allowing many computations to be performed in parallel (a two-qubit system performs the operation on 4 values, a three-qubit system on 8, and so on).
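As a toy illustration of that scaling (my own sketch, not part of the UNSW work), an n-qubit register is described by 2^n complex amplitudes, so a single operation on the register acts on all of those values at once:

```python
import numpy as np

# Toy illustration (not from the paper): an n-qubit register is described by
# 2**n complex amplitudes, which is why each added qubit doubles the state space.
def uniform_superposition(n_qubits):
    """Return the equal superposition over all basis states of n qubits."""
    dim = 2 ** n_qubits                       # 2, 4, 8, ... amplitudes
    return np.ones(dim, dtype=complex) / np.sqrt(dim)

for n in (1, 2, 3, 10):
    print(f"{n} qubit(s) -> {uniform_superposition(n).size} amplitudes")
```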
As a result, quantum computers will far exceed today’s most powerful super computers, and offer enormous advantages for a range of complex problems, such as rapidly scouring vast databases, modelling financial markets, optimising huge metropolitan transport networks, and modelling complex biological molecules.
Dean Radin, author of The Conscious Universe: The Scientific Truth of Psychic Phenomena (HarperSanFrancisco 1997), says that “psi researchers have resolved a century of skeptical doubts through thousands of replicated laboratory studies” (289) regarding the reality of psychic phenomena such as ESP (extrasensory perception) and PK (psychokinesis). Of course, Radin also considers meta-analysis as the most widely accepted method of measuring replication in science (51). Few scientists would agree with either of these claims. In any case, most American adults—about 75%, according to a 2005 Gallup poll—believe in at least one paranormal phenomenon. Forty-one percent believe in ESP. Fifty-five percent believe in the power of the mind to heal the body. One doesn’t need to be psychic to know that the majority of believers in psi have come to their beliefs through experience or anecdotes, rather than through studying the scientific evidence Radin puts forth in his book.
Radin doesn’t claim that the scientific evidence is going to make more believers. He realizes that the kind of evidence psi researchers have put forth hasn’t persuaded most scientists that there is anything of value in parapsychology. He thinks there is “a general uneasiness about parapsychology” and that because of the “insular nature of scientific disciplines, the vast majority of psi experiments are unknown to most scientists.” He also dismisses critics as skeptics who’ve conducted “superficial reviews.” Anyone familiar with the entire body of research, he says, would recognize he is correct and would see that there are “fantastic theoretical implications” (129) to psi research. Nevertheless, in 2005 the Nobel Committee once again passed over the psi scientists when handing out awards to those who have made significant contributions to our scientific knowledge.
The evidence Radin presents, however, is little more than a hodgepodge of occult statistics. Unable to find a single person who can correctly guess a three-letter word or move a pencil an inch without trickery, the psi researchers have resorted to doing complex statistical analyses of data. In well-designed studies they assume that whenever they have data that, by some statistical formula, is not likely due to chance, they attribute the outcome to psi. A well-designed study is one that carefully controls for such things as cheating, sensory leakage (unintentional transfer of information by non-psychic means), inadequate randomization, and other factors that might lead to an artifact (something that looks like it’s due to psi when it’s actually due to something else).
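To make concrete what “not likely due to chance” means in such studies, here is a minimal sketch (my own, with invented numbers rather than data from Radin’s book) of a binomial test of a hit rate against a 25 percent chance baseline, as in a four-choice ganzfeld trial:

```python
# Illustrative only: binomial test of a hit rate against chance expectation.
# The tallies below are invented, not data from Radin's book.
from scipy.stats import binomtest

hits, trials, chance = 300, 1000, 0.25          # hypothetical four-choice ganzfeld tally
result = binomtest(hits, trials, chance, alternative="greater")
print(f"hit rate = {hits/trials:.1%}, one-sided p-value vs. chance = {result.pvalue:.4f}")
```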
The result of this enormous data that Radin cites is that there is statistical evidence (for what it’s worth) that indicates (however tentatively) that some very weak psi effects are present (so weak that not a single individual who participates in a successful study has any inkling of possessing psychic power). Nevertheless, Radin thinks it is appropriate to speculate about the enormous implications of psi for biology, psychology, sociology, philosophy, religion, medicine, technology, warfare, police work, business, and politics. Never mind that nobody has any idea as to how psi might work. That is a minor detail to someone who can write with a straight face (apparently) that:
lots of independent, simple glimpses of the future may one day innocently crash the future. (297)
It’s not clear what it means to “crash the future,” but it doesn’t sound good.
No, it certainly doesn’t sound good. But, as somebody once said, “the future will be better tomorrow.”
According to Radin, we may look forward to a future with “psychic garage-door openers” and the ability to “push atoms around” with our minds (292). Radin is not the least bit put off by the criticism that all the other sciences have led us away from superstition and magical thinking, while parapsychology tries to lead us into those pre-scientific modes. Radin notes that “the concept that mind is primary over matter is deeply rooted in Eastern philosophy and ancient beliefs about magic.” However, instead of saying that it is now time to move forward, he rebuffs “Western science” for rejecting such beliefs as “mere superstition.” Magical thinking, he says, “lies close beneath the veneer of the sophisticated modern mind” (293). He even claims that “the fundamental issues [of consciousness] remain as mysterious today as they did five thousand years ago.” We may not have arrived at a final theory of the mind, but a lot of the mystery has evaporated with the progress made in the neurosciences over the past century. None of our advancing knowledge of the mind, however, has been due to contributions from parapsychologists. (Cf. Blackmore 2001).
Radin doesn’t grasp the fact that the concept of mind can be an illusion without being a “meaningless illusion” (294). He seems to have read David Chalmers, but I suggest he and his followers read Daniel Dennett. I’d begin with Sweet Dreams (2005). Consciousness is not “a complete mystery,” as Radin claims (294). The best that Radin can come up with as evidence that psi research has something to offer consciousness studies is the claim that “information can be obtained in ways that bypass the ordinary sensory system altogether” (295). Let’s ignore the fact that this claim begs the question. What neuroscience has uncovered is just how interesting and complex this “ordinary sensory system” turns out to be.
Radin would have us believe that magical thinking is essential to our psychological well being (293). If he’s right, we’ll one day be able to solve all social problems by “mass-mind healings.” And religious claims will get new meaning as people come to understand the psychic forces behind miracles and talking to the dead. According to Radin, when a medium today talks to a spirit “perhaps he is in contact with someone who is alive in the past. From the ‘departed’ person’s perspective, she may find herself communicating with someone from the future, although it is not clear that she would know that” (295). Yes, I don’t think that would be clear, either.
In medicine, Radin expects distant mental healing (which he argues has been scientifically established) to expand to something that “might be called techno-shamanism” (296). He describes this new development as “an exotic, yet rigorously schooled combination of ancient magical principles and future technologies” (296). He expects psi to join magnetic resonance imaging and blood tests as common stock in the world of medicine. “This would translate into huge savings and improved quality of life for millions of people” (192) as “untold billions of dollars in medical costs could be saved” (193).
Then, of course, there will be the very useful developments that include the ability to telepathically “call a friend in a distant spacecraft, or someone in a deeply submerged submarine” (296). On the other hand, the use of psychic power by the military and by police investigators will depend, Radin says, on “the mood of the times.” If what is popular on television is an indicator of the mood of the times, I predict that there will be full employment for psychic detectives and remote viewers in the future.
Radin looks forward to the day when psi technology “might allow thought control of prosthetics for paraplegics” and “mind-melding techniques to provide people with vast, computer-enhanced memories, lightning-fast mathematical capabilities, and supersensitive perceptions” (197). He even suggests we employ remote viewer Joe McMoneagle to reveal future technological devices he “has sensed in his remote-viewing sessions” (100).
Radin considers a few other benefits that will come from our increased ability to use psi powers: “to guide archeological digs and treasure-hunting expeditions, enhance gambling profits, and provide insight into historical events” (202). However, he does not consider some of the obvious problems and benefits that would occur should psychic ability become common. Imagine the difficulties for the junior high teacher in a room full of adolescents trained in PK. Teachers and parents would be spending most of their psychic energy controlling the hormones of their charges. The female garment and beauty industries would be destroyed as many attractive females would be driven to try to make themselves look ugly to avoid having their clothes constantly removed by psychic perverts and pranksters.
Ben Radford has noted the potential for “gross and unethical violations of privacy,” as people would be peeping into each other’s minds. On the other hand, infidelity and all forms of deception might die out, since nobody could deceive anyone about anything if we were all psychic. Magic would become pointless and “professions that involve deception would be worthless” (Radford 2000). There wouldn’t be any need for undercover work or spies. Every child molester would be identified immediately. No double agent could ever get away with it. There wouldn’t be any more lotteries, since everybody could predict the winning numbers. We wouldn’t need trials of accused persons and the polygraph would be a thing of the past.
Hurricanes, tsunamis, earthquakes, floods, and other signs of intelligent design will become things of the past as billions of humans unite to focus their thoughts on predicting and controlling the forces of nature. We won’t need to build elaborate systems to turn away errant asteroids or comets heading for our planet: billions of us will unite to will the objects on their merry way toward some other oblivion. It is unlikely that human nature will change as we become more psychically able, so warfare will continue but will be significantly changed. Weapons won’t be needed because we’ll be able to rearrange our enemies’ atoms and turn them into mush from the comfort of our living rooms. (Who knows? It might only take a few folks with super psi powers to find Osama bin Laden and turn him into a puddle of irradiated meat.) Disease and old age will become things of the past as we learn to use our thoughts to kill cancer cells and control our DNA.
Space travel will become trivial and heavy lifting will be eliminated as we will be able to teleport anything anywhere at any time through global consciousness. We’ll be able to transport all the benefits of earthly consciousness to every planet in the universe. There are many other likely effects of global psychic ability that Radin has overlooked but this is understandable given his heavy workload as Senior Scientist at IONS (The Institute of Noetic Sciences) and as a blogger.
Radin notes only one problem should psi ability become common: we’ll all be dipping into the future and we might “crash the future,” whatever that means. The bright side of crashing the future will be the realization of “true freedom” as we will no longer be doomed to our predestined fate. We will all have the power “to create the future as we wish, rather than blindly follow a predetermined course through our ignorance” (297). That should make even the most cynical Islamic fundamentalist or doomsday Christian take heed. This psi stuff could be dangerous to one’s delusions even as it tickles one’s funny bone and stimulates one’s imagination to aspire to the power of gods and demons.
****** ****** ******
update: Radin has a follow-up book out called Entangled Minds: Extrasensory Experiences in a Quantum Reality. Like The Conscious Universe, this one lays out the scientific evidence for psi as seen from the eyes of a true believer. As noted above, in The Conscious Universe, Radin uses statistics and meta-analysis to prove that psychic phenomena really do exist even if those who have the experiences in the labs are unaware of them. Statistical data show that the world has gone psychic, according to the latest generation of parapsychologists. You may be unconscious of it, but your mind is affecting random number generators all over the world as you read this. The old psychic stuff—thinking about aunt Hildie moments before she calls to tell you to bugger off—is now demonstrated to be true by statistical methods that were validated in 1937 by Burton Camp and meta-validated by Radin 60 years later when he asserted that meta-analysis was the replication parapsychologists had been looking for. The only difference is that now when you think of aunt Hildie it might be moments before she calls her car mechanic and that, too, may be linked to activity in your mind that you are unaware of.
Radin’s second book sees entanglement as a key to understanding extrasensory phenomena. Entanglement is a concept from quantum physics that refers to connections between subatomic particles that persist regardless of being separated by various distances. He notes that some physicists have speculated that the entire universe might be entangled and that the Eastern mystics of old might have been on to something cosmic. His speculations are rather wild but his assertions are rather modest. For example: “I believe that entanglement suggests a scenario that may ultimately lead to a vastly improved understanding of psi” (p. 14) and “I propose that the fabric of reality is comprised [sic] of ‘entangled threads’ that are consistent with the core of psi experience” (p. 19). Skeptics might suggest that studying self-deception and wishful thinking would lead to a vastly improved understanding of psi research and that being consistent with a model is a minimal, necessary condition for taking any model seriously, but hardly sufficient to warrant much faith.
Readers of The Conscious Universe will be pleased to know that Radin has outdone himself on the meta-analysis front. In his second book, he provides a meta-meta-analysis of over 1,000 studies on dream psi, ganzfeld psi, staring, distant intention, dice PK, and RNG PK. He concludes that the odds against chance of getting these results are 10^104 against 1 (p. 276). As Radin says, “there can be little doubt that something interesting is going on” (p. 275). Yes, but I’m afraid it may be going on only in some entangled minds.
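For a rough sense of how meta-analysis can generate such astronomical odds (my own sketch, with invented inputs rather than Radin’s data), combining many individually weak z-scores, for instance by Stouffer’s method, drives the combined p-value toward zero very quickly:

```python
# Illustrative Stouffer-style combination of z-scores (invented numbers,
# not Radin's data): many weak effects can combine into an enormous overall z.
import numpy as np
from scipy.stats import norm

study_z = np.full(1000, 0.7)                   # 1,000 hypothetical studies, each z = 0.7
combined_z = study_z.sum() / np.sqrt(study_z.size)
p = norm.sf(combined_z)                        # one-sided p-value for the combined z
print(f"combined z ~ {combined_z:.1f}, one-sided p ~ {p:.1e}")
# A combined z around 22 corresponds to odds against chance on the order of
# 10^108 to 1 -- which is how headline figures like these can arise even when
# each individual effect is tiny and no single participant notices anything.
```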
On the bright side, Radin continues to ignore Gary Schwartz and self-proclaimed psychics like John Edward, Sylvia Browne, Uri Geller, and Ted Owens. He still has a fondness for remote viewers like Joe McMoneagle, however, who seems impressive if you don’t understand subjective validation, are willing to ignore the vast majority of his visions, and aren’t bothered by vagueness in the criteria as to what counts as a “hit” in remote viewing. Even a broken clock is right twice a day.
Radin predicts that some day “psi research will be taught in universities with the same aplomb as today’s elementary economics and biology” (p. 295). Perhaps psi research will be taught in the same classroom as intelligent design, though this seems unlikely as parapsychology attempts to reduce all supernatural and paranormal phenomena to physics. Maybe they could both be taught in the same curriculum: things that explain everything but illuminate nothing.
note: If the reader wants to see a more complete review of Radin’s work, please read my reviews of his books. Links are given below.
Physical scientists aren’t trained for all the political and moral issues.
Oct 2 2015 – 10:00am
By: Joel N. Shurkin, Contributor
(Inside Science) — The notion that Earth’s climate is changing—and that the threat to the world is serious—goes back to the 1980s, when a consensus began to form among climate scientists as temperatures began to rise noticeably. Thirty years later, that consensus is solid, yet climate change and the disruption it may cause remain divisive political issues, and millions of people remain unconvinced.
A new book argues that social scientists should play a greater role in helping natural scientists convince people of the reality of climate change and drive policy.
Climate Change and Society consists of 13 essays on why the debate needs the voices of social scientists, including political scientists, psychologists, anthropologists, and sociologists. It is edited by Riley E. Dunlap, professor of sociology at Oklahoma State University in Stillwater, and Robert J. Brulle, of Drexel University, professor of sociology and environmental science in Philadelphia.
Brulle said the physical scientists tend to frame climate change “as a technocratic and managerial problem.”
“Contrast that to the Pope,” he said.
Pope Francis sees it as a “political, moral issue that won’t be settled by a group of experts sitting in a room,” said Brulle, who emphasized that it will be settled by political process. Sociologists agree.
Sheila Jasanoff also agrees. She is the Pforzheimer professor of science and technology studies at the Harvard Kennedy School in Cambridge, Massachusetts, and did not participate in the book.
She said that understanding how people behave differently depending on their belief system is important.
“Denial is a somewhat mystical thing in people’s heads,” Jasanoff said. “One can bring tools of sociology of knowledge and belief—or social studies—to understand how commitments to particular statements of nature are linked with understanding how you would feel compelled to behave if nature were that way.”
Parts of the world where climate change is considered a result of the colonial past may resist taking drastic action at the behest of the former colonial rulers. Jasanoff said that governments will have to convince these groups that climate change is a present danger and attention must be paid.
Some who agree there is a threat are reluctant to advocate for drastic economic changes because they believe the world will be rescued by innovation and technology, Jasanoff said. Even among industrialized countries, views about the potential of technology differ.
Understanding these attitudes is what social scientists do, the book’s authors maintain.
“One of the most pressing contributions our field can make is to legitimate big questions, especially the ability of the current global economic system to take the steps needed to avoid catastrophic climate change,” editors of the book wrote.
The issue also is deeply embedded in the social science of economics and in the problem of “have” and “have-not” societies in consumerism and the economy.
For example, Bangladesh sits at sea level, and if the seas rise enough, nearly the entire country could disappear in the waters. Hurricane Katrina brought hints of the consequences of that reality to New Orleans, a city that now sits below sea level. The heaviest burden of the storm’s effects fell on the poor neighborhoods, Brulle said.
“The people of Bangladesh will suffer more than the people on the Upper East Side of Manhattan,” Brulle said. He said they have to be treated differently, which is not something many physical scientists studying the processes behind sea level rise have to factor into their research.
“Those of us engaged in the climate fight need valuable insight from political scientists and sociologists and psychologists and economists just as surely as from physicists,” agreed Bill McKibben, an environmentalist and author who is a scholar in residence at Middlebury College in Vermont. “It’s very clear carbon is warming the planet; it’s very unclear what mix of prods and preferences might nudge us to use much less.”
Joel Shurkin is a freelance writer in Baltimore. He is a former science writer at the Philadelphia Inquirer and was part of the team that won a Pulitzer Prize for covering Three Mile Island. He has nine published books and is working on a tenth. He has taught journalism at Stanford University, the University of California at Santa Cruz and the University of Alaska Fairbanks. He tweets at @shurkin.
I was struck this morning by the similarity between two twentieth-century passages about entropy. The first is from W.H. Auden’s poem “As I Walked Out One Evening,” and the second from Philip K. Dick’s Do Androids Dream of Electric Sheep? If I were a betting man, I’d put money on PKD having read Auden. The cupboard and the teacup, especially, drew my attention, but it is also worth noting that the passage in PKD immediately precedes J.R. Isidore’s vision of the “tomb world,” a variation on Auden’s “land of the dead.”
Whether or not the passage in PKD is an explicit allusion or homage to Auden, I find it interesting that PKD’s passage, which several times mentions the irradiated dust of nuclear fallout, so closely resembles Auden’s pre-nuclear poem. The psychological issue, in each case, is not humanity’s ability to destroy itself (despite the post-apocalyptic setting of Androids) but the problem of being, as Carl Sagan puts it, “a way for the cosmos to know itself.” How do we live with our knowledge of geologic or cosmological time–scales on which all of human history occupies a mere blip–and, simultaneously, assert the meaningfulness of individual lives? More after the break, but first, the passages:
W.H. Auden, from “As I Walked Out One Evening” (1940):
But all the clocks in the city
Began to whirr and chime:
‘O let not Time deceive you,
You cannot conquer Time.
‘In the burrows of the Nightmare
Where Justice naked is,
Time watches from the shadow
And coughs when you would kiss.
‘In headaches and in worry
Vaguely life leaks away,
And Time will have his fancy
To-morrow or to-day.
‘Into many a green valley
Drifts the appalling snow;
Time breaks the threaded dances
And the diver’s brilliant bow.
‘O plunge your hands in water,
Plunge them in up to the wrist;
Stare, stare in the basin
And wonder what you’ve missed.
‘The glacier knocks in the cupboard,
The desert sighs in the bed,
And the crack in the tea-cup opens
A lane to the land of the dead.
‘Where the beggars raffle the banknotes
And the Giant is enchanting to Jack,
And the Lily-white Boy is a Roarer,
And Jill goes down on her back.
Philip K. Dick, from Do Androids Dream of Electric Sheep? (1968):
“he saw the dust and the ruin of the apartment as it lay spreading out everywhere–he heard the kipple coming, the final disorder of all forms, the absence which would win out. It grew around him as he stood holding the empty ceramic cup; the cupboards of the kitchen creaked and split and he felt the floor beneath his feet give.
Reaching out, he touched the wall. His hand broke the surface; gray particles trickled and hurried down, fragments of plaster resembling the radioactive dust outside. He seated himself at the table and, like rotten, hollow tubes the legs of the chair bent; standing quickly, he set down the cup and tried to reform the chair, tried to press it back into its right shape. The chair came apart in his hands, the screws which had previously connected its several sections ripping out and hanging loose. He saw, on the table, the ceramic cup crack; webs of fine lines grew like the shadows of a vine, and then a chip dropped from the edge of the cup, exposing the rough, unglazed interior.”
Nietzsche frequently and disparately writes about this problem in terms of “eternal recurrence”: the natural cycles of life and death that repeat themselves across long stretches of time dwarf the appearance of any individual member of a single species on one planet. In The Birth of Tragedy (an early work that Nietzsche distances himself from, but still a valuable touchstone in his thought), Nietzsche frames this as a problem of identification. We identify with our individual selves, but those selves are also part of the large natural cycles whose inevitable continuation will destroy the individual. We can attempt to identify with the cycle itself as a claim to immortality. As Sagan says, “Some part of our being knows this is where we came from. We long to return, and we can, because the cosmos is also within us. We’re made of star stuff.”
On the other hand, identifying with the cosmos as a whole diminishes the significance of our own disappearance within the natural cycle. As homo sapiens sapiens we may be part of the terran biosphere in the solar system (itself a secondary star system formed from the stuff of previous supernovas), but as Carl or Friedrich or Wystan or Dick, our individual deaths, like our lives, are not interchangeable. Hannah Arendt, in The Human Condition (1958), refers to this quality as “uniqueness”: “In man, otherness, which he shares with everything that is, and distinctness, which he shares with everything alive, become uniqueness, and human plurality is the paradoxical plurality of unique beings.” We act together, speak together, and, in the process, we forge identities that are irreducible to our membership in a class of objects or a biological species. We exercise what Nietzsche calls the “principle of individuation”: we create individual selves that will never be repeated in the eternal recurrence of natural cycles.
Taking this a step farther, our potential identification with the cosmos as a whole is only possible because we have individual consciousnesses that can identify/form identities. Nietzsche argues that simply disavowing our individual selves in favor of universal being/becoming prevents the cosmos from knowing or being known. The individual (what he calls Apollonian) may be a temporary, fleeting form, but for us to experience our place within the universal (what he calls Dionysian), we must hold our individual selves in tension with those larger processes.
The highest forms of art are born, Nietzsche argues, when Apollo and Dionysus are locked in conflict. We are individuals who will die, and our unique lives will be gone. We are also part of, constitutive of, and coextensive with the dynamic unfolding of the universe as a whole. A few billion years from now, the sun will die and take the Earth (and Mercury and Venus) with it, but even that will not be the end of our story. The productive problem we face is finding meaning that can emerge from both biography and cosmology and their vast differences in scale.
Arendt has some very interesting things to say about entropy and the apparently miraculous rescue of human life and worldliness from the seemingly inevitable destruction of natural cycles. I am tempted to end with her, but, for this post, I want to give Auden the final word. His poem begins with lovers declaring that they will love forever, and the entropic wisdom of the city's chiming clocks interrupts those declarations. The meaning of that interruption, however, is not a simple rejection of subjective folly in favor of a more objective, longer view. It leaves the lovers (and the listeners who are left long after the lovers leave) with a peculiar form of responsibility:
Photo credit: Pieter Kuiper via Wikimedia Commons. A comparison of double slit interference patterns with different widths. Similar patterns produced by atoms have confirmed the dominant model of quantum mechanics.
Physicists have succeeded in confirming one of the theoretical aspects of quantum physics: Subatomic objects switch between particle and wave states when observed, while remaining in a dual state beforehand.
In the macroscopic world, we are used to waves being waves and solid objects being particle-like. However, quantum theory holds that for the very small this distinction breaks down. Light can behave either as a wave, or as a particle. The same goes for objects with mass like electrons.
This raises the question of what determines when a photon or electron will behave like a wave or a particle. How, anthropomorphizing madly, do these things “decide” which they will be at a particular time?
The dominant model of quantum mechanics holds that it is when a measurement is taken that the “decision” takes place. Erwin Schrödinger came up with his famous thought experiment using a cat to ridicule this idea. Physicists think that quantum behavior breaks down on a large scale, so Schrödinger's cat would not really be both alive and dead; in the world of the very small, however, strange theories like this seem to be the only way to explain what we see.
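To make the role of measurement concrete, here is a minimal sketch (my own illustration, not drawn from the article) that represents a two-state quantum system as a pair of amplitudes and treats measurement as sampling from the Born-rule probabilities. The particular amplitudes and the random seed are arbitrary assumptions.

```python
import numpy as np

# A toy two-state system (think "alive"/"dead", or "path A"/"path B").
# The amplitudes below are an arbitrary choice, normalized so that the
# outcome probabilities sum to one.
amplitudes = np.array([1.0, 1.0j]) / np.sqrt(2)

# Born rule: the probability of each definite outcome is |amplitude|^2.
probabilities = np.abs(amplitudes) ** 2          # -> [0.5, 0.5]

# Before measurement the system is described by the full superposition;
# each measurement yields a single definite outcome, simulated here by
# sampling from the Born-rule probabilities.
rng = np.random.default_rng(seed=1)
outcomes = rng.choice(["state 0", "state 1"], size=10, p=probabilities)
print(probabilities)
print(outcomes)
```

Nothing in this sketch "collapses" on its own; a definite outcome only appears at the point where we sample, which is the intuition behind the measurement question the article describes.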
In 1978, John Wheeler proposed a series of thought experiments to make sense of what happens when a photon has to either behave in a wave-like or particle-like manner. At the time, it was considered doubtful that these could ever be implemented in practice, but in 2007 such an experiment was achieved.
Now, Dr. Andrew Truscott of the Australian National University has reported the same thing in Nature Physics, but this time using a helium atom, rather than a photon.
“A photon is in a sense quite simple,” Truscott told IFLScience. “An atom has significant mass and couples to magnetic and electric fields, so it is much more in tune with its environment. It is more of a classical particle in a sense, so this was a test of whether a more classical particle would behave in the same way.”
Truscott's experiment involved creating a Bose-Einstein condensate of around a hundred helium atoms. He conducted the experiment first with this condensate, but says the possibility that atoms were influencing each other made it important to repeat it after ejecting all but one. The atom was passed through a “grate” made by two laser beams, which can scatter an atom in a similar manner to the way a solid grating scatters light. Such gratings have been shown to cause atoms either to pass through one arm, like a particle, or through both, like a wave.
A random number generator was then used to determine whether a second grating would appear further along the atom’s path. Crucially, the number was only generated after the atom had passed the first grate.
The second grating, when applied, caused an interference pattern in the measurement of the atom further along the path. Without the second grating, the atom had no such pattern.
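A toy numerical sketch of that outcome (my own simplification of the delayed-choice logic, not a model of the actual ANU apparatus) is below: when the two paths are recombined, their amplitudes add and the detection probability shows fringes as the relative phase varies; when they are not recombined, the probabilities add and the fringes vanish. The 50/50 splitting and the phase range are assumptions made for illustration.

```python
import numpy as np

# Relative phase accumulated between the two paths (arbitrary range).
phi = np.linspace(0, 4 * np.pi, 200)

# Equal-magnitude amplitudes for the two paths, differing by phase phi.
a1 = np.ones_like(phi) / np.sqrt(2)
a2 = np.exp(1j * phi) / np.sqrt(2)

# Second grating present: the paths are recombined (modelled here as a
# 50/50 splitter), the amplitudes add, and the detection probability at
# one output oscillates with phi -- an interference pattern.
p_with_second_grating = np.abs((a1 + a2) / np.sqrt(2)) ** 2

# No second grating: the paths never recombine, so the chance of finding
# the atom in one arm is just |a1|^2, flat in phi -- no fringes.
p_without_second_grating = np.abs(a1) ** 2

print(p_with_second_grating.min(), p_with_second_grating.max())        # ~0 and 1
print(p_without_second_grating.min(), p_without_second_grating.max())  # 0.5 and 0.5
```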
An optical version of Wheeler’s delayed choice experiment (left) and an atomic version as used by Truscott (right). Credit: Manning et al.
Truscott says that there are two possible explanations for the behavior observed. Either, as most physicists think, the atom decided whether it was a wave or a particle when measured, or “a future event (the method of detection) causes the photon to decide its past.”
In the bizarre world of quantum mechanics, events rippling back in time may not seem that much stranger than things like “spooky action at a distance” or even something being a wave and a particle at the same time. However, Truscott said, “this experiment can’t prove that that is the wrong interpretation, but it seems wrong, and given what we know from elsewhere, it is much more likely that only when we measure the atoms do their observable properties come into reality.”
The ‘holographic principle,’ the idea that a universe with gravity can be described by a quantum field theory in fewer dimensions, has been used for years as a mathematical tool in strange curved spaces. New results suggest that the holographic principle also holds in flat spaces. Our own universe could in fact be two dimensional and only appear three dimensional — just like a hologram.
Is our universe a hologram? Credit: TU Wien
At first glance, there is not the slightest doubt: to us, the universe looks three dimensional. But one of the most fruitful theories of theoretical physics in the last two decades is challenging this assumption. The “holographic principle” asserts that a mathematical description of the universe actually requires one fewer dimension than it seems. What we perceive as three dimensional may just be the image of two dimensional processes on a huge cosmic horizon.
Up until now, this principle has only been studied in exotic spaces with negative curvature. This is interesting from a theoretical point of view, but such spaces are quite different from the space in our own universe. Results obtained by scientists at TU Wien (Vienna) now suggest that the holographic principle even holds in a flat spacetime.
The Holographic Principle
Everybody knows holograms from credit cards or banknotes. They are two dimensional, but to us they appear three dimensional. Our universe could behave quite similarly: “In 1997, the physicist Juan Maldacena proposed the idea that there is a correspondence between gravitational theories in curved anti-de Sitter spaces on the one hand and quantum field theories in spaces with one fewer dimension on the other,” says Daniel Grumiller (TU Wien).
Gravitational phenomena are described in a theory with three spatial dimensions, the behaviour of quantum particles is calculated in a theory with just two spatial dimensions, and the results of both calculations can be mapped onto each other. Such a correspondence is quite surprising. It is like finding out that equations from an astronomy textbook can also be used to repair a CD player. But this method has proven to be very successful. More than ten thousand scientific papers about Maldacena's “AdS/CFT correspondence” have been published to date.
Correspondence Even in Flat Spaces
For theoretical physics, this is extremely important, but it does not seem to have much to do with our own universe. Apparently, we do not live in such an anti-de Sitter space. These spaces have quite peculiar properties: they are negatively curved, and any object thrown away on a straight line will eventually return. “Our universe, in contrast, is quite flat — and on astronomic distances, it has positive curvature,” says Daniel Grumiller.
However, Grumiller has suspected for quite some time that a correspondence principle could also hold true for our real universe. To test this hypothesis, gravitational theories have to be constructed that do not require exotic anti-de Sitter spaces but live in flat space. For three years, he and his team at TU Wien (Vienna) have been working on this, in cooperation with the University of Edinburgh, Harvard, IISER Pune, MIT, and Kyoto University. Now Grumiller and colleagues from India and Japan have published an article in the journal Physical Review Letters confirming the validity of the correspondence principle in a flat universe.
Calculated Twice, Same Result
“If quantum gravity in a flat space allows for a holographic description by a standard quantum theory, then there must be physical quantities which can be calculated in both theories — and the results must agree,” says Grumiller. In particular, one key feature of quantum mechanics, quantum entanglement, has to appear in the gravitational theory.
When quantum particles are entangled, they cannot be described individually. They form a single quantum object, even if they are located far apart. There is a measure for the amount of entanglement in a quantum system, called “entropy of entanglement.” Together with Arjun Bagchi, Rudranil Basu, and Max Riegler, Daniel Grumiller managed to show that this entropy of entanglement takes the same value in flat quantum gravity and in a lower-dimensional quantum field theory.
“This calculation affirms our assumption that the holographic principle can also be realized in flat spaces. It is evidence for the validity of this correspondence in our universe,” says Max Riegler (TU Wien). “The fact that we can even talk about quantum information and entropy of entanglement in a theory of gravity is astounding in itself, and would hardly have been imaginable only a few years back. That we are now able to use this as a tool to test the validity of the holographic principle, and that this test works out, is quite remarkable,” says Daniel Grumiller.
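As a concrete, much simpler illustration of what the “entropy of entanglement” mentioned above measures, the sketch below computes it for a two-qubit Bell state from the reduced density matrix of one qubit. This is the standard textbook quantity, used here purely for illustration; it is not the Galilean-conformal-field-theory calculation carried out in the paper.

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2), written as a 4-component vector.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

# Full two-qubit density matrix rho = |psi><psi|.
rho = np.outer(bell, bell.conj())

# Trace out the second qubit to get the reduced density matrix of the first.
rho = rho.reshape(2, 2, 2, 2)          # indices (i, j, k, l) label |ij><kl|
rho_A = np.einsum('ijkj->ik', rho)     # sum over the second qubit (j = l)

# Entropy of entanglement: von Neumann entropy S = -Tr(rho_A log2 rho_A).
eigenvalues = np.linalg.eigvalsh(rho_A)
entropy = -sum(p * np.log2(p) for p in eigenvalues if p > 1e-12)
print(entropy)   # 1.0 bit: the two qubits are maximally entangled
```

A product (unentangled) state gives zero by the same recipe, which is why this entropy serves as a quantitative measure of entanglement.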
This, however, does not yet prove that we are indeed living in a hologram, but apparently there is growing evidence for the validity of the correspondence principle in our own universe.
Journal Reference:
Arjun Bagchi, Rudranil Basu, Daniel Grumiller, and Max Riegler. Entanglement Entropy in Galilean Conformal Field Theories and Flat Holography. Physical Review Letters, 2015; 114 (11). DOI: 10.1103/PhysRevLett.114.111602