Tag archive: States of matter

A new form of frozen water? (Science Daily)

New study describes what could be the 18th known form of ice

February 12, 2016
University of Nebraska-Lincoln
A research team has predicted a new molecular form of ice with a record-low density. If the ice can be synthesized, it would become the 18th known crystalline form of water and the first discovered in the US since before World War II.

This illustration shows the ice’s molecular configuration. Credit: Courtesy photo/Yingying Huang and Chongqin Zhu 

Amid the season known for transforming Nebraska into an outdoor ice rink, a University of Nebraska-Lincoln-led research team has predicted a new molecular form of the slippery stuff that even Mother Nature has never borne.

The proposed ice, which the researchers describe in a Feb. 12, 2016 study in the journal Science Advances, would be about 25 percent less dense than a record-low form synthesized by a European team in 2014.

If the ice can be synthesized, it would become the 18th known crystalline form of water — and the first discovered in the United States since before World War II.

“We performed a lot of calculations (focused on) whether this is not just a low-density ice, but perhaps the lowest-density ice to date,” said Xiao Cheng Zeng, an Ameritas University Professor of chemistry who co-authored the study. “A lot of people are interested in predicting a new ice structure beyond the state of the art.”

This newest finding represents the latest in a long line of ice-related research from Zeng, who previously discovered a two-dimensional “Nebraska Ice” that contracts rather than expands when frozen under certain conditions.

Zeng’s newest study, which was co-led by Dalian University of Technology’s Jijun Zhao, used a computational algorithm and molecular simulation to determine the ranges of extreme pressure and temperature under which water would freeze into the predicted configuration. That configuration takes the form of a clathrate — essentially a series of water molecules that form an interlocking cage-like structure.

It was long believed that these cages could maintain their structural integrity only when housing “guest molecules” such as methane, which fills an abundance of natural clathrates found on the ocean floor and in permafrost. Like the European team before them, however, Zeng and his colleagues have calculated that their clathrate would retain its stability even after its guest molecules have been evicted.

Actually synthesizing the clathrate will take some effort. Based on the team’s calculations, the new ice will form only when water molecules are placed inside an enclosed space that is subjected to ultra-high, outwardly expanding pressure.

Just how much? At minus-10 Fahrenheit, the enclosure would need to be surrounded by expansion pressure about four times greater than what is found at the Pacific Ocean’s deepest trench. At minus-460 Fahrenheit (essentially absolute zero), that pressure would need to be even greater — roughly the same amount experienced by a person shouldering 300 jumbo jets at sea level.
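The trench comparison can be sanity-checked with simple hydrostatics. A rough sketch, assuming a typical seawater density and the approximate depth of the Challenger Deep (both values are assumptions for illustration, not figures from the paper):

```python
# Back-of-envelope check of the quoted pressure scale.
# Assumed values (not from the paper): typical seawater density and
# the approximate depth of the Challenger Deep in the Mariana Trench.
RHO_SEAWATER = 1025.0       # kg/m^3
G = 9.81                    # m/s^2
CHALLENGER_DEEP_M = 10_994  # m

trench_pressure_pa = RHO_SEAWATER * G * CHALLENGER_DEEP_M  # hydrostatic pressure
expansion_pressure_gpa = 4 * trench_pressure_pa / 1e9      # "about four times greater"

print(f"Trench pressure: ~{trench_pressure_pa / 1e6:.0f} MPa")
print(f"Implied expansion pressure: ~{expansion_pressure_gpa:.2f} GPa")
```

That works out to a few tenths of a gigapascal of outward tension, which gives a feel for how far from ordinary conditions the predicted ice sits.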

The guest molecules would then need to be extracted via a vacuuming process pioneered by the European team, which Zeng credited with inspiring his own group to conduct the new study.

Yet Zeng said the wonders of ordinary ice — the type that has covered Earth for billions of years — have also motivated his team’s research.

“Water and ice are forever interesting because they have such relevance to human beings and life,” Zeng said. “If you think about it, the low density of natural ice protects the water below it; if it were denser, water would freeze from the bottom up, and no living species could survive. So Mother Nature’s combination is just so perfect.”

If confirmed, the new form of ice will be called “Ice XVII,” a naming quirk that resulted from scientists terming the first two identified forms “Ice I.”

Zeng and Zhao co-authored the Science Advances study with UNL postdoctoral researcher Chongqin Zhu; Yingying Huang, a visiting research fellow from the Dalian University of Technology; and researchers from the Chinese Academy of Sciences and the University of Science and Technology of China.

The team’s research was funded in part by the National Science Foundation and conducted with the assistance of UNL’s Holland Computing Center.

Journal Reference:

  1. Y. Huang, C. Zhu, L. Wang, X. Cao, Y. Su, X. Jiang, S. Meng, J. Zhao, X. C. Zeng. A new phase diagram of water under negative pressure: The rise of the lowest-density clathrate s-III. Science Advances, 2016; 2 (2): e1501010. DOI: 10.1126/sciadv.1501010

Why Physicists Are Saying Consciousness Is A State Of Matter, Like a Solid, A Liquid Or A Gas (The Physics arXiv Blog)

A new way of thinking about consciousness is sweeping through science like wildfire. Now physicists are using it to formulate the problem of consciousness in concrete mathematical terms for the first time.

The Physics arXiv Blog

There’s a quiet revolution underway in theoretical physics. For as long as the discipline has existed, physicists have been reluctant to discuss consciousness, considering it a topic for quacks and charlatans. Indeed, the mere mention of the ‘c’ word could ruin careers.

That’s finally beginning to change thanks to a fundamentally new way of thinking about consciousness that is spreading like wildfire through the theoretical physics community. And while the problem of consciousness is far from being solved, it is finally being formulated mathematically as a set of problems that researchers can understand, explore and discuss.

Today, Max Tegmark, a theoretical physicist at the Massachusetts Institute of Technology in Cambridge, sets out the fundamental problems that this new way of thinking raises. He shows how these problems can be formulated in terms of quantum mechanics and information theory. And he explains how thinking about consciousness in this way leads to precise questions about the nature of reality that the scientific process of experiment might help to tease apart.

Tegmark’s approach is to think of consciousness as a state of matter, like a solid, a liquid or a gas. “I conjecture that consciousness can be understood as yet another state of matter. Just as there are many types of liquids, there are many types of consciousness,” he says.

He goes on to show how the particular properties of consciousness might arise from the physical laws that govern our universe. And he explains how these properties allow physicists to reason about the conditions under which consciousness arises and how we might exploit it to better understand why the world around us appears as it does.

Interestingly, the new approach to consciousness has come from outside the physics community, principally from neuroscientists such as Giulio Tononi at the University of Wisconsin in Madison.

In 2008, Tononi proposed that a system demonstrating consciousness must have two specific traits. First, the system must be able to store and process large amounts of information. In other words, consciousness is essentially a phenomenon of information.

And second, this information must be integrated in a unified whole so that it is impossible to divide into independent parts. That reflects the experience that each instance of consciousness is a unified whole that cannot be decomposed into separate components.

Both of these traits can be specified mathematically, allowing physicists like Tegmark to reason about them for the first time. He begins by outlining the basic properties that a conscious system must have.

Given that consciousness is a phenomenon of information, a conscious system must be able to store information in a memory and retrieve it efficiently.

It must also be able to process this data, like a computer, but one that is much more flexible and powerful than the silicon-based devices we are familiar with.

Tegmark borrows the term computronium to describe matter that can do this and cites other work showing that today’s computers underperform the theoretical limits of computing by some 38 orders of magnitude.

Clearly, there is more than enough room for improvement to accommodate the performance of conscious systems.

Next, Tegmark discusses perceptronium, defined as the most general substance that feels subjectively self-aware. This substance should not only be able to store and process information, but do so in a way that forms a unified, indivisible whole. That also requires a certain amount of independence, in which the information dynamics are determined from within rather than externally.

Finally, Tegmark uses this new way of thinking about consciousness as a lens through which to study one of the fundamental problems of quantum mechanics known as the quantum factorisation problem.

This arises because quantum mechanics describes the entire universe using three mathematical entities: an object known as a Hamiltonian that describes the total energy of the system; a density matrix that describes the relationship between all the quantum states in the system; and Schrödinger’s equation, which describes how these things change with time.
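In standard textbook notation (the blog post itself does not write out the equations), these three objects fit together as follows: the density matrix evolves under the Hamiltonian via the von Neumann equation, the density-matrix form of Schrödinger’s equation:

```latex
% Unitary evolution of the density matrix \rho under the Hamiltonian \hat{H}
% (von Neumann equation, the density-matrix form of the Schrödinger equation):
i\hbar \,\frac{\partial \rho}{\partial t} = \bigl[\hat{H}, \rho\bigr]
```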

The problem is that when the entire universe is described in these terms, there are an infinite number of mathematical solutions that include all possible quantum mechanical outcomes and many other even more exotic possibilities.

So the problem is why we perceive the universe as the semi-classical, three-dimensional world that is so familiar. When we look at a glass of iced water, we perceive the liquid and the solid ice cubes as independent things even though they are intimately linked as part of the same system. How does this happen? Out of all possible outcomes, why do we perceive this solution?

Tegmark does not have an answer. But what’s fascinating about his approach is that it is formulated using the language of quantum mechanics in a way that allows detailed scientific reasoning. And as a result it throws up all kinds of new problems that physicists will want to dissect in more detail.

Take, for example, the idea that the information in a conscious system must be unified. That means the system must contain error-correcting codes that allow any subset of up to half the information to be reconstructed from the rest.

Tegmark points out that any information stored in a special network known as a Hopfield neural net automatically has this error-correcting facility. However, he calculates that a Hopfield net about the size of a human brain, with 10^11 neurons, can store only about 37 bits of integrated information.

“This leaves us with an integration paradox: why does the information content of our conscious experience appear to be vastly larger than 37 bits?” asks Tegmark.
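The scale of that number can be sketched with a back-of-envelope estimate. The sketch below is a strong simplification of Tegmark’s argument, built on two assumptions: the classic result that a Hopfield network of N neurons stores about 0.14 N random patterns, and that the integrated information is roughly the number of bits needed to single out one stored attractor state:

```python
import math

# Simplified sketch (assumptions above, not Tegmark's exact derivation):
# a Hopfield network of N neurons stores ~0.14*N random patterns, and the
# integrated information is the number of bits needed to name one attractor.
N_NEURONS = 10**11          # order of magnitude of neurons in a human brain
CAPACITY_PER_NEURON = 0.14  # classic Hopfield patterns-per-neuron capacity

n_patterns = CAPACITY_PER_NEURON * N_NEURONS
integrated_bits = math.log2(n_patterns)  # bits to single out one attractor

print(f"Storable patterns: ~{n_patterns:.1e}")
print(f"Integrated information: ~{integrated_bits:.0f} bits")
```

This crude version lands in the mid-30s of bits, the same few-dozen-bit order as Tegmark’s 37, which is the heart of the integration paradox: a conscious experience plainly contains far more information than that.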

That’s a question that many scientists might end up pondering in detail. For Tegmark, this paradox suggests that his mathematical formulation of consciousness is missing a vital ingredient. “This strongly implies that the integration principle must be supplemented by at least one additional principle,” he says. Suggestions please in the comments section!

And yet the power of this approach is in the assumption that consciousness does not lie beyond our ken; that there is no “secret sauce” without which it cannot be tamed.

At the beginning of the 20th century, a group of young physicists embarked on a quest to explain a few strange but seemingly small anomalies in our understanding of the universe. In deriving the new theories of relativity and quantum mechanics, they ended up changing the way we comprehend the cosmos. These physicists, at least some of them, are now household names.

Could it be that a similar revolution is currently underway at the beginning of the 21st century?

Ref: Consciousness as a State of Matter

Transitions between states of matter: It’s more complicated, scientists find (Science Daily)

Date: November 6, 2014

Source: New York University

Summary: The seemingly simple process of phase changes — those transitions between states of matter — is more complex than previously known. New work reveals the need to rethink one of science’s building blocks and, with it, how some of the basic principles underlying the behavior of matter are taught in our classrooms.

Melting ice. The seemingly simple process of phase changes — those transitions between states of matter — is more complex than previously known. Credit: © shefkate / Fotolia

The seemingly simple process of phase changes — those transitions between states of matter — is more complex than previously known, according to research based at Princeton University, Peking University and New York University.

Their study, which appears in the journal Science, reveals the need to rethink one of science’s building blocks and, with it, how some of the basic principles underlying the behavior of matter are taught in our classrooms. The researchers examined the way that a phase change, specifically the melting of a solid, occurs at a microscopic level and discovered that the transition is far more involved than earlier models had accounted for.

“This research shows that phase changes can follow multiple pathways, which is counter to what we’ve previously known,” explains Mark Tuckerman, a professor of chemistry and applied mathematics at New York University and one of the study’s co-authors. “This means the simple theories about phase transitions that we teach in classes are just not right.”

According to Tuckerman, scientists will need to change the way they think about, and teach, phase changes.

The work stems from a 10-year project at Princeton to develop a mathematical framework and computer algorithms to study complex behavior in systems, explained senior author Weinan E, a professor in Princeton’s Department of Mathematics and Program in Applied and Computational Mathematics. Phase changes proved to be a crucial test case for their algorithm, E said. E and Tuckerman worked with Amit Samanta, a postdoctoral researcher at Princeton now at Lawrence Livermore National Laboratory, and Tang-Qing Yu, a postdoctoral researcher at NYU’s Courant Institute of Mathematical Sciences.

“It was a test case for the rather powerful set of tools that we have developed to study hard questions about complex phenomena such as phase transitions,” E said. “The melting of a relatively simple atomic solid, such as a metal, proved to be enormously rich. With the understanding we have gained from this case, we next aim to probe more complex molecular solids such as ice.”

The findings reveal that phase transition can occur via multiple and competing pathways and that the transitions involve at least two steps. The study shows that, along one of these pathways, the first step in the transition process is the formation of point defects — local defects that occur at or around a single lattice site in a crystalline solid. These defects turn out to be highly mobile. In a second step, the point defects randomly migrate and occasionally meet to form large, disordered defect clusters.

This mechanism predicts that “the disordered cluster grows from the outside in rather than from the inside out, as current explanations suggest,” Tuckerman notes. “Over time, these clusters grow and eventually become sufficiently large to cause the transition from solid to liquid.”

Along an alternative pathway, the defects grow into thin lines of disorder (called “dislocations”) that reach across the system. Small liquid regions then pool along these dislocations and expand outward from them, engulfing more and more of the solid, until the entire system becomes liquid.

The study modeled this process by tracing copper and aluminum metals from an atomic solid to an atomic liquid state. The researchers used advanced computer models and algorithms to reexamine the process of phase changes on a microscopic level.

“Phase transitions have always been something of a mystery because they represent such a dramatic change in the state of matter,” Tuckerman observes. “When a system changes from solid to liquid, the properties change substantially.”

He adds that this research shows the surprising incompleteness of previous models of nucleation and phase changes — and helps to fill in existing gaps in basic scientific understanding.

This work is supported by the Office of Naval Research (N00014-13-1-0338), the Army Research Office (W911NF-11-1-0101), the Department of Energy (DE-SC0009248, DE-AC52-07NA27344), and the National Science Foundation of China (CHE-1301314).

Journal Reference:

  1. A. Samanta, M. E. Tuckerman, T.-Q. Yu, W. E. Microscopic mechanisms of equilibrium melting of a solid. Science, 2014; 346 (6210): 729. DOI: 10.1126/science.1253810

Experiment demonstrates Higgs boson decay into the components of matter (Fapesp)

The confirmation corroborates the hypothesis that the boson generates the masses of the particles that make up matter. The discovery was announced in Nature Physics by a group with Brazilian participation (CMS)

By José Tadeu Arantes

Agência FAPESP – The direct decay of the Higgs boson into fermions, corroborating the hypothesis that it generates the masses of the particles that make up matter, has been confirmed at the Large Hadron Collider (LHC), the gigantic experimental complex maintained by the European Organization for Nuclear Research (CERN) on the border between Switzerland and France.

The announcement of the discovery has just been published in the journal Nature Physics by the group of researchers working on the Compact Muon Solenoid (CMS) detector.

The international CMS team, with roughly 4,300 members (physicists, engineers, technicians, students and administrative staff), includes two groups of Brazilian scientists: one based at the Núcleo de Computação Científica (NCC) of the Universidade Estadual Paulista (Unesp), in São Paulo, and the other at the Centro Brasileiro de Pesquisas Físicas, of the Ministry of Science, Technology and Innovation (MCTI), and at the Universidade do Estado do Rio de Janeiro (Uerj), in Rio de Janeiro.

“The experiment measured, for the first time, the decays of the Higgs boson into bottom quarks and tau leptons. And it showed that they are consistent with the hypothesis that the masses of these particles are also generated through the Higgs mechanism,” physicist Sérgio Novaes, a professor at Unesp, told Agência FAPESP.

Novaes leads the Unesp group in the CMS experiment and is the principal investigator of the Thematic Project “São Paulo Research and Analysis Center” (Sprace), which is part of CMS and supported by FAPESP.

The new result reinforced the conviction that the object whose discovery was officially announced on July 4, 2012 really is the Higgs boson, the particle that gives mass to the other particles according to the Standard Model, the theoretical framework that describes the supposedly fundamental components and interactions of the material world.

“Since the official announcement of the discovery of the Higgs boson, much evidence has been collected showing that the particle matched the predictions of the Standard Model. These were, fundamentally, studies involving its decay into other bosons (the particles responsible for the interactions of matter), such as photons (the bosons of the electromagnetic interaction) and the W and Z (the bosons of the weak interaction),” Novaes said.

“However, even granting that the Higgs boson was responsible for generating the masses of the W and Z, it was not obvious that it should also generate the masses of the fermions (the particles that make up matter, such as the quarks, which compose protons and neutrons, and the leptons, such as the electron and others), because the mechanism is somewhat different, involving the so-called ‘Yukawa coupling’ between these particles and the Higgs field,” he continued.

The researchers were looking for direct evidence that the decay of the Higgs boson into these matter fields follows the Standard Model recipe. This was no easy task, however: precisely because it confers mass, the Higgs tends to decay into the most massive particles available, such as the W and Z bosons, whose masses are roughly 80 and 90 times that of the proton, respectively.

“There were other complications as well. In the particular case of the bottom quark, for example, a bottom-antibottom pair can be produced in many ways other than Higgs decay, so all of those other possibilities had to be filtered out. And in the case of the tau lepton, the probability of the Higgs decaying into it is very small,” Novaes said.

“To give an idea of the scale, for every trillion collisions carried out at the LHC there is one Higgs boson event. Of these, less than 10 percent correspond to the Higgs decaying into a pair of taus. Moreover, a tau pair can also be produced in other ways, for example from a photon, with much greater frequency,” he said.
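Taking the quoted figures at face value (one Higgs per trillion collisions from the article, plus an assumed H → ττ fraction of about 6 percent, consistent with the “less than 10 percent” above), the needle-in-a-haystack scale is easy to illustrate:

```python
# Illustrative arithmetic on the quoted figures. The one-per-trillion rate
# comes from the article; the H -> tau tau fraction of ~6% is an assumed
# value consistent with the "less than 10 percent" quoted above.
COLLISIONS_PER_HIGGS = 1e12
TAU_PAIR_FRACTION = 0.06

target_events = 100  # hypothetical number of H -> tau tau events wanted
collisions_needed = target_events * COLLISIONS_PER_HIGGS / TAU_PAIR_FRACTION

print(f"Collisions needed for {target_events} H -> tau tau events: ~{collisions_needed:.1e}")
```

On these numbers, even a modest sample of tau-pair decays requires more than a thousand trillion collisions, before any of the background filtering Novaes describes.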

To confirm the decay of the Higgs boson into the bottom quark and the tau lepton with confidence, the CMS team had to collect and process a colossal amount of data. “That is why our paper in Nature took so long to come out. It was literally harder than looking for a needle in a haystack,” Novaes said.

The interesting thing, according to the researcher, is that even in these cases, where it was thought that the Higgs might depart from the Standard Model recipe, it did not. The experiments were highly consistent with the theoretical predictions.

“It is always striking to see the agreement between experiment and theory. For years, the Higgs boson was regarded as a mere mathematical device to give the Standard Model internal consistency. Many physicists bet that it would never be discovered. The particle was sought for nearly half a century and ended up being accepted for lack of an alternative proposal capable of accounting for all the predictions with the same accuracy. So the results we are now obtaining at the LHC are truly spectacular. We tend to be astonished when science fails. But the real astonishment is when it succeeds,” Novaes said.

“In 2015, the LHC is expected to run at double the energy, reaching 14 teraelectronvolts (TeV), or 14 trillion electron volts. At that level of energy, the proton beams will be accelerated to more than 99.99% of the speed of light. It is thrilling to imagine what we might discover,” he said.

The paper “Evidence for the direct decay of the 125 GeV Higgs boson to fermions” (doi:10.1038/nphys3005), by the CMS collaboration, can be read at



Standard Model

A model developed over the second half of the 20th century through the collaboration of a large number of physicists from many countries, with high predictive power for events in the subatomic world. It encompasses three of the four known interactions (electromagnetic, weak and strong) but does not incorporate the gravitational interaction. The Standard Model is based on the concept of elementary particles, grouped into fermions (the particles that constitute matter), bosons (the particles that mediate the interactions) and the Higgs boson (the particle that gives mass to the other particles).


Fermions

Named after the Italian physicist Enrico Fermi (1901-1954), winner of the 1938 Nobel Prize in Physics. According to the Standard Model, these are the particles that constitute matter. They comprise six quarks (up, down, charm, strange, top, bottom), six leptons (electron, muon, tau, electron neutrino, muon neutrino, tau neutrino) and their respective antiparticles. Quarks group in triplets to form baryons (protons and neutrons) and in quark-antiquark pairs to form mesons. Together, baryons and mesons constitute the hadrons.


Bosons

Named after the Indian physicist Satyendra Nath Bose (1894-1974). According to the Standard Model, the vector bosons are the particles that mediate the interactions. They comprise the photon (mediator of the electromagnetic interaction); the W+, W− and Z (mediators of the weak interaction); and eight types of gluons (mediators of the strong interaction). The graviton (the supposed mediator of the gravitational interaction) has not yet been found and is not part of the Standard Model.

Higgs boson

Named after the British physicist Peter Higgs (born 1929). According to the Standard Model, it is the only elementary scalar boson (the other elementary bosons are vector bosons). In simplified terms, it is said to be the particle that gives mass to the other particles. It was postulated to explain why all the elementary particles of the Standard Model have mass except the photon and the gluons. Its mass, 125 to 127 GeV/c² (gigaelectronvolts divided by the speed of light squared), is approximately 134.2 to 136.3 times the mass of the proton. As one of the most massive particles proposed by the Standard Model, it can only be produced at extremely high energies (such as those that would have existed just after the Big Bang, or those now reached at the LHC), and it decays almost immediately into particles of lower mass. After nearly half a century of searching, following its theoretical postulation in 1964, its discovery was officially announced on July 4, 2012. The announcement was made independently by the two main LHC teams, working on the CMS and Atlas detectors. In recognition of the discovery, the Royal Swedish Academy of Sciences awarded the 2013 Nobel Prize in Physics to Peter Higgs and the Belgian François Englert, two of the particle’s proposers.


Decay

A spontaneous process by which one particle transforms into others of lower mass. If the resulting particles are not stable, the decay process can continue. In the case discussed in the article, the decay of the Higgs boson into fermions (specifically, into the bottom quark and the tau lepton) is taken as evidence that the Higgs generates the masses of these particles.


LHC

The Large Hadron Collider is the largest and most sophisticated experimental complex humanity has ever built. Constructed by CERN over 10 years, between 1998 and 2008, it consists basically of a circular tunnel 27 kilometers long, located 175 meters underground on the border between France and Switzerland. In it, proton beams are accelerated in opposite directions and made to collide at extremely high energies, producing in each collision other types of particles that make it possible to investigate the structure of matter. The expectation for 2015 is to produce collisions at 14 TeV (14 trillion electron volts), with the protons moving at more than 99.99% of the speed of light. The LHC is equipped with seven detectors, the two main ones being CMS and Atlas.
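The “more than 99.99% of the speed of light” figure can be checked with basic relativistic kinematics, assuming 7 TeV per proton beam (half of the 14 TeV collision energy) and the standard proton rest mass; these inputs are assumptions for illustration, not figures taken from the article:

```python
import math

# Check of the "more than 99.99% of the speed of light" figure, assuming
# 7 TeV per proton beam (half of the 14 TeV collision energy) and the
# standard proton rest mass of ~0.938 GeV.
BEAM_ENERGY_GEV = 7000.0
PROTON_MASS_GEV = 0.938272

gamma = BEAM_ENERGY_GEV / PROTON_MASS_GEV  # Lorentz factor E / (m c^2)
beta = math.sqrt(1.0 - 1.0 / gamma**2)     # speed as a fraction of c

print(f"Lorentz factor: ~{gamma:.0f}")
print(f"Speed: {beta * 100:.7f}% of c")
```

The speed comes out far closer to c than the quoted 99.99%, so the article’s figure is a comfortable understatement.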

In the eye of a chicken, a new state of matter comes into view (Science Daily)

Date: February 24, 2014

Source: Princeton University

Summary: Along with eggs, soup and rubber toys, the list of the chicken’s most lasting legacies may eventually include advanced materials, according to scientists. The researchers report that the unusual arrangement of cells in a chicken’s eye constitutes the first known biological occurrence of a potentially new state of matter known as ‘disordered hyperuniformity,’ which has been shown to have unique physical properties.

Researchers from Princeton University and Washington University in St. Louis report that the unusual arrangement of cells in a chicken’s eye … Credit: Courtesy of Joseph Corbo and Timothy Lau, Washington University in St. Louis

Along with eggs, soup and rubber toys, the list of the chicken’s most lasting legacies may eventually include advanced materials such as self-organizing colloids, or optics that can transmit light with the efficiency of a crystal and the flexibility of a liquid.

The unusual arrangement of cells in a chicken’s eye constitutes the first known biological occurrence of a potentially new state of matter known as “disordered hyperuniformity,” according to researchers from Princeton University and Washington University in St. Louis. Research in the past decade has shown that disordered hyperuniform materials have unique properties when it comes to transmitting and controlling light waves, the researchers report in the journal Physical Review E.

States of disordered hyperuniformity behave like crystal and liquid states of matter, exhibiting order over large distances and disorder over small distances. Like crystals, these states greatly suppress variations in the density of particles — as in the individual granules of a substance — across large spatial distances so that the arrangement is highly uniform. At the same time, disordered hyperuniform systems are similar to liquids in that they have the same physical properties in all directions. Combined, these characteristics mean that hyperuniform optical circuits, light detectors and other materials could be controlled to be sensitive or impervious to certain light wavelengths, the researchers report.
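The suppressed density fluctuations can be demonstrated numerically. Here is a minimal sketch (not from the paper): compare the variance of point counts in random circular windows for an ideal-gas (Poisson) pattern and for a jittered lattice, a standard textbook example of a disordered hyperuniform pattern. For the jittered lattice the variance grows with the window perimeter rather than its area, so it comes out far smaller:

```python
import numpy as np

rng = np.random.default_rng(0)

def count_variance(points, box, radius, n_windows=2000):
    """Variance of the number of points falling in random circular windows."""
    centers = rng.uniform(radius, box - radius, size=(n_windows, 2))
    counts = [np.sum(np.sum((points - c) ** 2, axis=1) < radius**2)
              for c in centers]
    return np.var(counts)

BOX, N_SIDE = 100.0, 100  # 100x100 box holding N_SIDE^2 = 10,000 points

# Poisson pattern (ideal gas): count fluctuations grow with window AREA.
poisson = rng.uniform(0.0, BOX, size=(N_SIDE**2, 2))

# Jittered lattice: each lattice point displaced randomly within its cell.
# This is a standard disordered hyperuniform pattern; count fluctuations
# grow only with window PERIMETER, so they are strongly suppressed.
grid = np.stack(np.meshgrid(np.arange(N_SIDE), np.arange(N_SIDE)), -1).reshape(-1, 2)
hyperuniform = (grid + rng.uniform(0.0, 1.0, grid.shape)) * (BOX / N_SIDE)

R = 10.0
var_poisson = count_variance(poisson, BOX, R)
var_hyper = count_variance(hyperuniform, BOX, R)
print(f"Count variance at R={R}: Poisson ~{var_poisson:.0f}, hyperuniform ~{var_hyper:.0f}")
```

Both patterns look equally random to the eye at short range, yet the hyperuniform one is dramatically more uniform over large windows — exactly the “hidden order” Torquato describes.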

“Disordered hyperuniform materials possess a hidden order,” explained co-corresponding author Salvatore Torquato, a Princeton professor of chemistry. It was Torquato who, with Frank Stillinger, a senior scientist in Princeton’s chemistry department, first identified hyperuniformity in a 2003 paper in Physical Review E.

“We’ve since discovered that such physical systems are endowed with exotic physical properties and therefore have novel capabilities,” Torquato said. “The more we learn about these special disordered systems, the more we find that they really should be considered a new distinguishable state of matter.”

The researchers studied the light-sensitive cells known as cones that are in the eyes of chickens and most other birds active in daytime. These birds have four types of cones for color — violet, blue, green and red — and one type for detecting light levels, and each cone type is a different size. The cones are packed into a single epithelial, or tissue, layer called the retina. Yet, they are not arranged in the usual way, the researchers report.

In many creatures’ eyes, visual cells are evenly distributed in an obvious pattern, such as the familiar hexagonal packing of the compound eyes of insects. In many creatures, the different types of cones are laid out so that they are not near cones of the same type. At first glance, however, the chicken eye appears to have a scattershot of cones distributed in no particular order.

The lab of co-corresponding author Joseph Corbo, an associate professor of pathology and immunology and of genetics at Washington University in St. Louis, studies how the chicken’s unusual visual layout evolved. Thinking that perhaps it had something to do with how the cones are packed into such a small space, Corbo approached Torquato, whose group studies the geometry and dynamics of densely packed objects such as particles.

Torquato then worked with the paper’s first author Yang Jiao, who received his Ph.D. in mechanical and aerospace engineering from Princeton in 2010 and is now an assistant professor of materials science and engineering at Arizona State University. Torquato and Jiao developed a computer-simulation model that went beyond standard packing algorithms to mimic the final arrangement of chicken cones and allowed them to see the underlying method to the madness.

It turned out that each type of cone has an area around it called an “exclusion region” that other cones cannot enter. Cones of the same type shut out each other more than they do unlike cones, and this variant exclusion causes distinctive cone patterns. Each type of cone’s pattern overlays the pattern of another cone so that the formations are intertwined in an organized but disordered way — a kind of uniform disarray. So, while it appeared that the cones were irregularly placed, their distribution was actually uniform over large distances. That’s disordered hyperuniformity, Torquato said.
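The exclusion-region mechanism can be caricatured with a simple random-insertion sketch. All radii, counts and type names below are invented for illustration; the paper’s model is far more detailed. The key ingredient is that a cone rejects nearby cones of its own type out to a larger distance than cones of other types:

```python
import random

random.seed(1)

# Toy caricature of the exclusion-region idea: cones of the SAME type keep
# a larger exclusion distance from one another than cones of different
# types. All radii, counts and type names are invented for illustration.
SAME_TYPE_EXCLUSION = 0.08
CROSS_TYPE_EXCLUSION = 0.03
TYPES = ["violet", "blue", "green", "red", "double"]

def too_close(p, q, limit):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 < limit ** 2

placed = []  # accepted cones as (x, y, cone_type) in a unit square
attempts = 0
while len(placed) < 300 and attempts < 30_000:
    attempts += 1
    candidate = (random.random(), random.random(), random.choice(TYPES))
    ok = all(not too_close(candidate, other,
                           SAME_TYPE_EXCLUSION if other[2] == candidate[2]
                           else CROSS_TYPE_EXCLUSION)
             for other in placed)
    if ok:
        placed.append(candidate)

print(f"Placed {len(placed)} cones after {attempts} attempts")
```

The result looks scattershot, but each type’s sub-pattern is forced toward uniformity by its own exclusion distance — a crude analogue of the intertwined “uniform disarray” the researchers describe.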

“Because the cones are of different sizes it’s not easy for the system to go into a crystal or ordered state,” Torquato said. “The system is frustrated from finding what might be the optimal solution, which would be the typical ordered arrangement. While the pattern must be disordered, it must also be as uniform as possible. Thus, disordered hyperuniformity is an excellent solution.”

The researchers’ findings add a new dimension called multi-hyperuniformity. This means that the elements that make up the arrangement are themselves hyperuniform. While individual cones of the same type appear to be unconnected, they are actually subtly linked by exclusion regions, which they use to self-organize into patterns. Multi-hyperuniformity is crucial for the avian system to evenly sample incoming light, Torquato said. He and his co-authors speculate that this behavior could provide a basis for developing materials that can self-assemble into a disordered hyperuniform state.

“You also can think of each one of these five different visual cones as hyperuniform,” Torquato said. “If I gave you the avian system with these cones and removed the red, it’s still hyperuniform. Now, let’s remove the blue — what remains is still hyperuniform. That’s never been seen in any system, physical or biological. If you had asked me to recreate this arrangement before I saw this data I might have initially said that it would be very difficult to do.”
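Hyperuniformity has a simple statistical signature: the variance of the number of points falling inside a sampling window grows more slowly with window size than it would for a fully random pattern — in 2D, roughly like the window’s perimeter rather than its area. The sketch below (window sizes and point counts are illustrative assumptions, not the study’s data) measures that variance for an ordinary Poisson pattern, the baseline against which hyperuniform patterns are compared.

```python
import random

def count_variance(points, radius, n_windows=2000, seed=1):
    """Variance of the number of points inside random circular windows
    of the given radius, with window centers kept inside the unit square."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_windows):
        cx = rng.uniform(radius, 1 - radius)
        cy = rng.uniform(radius, 1 - radius)
        counts.append(sum(1 for x, y in points
                          if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2))
    mean = sum(counts) / len(counts)
    return sum((c - mean) ** 2 for c in counts) / len(counts)

# A Poisson (fully random) pattern: the count variance scales with window
# AREA, so doubling the radius roughly quadruples it. A hyperuniform
# pattern would instead show much slower, roughly perimeter-like growth.
rng = random.Random(42)
poisson = [(rng.random(), rng.random()) for _ in range(1000)]
v_small = count_variance(poisson, 0.05)
v_large = count_variance(poisson, 0.10)
```

Applying the same measurement to each cone type separately, and to the types combined, is the spirit of the multi-hyperuniformity test Torquato describes: every subset must individually show the suppressed variance growth.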

The discovery of hyperuniformity in a biological system could mean that the state is more common than previously thought, said Remi Dreyfus, a researcher at the Pennsylvania-based Complex Assemblies of Soft Matter lab (COMPASS) co-run by the University of Pennsylvania, the French National Centre for Scientific Research and the French chemical company Solvay. Previously, disordered hyperuniformity had only been observed in specialized physical systems such as liquid helium, simple plasmas and densely packed granules.

“It really looks like this idea of hyperuniformity, which started from a theoretical basis, is extremely general and that we can find them in many places,” said Dreyfus, who is familiar with the research but had no role in it. “I think more and more people will look back at their data and figure out whether there is hyperuniformity or not. They will find this kind of hyperuniformity is more common in many physical and biological systems.”

The findings also provide researchers with a detailed natural model that could be useful in efforts to construct hyperuniform systems and technologies, Dreyfus said. “Nature has found a way to make multi-hyperuniformity,” he said. “Now you can take the cue from what nature has found to create a multi-hyperuniform pattern if you intend to.”

Evolutionarily speaking, the researchers’ results show that nature found a unique workaround to the problem of cramming all those cones into the compact avian eye, Corbo said. The ordered pattern of cells in most other animals’ eyes is thought to be the “optimal” arrangement, and anything less would be expected to result in impaired vision. Yet birds with the arrangement studied here — including chickens — have impeccable vision, Corbo said.

“These findings are significant because they suggest that the arrangement of photoreceptors in the bird, although not perfectly regular, is, in fact, as regular as it can be given the packing constraints in the epithelium,” Corbo said.

“This result indicates that evolution has driven the system to the ‘optimal’ arrangement possible, given these constraints,” he said. “We still know nothing about the cellular and molecular mechanisms that underlie this beautiful and highly organized arrangement in birds. So, future research directions will include efforts to decipher how these patterns develop in the embryo.”

The paper, “Avian photoreceptor patterns represent a disordered hyperuniform solution to a multiscale packing problem,” was published Feb. 24 in Physical Review E. The work was supported by grants from the National Science Foundation (grant no. DMS-1211087); the National Cancer Institute (grant no. U54CA143803); the National Institutes of Health (grant nos. EY018826, HG006346 and HG006790); the Human Frontier Science Program; the German Research Foundation (DFG); and the Simons Foundation (grant no. 231015).

Journal Reference:

  1. Y. Jiao, T. Lau, H. Hatzikirou, M. Meyer-Hermann, J. C. Corbo, S. Torquato. Avian photoreceptor patterns represent a disordered hyperuniform solution to a multiscale packing problem. Physical Review E, 2014; 89 (2): 022721 DOI: 10.1103/PhysRevE.89.022721