
Physics’s pangolin (AEON)

Trying to resolve the stubborn paradoxes of their field, physicists craft ever more mind-boggling visions of reality

by Margaret Wertheim

Illustration by Claire Scully

Margaret Wertheim is an Australian-born science writer and director of the Institute For Figuring in Los Angeles. Her latest book is Physics on the Fringe (2011).

Theoretical physics is beset by a paradox that remains as mysterious today as it was a century ago: at the subatomic level things are simultaneously particles and waves. Like the duck-rabbit illusion popularised in 1899 by the Polish-born American psychologist Joseph Jastrow, subatomic reality appears to us as two different categories of being.

But there is another paradox in play. Physics itself is riven by the competing frameworks of quantum theory and general relativity, whose differing descriptions of our world eerily mirror the wave-particle tension. When it comes to the very big and the extremely small, physical reality appears to be not one thing, but two. Where quantum theory describes the subatomic realm as a domain of individual quanta, all jitterbug and jumps, general relativity depicts happenings on the cosmological scale as a stately waltz of smooth flowing space-time. General relativity is like Strauss — deep, dignified and graceful. Quantum theory, like jazz, is disconnected, syncopated, and dazzlingly modern.

Physicists are deeply aware of the schizophrenic nature of their science and long to find a synthesis, or unification. Such is the goal of a so-called ‘theory of everything’. However, to non-physicists, these competing lines of thought, and the paradoxes they entrain, can seem not just bewildering but absurd. In my experience as a science writer, no other scientific discipline elicits such contradictory responses.


This schism was brought home to me starkly some months ago when, in the course of a fortnight, I happened to participate in two public discussion panels, one with a cosmologist at Caltech, Pasadena, the other with a leading literary studies scholar from the University of Southern California. On the panel with the cosmologist, a researcher whose work I admire, the discussion turned to time, about which he had written a recent, and splendid, book. Like philosophers, physicists have struggled with the concept of time for centuries, but now, he told us, they had locked it down mathematically and were on the verge of a final state of understanding. In my Caltech friend’s view, physics is a progression towards an ever more accurate and encompassing Truth. My literary theory panellist was having none of this. A Lewis Carroll scholar, he had joined me for a discussion about mathematics in relation to literature, art and science. For him, maths was a delightful form of play, a ludic formalism to be admired and enjoyed; but any claims physicists might make about truth in their work were, in his view, ‘nonsense’. This mathematically based science, he said, was just ‘another kind of storytelling’.

On the one hand, then, physics is taken to be a march toward an ultimate understanding of reality; on the other, it is seen as no different in status to the understandings handed down to us by myth, religion and, no less, literary studies. Because I spend my time about equally in the realms of the sciences and arts, I encounter a lot of this dualism. Depending on whom I am with, I find myself engaging in two entirely different kinds of conversation. Can we all be talking about the same subject?

Many physicists are Platonists, at least when they talk to outsiders about their field. They believe that the mathematical relationships they discover in the world about us represent some kind of transcendent truth existing independently from, and perhaps a priori to, the physical world. In this way of seeing, the universe came into being according to a mathematical plan, what the British physicist Paul Davies has called ‘a cosmic blueprint’. Discovering this ‘plan’ is a goal for many theoretical physicists and the schism in the foundation of their framework is thus intensely frustrating. It’s as if the cosmic architect has designed a fiendish puzzle in which two apparently incompatible parts must be fitted together. Both are necessary, for both theories make predictions that have been verified to a dozen or so decimal places, and it is on the basis of these theories that we have built such marvels as microchips, lasers, and GPS satellites.

Quite apart from the physical tensions that exist between them, relativity and quantum theory each pose philosophical problems. Are space and time fundamental qualities of the universe, as general relativity suggests, or are they byproducts of something even more basic, something that might arise from a quantum process? Looking at quantum mechanics, huge debates swirl around the simplest situations. Does the universe split into multiple copies of itself every time an electron changes orbit in an atom, or every time a photon of light passes through a slit? Some say yes, others say absolutely not.

Theoretical physicists can’t even agree on what the celebrated waves of quantum theory mean. What is doing the ‘waving’? Are the waves physically real, or are they just mathematical representations of probability distributions? Are the ‘particles’ guided by the ‘waves’? And, if so, how? The dilemma posed by wave-particle duality is the tip of an epistemological iceberg on which many ships have been broken and wrecked.

Undeterred, some theoretical physicists are resorting to increasingly bold measures in their attempts to resolve these dilemmas. Take the ‘many-worlds’ interpretation of quantum theory, which proposes that every time a subatomic action takes place the universe splits into multiple, slightly different, copies of itself, with each new ‘world’ representing one of the possible outcomes.

When this idea was first proposed in 1957 by the American physicist Hugh Everett, it was considered an almost lunatic-fringe position. Even 20 years later, when I was a physics student, many of my professors thought it was a kind of madness to go down this path. Yet in recent years the many-worlds position has become mainstream. The idea of a quasi-infinite, ever-proliferating array of universes has been given further credence as a result of being taken up by string theorists, who argue that every mathematically possible version of the string theory equations corresponds to an actually existing universe, and estimate that there are 10 to the power of 500 different possibilities. To put this in perspective: physicists believe that in our universe there are approximately 10 to the power of 80 subatomic particles. In string cosmology, the totality of existing universes exceeds the number of particles in our universe by more than 400 orders of magnitude.
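The ‘more than 400 orders of magnitude’ comparison is straightforward arithmetic with exponents; a quick sketch in Python, using only the two estimates quoted above:

```python
# Estimates quoted in the text: roughly 10^500 string-theory universes,
# roughly 10^80 subatomic particles in our universe.
universes_exponent = 500
particles_exponent = 80

# Dividing powers of ten subtracts exponents, so the ratio is 10^(500 - 80).
ratio_exponent = universes_exponent - particles_exponent
print(ratio_exponent)                 # 420 -> "more than 400 orders of magnitude"
print(10**500 // 10**80 == 10**420)   # Python's big integers confirm it: True
```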

Nothing in our experience compares to this unimaginably vast number. Every universe that can be mathematically imagined within the string parameters — including ones in which you exist with a prehensile tail, to use an example given by the American string theorist Brian Greene — is said to be manifest somewhere in a vast supra-spatial array ‘beyond’ the space-time bubble of our own universe.

What is so epistemologically daring here is that the equations are taken to be the fundamental reality. The fact that the mathematics allows for gazillions of variations is seen to be evidence for gazillions of actual worlds.


This kind of reification of equations is precisely what strikes some humanities scholars as childishly naive. At the very least, it raises serious questions about the relationship between our mathematical models of reality, and reality itself. While it is true that in the history of physics many important discoveries have emerged from revelations within equations — Paul Dirac’s formulation for antimatter being perhaps the most famous example — one does not need to be a cultural relativist to feel sceptical about the idea that the only way forward now is to accept an infinite cosmic ‘landscape’ of universes that embrace every conceivable version of world history, including those in which the Middle Ages never ended or Hitler won.

In the 30 years since I was a student, physicists’ interpretations of their field have increasingly tended toward literalism, while the humanities have tilted towards postmodernism. Thus a kind of stalemate has ensued. Neither side seems inclined to contemplate more nuanced views. It is hard to see ways out of this tunnel, but in the work of the late British anthropologist Mary Douglas I believe we can find a tool for thinking about some of these questions.

On the surface, Douglas’s great book Purity and Danger (1966) would seem to have nothing to do with physics; it is an inquiry into the nature of dirt and cleanliness in cultures across the globe. Douglas studied taboo rituals that deal with the unclean, but her book ends with a far-reaching thesis about human language and the limits of all language systems. Given that physics is couched in the language-system of mathematics, her argument is worth considering here.

In a nutshell, Douglas notes that all languages parse the world into categories; in English, for instance, we call some things ‘mammals’ and other things ‘lizards’ and have no trouble recognising the two separate groups. Yet there are some things that do not fit neatly into either category: the pangolin, or scaly anteater, for example. Though pangolins are warm-blooded like mammals and give birth to live young, they have armoured bodies like some kind of bizarre lizard. Such definitional monstrosities are not just a feature of English. Douglas notes that all category systems contain liminal confusions, and she proposes that such ambiguity is the essence of what is seen to be impure or unclean.

Whatever doesn’t parse neatly in a given linguistic system can become a source of anxiety to the culture that speaks this language, calling forth special ritual acts whose function, Douglas argues, is actually to acknowledge the limits of language itself. In the Lele culture of the Congo, for example, this epistemological confrontation takes place around a special cult of the pangolin, whose initiates ritualistically eat the abominable animal, thereby sacralising it and processing its ‘dirt’ for the entire society.

‘Powers are attributed to any structure of ideas,’ Douglas writes. We all tend to think that our categories of understanding are necessarily real. ‘The yearning for rigidity is in us all,’ she continues. ‘It is part of our human condition to long for hard lines and clear concepts’. Yet when we have them, she says, ‘we have to either face the fact that some realities elude them, or else blind ourselves to the inadequacy of the concepts’. It is not just the Lele who cannot parse the pangolin: biologists are still arguing about where it belongs on the genetic tree of life.

As Douglas sees it, cultures themselves can be categorised in terms of how well they deal with linguistic ambiguity. Some cultures accept the limits of their own language, and of language itself, by understanding that there will always be things that cannot be cleanly parsed. Others become obsessed with ever-finer levels of categorisation as they try to rid their system of every pangolin-like ‘duck-rabbit’ anomaly. For such societies, Douglas argues, a kind of neurosis ensues, as the project of categorisation takes ever more energy and mental effort. If we take this analysis seriously, then, in Douglas’s terms, might it be that particle-waves are our pangolins? Perhaps what we are encountering here is not so much the edge of reality, but the limits of the physicists’ category system.

In its modern incarnation, physics is grounded in the language of mathematics. It is a so-called ‘hard’ science, a term meant to imply that physics is unfuzzy — unlike, say, biology whose classification systems have always been disputed. Based in mathematics, the classifications of physicists are supposed to have a rigour that other sciences lack, and a good deal of the near-mystical discourse that surrounds the subject hinges on ideas about where the mathematics ‘comes from’.

According to Galileo Galilei and other instigators of what came to be known as the Scientific Revolution, nature was ‘a book’ that had been written by God, who had used the language of mathematics because it was seen to be Platonically transcendent and timeless. While modern physics is no longer formally tied to Christian faith, its long association with religion lingers in the many references that physicists continue to make about ‘the mind of God’, and many contemporary proponents of a ‘theory of everything’ remain Platonists at heart.


In order to articulate a more nuanced conception of what physics is, we need to offer an alternative to Platonism. We need to explain how the mathematics ‘arises’ in the world, in ways other than assuming that it was put there by some kind of transcendent being or process. To approach this question dispassionately, it is necessary to abandon the beautiful but loaded metaphor of the cosmic book — and all its authorial resonances — and focus not on the creation of the world, but on the creation of physics as a science.

When we say that ‘mathematics is the language of physics’, we mean that physicists consciously comb the world for patterns that are mathematically describable; these patterns are our ‘laws of nature’. Since mathematical patterns proceed from numbers, much of the physicist’s task involves finding ways to extract numbers from physical phenomena. In the 16th and 17th centuries, philosophical discussion referred to this as the process of ‘quantification’; today we call it measurement. One way of thinking about modern physics is as an ever more sophisticated process of quantification that multiplies and diversifies the ways we extract numbers from the world, thus giving us the raw material for our quest for patterns or ‘laws’. This is no trivial task. Indeed, the history of physics has turned on the question of what can be measured and how.

Stop for a moment and take a look around you. What do you think can be quantified? What colours and forms present themselves to your eye? Is the room bright or dark? Does the air feel hot or cold? Are birds singing? What other sounds do you hear? What textures do you feel? What odours do you smell? Which, if any, of these qualities of experience might be measured?

In the early 14th century, a group of scholarly monks known as the calculatores at the University of Oxford began to think about this problem. One of their interests was motion, and they were the first to recognise the qualities we now refer to as ‘velocity’ and ‘acceleration’ — the former being the rate at which a body changes position, the latter, the rate at which the velocity itself changes. It’s a startling thought, in an age when we can read the speed of our cars from our digitised dashboards, that somebody had to discover ‘velocity’.
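The calculatores’ two quantities are just successive rates of change, something a toy finite-difference calculation makes concrete (the sampled positions below are invented purely for illustration):

```python
# Positions (metres) of a body sampled once per second - illustrative data only.
positions = [0.0, 1.0, 4.0, 9.0, 16.0]
dt = 1.0  # seconds between samples

# Velocity: the rate at which position changes.
velocities = [(b - a) / dt for a, b in zip(positions, positions[1:])]
# Acceleration: the rate at which velocity itself changes.
accelerations = [(b - a) / dt for a, b in zip(velocities, velocities[1:])]

print(velocities)     # [1.0, 3.0, 5.0, 7.0]
print(accelerations)  # [2.0, 2.0, 2.0] - uniform acceleration
```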

Yet despite the calculatores’ advances, the science of kinematics made barely any progress until Galileo and his contemporaries took up the baton in the late 16th century. In the intervening time, the process of quantification had to be extracted from a burden of dreams in which it became, frankly, bogged down. For along with motion, the calculatores were also interested in qualities such as sin and grace, and they tried to find ways to quantify these as well. Between the calculatores and Galileo, students of quantification had to work out what they were going to exclude from the project. To put it bluntly, in order for the science of physics to get underway, the vision had to be narrowed.

How, exactly, this narrowing was to be achieved was articulated by the 17th-century French mathematician and philosopher René Descartes. What could a mathematically based science describe? Descartes’s answer was that the new natural philosophers must restrict themselves to studying matter in motion through space and time. Maths, he said, could describe the extended realm — or res extensa. Thoughts, feelings, emotions and moral consequences he located in the ‘realm of thought’, or res cogitans, declaring them inaccessible to quantification, and thus beyond the purview of science. In making this distinction, Descartes did not divide mind from body (that had been done by the Greeks), he merely clarified the subject matter for a new physical science.

So what else apart from motion could be quantified? To a large degree, progress in physics has been made by slowly extending the range of answers. Take colour. At first blush, redness would seem to be an ineffable and irreducible quale. In the late 19th century, however, physicists discovered that each colour in the rainbow, when refracted through a prism, corresponds to a different wavelength of light. Red light has a wavelength of around 700 nanometres, violet light around 400 nanometres. Colour can be correlated with numbers — both the wavelength and frequency of an electromagnetic wave. Here we have one half of our duality: the wave.
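The correlation of colour with number is the standard relation frequency = speed of light / wavelength; a minimal sketch using the wavelengths quoted above:

```python
C = 299_792_458  # speed of light in vacuum, metres per second

def frequency_hz(wavelength_nm):
    """Frequency of an electromagnetic wave, given its wavelength in nanometres."""
    return C / (wavelength_nm * 1e-9)

# Wavelengths quoted in the text: red ~700 nm, violet ~400 nm.
print(f"red    (~700 nm): {frequency_hz(700):.2e} Hz")  # about 4.3e14 Hz
print(f"violet (~400 nm): {frequency_hz(400):.2e} Hz")  # about 7.5e14 Hz
```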

The discovery of electromagnetic waves was in fact one of the great triumphs of the quantification project. In the 1820s, Michael Faraday noticed that, if he sprinkled iron filings around a magnet, the fragments would spontaneously assemble into a pattern of lines that, he conjectured, were caused by a ‘magnetic field’. Physicists today accept fields as a primary aspect of nature but at the start of the Industrial Revolution, when philosophical mechanism was at its peak, Faraday’s peers scoffed. Invisible fields smacked of magic. Yet, later in the 19th century, James Clerk Maxwell showed that magnetic and electric fields were linked by a precise set of equations — today known as Maxwell’s equations — that enabled him to predict the existence of radio waves. The quantification of these hitherto unsuspected aspects of our world — these hidden invisible ‘fields’ — has led to the whole gamut of modern telecommunications on which so much of modern life is now staged.

Turning to the other side of our duality — the particle — physicists in the late 19th and early 20th centuries began to probe matter with a burgeoning array of electrical and magnetic equipment. They discovered that atoms were composed of parts holding positive and negative charge. The negative electrons were found to revolve around a positive nucleus in pairs, with each member of the pair in a slightly different state, or ‘spin’. Spin turns out to be a fundamental quality of the subatomic realm. Matter particles, such as electrons, have a spin value of one half. Particles of light, or photons, have a spin value of one. In short, one of the qualities that distinguishes ‘matter’ from ‘energy’ is the spin value of its particles.

We have seen how light acts like a wave, yet experiments over the past century have shown that under many conditions it behaves instead like a stream of particles. In the photoelectric effect (the explanation of which won Albert Einstein his Nobel Prize in 1921), individual photons knock electrons out of their atomic orbits. In Thomas Young’s famous double-slit experiment of 1803, light behaves simultaneously like waves and particles. Here, a stream of detectably separate photons is mysteriously guided by a wave whose effect becomes manifest over a long period of time. What is the source of this wave and how does it influence billions of isolated photons separated by great stretches of time and space? The late Nobel laureate Richard Feynman — a pioneer of quantum field theory — stated in 1965 that the double-slit experiment lay at ‘the heart of quantum mechanics’. Indeed, physicists have been debating how to interpret its proof of light’s duality for the past 200 years.
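The wave side of the experiment is captured by the textbook two-slit interference formula, with intensity proportional to cos²(πd·sinθ/λ); a sketch with an illustrative slit spacing (the 50-micron figure is an assumption, not from the text):

```python
import math

def two_slit_intensity(theta, d, lam, i0=1.0):
    """Relative intensity at angle theta on the screen, for slit separation d
    and wavelength lam (ignoring the single-slit diffraction envelope)."""
    phase = math.pi * d * math.sin(theta) / lam
    return i0 * math.cos(phase) ** 2

d = 50e-6     # slit separation in metres - illustrative value
lam = 700e-9  # red light, as in the text

print(two_slit_intensity(0.0, d, lam))  # 1.0: the central bright fringe
# First dark fringe, where the path difference d*sin(theta) equals lam/2:
theta_min = math.asin(lam / (2 * d))
print(round(two_slit_intensity(theta_min, d, lam), 10))  # 0.0
```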

Just as waves of light sometimes behave like particles of matter, particles of matter can sometimes behave like waves. In many situations, electrons are clearly particles: we fire them from electron guns inside the cathode-ray tubes of old-fashioned TV sets and each electron that hits the screen causes a tiny phosphor to glow. Yet, in orbiting around atoms, electrons behave like three-dimensional waves. Electron microscopes put the wave-quality of these particles to work; here, in effect, they act like short-wavelength light.


Wave-particle duality is a core feature of our world. Or rather, we should say, it is a core feature of our mathematical descriptions of our world. The duck-rabbits are everywhere, colonising the imagery of physicists like, well, rabbits. But what is critical to note here is that however ambiguous our images, the universe itself remains whole and is manifestly not fracturing into schizophrenic shards. It is this tantalising wholeness in the thing itself that drives physicists onward, like an eternally beckoning light that seems so teasingly near yet is always out of reach.

Instrumentally speaking, the project of quantification has led physicists to powerful insights and practical gain: the computer on which you are reading this article would not exist if physicists hadn’t discovered the equations that describe the band-gaps in semiconducting materials. Microchips, plasma screens and cellphones are all byproducts of quantification and, every decade, physicists identify new qualities of our world that are amenable to measurement, leading to new technological possibilities. In this sense, physics is not just another story about the world: it is a qualitatively different kind of story to those told in the humanities, in myths and religions. No language other than maths is capable of expressing interactions between particle spin and electromagnetic field strength. The physicists, with their equations, have shown us new dimensions of our world.

That said, we should be wary of claims about ultimate truth. While quantification, as a project, is far from complete, it is an open question as to what it might ultimately embrace. Let us look again at the colour red. Red is not just an electromagnetic phenomenon, it is also a perceptual and contextual phenomenon. Stare for a minute at a green square then look away: you will see an afterimage of a red square. No red light has been presented to your eyes, yet your brain will perceive a vivid red shape. As Goethe argued in the late-18th century, and Edwin Land (who invented Polaroid film in 1932) echoed, colour cannot be reduced to purely prismatic effects. It exists as much in our minds as in the external world. To put this into a personal context, no understanding of the electromagnetic spectrum will help me to understand why certain shades of yellow make me nauseous, while electric orange fills me with joy.

Descartes was no fool; by parsing reality into the res extensa and res cogitans he captured something critical about human experience. You do not need to be a hard-core dualist to imagine that subjective experience might not be amenable to mathematical law. For Douglas, ‘the attempt to force experience into logical categories of non-contradiction’ is the ‘final paradox’ of an obsessive search for purity. ‘But experience is not amenable [to this narrowing],’ she insists, and ‘those who make the attempt find themselves led into contradictions.’

Quintessentially, the qualities that are amenable to quantification are those that are shared. All electrons are essentially the same: given a set of physical circumstances, every electron will behave like any other. But humans are not like this. It is our individuality that makes us so infuriatingly human, and when science attempts to reduce us to the status of electrons it is no wonder that professors of literature scoff.

Douglas’s point about attempting to corral experience into logical categories of non-contradiction has obvious application to physics, particularly to recent work on the interface between quantum theory and relativity. One of the most mysterious findings of quantum science is that two or more subatomic particles can be ‘entangled’. Once particles are entangled, what we do to one immediately affects the other, even if the particles are hundreds of kilometres apart. Yet this contradicts a basic premise of special relativity, which states that no signal can travel faster than the speed of light. Entanglement suggests that either quantum theory or special relativity, or both, will have to be rethought.
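The perfect correlations of an entangled pair can be illustrated with the textbook Bell state, (|00⟩ + |11⟩)/√2, and the Born rule (probability = |amplitude|²). This is a generic quantum-mechanics sketch, not a model of the specific experiments the text alludes to:

```python
import math

# Amplitudes of the Bell state (|00> + |11>)/sqrt(2), indexed by the
# four possible joint outcomes for the two particles.
amps = {"00": 1 / math.sqrt(2), "01": 0.0, "10": 0.0, "11": 1 / math.sqrt(2)}

# Born rule: the probability of an outcome is the squared magnitude
# of its amplitude.
probs = {k: abs(a) ** 2 for k, a in amps.items()}
print(probs)  # about 0.5 for '00' and '11'; exactly 0 for '01' and '10'

# Each particle on its own is a fair coin...
p_first_is_0 = probs["00"] + probs["01"]  # about 0.5
# ...yet the two outcomes always agree: '01' and '10' never occur.
p_disagree = probs["01"] + probs["10"]    # 0.0
print(p_first_is_0, p_disagree)
```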

More challenging still, consider what might happen if we tried to send two entangled photons to two separate satellites orbiting in space, as a team of Chinese physicists, working with the entanglement theorist Anton Zeilinger, is currently hoping to do. Here the situation is compounded by the fact that what happens in near-Earth orbit is affected by both special and general relativity. The details are complex, but suffice it to say that special relativity suggests that the motion of the satellites will cause time to appear to slow down, while the effect of the weaker gravitational field in space should cause time to speed up. Given this, it is impossible to say which of the photons would be received first at which satellite. To an observer on the ground, both photons should appear to arrive at the same time. Yet to an observer on satellite one, the photon at satellite two should appear to arrive first, while to an observer on satellite two the photon at satellite one should appear to arrive first. We are in a mire of contradiction and no one knows what would in fact happen here. If the Chinese experiment goes ahead, we might find that some radical new physics is required.
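The two competing clock effects are standard relativity, and GPS satellites (used here as an illustrative stand-in, not the proposed entanglement experiment) supply the classic worked numbers: to leading order, an orbiting clock runs slow by v²/2c² from its motion and fast by the gravitational potential difference ΔΦ/c²:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # mass of the Earth, kg
C = 299_792_458.0    # speed of light, m/s
R_EARTH = 6.371e6    # radius of the Earth, m
R_ORBIT = 2.6571e7   # nominal GPS orbital radius, m
DAY = 86_400         # seconds in a day

v = math.sqrt(G * M / R_ORBIT)  # speed of a circular orbit at that radius

# Special relativity: the moving clock runs slow by v^2 / (2 c^2).
slow = -(v**2 / (2 * C**2)) * DAY * 1e6
# General relativity: the clock in weaker gravity runs fast by dPhi / c^2.
fast = (G * M * (1 / R_EARTH - 1 / R_ORBIT) / C**2) * DAY * 1e6

print(f"velocity effect:      {slow:+.1f} microseconds/day")  # about -7
print(f"gravitational effect: {fast:+.1f} microseconds/day")  # about +46
print(f"net drift:            {slow + fast:+.1f} microseconds/day")
```

The gravitational speed-up wins, which is why real GPS clocks are deliberately tuned to run slow before launch.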


You will notice that the ambiguity in these examples focuses on the issue of time — as do many paradoxes relating to relativity and quantum theory. Time indeed is a huge conundrum throughout physics, and paradoxes surround it at many levels of being. In Time Reborn: From the Crisis in Physics to the Future of the Universe (2013) the American physicist Lee Smolin argues that for 400 years physicists have been thinking about time in ways that are fundamentally at odds with human experience and therefore wrong. In order to extricate ourselves from some of the deepest paradoxes in physics, he says, its very foundations must be reconceived. In an op-ed in New Scientist in April this year, Smolin wrote:
The idea that nature consists fundamentally of atoms with immutable properties moving through unchanging space, guided by timeless laws, underlies a metaphysical view in which time is absent or diminished. This view has been the basis for centuries of progress in science, but its usefulness for fundamental physics and cosmology has come to an end.

In order to resolve contradictions between how physicists describe time and how we experience it, Smolin says physicists must abandon the notion of time as an unchanging ideal and embrace an evolutionary concept of natural laws.

This is radical stuff, and Smolin is well-known for his contrarian views — he has been an outspoken critic of string theory, for example. But at the heart of his book is a worthy idea: Smolin is against the reflexive reification of equations. As our mathematical descriptions of time are so starkly in conflict with our lived experience of time, it is our descriptions that will have to change, he says.

To put this into Douglas’s terms, the powers that have been attributed to physicists’ structure of ideas have been overreaching. ‘Attempts to force experience into logical categories of non-contradiction’ have, she would say, inevitably failed. From the contemplation of wave-particle pangolins we have been led to the limits of the linguistic system of physicists. Like Smolin, I have long believed that the ‘block’ conception of time that physics proposes is inadequate, and I applaud this thrilling, if also at times highly speculative, book. Yet, if we can fix the current system by reinventing its axioms, then (assuming that Douglas is correct) even the new system will contain its own pangolins.

In the early days of quantum mechanics, Niels Bohr liked to say that we might never know what ‘reality’ is; John Wheeler would later call the universe ‘a great smoky dragon’. All we could do with our science, in this view, was to create ever more predictive models. Bohr’s positivism has gone out of fashion among theoretical physicists, replaced by an increasingly hard-core Platonism. To say, as some string theorists do, that every possible version of their equations must be materially manifest strikes me as a kind of berserk literalism, reminiscent of the old Ptolemaics who used to think that every mathematical epicycle in their descriptive apparatus must represent a physically manifest cosmic gear.

We are veering here towards Douglas’s view of neurosis. Will we accept, at some point, that there are limits to the quantification project, just as there are to all taxonomic schemes? Or will we be drawn into ever more complex and expensive quests — CERN mark two, Hubble, the sequel — as we try to root out every lingering paradox? In Douglas’s view, ambiguity is an inherent feature of language that we must face up to, at some point, or drive ourselves into distraction.

3 June 2013


Key to adaptation limits of ocean dwellers: Simpler organisms better suited for climate change (Science Daily)

Date: July 1, 2014

Source: Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research

Summary: The simpler a marine organism’s structure, the better suited it is for survival during climate change, researchers have discovered in a new meta-study. For the first time, biologists studied the relationship between the complexity of life forms and the ultimate limits of their adaptation to a warmer climate.

The temperature windows of some ocean dwellers as a comparison: the figures for green algae, seaweed and thermophilic bacteria were determined in the laboratory. The fish data stem from investigations in the ocean. Credit: Sina Löschke, Alfred Wegener Institute

The simpler a marine organism’s structure, the better suited it is for survival during climate change. Scientists of the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research, discovered this in a new meta-study, which appears today in the research journal Global Change Biology. For the first time, biologists studied the relationship between the complexity of life forms and the ultimate limits of their adaptation to a warmer climate. While unicellular bacteria and archaea are able to live even in hot, oxygen-deficient water, marine creatures with a more complex structure, such as animals and plants, reach their growth limits at a water temperature of 41 degrees Celsius. This temperature threshold seems to be insurmountable for their highly developed metabolic systems.

The current IPCC Assessment Report shows that marine life forms respond very differently to the increasing water temperature and the decreasing oxygen content of the ocean. “We now asked ourselves why this is so. Why do bacteria, for example, still grow at temperatures of up to 90 degrees Celsius, while animals and plants reach their limits at the latest at a temperature of 41 degrees Celsius,” says Dr. Daniela Storch, biologist in the Ecophysiology Department at the Alfred Wegener Institute (AWI) and first author of the current study.

For years, Storch and her colleagues have been investigating the processes that give animals a certain temperature threshold up to which they can develop and reproduce. The scientists found that the reason lies in the cardiovascular system: they were able to show in laboratory experiments that this transport system is the first to fail in warmer water. Blood circulation supplies all cells and organs of an organism with oxygen, but can only do so up to a certain maximum temperature. Beyond this threshold, the transport capacity of the system is no longer sufficient and the animal can sustain performance only for a short time. Based on this, the biologists suspected early on that there is a relationship between the complex structure of an organism and its limited ability to keep functioning in increasingly warm water.

“In our study, therefore, we examined the hypothesis that the complexity could be the key that determines the ultimate adaptability of diverse life forms, from marine archaea to animals, to different living conditions in the course of evolutionary history. That means: the simpler the structure of an organism, the more resistant it should be,” explains the biologist. If this assumption is true, life forms consisting of a single simply structured cell would be much more resistant to high temperatures than life forms whose cell is very complex, such as algae, or whose bodies consist of millions of cells. Hence, the tolerance and adaptability thresholds of an organism type would always be found at its highest level of complexity. Among the smallest organisms, unicellular algae are the least resistant because they have highly complex cell organelles such as chloroplasts for photosynthesis. Unicellular protozoans also have cell organelles, but they are simpler in their structure. Bacteria and archaea entirely lack these organelles.

To test this assumption, the scientists evaluated over 1,000 studies on the adaptability of marine life forms. From simple archaea lacking a nucleus, through bacteria and unicellular algae, to animals and plants, they identified in each group the species with the highest temperature tolerance and determined its complexity. In the end, the assumed functional principle appeared to hold: the simpler the structure, the more heat-tolerant the organism type.
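The aggregation step described above can be sketched in a few lines: group species by organism type, take each type's most heat-tolerant species as its adaptation limit, and check that the limit falls as complexity rises. The complexity ranks and the intermediate 60 °C and 45 °C values below are invented for illustration; only the roughly 90 °C (bacteria, archaea) and 41 °C (animals, plants) limits come from the article.

```python
# Toy version of the meta-study's aggregation: per organism type, the
# adaptation limit is the upper temperature limit of its hardiest species.
# Complexity ranks and the 60/45 °C values are illustrative assumptions.
records = [
    ("archaea",  1, 90), ("bacteria", 1, 90),
    ("protozoa", 2, 60), ("unicellular algae", 3, 45),
    ("plants",   4, 41), ("animals",  4, 41),
]

limits = {}  # organism type -> (complexity rank, highest tolerated temperature)
for group, rank, t_max in records:
    _, best = limits.get(group, (rank, float("-inf")))
    limits[group] = (rank, max(best, t_max))

# Ordered by complexity, the limits should be non-increasing.
for group, (rank, t) in sorted(limits.items(), key=lambda kv: kv[1][0]):
    print(f"complexity {rank}: {group:18s} upper limit {t} °C")
```

With any real dataset the pattern would of course be noisier, but the study's claim is exactly this monotone trend across domains of life.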

But: “The adaptation limit of an organism is not only dependent on its upper temperature threshold, but also on its ability to cope with small amounts of oxygen. While many of the bacteria and archaea can survive at low oxygen concentrations or even without oxygen, most animals and plants require a higher minimum concentration,” explains Dr. Daniela Storch. The majority of the studies examined show that if the oxygen concentration in the water drops below a certain value, the oxygen supply for cells and tissues collapses after a short time.

The new research results also provide evidence that the body size of an organism plays a decisive role concerning adaptation limits. Smaller animal species or smaller individuals of an animal species can survive at lower oxygen concentration levels and higher temperatures than the larger animals.

“We observe among fish in the North Sea that larger individuals of a species are affected first at extreme temperatures. In connection with climate warming, there is generally a trend that smaller species replace larger species in a region. Today, however, plants and animals in the warmest marine environments already live at their tolerance limit and will probably not be able to adapt. If warming continues, they will migrate to cooler areas and there are no other tolerant animal and plant species that could repopulate the deserted habitats,” says Prof. Dr. Hans-Otto Pörtner of the Alfred Wegener Institute. The biologist initiated the current study and is the coordinating lead author of the chapter “Ocean systems” in the IPCC’s Fifth Assessment Report.

The new meta-study shows that the complex structure of multicellular organisms, i.e. animals and plants, sets tighter limits within which they can adapt to new living conditions. Individual animal species can reduce their body size, lower their metabolism or generate more haemoglobin in order to survive in warmer, oxygen-deficient water. Fundamentally, however, marine animals and plants cannot survive in conditions exceeding the temperature threshold of 41 degrees Celsius.

In contrast, simple unicellular organisms like bacteria benefit from warmer sea water. They reproduce and spread. “Communities of species in the ocean change as a result of this shift in living conditions. In the future animals and plants will have problems to survive in the warmest marine regions and archaea, bacteria as well as protozoa will spread in these areas. There are already studies showing that unicellular algae will be replaced by other unicellular organisms in the warmest regions of the ocean,” says Prof. Dr. Hans-Otto Pörtner. The authors’ next step is to ask what role the complexity of species plays in tolerance and adaptation to the third climatic factor in the ocean: acidification, caused by rising carbon dioxide emissions and the uptake of this greenhouse gas by seawater.

Living at the limit

For generations, ocean dwellers have adapted to the conditions in their home waters: to the prevailing temperature, the oxygen concentration and the degree of water acidity. They grow best and live longest under these conditions. However, not all creatures that live together in an ecosystem have the same preferences. The Antarctic eelpout, for instance, lives at its lower temperature limit and has to remain in the warmer water layers of the Southern Ocean; if it strays into colder water, the temperature quickly becomes too cold for it. The Atlantic cod in the North Sea, by contrast, would prefer colder water, as large specimens are not comfortable at temperatures over ten degrees Celsius. At such threshold values, scientists speak of a temperature window: every poikilothermic (cold-blooded) ocean dweller has an upper and a lower temperature limit between which it can live and grow. These “windows” vary in breadth. Species in temperate zones like the North Sea generally have a broader temperature window, owing to the pronounced seasons in these regions: the animals have to withstand both warm summers and cold winters.

The temperature window of living creatures in the tropics or polar regions, by comparison, is two to four times narrower than that of North Sea dwellers. On the other hand, they have adjusted to extreme living conditions. Antarctic icefish species, for example, can live in water as cold as minus 1.8 degrees Celsius; their blood contains antifreeze proteins. In addition, they can do without haemoglobin because their metabolism is low and a surplus of oxygen is available. For this reason their blood is thinner and the fish need less energy to pump it through the body, a perfect survival strategy. But icefish live at the limit: if the temperature rises by just a few degrees Celsius, the animals quickly run into trouble.

Journal Reference:

  1. Daniela Storch, Lena Menzel, Stephan Frickenhaus, Hans-O. Pörtner. Climate sensitivity across marine domains of life: limits to evolutionary adaptation shape species interactions. Global Change Biology, 2014. DOI: 10.1111/gcb.12645

*   *   *

Starting With the Oceans, Single-Celled Organisms Will Re-Inherit the Earth (Motherboard)

Written by BEN RICHMOND

July 1, 2014 // 07:41 PM CET

I’ll be the first to cop to being guilty of multi-celled chauvinism: having complex cells with organelles, which form complex systems allowing you to breathe, achieve consciousness, play volleyball, etc., is pretty much as good as it gets. While we enjoy all these advantages now, though, single-celled, simple organisms are just biding their time. They are more readily adaptable than us multi-celled organisms; it’s really a simple, single-celled world, and we’re just passing through.

Case in point: the oceans. A team of German researchers just published a paper in the journal Global Change Biology that found that the simpler an organism is, the better off it’s going to be as the oceans warm. Trout will die out, whales will fail, but unicellular bacteria and archaea (a type of microorganism) are going to flourish.

Animals can only develop and reproduce up to a temperature threshold in the water of about 41 degrees Celsius, or 105 degrees Fahrenheit. Beyond this, the cardiovascular system can’t deliver necessary oxygen throughout the body. Even as individual animal species can develop smaller bodies or generate more hemoglobin to survive in warmer and oxygen deficient water, the highly developed metabolic systems that allow for things like eyeballs can’t get over the temperature threshold and the other hurdles it brings, like decreasing oxygen.

Image: Sina Löschke, Alfred Wegener Institute

“The adaptation limit of an organism is not only dependent on its upper temperature threshold, but also on its ability to cope with small amounts of oxygen,” said Daniela Storch, the study’s lead author. “While many of the bacteria and archaea can survive at low oxygen concentrations or even without oxygen, most animals and plants require a higher minimum concentration.”

That’s part of the reason that unicellular organisms are found in the most dramatic settings that Earth has to offer: from Antarctic lakes that were buried under glaciers for 100,000 years, to super-hot hydrothermal vents on the ocean floor, acidic pools in Yellowstone, and the Atacama desert in Chile. When we look around the solar system, we see environments that can’t support complex, multicellular life, but still hold out hope that unicellular life has found a way in Europa’s unseen seas, or below the surface of Mars.

But as the Earth’s climate changes, and the ocean gets warmer and more acidic, complexity goes from an asset to a liability, and simplicity reigns.

“Communities of species in the ocean change as a result of this shift in living conditions. In the future animals and plants will have problems to survive in the warmest marine regions and archaea, bacteria as well as protozoa will spread in these areas,” said Dr. Hans-Otto Pörtner, one of the study’s co-authors. “There are already studies showing that unicellular algae will be replaced by other unicellular organisms in the warmest regions of the ocean.”

The story of life on Earth is, if nothing else, symmetrical. Three and a half billion years ago, prokaryotic cells showed up, without a nucleus or other organelles. Complex, multicellular life emerged with an increase in biomass and a decrease in global surface temperature half a billion years ago. In another billion and a half years, that complex multicellular life will die back out, leaving the planet to the so-called simpler forms of life as they bask in the light of a much brighter Sun. The best-case scenario is that life lasts until the Sun runs out of fuel, swells into a red giant, and vaporizes whatever is left of our planet in 7.6 billion years.

Multicellular life will have just been a two billion year flicker against a backdrop of adaptable single-celled life. But hey, we had a good run.

Important and complex systems, from the global financial market to groups of friends, may be highly controllable (Science Daily)

Date: March 20, 2014

Source: McGill University

Summary: Scientists have discovered that all complex systems, whether they are found in the body, in international finance, or in social situations, actually fall into just three basic categories, in terms of how they can be controlled.

All complex systems, whether they are found in the body, in international finance, or in social situations, actually fall into just three basic categories, in terms of how they can be controlled, researchers say. Credit: © Artur Marciniec / Fotolia

We don’t often think of them in these terms, but our brains, global financial markets and groups of friends are all examples of different kinds of complex networks or systems. And unlike the kind of system in your car, which has been intentionally engineered for humans to use, these systems are convoluted, and it is not obvious how to control them. Economic collapse, disease, and miserable dinner parties may result from a breakdown in such systems, which is why researchers have recently been putting so much energy into trying to discover how best to control these large and important systems.

But now two brothers, Profs. Justin and Derek Ruths, from Singapore University of Technology and Design and McGill University respectively, have suggested, in an article published in Science, that all complex systems, whether they are found in the body, in international finance, or in social situations, actually fall into just three basic categories, in terms of how they can be controlled.

They reached this conclusion by surveying the inputs and outputs and the critical control points in a wide range of systems that appear to function in completely different ways. (The critical control points are the parts of a system that you have to control in order to make it do whatever you want — not dissimilar to the strings you use to control a puppet).

“When controlling a cell in the body, for example, these control points might correspond to proteins that we can regulate using specific drugs,” said Justin Ruths. “But in the case of a national or international economic system, the critical control points could be certain companies whose financial activity needs to be directly regulated.”
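Finding such critical control points computationally is a well-studied problem. One standard approach, from structural controllability theory (the Liu–Slotine–Barabási maximum-matching result, which the control-profiles work builds on, though this sketch is not the Ruths brothers' own method), says that any node left unmatched as an edge target in a maximum matching of the directed network needs its own external input. A minimal sketch on a made-up four-node network:

```python
# Driver nodes via maximum matching (structural controllability).
# Each node may be "matched" by at most one incoming edge; nodes that
# no edge matches must be driven by an external input of their own.
def driver_nodes(nodes, edges):
    adj = {u: [] for u in nodes}
    for u, v in edges:
        adj[u].append(v)
    match = {}  # target node -> source node that matches it

    def augment(u, seen):
        # Kuhn's augmenting-path search for bipartite matching.
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match or augment(match[v], seen):
                match[v] = u
                return True
        return False

    for u in nodes:
        augment(u, set())
    unmatched = [v for v in nodes if v not in match]
    return unmatched or [nodes[0]]  # a fully matched network still needs one input

# Hypothetical example: a chain 1 -> 2 -> 3 with a branch 1 -> 4.
# The branch point is a "dilation", so two driver nodes are needed.
print(driver_nodes([1, 2, 3, 4], [(1, 2), (2, 3), (1, 4)]))  # → [1, 4]
```

In the cell example from the quote above, the returned nodes would correspond to the proteins one has to regulate directly; everything else can be steered through the network's own links.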

One grouping, for example, put organizational hierarchies, gene regulation, and human purchasing behaviour together, in part because in each, it is hard to control individual parts of the system in isolation. Another grouping includes social networks such as groups of friends (whether virtual or real), and neural networks (in the brain), where the systems allow for relatively independent behaviour. The final group includes things like food systems, electrical circuits and the internet, all of which function basically as closed systems where resources circulate internally.

Referring to these groupings, Derek Ruths commented, “While our framework does provide insights into the nature of control in these systems, we’re also intrigued by what these groupings tell us about how very different parts of the world share deep and fundamental attributes in common — which may help unify our understanding of complexity and of control.”

“What we really want people to take away from the research at this point is that we can control these complex and important systems in the same way that we can control a car,” says Justin Ruths. “And that our work is giving us insight into which parts of the system we need to control and why. Ultimately, at this point we have developed some new theory that helps to advance the field in important ways, but it may still be another five to ten years before we see how this will play out in concrete terms.”

Journal Reference:

  1. Justin Ruths and Derek Ruths. Control Profiles of Complex Networks. Science, 2014. DOI: 10.1126/science.1242063

Physics of complex systems can predict the impacts of environmental change (Fapesp)

The assessment comes from Jan-Michael Rost, researcher at the Max Planck Institute (photo: Nina Wagner/DWIH-SP)

19/02/2014

Elton Alisson

Agência FAPESP – Beyond its applications in fields such as engineering and information and communication technologies (ICTs), the physics of complex systems – systems in which each element contributes individually to the emergence of properties observed only in the aggregate – can be useful for assessing the impacts of environmental changes on the planet, such as deforestation.

The assessment was made by Jan-Michael Rost, researcher at the Max Planck Institute for the Physics of Complex Systems, during a round table on complex systems and sustainability held on February 14 at the Hotel Pergamon in São Paulo.

The meeting was organized by the German Centre for Research and Innovation São Paulo (DWIH-SP) and the Max Planck Society, in partnership with FAPESP and the German Academic Exchange Service (DAAD), as part of the complementary program of activities for the Max Planck Science Tunnel exhibition.

“Complex systems, such as life on Earth, sit at the threshold between order and disorder and take a certain amount of time to adapt to change,” said Rost.

“If these systems undergo major alterations, such as the unchecked deforestation of forests, over a short period of time, and the threshold between order and disorder is crossed, the changes may be irreversible and endanger the preservation of complexity and the possibility of the evolution of species,” the researcher said.

According to Rost, complex systems began to attract scientists’ attention in the 1950s. To study them, however, it was not possible to use the two great theories that revolutionized physics in the 20th century: relativity, established by Albert Einstein (1879-1955), and quantum mechanics, developed by the German physicist Werner Heisenberg (1901-1976) and other scientists.

That is because those theories can be applied only to closed systems, such as engines, which are not subject to interference from the outside environment and in which the equilibrium reactions occurring inside them are reversible, Rost said.

For this reason, he said, those theories are not sufficient to study open systems, such as machines endowed with artificial intelligence and the living species on Earth, which interact with the environment, are adaptive, and whose reactions can be irreversible. They have therefore given way to theories related to the physics of complex systems, such as chaos theory and nonlinear dynamics, which are better suited to this purpose.

“These latter theories have undergone spectacular development in recent decades, in parallel with classical mechanics,” Rost said.

“Today it is recognized that systems are not closed, but interact with the outside world and can exhibit reactions out of proportion to the action they underwent. This is what engineering currently relies on to develop products and equipment,” he said.

Categories of complex systems

According to Rost, complex systems can be divided into four categories, distinguished by how long they take to react to a given action. The first is that of static complex systems, which react to an action instantly.

The second is that of adaptive systems, such as dogs’ sense of smell. When set on a trail of tracks left by a person lost in the woods, for example, sniffer dogs move in a zigzag pattern.

That is because, according to Rost, these animals have an adaptive scenting system: upon sensing a given smell in one spot, the animal’s olfactory sensitivity to that odour drops sharply and it loses the ability to identify it.

On leaving the trail it was following, the animal quickly recovers its olfactory sensitivity to the odour and can identify it again at the next footprint. “The olfactory perception threshold of these animals is constantly adapted,” Rost said.
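The adaptive-threshold mechanism Rost describes can be captured in a minimal simulation: sustained exposure to an odour raises the detection threshold (sensitivity drops), and the threshold relaxes back toward baseline once the stimulus is gone. All the constants below are invented for illustration, not measured values.

```python
# Minimal sketch of an adaptive olfactory threshold (illustrative constants).
def sniff(stimulus, adapt=0.5, recover=0.3, base=0.1):
    threshold, detections = base, []
    for s in stimulus:
        detections.append(s > threshold)
        if s > 0:
            threshold += adapt * s                      # sustained odour desensitises
        else:
            threshold = max(base, threshold - recover)  # sensitivity recovers off-trail
    return detections

# On the trail (1), the dog soon stops detecting the odour; after a few
# steps off the trail (0), sensitivity has recovered and detection resumes.
print(sniff([1, 1, 1, 0, 0, 0, 1]))  # → [True, True, False, False, False, False, True]
```

The zigzag gait falls out of this loop naturally: the dog keeps crossing in and out of the scent trail, each excursion resetting its sensitivity.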

The third category of complex systems is that of autonomous systems, which use evolution as a mechanism of adaptation and whose reaction to a given change is impossible to predict.

The last category is that of evolutionary, or transgenerational, systems, which includes human beings and the other living species on Earth, and in which the reaction to a given alteration of their living systems takes a very long time to occur, Rost said.

“Transgenerational systems receive stimuli throughout their lives, and the reaction of a given generation is not comparable to that of the previous one,” the researcher said.

“Trying to predict how long a given transgenerational system, such as humanity, takes to react to an action, such as environmental change, can be useful for ensuring the planet’s sustainability,” Rost concluded.