Todos os posts de renzotaddei

Sobre renzotaddei

Anthropologist, professor at the Federal University of São Paulo

A theory of my own mind (AEON)

Knowing the content of one’s own mind might seem straightforward but in fact it’s much more like mindreading other people

https://pbs.twimg.com/media/D9xE74lW4AEArgC.jpg:large
Tokyo, 1996. Photo by Harry Gruyaert/Magnum

Stephen M Fleming is professor of cognitive neuroscience at University College London, where he leads the Metacognition Group. He is author of Know Thyself: The Science of Self-awareness (2021). Edited by Pam Weintraub

23 September 2021

In 1978, David Premack and Guy Woodruff published a paper that would go on to become famous in the world of academic psychology. Its title posed a simple question: does the chimpanzee have a theory of mind?

In coining the term ‘theory of mind’, Premack and Woodruff were referring to the ability to keep track of what someone else thinks, feels or knows, even if this is not immediately obvious from their behaviour. We use theory of mind when checking whether our colleagues have noticed us zoning out on a Zoom call – did they just see that? A defining feature of theory of mind is that it entails second-order representations, which might or might not be true. I might think that someone else thinks that I was not paying attention but, actually, they might not be thinking that at all. And the success or failure of theory of mind often turns on an ability to appropriately represent another person’s outlook on a situation. For instance, I can text my wife and say: ‘I’m on my way,’ and she will know that by this I mean that I’m on my way to collect our son from nursery, not on my way home, to the zoo, or to Mars. Sometimes this can be difficult to do, as captured by a New Yorker cartoon caption of a couple at loggerheads: ‘Of course I care about how you imagined I thought you perceived I wanted you to feel.’

Premack and Woodruff’s article sparked a deluge of innovative research into the origins of theory of mind. We now know that a fluency in reading minds is not something humans are born with, nor is it something guaranteed to emerge in development. In one classic experiment, children were told stories such as the following:

Maxi has put his chocolate in the cupboard. While Maxi is away, his mother moves the chocolate from the cupboard to the drawer. When Maxi comes back, where will he look for the chocolate?

Until the age of four, children often fail this test, saying that Maxi will look for the chocolate where it actually is (the drawer), rather than where he thinks it is (in the cupboard). They are using their knowledge of the reality to answer the question, rather than what they know about where Maxi had put the chocolate before he left. Autistic children also tend to give the wrong answer, suggesting problems with tracking the mental states of others. This test is known as a ‘false belief’ test – passing it requires one to realise that Maxi has a different (and false) belief about the world.

Many researchers now believe that the answer to Premack and Woodruff’s question is, in part, ‘no’ – suggesting that fully fledged theory of mind might be unique to humans. If chimpanzees are given an ape equivalent of the Maxi test, they don’t use the fact that another chimpanzee has a false belief about the location of the food to sneak in and grab it. Chimpanzees can track knowledge states – for instance, being aware of what others see or do not see, and knowing that, when someone is blindfolded, they won’t be able to catch them stealing food. There is also evidence that they track the difference between true and false beliefs in the pattern of their eye movements, similar to findings in human infants. Dogs also have similarly sophisticated perspective-taking abilities, preferring to choose toys that are in their owner’s line of sight when asked to fetch. But so far, at least, only adult humans have been found to act on an understanding that other minds can hold different beliefs about the world to their own.

Research on theory of mind has rapidly become a cornerstone of modern psychology. But there is an underappreciated aspect of Premack and Woodruff’s paper that is only now causing ripples in the pond of psychological science. Theory of mind as it was originally defined identified a capacity to impute mental states not only to others but also to ourselves. The implication is that thinking about others is just one manifestation of a rich – and perhaps much broader – capacity to build what philosophers call metarepresentations, or representations of representations. When I wonder whether you know that it’s raining, and that our plans need to change, I am metarepresenting the state of your knowledge about the weather.

Intriguingly, metarepresentations are – at least in theory – symmetric with respect to self and other: I can think about your mind, and I can think about my own mind too. The field of metacognition research, which is what my lab at University College London works on, is interested in the latter – people’s judgments about their own cognitive processes. The beguiling question, then – and one we don’t yet have an answer to – is whether these two types of ‘meta’ are related. A potential symmetry between self-knowledge and other-knowledge – and the idea that humans, in some sense, have learned to turn theory of mind on themselves – remains largely an elegant hypothesis. But an answer to this question has profound consequences. If self-awareness is ‘just’ theory of mind directed at ourselves, perhaps it is less special than we like to believe. And if we learn about ourselves in the same way as we learn about others, perhaps we can also learn to know ourselves better.

A common view is that self-knowledge is special, and immune to error, because it is gained through introspection – literally, ‘looking within’. While we might be mistaken about things we perceive in the outside world (such as thinking a bird is a plane), it seems odd to say that we are wrong about our own minds. If I think that I’m feeling sad or anxious, then there is a sense in which I am feeling sad or anxious. We have untrammelled access to our own minds, so the argument goes, and this immediacy of introspection means that we are rarely wrong about ourselves.

This is known as the ‘privileged access’ view of self-knowledge, and has been dominant in philosophy in various guises for much of the 20th century. René Descartes relied on self-reflection in this way to reach his conclusion ‘I think, therefore I am,’ noting along the way that: ‘I know clearly that there is nothing that can be perceived by me more easily or more clearly than my own mind.’

An alternative view suggests that we infer what we think or believe from a variety of cues – just as we infer what others think or feel from observing their behaviour. This suggests that self-knowledge is not as immediate as it seems. For instance, I might infer that I am anxious about an upcoming presentation because my heart is racing and my breathing is heavier. But I might be wrong about this – perhaps I am just feeling excited. This kind of psychological reframing is often used by sports coaches to help athletes maintain composure under pressure.

The philosopher most often associated with the inferential view is Gilbert Ryle, who proposed in The Concept of Mind (1949) that we gain self-knowledge by applying the tools we use to understand other minds to ourselves: ‘The sorts of things that I can find out about myself are the same as the sorts of things that I can find out about other people, and the methods of finding them out are much the same.’ Ryle’s idea is neatly summarised by another New Yorker cartoon in which a husband says to his wife: ‘How should I know what I’m thinking? I’m not a mind reader.’

Many philosophers since Ryle have considered the strong inferential view as somewhat crazy, and written it off before it could even get going. The philosopher Quassim Cassam, author of Self-knowledge for Humans (2014), describes the situation:

Philosophers who defend inferentialism – Ryle is usually mentioned in this context – are then berated for defending a patently absurd view. The assumption that intentional self-knowledge is normally immediate … is rarely defended; it’s just seen as obviously correct.

But if we take a longer view of history, the idea that we have some sort of special, direct access to our minds is the exception, rather than the rule. For the ancient Greeks, self-knowledge was not all-encompassing, but a work in progress, and something to be striven toward, as captured by the exhortation to ‘know thyself’ carved on the Temple of Delphi. The implication is that most of us don’t know ourselves very well. This view persisted into medieval religious traditions: the Italian priest and philosopher Saint Thomas Aquinas suggested that, while God knows himself by default, we need to put in time and effort to know our own minds. And a similar notion of striving toward self-awareness is found in Eastern traditions, with the founder of Chinese Taoism, Lao Tzu, endorsing a similar goal: ‘To know that one does not know is best; not to know but to believe that one knows is a disease.’

Self-awareness is something that can be cultivated

Other aspects of the mind – most famously, perception – also appear to operate on the principles of an (often unconscious) inference. The idea is that the brain isn’t directly in touch with the outside world (it’s locked up in a dark skull, after all) – and instead has to ‘infer’ what is really out there by constructing and updating an internal model of the environment, based on noisy sensory data. For instance, you might know that your friend owns a Labrador, and so you expect to see a dog when you walk into her house, but don’t know exactly where in your visual field the dog will appear. This higher-level expectation – the spatially invariant concept of ‘dog’ – provides the relevant context for lower levels of the visual system to easily interpret dog-shaped blurs that rush toward you as you open the door.

Adelson’s checkerboard. Courtesy Wikipedia

Elegant evidence for this perception-as-inference view comes from a range of striking visual illusions. In one called Adelson’s checkerboard, two patches with the same objective luminance are perceived as lighter and darker because the brain assumes that, to reflect the same amount of light, the one in shadow must have started out brighter. Another powerful illusion is the ‘light from above’ effect – we have an automatic tendency to assume that natural light falls from above, whereas uplighting – such as when light from a fire illuminates the side of a cliff – is less common. This can lead the brain to interpret the same image as either bumps or dips in a surface, depending on whether the shadows are consistent with light falling from above. Other classic experiments show that information from one sensory modality, such as sight, can act as a constraint on how we perceive another, such as sound – an illusion used to great effect in ventriloquism. The real skill of ventriloquists is being able to talk without moving the mouth. Once this is achieved, the brains of the audience do the rest, pulling the sound to its next most likely source, the puppet.

These striking illusions are simply clever ways of exposing the workings of a system finely tuned for perceptual inference. And a powerful idea is that self-knowledge relies on similar principles – whereas perceiving the outside world relies on building a model of what is out there, we are also continuously building and updating a similar model of ourselves – our skills, abilities and characteristics. And just as we can sometimes be mistaken about what we perceive, sometimes the model of ourselves can also be wrong.

Let’s see how this might work in practice. If I need to remember something complicated, such as a shopping list, I might judge I will fail unless I write it down somewhere. This is a metacognitive judgment about how good my memory is. And this model can be updated – as I grow older, I might think to myself that my recall is not as good as it used to be (perhaps after experiencing myself forgetting things at the supermarket), and so I lean more heavily on list-writing. In extreme cases, this self-model can become completely decoupled from reality: in functional memory disorders, patients believe their memory is poor (and might worry they have dementia) when it is actually perfectly fine when assessed with objective tests.

We now know from laboratory research that metacognition, just like perception, is also subject to powerful illusions and distortions – lending credence to the inferential view. A standard measure here is whether people’s confidence tracks their performance on simple tests of perception, memory and decision-making. Even in otherwise healthy people, judgments of confidence are subject to systematic illusions – we might feel more confident about our decisions when we act more quickly, even if faster decisions are not associated with greater accuracy. In our research, we have also found surprisingly large and consistent differences between individuals on these measures – one person might have limited insight into how well they are doing from one moment to the next, while another might have good awareness of whether are likely to be right or wrong.

This metacognitive prowess is independent of general cognitive ability, and correlated with differences in the structure and function of the prefrontal and parietal cortex. In turn, people with disease or damage to these brain regions can suffer from what neurologists refer to as anosognosia – literally, the absence of knowing. For instance, in Alzheimer’s disease, patients can suffer a cruel double hit – the disease attacks not only brain regions supporting memory, but also those involved in metacognition, leaving people unable to understand what they have lost.

This all suggests – more in line with Socrates than Descartes – that self-awareness is something that can be cultivated, that it is not a given, and that it can fail in myriad interesting ways. And it also provides newfound impetus to seek to understand the computations that might support self-awareness. This is where Premack and Woodruff’s more expansive notion of theory of mind might be long overdue another look.

Saying that self-awareness depends on similar machinery to theory of mind is all well and good, but it begs the question – what is this machinery? What do we mean by a ‘model’ of a mind, exactly?

Some intriguing insights come from an unlikely quarter – spatial navigation. In classic studies, the psychologist Edward Tolman realised that the rats running in mazes were building a ‘map’ of the maze, rather than just learning which turns to make when. If the shortest route from a starting point towards the cheese is suddenly blocked, then rats readily take the next quickest route – without having to try all the remaining alternatives. This suggests that they have not just rote-learned the quickest path through the maze, but instead know something about its overall layout.

A few decades later, the neuroscientist John O’Keefe found that cells in the rodent hippocampus encoded this internal knowledge about physical space. Cells that fired in different locations became known as ‘place’ cells. Each place cell would have a preference for a specific position in the maze but, when combined together, could provide an internal ‘map’ or model of the maze as a whole. And then, in the early 2000s, the neuroscientists May-Britt Moser, Edvard Moser and their colleagues in Norway found an additional type of cell – ‘grid’ cells, which fire in multiple locations, in a way that tiles the environment with a hexagonal grid. The idea is that grid cells support a metric, or coordinate system, for space – their firing patterns tell the animal how far it has moved in different directions, a bit like an in-built GPS system.

There is now tantalising evidence that similar types of brain cell also encode abstract conceptual spaces. For instance, if I am thinking about buying a new car, then I might think about how environmentally friendly the car is, and how much it costs. These two properties map out a two-dimensional ‘space’ on which I can place different cars – for instance, a cheap diesel car will occupy one part of the space, and an expensive electric car another part of the space. The idea is that, when I am comparing these different options, my brain is relying on the same kind of systems that I use to navigate through physical space. In one experiment by Timothy Behrens and his team at the University of Oxford, people were asked to imagine morphing images of birds that could have different neck and leg lengths – forming a two-dimensional bird space. A grid-like signature was found in the fMRI data when people were thinking about the birds, even though they never saw them presented in 2D.

Clear overlap between brain activations involved in metacognition and mindreading was observed

So far, these lines of work – on abstract conceptual models of the world, and on how we think about other minds – have remained relatively disconnected, but they are coming together in fascinating ways. For instance, grid-like codes are also found for conceptual maps of the social world – whether other individuals are more or less competent or popular – suggesting that our thoughts about others seem to be derived from an internal model similar to those used to navigate physical space. And one of the brain regions involved in maintaining these models of other minds – the medial prefrontal cortex (PFC) – is also implicated in metacognition about our own beliefs and decisions. For instance, research in my group has discovered that medial prefrontal regions not only track confidence in individual decisions, but also ‘global’ metacognitive estimates of our abilities over longer timescales – exactly the kind of self-estimates that were distorted in the patients with functional memory problems.

Recently, the psychologist Anthony G Vaccaro and I surveyed the accumulating literature on theory of mind and metacognition, and created a brain map that aggregated the patterns of activations reported across multiple papers. Clear overlap between brain activations involved in metacognition and mindreading was observed in the medial PFC. This is what we would expect if there was a common system building models not only about other people, but also of ourselves – and perhaps about ourselves in relation to other people. Tantalisingly, this very same region has been shown to carry grid-like signatures of abstract, conceptual spaces.

At the same time, computational models are being built that can mimic features of both theory of mind and metacognition. These models suggest that a key part of the solution is the learning of second-order parameters – those that encode information about how our minds are working, for instance whether our percepts or memories tend to be more or less accurate. Sometimes, this system can become confused. In work led by the neuroscientist Marco Wittmann at the University of Oxford, people were asked to play a game involving tracking the colour or duration of simple stimuli. They were then given feedback about both their own performance and that of other people. Strikingly, people tended to ‘merge’ their feedback with those of others – if others were performing better, they tended to think they themselves were performing a bit better too, and vice-versa. This intertwining of our models of self-performance and other-performance was associated with differences in activity in the dorsomedial PFC. Disrupting activity in this area using transcranial magnetic stimulation (TMS) led to more self-other mergence – suggesting that one function of this brain region is not only to create models of ourselves and others, but also to keep these models apart.

Another implication of a symmetry between metacognition and mindreading is that both abilities should emerge around the same time in childhood. By the time that children become adept at solving false-belief tasks – around the age of four – they are also more likely to engage in self-doubt, and recognise when they themselves were wrong about something. In one study, children were first presented with ‘trick’ objects: a rock that turned out to be a sponge, or a box of Smarties that actually contained not sweets but pencils. When asked what they first thought the object was, three-year-olds said that they knew all along that the rock was a sponge and that the Smarties box was full of pencils. But by the age of five, most children recognised that their first impression of the object was false – they could recognise they had been in error.

Indeed, when Simon Baron-Cohen, Alan Leslie and Uta Frith outlined their influential theory of autism in the 1980s, they proposed that theory of mind was only ‘one of the manifestations of a basic metarepresentational capacity’. The implication is that there should also be noticeable differences in metacognition that are linked to changes in theory of mind. In line with this idea, several recent studies have shown that autistic individuals also show differences in metacognition. And in a recent study of more than 450 people, Elisa van der Plas, a PhD student in my group, has shown that theory of mind ability (measured by people’s ability to track the feelings of characters in simple animations) and metacognition (measured by the degree to which their confidence tracks their task performance) are significantly correlated with each other. People who were better at theory of mind also formed their confidence differently – they were more sensitive to subtle cues, such as their response times, that indicated whether they had made a good or bad decision.

Recognising a symmetry between self-awareness and theory of mind might even help us understand why human self-awareness emerged in the first place. The need to coordinate and collaborate with others in large social groups is likely to have prized the abilities for metacognition and mindreading. The neuroscientist Suzana Herculano-Houzel has proposed that primates have unusually efficient ways of cramming neurons into a given brain volume – meaning there is simply more processing power devoted to so-called higher-order functions – those that, like theory of mind, go above and beyond the maintenance of homeostasis, perception and action. This idea fits with what we know about the areas of the brain involved in theory of mind, which tend to be the most distant in terms of their connections to primary sensory and motor areas.

A symmetry between self-awareness and other-awareness also offers a subversive take on what it means for other agents such as animals and robots to be self-aware. In the film Her (2013), Joaquin Phoenix’s character Theodore falls in love with his virtual assistant, Samantha, who is so human-like that he is convinced she is conscious. If the inferential view of self-awareness is correct, there is a sense in which Theodore’s belief that Samantha is aware is sufficient to make her aware, in his eyes at least. This is not quite true, of course, because the ultimate test is if she is able to also recursively model Theodore’s mind, and create a similar model of herself. But being convincing enough to share an intimate connection with another conscious agent (as Theodore does with Samantha), replete with mindreading and reciprocal modelling, might be possible only if both agents have similar recursive capabilities firmly in place. In other words, attributing awareness to ourselves and to others might be what makes them, and us, conscious.

A simple route for improving self-awareness is to take a third-person perspective on ourselves

Finally, a symmetry between self-awareness and other-awareness also suggests novel routes towards boosting our own self-awareness. In a clever experiment conducted by the psychologists and metacognition experts Rakefet Ackerman and Asher Koriat in Israel, students were asked to judge both how well they had learned a topic, and how well other students had learned the same material, by watching a video of them studying. When judging themselves, they fell into a trap – they believed that spending less time studying was a signal of being confident in knowing the material. But when judging others, this relationship was reversed: they (correctly) judged that spending longer on a topic would lead to better learning. These results suggest that a simple route for improving self-awareness is to take a third-person perspective on ourselves. In a similar way, literary novels (and soap operas) encourage us to think about the minds of others, and in turn might shed light on our own lives.

There is still much to learn about the relationship between theory of mind and metacognition. Most current research on metacognition focuses on the ability to think about our experiences and mental states – such as being confident in what we see or hear. But this aspect of metacognition might be distinct from how we come to know our own, or others’, character and preferences – aspects that are often the focus of research on theory of mind. New and creative experiments will be needed to cross this divide. But it seems safe to say that Descartes’s classical notion of introspection is increasingly at odds with what we know of how the brain works. Instead, our knowledge of ourselves is (meta)knowledge like any other – hard-won, and always subject to revision. Realising this is perhaps particularly useful in an online world deluged with information and opinion, when it’s often hard to gain a check and balance on what we think and believe. In such situations, the benefits of accurate metacognition are myriad – helping us recognise our faults and collaborate effectively with others. As the poet Robert Burns tells us:

O wad some Power the giftie gie us
To see oursels as ithers see us!
It wad frae mony a blunder free us…

(Oh, would some Power give us the gift
To see ourselves as others see us!
It would from many a blunder free us )

Is There a Secularocene? (Political Theology Network)

A Snapshot of Sea Ice by NASA Goddard Space Flight Center CC BY-NC 2.0

By Mohamad Amer Meziane – September 17, 2021

If modernity is the Anthropocene and if secularization is a defining feature of modernity’s birth, then it is natural to ask: did secularization engender climate change?

Why is secularization never connected to climate change? And why is climate change not connected to secularization? If modernity is the Anthropocene and if secularization is a defining feature of modernity’s birth, then it is natural to ask: did secularization engender climate change?

I aim to open a new space in the study of both secularism and the Anthropocene, of religion and climate change. Further, I aim to create a philosophical bridge between influential currents in anthropology and the humanities. I build this bridge through the critique of Orientalism and the anthropology of secularism and Islam, respectively founded by Edward Said and Talal Asad, on one hand, and the literature on the Anthropocene influenced by scholars such as Donna Haraway and Bruno Latour, on the other.

I argue that secularization should be re-conceptualized not only as an imperial and racial but also as an ecological set of processes.

My perspective stems from a philosophical engagement with both the project and the concept of secularization. It therefore presupposes a critical understanding of what has been called ‘the secular’ as a name given to the result of the destruction of nature: the transformation of the earth itself by industrial and colonial powers. I propose an alternative definition of secularization, secularism, and secularity. As I argue fully in my first book, Des empires sous la terre, the Anthropocene is an outcome of secularization understood as a set of processes engendered by the imperial relations of power between Europe and the rest of the world.

Thinking Through the Secularocene

What is secularization? Neither a supposed decline of religion nor a simple continuation of Christianity by other means, secularization should be seen as a transformation of the earth itself by virtue of its connection with fossil empires and capitalism.

This perspective differs from scholars who have been engaged in criticizing the idea of secularization as a mythology of progress and privatization – a mythology to which 9/11 proved false. I argue that the concept of secularization should be redefined instead of being dissolved. It is only if one presupposes that secularization is reducible to the privatization of religion that the existence of political religion can be construed as testifying against the reality of secularization. When one opposes the permanence of religion or of Christianity to the reality of secularization, one is in fact reactivating the secularization thesis in its primitive, Hegelian version (developed by Marcel Gauchet) – that modernity is the secular realization of Christianity on earth – and, therefore, of all religions in the world.

In other words, before it can be seen as a process, secularization should be approached as an order which articulates philosophy and politics, discourse and practices throughout the 19th century in Western Europe. Secularization is the order which claims that the other-worldliness of religion and the divine must be abolished by virtue of its realization in this world. The first instance of this demand is Hegel’s absolute knowledge and his interpretation of the French Revolution as the realization of heaven on earth. The so-called ‘end of history’ is indeed the accomplishment of a secularizing process by which the divine becomes the institution of freedom through the modern state.

The first way in which secularization manifests its reality is discursive. As a discourse, it asserts that the modern West must be and therefore is Christianity itself, Christianity as the secular. Before it can become an analytical concept, the concept of secularization formulates a demand: Christianity and religions realize heaven and all forms of transcendence in this world. 

Is the reality of secularization solely discursive? No. The reality of the secular is the earth itself as it is transformed by industrial capitalism. This redefinition of the secular and of secularization allows us to think alternatively about this ‘global’ event called climate change. I argue that the Anthropocene should be seen as an effect of secularization, and that one might use the word Secularocene to describe this dimension of ‘colonial modernity.’

How did secularization lead to climate change, one might ask? By authorizing the extraction of coal through expropriating lands that belonged to the Church, and dismissing the reality of demons in the underground as superstitious, secularization allowed fossil industrialism to transform the planet. For this reason, secularization should be seen as a crucial aspect of what Marx calls the primitive accumulation of capital: an extra-economic process of expropriation structured by state violence deploying itself through racial, gender, class, and religious hierarchies.

The critique of secularism is more than the critique of a political doctrine demanding the privatization of religion. It is the critique of how the earth itself has been transformed. As such, philosophical secularism refers to an ontology that posits this world as the sole reality. It defines immanence, or earth, as the reality which must be opposed to transcendence, or “heaven”. The critique of heaven is not the condition of all critique, as Marx famously puts it. It is part of how capitalism operates. Hence, the critique of heaven has transformed the earth itself through the secularization of both empire and capital.

While genealogy authorizes us to think about the categories of religion and secularity critically, it should be integrated within a larger perspective if we are to rethink secularization by constructing an alternative narrative of its deployment beyond the tropes of religion’s decline. A post-genealogical philosophy of history is a theory, not of progress, but of how the earth has been transformed through imperial and capitalist processes of globalization. The very existence of climate change invites us to think past Foucault’s legacies in postcolonial thought. Beyond genealogy, the hypothesis of the Anthropocene – or of the Secularocene for that matter – might require that we integrate genealogical inquiries into a radically new form of philosophical history. After the genealogy of religion and the secular, a philosophy of global history might help us understand imperial secularization as the birth of the Anthropocene.

By Mohamad Amer Meziane

Mohamad Amer Meziane holds a PhD from the University of Paris 1 Panthéon-Sorbonne. He is currently a Postdoctoral Research Fellow and Lecturer at Columbia University. He is affiliated to the Institute of Religion Culture and Public Life, the Institute of African Studies and the Department of Religion.

We’re Finally Catching a Break in the Climate Fight (The Crucial Years/Bill McKibben)

As a new Oxford paper shows, the incredibly rapid fall in the cost of renewables offers hope–but only if movements can push banks and politicians hard enough

Bill McKibben – Sep 19, 2021

This is one of the first solar panels and batteries ever installed, in the state of Georgia in 1955. At the time it was the most expensive power on earth; now it’s the cheapest, and still falling fast.

So far in the global warming era, we’ve caught precious few breaks. Certainly not from physics: the temperature has increased at the alarming pace that scientists predicted thirty years ago, and the effects of that warming have increased even faster than expected. (“Faster Than Expected” is probably the right title for a history of climate change so far; if you’re a connoisseur of disaster, there is already a blog by that name). The Arctic is melting decades ahead of schedule, the seas are rising on an accelerated schedule, and the forest fires of the science fiction future are burning this autumn. And we haven’t caught any breaks from our politics either: it’s moved with the lumbering defensiveness one would expect from a system ruled by inertia and vested interest. And so it is easy, and completely plausible, to despair: we are on the bleeding edge of existential destruction.

But one trend is, finally, breaking in the right direction, and perhaps decisively. The price of renewable energy is now falling nearly as fast as heat and rainfall records, and in the process perhaps offering us one possible way out. The public debate hasn’t caught up to the new reality—Bill Gates, in his recent bestseller on energy and climate, laments the “green premium” that must be paid for clean energy. But he (and virtually every other mainstream energy observer) is already wrong—and they’re all about to be spectacularly wrong, if the latest evidence turns out to be right.

Last Wednesday, a team at Oxford University released a fascinating paper that I haven’t seen covered anywhere. Stirringly titled “Empirically grounded technology forecasts and the energy transition,” it makes the following argument: “compared to continuing with a fossil-fuel-based system, a rapid green energy transition will likely result in overall net savings of many trillions of dollars–even without accounting for climate damages or co-benefits of climate policy.” Short and muscular, the paper begins by pointing out that at the moment most energy technologies, from gas to solar, have converged on a price point of about $100 per megawatt hour. In the case of coal, gas, and oil, however, “after adjusting for inflation, prices now are very similar to what they were 140 years ago, and there is no obvious long-range trend.” Sun, wind, and batteries, by contrast, have dropped in price exponentially, at roughly ten percent a year for three decades. Solar power didn’t exist until the late 1950s; since then it has dropped in price by about three orders of magnitude.

They note that all the forecasts over those years about how fast prices would drop were uniformly wrong, invariably underestimating by almost comic margins the drop in costs for renewable energy. This is a massive problem: “failing to appreciate cost improvement trajectories of renewables relative to fossil fuels not only leads to under-investment in critical emission reduction technologies, it also locks in higher cost energy infrastructure for decades to come.” That is, if economists don’t figure out that solar is going to get steadily cheaper, you’re going to waste big bucks building gas plants designed to last for decades. And indeed we have (and of course the cost of those plants is not the biggest problem; that would be the destruction of the planet).

Happily, the Oxford team demonstrates that there’s a much easier and more effective way to estimate future costs than the complicated calculations used in the past: basically, if you just figure out the historic rates of fall in the costs of renewable energy, you can project them forward into the future because the learning curve seems to keep on going. In their model, validated by thousands of runs using past data, by far the cheapest path for the future is a very fast transition to renewable energy: if you replace almost all fossil fuel use over the next twenty years, you save tens of trillions of dollars. (They also model the costs of using lots of nuclear power: it’s low in carbon but high in price).
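
The Oxford team’s forecasting logic, as described above, is essentially compound decline. Here is a minimal sketch of that arithmetic, with illustrative numbers (a $100/MWh starting price and a steady ten percent annual fall; these are assumptions for the example, not figures taken from the paper):

```python
# Sketch: projecting energy costs under a constant annual rate of decline.
# Illustrative assumptions: renewables start at $100/MWh and fall ~10%/year;
# fossil fuels stay flat at $100/MWh (no long-run price trend).

def projected_cost(start_cost, annual_decline, years):
    """Cost after `years` of compounding decline at `annual_decline`."""
    return start_cost * (1 - annual_decline) ** years

renewable_in_20y = projected_cost(100.0, 0.10, 20)
fossil_in_20y = 100.0  # flat, per the paper's observation about fossil prices

print(f"Renewables in 20 years: ${renewable_in_20y:.0f}/MWh")  # → $12/MWh
print(f"Fossil fuels in 20 years: ${fossil_in_20y:.0f}/MWh")
```

Two decades of a ten percent annual decline cut the price by almost an order of magnitude, which is why long-lived gas plants priced against today’s renewables look like such a bad bet.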

To repeat: the cost of fossil fuels is not falling; any technological learning curve for oil and gas is offset by the fact that we’ve already found the easy stuff, and now we must dig deeper. But the more solar and wind power you build, the more the price falls—because the price is only the cost of setting up the equipment, which we get better at all the time. The actual energy arrives every morning when the sun rises. This doesn’t mean it’s a miracle: you have to mine lithium and cobalt, you have to site windmills, and you have to try to do those things with as little damage as possible. But if it’s not a miracle, it’s something like a deus ex machina—and the point is that these machines are cheap.

If we made policy with this fact in mind—if we pushed, as the new $3.5 trillion Senate bill does, for dramatic increases in renewable usage in short order, then we would not only be saving the planet, we’d be saving tons of money. That money would end up in our pockets—but it would be removed from the wallets of people who own oil wells and coal mines, which is precisely why the fossil fuel industry is working so hard to gum up the works, trying to slow down everything from electric cars to induction cooktops and using all their economic and political muscle to prolong the transition. Their economically outmoded system of energy generation can only be saved by political corruption, which sadly is the fossil fuel industry’s remaining specialty. So far the learning curve of their influence-peddling has been steep enough to keep carbon levels climbing.

That’s why we need to pay attention to the only other piece of good news, the only other virtuous thing that’s happened faster than expected. And that’s been the growth of movements to take on the fossil fuel industry and push for change. If those keep growing—if enough of us divest and boycott and vote and march and go to jail—we may be able to push our politicians and our banks hard enough that they actually let us benefit from the remarkable fall in the price of renewable energy. Activists and engineers are often very different kinds of people—but their mostly unconscious alliance offers the only hope of even beginning to catch up with the runaway pace of global warming.

So if you’re a solar engineer working to drop the price of power ten percent a year, don’t you dare leave the lab—the rest of us will chip in to get you pizza and caffeine so you can keep on working. But if you’re not a solar engineer, then see you in the streets (perhaps at October’s ‘People vs Fossil Fuels’ demonstrations in DC). Because you’re the other half of this equation.

Battery-free electronics breakthrough allows devices to run forever without charging (The Independent)

independent.co.uk

Anthony Cuthbertson – Sept. 23, 2021


Researchers have unveiled a ground-breaking system that allows electronic devices to run without batteries for “an infinite lifetime”.

Computer engineers from Northwestern University and Delft University of Technology developed the BFree energy-harvesting technology in order to enable battery-free devices capable of running perpetually with only intermittent energy input.

The same team introduced the world’s first battery-free Game Boy last year, which is powered by energy harvested from the user pushing its buttons.

The engineers hope the innovative BFree system will help cut the vast amounts of dead batteries that end up as e-waste in landfills around the world.

It will also allow amateur hobbyists and those within the Maker Movement to create their own battery-free electronic devices.

“Right now, it’s virtually impossible for hobbyists to develop devices with battery-free hardware, so we wanted to democratise our battery-free platform,” said Josiah Hester, an assistant professor of electrical and computer engineering at Northwestern University, who led the research.

“Makers all over the internet are asking how to extend their device’s battery life. They are asking the wrong question. We want them to forget about the battery and instead think about more sustainable ways to generate energy.”

In order to run perpetually with only intermittent energy – for example the sun going behind a cloud and no longer powering the device’s solar panel – the BFree system simply pauses the calculations it is running without losing memory or needing to run through a long list of operations before restarting when power returns.
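
The pause-and-resume behaviour described here is the core trick of intermittent computing: keep program state in nonvolatile storage, so that losing power suspends the work rather than erasing it. Below is a rough, hypothetical sketch of that idea in Python, using a file as a stand-in for nonvolatile memory; it illustrates the general technique, not BFree’s actual implementation:

```python
# Hypothetical sketch of checkpointed (intermittent) computation:
# state is persisted after every step, so a power outage at any point
# loses at most one step of work, and a restart resumes where it left off.
import json
import os

CHECKPOINT = "state.json"  # stand-in for nonvolatile memory

def load_state():
    """Resume from the last checkpoint, or start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"i": 0, "total": 0}

def save_state(state):
    with open(CHECKPOINT, "w") as f:
        json.dump(state, f)

def run(n):
    """Sum the integers 0..n-1, checkpointing after each step."""
    state = load_state()
    while state["i"] < n:
        state["total"] += state["i"]  # one unit of work
        state["i"] += 1
        save_state(state)            # survives a power loss here
    return state["total"]

print(run(10))  # → 45 (the sum 0..9, correct even across interruptions)
```

If the process is killed mid-loop, calling `run` again picks up from the last saved iteration instead of starting over, which is the property that lets a battery-free device tolerate the sun going behind a cloud.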

The technology is part of a new trend known as ubiquitous computing, which aims to make computing available at any time and in any place through smart devices and the Internet of Things (IoT).

The research represents a significant advancement in this field by circumventing the need for a battery, along with the charging and replacement that batteries require.

“Many people predict that we’re going to have a trillion devices in this IoT,” Dr Hester said.

“That means a trillion dead batteries or 100 million people replacing a dead battery every few minutes. That presents a terrible ecological cost to the environment.

“What we’re doing, instead, is truly giving power to the people. We want everyone to be able to effortlessly program devices in a more sustainable way.”

The research will be presented at the UbiComp 2021 conference on 22 September.

5 Economists Redefining… Everything. Oh Yes, And They’re Women (Forbes)

forbes.com

Avivah Wittenberg-Cox

May 31, 2020, 09:56am EDT


Five female economists. From top left: Mariana Mazzucato, Carlota Perez, Kate Raworth, Stephanie Kelton, Esther Duflo. (Credit: 20-first)

Few economists become household names. Last century, it was John Maynard Keynes or Milton Friedman. Today, Thomas Piketty has become the economists’ poster-boy. Yet listen to the buzz, and it is five female economists who deserve our attention. They are revolutionising their field by questioning the meaning of everything from ‘value’ and ‘debt’ to ‘growth’ and ‘GDP.’ Esther Duflo, Stephanie Kelton, Mariana Mazzucato, Carlota Perez and Kate Raworth are united in one thing: their amazement at the way economics has been defined and debated to date. Their incredulity is palpable.

It reminds me of many women I’ve seen emerge into power over the past decade. Like Rebecca Henderson, a Management and Strategy professor at Harvard Business School and author of the new Reimagining Capitalism in a World on Fire. “It’s odd to finally make it to the inner circle,” she says, “and discover just how strangely the world is being run.” When women finally make it to the pinnacle of many professions, they often discover a world more wart-covered frog than handsome prince. Like Dorothy in The Wizard of Oz, when they get a glimpse behind the curtain, they discover the machinery of power can be more bluster than substance. As newcomers to the game, they can often see this more clearly than the long-term players. Henderson cites Tom Toro’s cartoon as her mantra. A group in rags sit around a fire with the ruins of civilisation in the background. “Yes, the planet got destroyed” says a man in a disheveled suit, “but for a beautiful moment in time we created a lot of value for shareholders.”

You get the same sense when you listen to the female economists throwing themselves into the still very male-dominated economics field. A kind of collective ‘you’re kidding me, right?’ These five female economists are letting the secret out – and inviting people to flip the priorities. A growing number are listening – even the Pope (see below).

All question concepts long considered sacrosanct. Here are four messages they share:

Get Over It – Challenge the Orthodoxy

Described as “one of the most forward-thinking economists of our times,” Mariana Mazzucato is foremost among the flame throwers. A professor at University College London and the Founder/Director of the UCL Institute for Innovation and Public Purpose, she asks fundamental questions about how ‘value’ has been defined, who decides what that means, and who gets to measure it. Her TED talk, provocatively titled “What is economic value? And who creates it?”, lays down the gauntlet. “If some people are value creators,” she asks, “what does that make everyone else? The couch potatoes? The value extractors? The value destroyers?” She wants to make economics explicitly serve the people, rather than explain their servitude.

Stephanie Kelton takes on our approach to debt and spoofs the simplistic metaphors, like comparing national income and expenditure to ‘family budgets’ in an attempt to prove how dangerous debt is. In her upcoming book, The Deficit Myth (June 2020), she argues they are not at all similar; what household can print additional money, or set interest rates? Debt should be rebranded as a strategic investment in the future. Deficits can be used in ways good or bad but are themselves a neutral and powerful policy tool. “They can fund unjust wars that destabilize the world and cost millions their lives,” she writes, “or they can be used to sustain life and build a more just economy that works for the many and not just the few.” Like all the economists profiled here, she’s pointing at the mind and the meaning behind the money.

Get Green Growth – Reshaping Growth Beyond GDP

Kate Raworth, a Senior Research Associate at Oxford University’s Environmental Change Institute, is the author of Doughnut Economics. She challenges our obsession with growth, and its outdated measures. The concept of Gross Domestic Product (GDP) was created in the 1930s and is being applied in the 21st century to an economy ten times larger. GDP’s limited scope (e.g., ignoring the value of unpaid labour like housework and parenting, or making no distinction between revenues from weapons or water) has kept us “financially, politically and socially addicted to growth” without integrating its costs on people and planet. She is pushing for new visual maps and metaphors to represent sustainable growth that doesn’t compromise future generations. What this means is moving away from the linear, upward-moving line of ‘progress’ ingrained in us all, to a “regenerative and distributive” model designed to engage everyone and shaped like … a doughnut (food and babies figure prominently in these women’s metaphors).

Carlota Perez doesn’t want to stop or slow growth, she wants to dematerialize it. “Green won’t spread by guilt and fear, we need aspiration and desire,” she says. Her push is towards a redefinition of the ‘good life’ and the need for “smart green growth” to be fuelled by a desire for new, attractive and aspirational lifestyles. Lives will be built on a circular economy that multiplies services and intangibles which offer limitless (and less environmentally harmful) growth. She points to every technological revolution creating new lifestyles. She says we can see it emerging, as it has in the past, among the educated, the wealthy and the young: more services rather than more things, active and creative work, a focus on health and care, a move to solar power, intense use of the internet, a preference for customisation over conformity, renting vs owning, and recycling over waste. As these new lifestyles become widespread, they offer immense opportunities for innovation and new jobs to service them.

Get Good Government – The Strategic Role of the State

All these economists want the state to play a major role. Women understand viscerally how reliant the underdogs of any system are on the inclusivity of the rules of the game. “It shapes the context to create a positive sum game” for both the public and business, says Perez. You need an active state to “tilt the playing field toward social good.” Perez outlines five technological revolutions, starting with the industrial one. She suggests we’re halfway through the fifth, the age of Tech & Information. Studying the repetitive arcs of each revolution enables us to see the opportunity of the extraordinary moment we are in. It’s the moment to shape the future for centuries to come. But she balances economic sustainability with the need for social sustainability, warning that one without the other is asking for trouble.

Mariana Mazzucato challenges governments to be more ambitious. They gain confidence and public trust by remembering and communicating what they are there to do. In her mind that is ensuring the public good. This takes vision and strategy, two ingredients she says are too often sorely lacking. Especially post-COVID, purpose needs to be the driver determining the ‘directionality’ of focus, investments and public/private partnerships. Governments should be using their power – both of investment and procurement – to orient efforts towards the big challenges on our horizon, not just the immediate short-term recovery. They should be putting conditions on the massive financial bailouts they are currently handing out. She points to the contrast in imagination and impact between airline bailouts in Austria and the UK. The Austrian airlines are getting government aid on the condition they meet agreed emissions targets. The UK is supporting airlines without any conditionality, a huge missed opportunity to move towards larger, broader goals of building a better and greener economy out of the crisis.

Get Real – Beyond the Formulae and Into the Field

All of these economists also argue for getting out of the theories and into the field. They reject the idea of nerdy theoretical calculations done within the confines of a university tower and challenge economists to experiment and test their formulae in the real world.

Esther Duflo, Professor of Poverty Alleviation and Development Economics at MIT, is the major proponent of bringing what is accepted practice in medicine to the field of economics: field trials with randomised control groups. She rails against the billions poured into aid without any actual understanding or measurement of the returns. She gently accuses us of being no better with our 21st century approaches to problems like immunisation, education or malaria than any medieval doctor, throwing money and solutions at things with no idea of their impact. She and her husband, Abhijit Banerjee, have pioneered randomised control trials across hundreds of locations in different countries of the world, winning a Nobel Prize for Economics in 2019 for the insights.

They test, for example, how to get people to use bed nets against malaria. Nets are a highly effective preventive measure but getting people to acquire and use them has been a hard nut to crack. Duflo set up experiments to answer the conundrums: If people have to pay for nets, will they value them more? If they are free, will they use them? If they get them free once, will this discourage future purchases? As it turns out, based on these comparisons, take-up is best if nets are initially given free: “people don’t get used to handouts, they get used to nets,” and they will buy them – and use them – once they understand their effectiveness. Hence, she concludes, we can target policy and money towards impact.
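
The logic of such a trial can be sketched in a few lines. The numbers below are entirely synthetic and illustrative, not Duflo’s data: villages are randomly assigned to a free-net or paid-net arm, and the difference in usage rates between the arms estimates the causal effect of giving nets away.

```python
# Toy sketch of a two-arm randomised controlled trial (synthetic data).
# Random assignment is what lets the simple difference in means be read
# as a causal effect rather than a correlation.
import random

random.seed(0)

villages = list(range(100))
random.shuffle(villages)                      # randomise assignment
free_arm, paid_arm = villages[:50], villages[50:]

# Hypothetical outcome model: free nets get used 60% of the time,
# purchased nets 45% (purely illustrative effect sizes).
def net_in_use(village, free):
    return random.random() < (0.60 if free else 0.45)

free_rate = sum(net_in_use(v, True) for v in free_arm) / len(free_arm)
paid_rate = sum(net_in_use(v, False) for v in paid_arm) / len(paid_arm)
effect = free_rate - paid_rate                # the trial's causal estimate

print(f"free arm: {free_rate:.2f}, paid arm: {paid_rate:.2f}, "
      f"estimated effect: {effect:+.2f}")
```

In a real trial the outcome would of course be measured in the field, not simulated, and the comparison would come with standard errors; the sketch only shows why randomisation makes the comparison meaningful.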

Mazzucato is also hands-on with a number of governments around the world, including Denmark, the UK, Austria, South Africa and even the Vatican, where she has just signed up for weekly calls contributing to a post-Covid policy. ‘I believe [her vision] can help to think about the future,’ Pope Francis said after reading her book, The Value of Everything: Making and Taking in the Global Economy. No one can accuse her of being stuck in an ivory tower. Like Duflo, she is elbow-deep in creating new answers to seemingly intractable problems.

She warns that we don’t want to go back to normal after Covid-19. Normal was what got us here. Instead, she invites governments to use the crisis to embed ‘directionality’ towards more equitable public good into their recovery strategies and investments. Her approach is to define ambitious ‘missions’ which can focus minds and bring together broad coalitions of stakeholders to create solutions to support them. The original NASA mission to the moon is an obvious precursor model. Why, anyone listening to her comes away thinking, did we forget purpose in our public spending? And why, when so much commercial innovation and profit has grown out of government basic research spending, don’t a greater share of the fruits of success return to promote the greater good?

Economics has long remained a stubbornly male domain and men continue to dominate mainstream thinking. Yet, over time, ideas once considered without value become increasingly visible. The move from outlandish to acceptable to policy is often accelerated by crisis. Emerging from this crisis, five smart economists are offering an innovative range of new ideas about a greener, healthier and more inclusive way forward. Oh, and they happen to be women.

Soon, satellites will be able to watch you everywhere all the time (MIT Technology Review)

Can privacy survive?

Christopher Beam

June 26, 2019


In 2013, police in Grants Pass, Oregon, got a tip that a man named Curtis W. Croft had been illegally growing marijuana in his backyard. So they checked Google Earth. Indeed, the four-month-old satellite image showed neat rows of plants growing on Croft’s property. The cops raided his place and seized 94 plants.

In 2018, Brazilian police in the state of Amapá used real-time satellite imagery to detect a spot where trees had been ripped out of the ground. When they showed up, they discovered that the site was being used to illegally produce charcoal, and arrested eight people in connection with the scheme.

Chinese government officials have denied or downplayed the existence of Uighur reeducation camps in Xinjiang province, portraying them as “vocational schools.” But human rights activists have used satellite imagery to show that many of the “schools” are surrounded by watchtowers and razor wire.

Every year, commercially available satellite images are becoming sharper and taken more frequently. In 2008, there were 150 Earth observation satellites in orbit; by now there are 768. Satellite companies don’t offer 24-hour real-time surveillance, but if the hype is to be believed, they’re getting close. Privacy advocates warn that innovation in satellite imagery is outpacing the US government’s (to say nothing of the rest of the world’s) ability to regulate the technology. Unless we impose stricter limits now, they say, one day everyone from ad companies to suspicious spouses to terrorist organizations will have access to tools previously reserved for government spy agencies. Which would mean that at any given moment, anyone could be watching anyone else.

The images keep getting clearer

Commercial satellite imagery is currently in a sweet spot: powerful enough to see a car, but not enough to tell the make and model; collected frequently enough for a farmer to keep tabs on crops’ health, but not so often that people could track the comings and goings of a neighbor. This anonymity is deliberate. US federal regulations limit images taken by commercial satellites to a resolution of 25 centimeters, or about the length of a man’s shoe. (Military spy satellites can capture far more granular images, although just how much more is classified.)

Ever since 2014, when the National Oceanic and Atmospheric Administration (NOAA) relaxed the limit from 50 to 25 cm, that resolution has been fine enough to satisfy most customers. Investors can predict oil supply from the shadows cast inside oil storage tanks. Farmers can monitor flooding to protect their crops. Human rights organizations have tracked the flows of refugees from Myanmar and Syria.

But satellite imagery is improving in a way that investors and businesses will inevitably want to exploit. The imaging company Planet Labs currently maintains 140 satellites, enough to pass over every place on Earth once a day. Maxar, formerly DigitalGlobe, which launched the first commercial Earth observation satellite in 1997, is building a constellation that will be able to revisit spots 15 times a day. BlackSky Global promises to revisit most major cities up to 70 times a day. That might not be enough to track an individual’s every move, but it would show what times of day someone’s car is typically in the driveway, for instance.

Some companies are even offering live video from space. As early as 2014, a Silicon Valley startup called SkyBox (later renamed Terra Bella and purchased by Google and then Planet) began touting HD video clips up to 90 seconds long. And a company called EarthNow says it will offer “continuous real-time” monitoring “with a delay as short as about one second,” though some think it is overstating its abilities. Everyone is trying to get closer to a “living map,” says Charlie Loyd of Mapbox, which creates custom maps for companies like Snapchat and the Weather Channel. But it won’t arrive tomorrow, or the next day: “We’re an extremely long way from high-res, full-time video of the Earth.”

Some of the most radical developments in Earth observation involve not traditional photography but rather radar sensing and hyperspectral images, which capture electromagnetic wavelengths outside the visible spectrum. Clouds can hide the ground in visible light, but satellites can penetrate them using synthetic aperture radar, which emits a signal that bounces off the sensed object and back to the satellite. It can determine the height of an object down to a millimeter. NASA has used synthetic aperture radar since the 1970s, but the fact that the US approved it for commercial use only last year is testament to its power—and political sensitivity. (In 1978, military officials supposedly blocked the release of radar satellite images that revealed the location of American nuclear submarines.)

Meanwhile, farmers can use hyperspectral sensing to tell where a crop is in its growth cycle, and geologists can use it to detect the texture of rock that might be favorable to excavation. But it could also be used, whether by military agencies or terrorists, to identify underground bunkers or nuclear materials. 

The resolution of commercially available imagery, too, is likely to improve further. NOAA’s 25-centimeter cap will come under pressure as competition from international satellite companies increases. And even if it doesn’t, there’s nothing to stop, say, a Chinese company from capturing and selling 10 cm images to American customers. “Other companies internationally are going to start providing higher-resolution imagery than we legally allow,” says Therese Jones, senior director of policy for the Satellite Industry Association. “Our companies would want to push the limit down as far as they possibly could.”

What will make the imagery even more powerful is the ability to process it in large quantities. Analytics companies like Orbital Insight and SpaceKnow feed visual data into algorithms designed to let anyone with an internet connection understand the pictures en masse. Investors use this analysis to, for example, estimate the true GDP of China’s Guangdong province on the basis of the light it emits at night. But burglars could also scan a city to determine which families are out of town most often and for how long.

Satellite and analytics companies say they’re careful to anonymize their data, scrubbing it of identifying characteristics. But even if satellites aren’t recognizing faces, those images combined with other data streams—GPS, security cameras, social-media posts—could pose a threat to privacy. “People’s movements, what kinds of shops do you go to, where do your kids go to school, what kind of religious institutions do you visit, what are your social patterns,” says Peter Martinez, of the Secure World Foundation. “All of these kinds of questions could in principle be interrogated, should someone be interested.”

Like all tools, satellite imagery is subject to misuse. Its apparent objectivity can lead to false conclusions, as when the George W. Bush administration used it to make the case that Saddam Hussein was stockpiling chemical weapons in Iraq. Attempts to protect privacy can also backfire: in 2018, a Russian mapping firm blurred out the sites of sensitive military operations in Turkey and Israel—inadvertently revealing their existence, and prompting web users to locate the sites on other open-source maps.

Capturing satellite imagery with good intentions can have unintended consequences too. In 2012, as conflict raged on the border between Sudan and South Sudan, the Harvard-based Satellite Sentinel Project released an image that showed a construction crew building a tank-capable road leading toward an area occupied by the Sudan People’s Liberation Army. The idea was to warn citizens about the approaching tanks so they could evacuate. But the SPLA saw the images too, and within 36 hours it attacked the road crew (which turned out to consist of Chinese civilians hired by the Sudanese government), killed some of them, and kidnapped the rest. As an activist, one’s instinct is often to release more information, says Nathaniel Raymond, a human rights expert who led the Sentinel project. But he’s learned that you have to take into account who else might be watching.

It’s expensive to watch you all the time

One thing that might save us from celestial scrutiny is the price. Some satellite entrepreneurs argue that there isn’t enough demand to pay for a constellation of satellites capable of round-the-clock monitoring at resolutions below 25 cm. “It becomes a question of economics,” says Walter Scott, founder of DigitalGlobe, now Maxar. While some companies are launching relatively cheap “nanosatellites” the size of toasters—the 120 Dove satellites launched by Planet, for example, are “orders of magnitude” cheaper than traditional satellites, according to a spokesperson—there’s a limit to how small they can get and still capture hyper-detailed images. “It is a fundamental fact of physics that aperture size determines the limit on the resolution you can get,” says Scott. “At a given altitude, you need a certain size telescope.” That is, in Maxar’s case, an aperture of about a meter across, mounted on a satellite the size of a small school bus. (While there are ways around this limit—interferometry, for example, uses multiple mirrors to simulate a much larger mirror—they’re complex and pricey.) Bigger satellites mean costlier launches, so companies would need a financial incentive to collect such granular data.
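
The physical limit Scott refers to is the Rayleigh diffraction criterion: achievable ground resolution scales with wavelength times altitude divided by aperture diameter. A back-of-envelope check, with illustrative values rather than Maxar specifications:

```python
# Rayleigh diffraction limit: the finest ground detail a telescope can
# resolve is roughly 1.22 * wavelength * altitude / aperture_diameter.
# The inputs below are illustrative assumptions, not any vendor's specs.

def ground_resolution(wavelength_m, altitude_m, aperture_m):
    """Diffraction-limited ground resolution in metres."""
    return 1.22 * wavelength_m * altitude_m / aperture_m

# Visible light (~550 nm) from a ~500 km orbit through a 1 m aperture:
res = ground_resolution(550e-9, 500e3, 1.0)
print(f"{res:.2f} m")  # → 0.34 m
```

A roughly one-metre aperture at a typical imaging altitude lands close to the 25 cm regulatory cap, which is why meaningfully sharper imagery demands the bigger mirrors, bigger satellites, and costlier launches the paragraph above describes.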

That said, there’s already demand for imagery with sub–25 cm resolution—and a supply of it. For example, some insurance underwriters need that level of detail to spot trees overhanging a roof, or to distinguish a skylight from a solar panel, and they can get it from airplanes and drones. But if the cost of satellite images came down far enough, insurance companies would presumably switch over.

Of course, drones can already collect better images than satellites ever will. But drones are limited in where they can go. In the US, the Federal Aviation Administration forbids flying commercial drones over groups of people, and you have to register a drone that weighs more than half a pound (227 grams) or so. There are no such restrictions in space. The Outer Space Treaty, signed in 1967 by the US, the Soviet Union, and dozens of UN member states, gives all states free access to space, and subsequent agreements on remote sensing have enshrined the principle of “open skies.” During the Cold War this made sense, as it allowed superpowers to monitor other countries to verify that they were sticking to arms agreements. But the treaty didn’t anticipate that it would one day be possible for anyone to get detailed images of almost any location.

And then there are the tracking devices we carry around in our pockets, a.k.a. smartphones. But while the GPS data from cell phones is a legitimate privacy threat, you can at least decide to leave your phone at home. It’s harder to hide from a satellite camera. “There’s some element of ground truth—no pun intended—that satellites have that maybe your cell phone or digital record or what happens on Twitter [doesn’t],” says Abraham Thomas, chief data officer at the analytics company Quandl. “The data itself tends to be innately more accurate.”

The future of human freedom

American privacy laws are vague when it comes to satellites. Courts have generally allowed aerial surveillance, though in 2015 the New Mexico Supreme Court ruled that an “aerial search” by police without a warrant was unconstitutional. Cases often come down to whether an act of surveillance violates someone’s “reasonable expectation of privacy.” A picture taken on a public sidewalk: fair game. A photo shot by a drone through someone’s bedroom window: probably not. A satellite orbiting hundreds of miles up, capturing video of a car pulling into the driveway? Unclear.

That doesn’t mean the US government is powerless. It has no jurisdiction over Chinese or Russian satellites, but it can regulate how American customers use foreign imagery. If US companies are profiting from it in a way that violates the privacy of US citizens, the government could step in.

Raymond argues that protecting ourselves will mean rethinking privacy itself. Current privacy laws, he says, focus on threats to the rights of individuals. But those protections “are anachronistic in the face of AI, geospatial technologies, and mobile technologies, which not only use group data, they run on group data as gas in the tank,” Raymond says. Regulating these technologies will mean conceiving of privacy as applying not just to individuals, but to groups as well. “You can be entirely ethical about personally identifiable information and still kill people,” he says.

Until we can all agree on data privacy norms, Raymond says, it will be hard to create lasting rules around satellite imagery. “We’re all trying to figure this out,” he says. “It’s not like anything’s riding on it except the future of human freedom.”

Christopher Beam is a writer based in Los Angeles.

The space issue

This story was part of our July 2019 issue

The grim future predicted by US intelligence agencies for the world in 2040 (BBC Brasil)

Gordon Corera

20 April 2021

CIA logo at its headquarters
Predictions include growing uncertainty and instability, and more polarization and populism

The US Intelligence Community (IC), a federation of 17 independent government agencies that carry out intelligence activities, has released a study on the state of the world in 2040.

And the future is grim: the study warns of political volatility and growing international competition, or even conflict.

The report, titled “Global Trends 2040 – A More Contested World”, is an attempt to analyze the key trends shaping the future, laying out a series of possible scenarios.

It is the seventh report of its kind, published every four years by the National Intelligence Council since 1997.

It is not relaxing reading for anyone who is a political leader or international diplomat – or hopes to become one in the coming years.

First, the report focuses on the key factors that will drive change.

One of them is political volatility.

“In many countries, people are pessimistic about the future and increasingly distrustful of leaders and institutions that they see as unable or unwilling to deal with disruptive economic, technological and demographic trends,” the report warns.

US and Chinese flags flying side by side
Tension between the US and China could divide the world, the report says

Vulnerable democracies

The study argues that people are gravitating toward like-minded groups and making greater and more varied demands of governments at a time when those same governments are increasingly constrained in what they can do.

“This mismatch between governments’ abilities and the public’s expectations is likely to widen and lead to more political volatility, including growing polarization and populism within political systems, waves of activism and protest movements and, in the most extreme cases, violence, internal conflict, or even state collapse,” the report says.

Unmet expectations, fueled by social media and technology, could create risks for democracy.

“Looking ahead, many democracies will likely be vulnerable to erosion and even collapse,” the text warns, adding that these pressures will also affect authoritarian regimes.

The pandemic, a ‘major global disruption’

The report states that the current pandemic is the “most significant, singular global disruption since World War II,” one that has fueled divisions, accelerated existing shifts and challenged assumptions, including about how well governments can cope.

A padlocked shop displays a sign reading ‘sorry, we are closed until further notice from the government, sorry for any inconvenience, see you soon’
Analysts predicted a ‘major pandemic of 2023’ but did not link it to covid

The previous report, from 2017, had foreseen the possibility of a “global pandemic in 2023” drastically reducing global travel in order to contain its spread.

The authors acknowledge, however, that they did not anticipate the emergence of covid-19, which they say has “shaken long-held assumptions about resilience and adaptation and created new uncertainties about the economy, governance, geopolitics, and technology.”

Climate and demographic change will also have a major impact on the world’s future, as will technology, which can be harmful but can also bring opportunities to those who use it effectively and first.

Geopolitical competition

Internationally, analysts expect the intensity of competition for global influence to reach its highest level since the Cold War over the next two decades, amid the continued weakening of the old order and with institutions such as the United Nations struggling.

Hands holding a sign reading ‘we the people means everyone’
People are gravitating toward like-minded groups and making greater and more varied demands of governments at a time when those same governments are increasingly constrained in what they can do, the report says

Non-governmental organizations, including religious groups and so-called “superstar tech companies,” may also be able to build networks that compete with – or even bypass – states.

The risk of conflict may rise, and preventing the use of new weapons will become harder.

Jihadist terrorism is likely to continue, but the report warns that far-right and far-left terrorists promoting causes such as racism, environmentalism and anti-government extremism could resurge in Europe, Latin America and North America.

Such groups could use artificial intelligence to become more dangerous, or use augmented reality to create “virtual terrorist training camps.”

Competition between the US and China lies at the heart of many of the differences between the scenarios – whether one of them becomes more successful, or the two compete on equal terms or split the world into separate spheres of influence.

A 2004 report also predicted a caliphate emerging from the Middle East, like the one the self-styled Islamic State tried to create over the past decade, although that same study – looking ahead to 2020 – failed to capture the competition with China that now dominates US security concerns.

The overall aim is to analyze possible futures rather than to get predictions right.

Stronger democracies or a ‘world adrift’?

There are some optimistic scenarios for 2040 – one of them is called “renaissance of democracies.”

It involves the US and its allies harnessing technology and economic growth to tackle domestic and international challenges, while crackdowns by China and Russia (including in Hong Kong) stifle innovation and strengthen the appeal of democracy.

But others are bleaker.

The “world adrift” scenario imagines market economies never recovering from the covid pandemic, becoming deeply divided internally and living in an international system that is “directionless, chaotic, and volatile,” as international rules and institutions are ignored by countries, companies and other groups.

One scenario, though, manages to combine pessimism with optimism.

“Tragedy and mobilization” envisions a world in the midst of a global catastrophe in the early 2030s, driven by climate change, famine and unrest – which in turn leads to a new global coalition, propelled in part by social movements, to tackle these problems.

Of course, none of these scenarios may come to pass, or – more likely – some combination of them, or something entirely new, may emerge. The goal, the authors say, is to prepare for a range of possible futures – even if many of them seem far from optimistic.

How big science failed to unlock the mysteries of the human brain (MIT Technology Review)

technologyreview.com

Large, expensive efforts to map the brain started a decade ago but have largely fallen short. It’s a good reminder of just how complex this organ is.

Emily Mullin

August 25, 2021


In September 2011, a group of neuroscientists and nanoscientists gathered at a picturesque estate in the English countryside for a symposium meant to bring their two fields together. 

At the meeting, Columbia University neurobiologist Rafael Yuste and Harvard geneticist George Church made a not-so-modest proposal: to map the activity of the entire human brain at the level of individual neurons and detail how those cells form circuits. That knowledge could be harnessed to treat brain disorders like Alzheimer’s, autism, schizophrenia, depression, and traumatic brain injury. And it would help answer one of the great questions of science: How does the brain bring about consciousness? 

Yuste, Church, and their colleagues drafted a proposal that would later be published in the journal Neuron. Their ambition was extreme: “a large-scale, international public effort, the Brain Activity Map Project, aimed at reconstructing the full record of neural activity across complete neural circuits.” Like the Human Genome Project a decade earlier, they wrote, the brain project would lead to “entirely new industries and commercial ventures.” 

New technologies would be needed to achieve that goal, and that’s where the nanoscientists came in. At the time, researchers could record activity from just a few hundred neurons at once—but with around 86 billion neurons in the human brain, it was akin to “watching a TV one pixel at a time,” Yuste recalled in 2017. The researchers proposed tools to measure “every spike from every neuron” in an attempt to understand how the firing of these neurons produced complex thoughts. 

The audacious proposal intrigued the Obama administration and laid the foundation for the multi-year Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, announced in April 2013. President Obama called it the “next great American project.” 

But it wasn’t the first audacious brain venture. In fact, a few years earlier, Henry Markram, a neuroscientist at the École Polytechnique Fédérale de Lausanne in Switzerland, had set an even loftier goal: to make a computer simulation of a living human brain. Markram wanted to build a fully digital, three-dimensional model at the resolution of the individual cell, tracing all of those cells’ many connections. “We can do it within 10 years,” he boasted during a 2009 TED talk.

In January 2013, a few months before the American project was announced, the EU awarded Markram $1.3 billion to build his brain model. The US and EU projects sparked similar large-scale research efforts in countries including Japan, Australia, Canada, China, South Korea, and Israel. A new era of neuroscience had begun. 

An impossible dream?

A decade later, the US project is winding down, and the EU project faces its deadline to build a digital brain. So how did it go? Have we begun to unwrap the secrets of the human brain? Or have we spent a decade and billions of dollars chasing a vision that remains as elusive as ever? 

From the beginning, both projects had critics.

EU scientists worried about the costs of the Markram scheme and thought it would squeeze out other neuroscience research. And even at the original 2011 meeting in which Yuste and Church presented their ambitious vision, many of their colleagues argued it simply wasn’t possible to map the complex firings of billions of human neurons. Others said it was feasible but would cost too much money and generate more data than researchers would know what to do with. 

In a blistering article appearing in Scientific American in 2013, Partha Mitra, a neuroscientist at the Cold Spring Harbor Laboratory, warned against the “irrational exuberance” behind the Brain Activity Map and questioned whether its overall goal was meaningful. 

Even if it were possible to record all spikes from all neurons at once, he argued, a brain doesn’t exist in isolation: in order to properly connect the dots, you’d need to simultaneously record external stimuli that the brain is exposed to, as well as the behavior of the organism. And he reasoned that we need to understand the brain at a macroscopic level before trying to decode what the firings of individual neurons mean.  

Others had concerns about the impact of centralizing control over these fields. Cornelia Bargmann, a neuroscientist at Rockefeller University, worried that it would crowd out research spearheaded by individual investigators. (Bargmann was soon tapped to co-lead the BRAIN Initiative’s working group.)

There isn’t a single, agreed-upon theory of how the brain works, and not everyone in the field agreed that building a simulated brain was the best way to study it.

While the US initiative sought input from scientists to guide its direction, the EU project was decidedly more top-down, with Markram at the helm. But as Noah Hutton documents in his 2020 film In Silico, Markram’s grand plans soon unraveled. As an undergraduate studying neuroscience, Hutton had been assigned to read Markram’s papers and was impressed by his proposal to simulate the human brain; when he started making documentary films, he decided to chronicle the effort. He soon realized, however, that the billion-dollar enterprise was characterized more by infighting and shifting goals than by breakthrough science.

In Silico shows Markram as a charismatic leader who needed to make bold claims about the future of neuroscience to attract the funding to carry out his particular vision. But the project was troubled from the outset by a major issue: there isn’t a single, agreed-upon theory of how the brain works, and not everyone in the field agreed that building a simulated brain was the best way to study it. It didn’t take long for those differences to arise in the EU project. 

In 2014, hundreds of experts across Europe penned a letter citing concerns about oversight, funding mechanisms, and transparency in the Human Brain Project. The scientists felt Markram’s aim was premature and too narrow and would exclude funding for researchers who sought other ways to study the brain. 

“What struck me was, if he was successful and turned it on and the simulated brain worked, what have you learned?” Terry Sejnowski, a computational neuroscientist at the Salk Institute who served on the advisory committee for the BRAIN Initiative, told me. “The simulation is just as complicated as the brain.” 

The Human Brain Project’s board of directors voted to change its organization and leadership in early 2015, replacing a three-member executive committee led by Markram with a 22-member governing board. Christoph Ebell, a Swiss entrepreneur with a background in science diplomacy, was appointed executive director. “When I took over, the project was at a crisis point,” he says. “People were openly wondering if the project was going to go forward.”

But a few years later he was out too, after a “strategic disagreement” with the project’s host institution. The project is now focused on providing a new computational research infrastructure to help neuroscientists store, process, and analyze large amounts of data—unsystematic data collection has been an issue for the field—and develop 3D brain atlases and software for creating simulations.

The US BRAIN Initiative, meanwhile, underwent its own changes. Early on, in 2014, responding to the concerns of scientists and acknowledging the limits of what was possible, it evolved into something more pragmatic, focusing on developing technologies to probe the brain. 

New day

Those changes have finally started to produce results—even if they weren’t the ones that the founders of each of the large brain projects had originally envisaged. 

Last year, the Human Brain Project released a 3D digital map that integrates different aspects of human brain organization at the millimeter and micrometer level. It’s essentially a Google Earth for the brain. 

And earlier this year Alipasha Vaziri, a neuroscientist funded by the BRAIN Initiative, and his team at Rockefeller University reported in a preprint paper that they’d simultaneously recorded the activity of more than a million neurons across the mouse cortex. It’s the largest recording of animal cortical activity yet made, if far from listening to all 86 billion neurons in the human brain as the original Brain Activity Map hoped.

The US effort has also shown some progress in its attempt to build new tools to study the brain. It has sped up the development of optogenetics, an approach that uses light to control neurons, and its funding has led to new high-density silicon electrodes capable of recording from hundreds of neurons simultaneously. And it has arguably accelerated the development of single-cell sequencing. In September, researchers using these advances will publish a detailed classification of cell types in the mouse and human motor cortexes—the biggest single output from the BRAIN Initiative to date.

While these are all important steps forward, though, they’re far from the initial grand ambitions. 

Lasting legacy

We are now heading into the last phase of these projects—the EU effort will conclude in 2023, while the US initiative is expected to have funding through 2026. What happens in these next years will determine just how much impact they’ll have on the field of neuroscience.

When I asked Ebell what he sees as the biggest accomplishment of the Human Brain Project, he didn’t name any one scientific achievement. Instead, he pointed to EBRAINS, a platform launched in April of this year to help neuroscientists work with neurological data, perform modeling, and simulate brain function. It offers researchers a wide range of data and connects many of the most advanced European lab facilities, supercomputing centers, clinics, and technology hubs in one system. 

“If you ask me ‘Are you happy with how it turned out?’ I would say yes,” Ebell said. “Has it led to the breakthroughs that some have expected in terms of gaining a completely new understanding of the brain? Perhaps not.” 

Katrin Amunts, a neuroscientist at the University of Düsseldorf, who has been the Human Brain Project’s scientific research director since 2016, says that while Markram’s dream of simulating the human brain hasn’t been realized yet, it is getting closer. “We will use the last three years to make such simulations happen,” she says. But it won’t be a big, single model—instead, several simulation approaches will be needed to understand the brain in all its complexity. 

Meanwhile, the BRAIN Initiative has provided more than 900 grants to researchers so far, totaling around $2 billion. The National Institutes of Health is projected to spend nearly $6 billion on the project by the time it concludes. 

For the final phase of the BRAIN Initiative, scientists will attempt to understand how brain circuits work by diagramming connected neurons. But claims for what can be achieved are far more restrained than in the project’s early days. The researchers now realize that understanding the brain will be an ongoing task—it’s not something that can be finalized by a project’s deadline, even if that project meets its specific goals.

“With a brand-new tool or a fabulous new microscope, you know when you’ve got it. If you’re talking about understanding how a piece of the brain works or how the brain actually does a task, it’s much more difficult to know what success is,” says Eve Marder, a neuroscientist at Brandeis University. “And success for one person would be just the beginning of the story for another person.” 

Yuste and his colleagues were right that new tools and techniques would be needed to study the brain in a more meaningful way. Now, scientists will have to figure out how to use them. But instead of answering the question of consciousness, developing these methods has, if anything, only opened up more questions about the brain—and shown just how complex it is. 

“I have to be honest,” says Yuste. “We had higher hopes.”

Emily Mullin is a freelance journalist based in Pittsburgh who focuses on biotechnology.

Pew’s new global survey of climate change attitudes finds promising trends but deep divides (The Conversation)

theconversation.com

September 14, 2021 10.00am EDT

By Kate T. Luong (Postdoctoral Research Fellow, George Mason University), Ed Maibach (Director of Center for Climate Communication, George Mason University), and John Kotcher (Assistant Professor of Communications, George Mason University)


People’s views about climate change, from how worried they are about it affecting them to how willing they are to do something about it, have shifted in developed countries around the world in recent years, a new survey by the Pew Research Center finds.

The study polled more than 16,000 adults in 17 countries considered to be advanced economies. Many of these countries have been large contributors to climate change and will be expected to lead the way in fixing it.

In general, the survey found that a majority of people are concerned about global climate change and are willing to make lifestyle changes to reduce its effects.

However, underneath this broad pattern lie more complicated trends, such as doubt that the international community can effectively reduce climate change and deep ideological divides that can hinder the transition to cleaner energy and a climate-friendly world. The survey also reveals an important disconnect between people’s attitudes and the enormity of the challenge climate change poses.

Here’s what stood out to us as professionals who study the public’s response to climate change.

Strong concern and willingness to take action

In all the countries surveyed in early 2021 except Sweden, between 60% and 90% of the citizens reported feeling somewhat or very concerned about the harm they would personally face from climate change. While there was a clear increase in concern in several countries between 2015, when Pew conducted the same survey, and 2021, this number did not change significantly in the U.S.

[Chart: responses to the question of how concerned people are about climate change harming them personally. CC BY-ND]

Similarly, in all countries except Japan, at least 7 out of 10 people said they are willing to make some or a lot of changes in how they live and work to help address global climate change.

Across most countries, young people were much more likely than older generations to report higher levels of both concern about climate change and willingness to change their behaviors.

Perceptions about government responses

Clearly, on a global level, people are highly concerned about this existential threat and are willing to change their everyday behaviors to mitigate its impacts. However, focusing on changing individual behaviors alone will not stop global warming.

In the U.S., for example, about 74% of greenhouse gas emissions are from fossil fuel combustion. People can switch to driving electric vehicles or taking electric buses and trains, but those still need power. To pressure utilities to shift to renewable energy requires policy-level changes, both domestically and internationally.

When we look at people’s attitudes regarding how their own country is handling climate change and how effective international actions would be, the results painted a more complex picture.

On average, most people evaluated their own government’s handling of climate change as “somewhat good,” with the highest approval numbers in Sweden, the United Kingdom, Singapore and New Zealand. However, data shows that such positive evaluations are not actually warranted. The 2020 U.N. Emissions Gap Report found that greenhouse gas emissions have continued to rise. Many countries, including the U.S., are projected to miss their target commitments to reduce emissions by 2030; and even if all countries achieve their targets, annual emissions need to be reduced much further to reach the goals set by the Paris climate agreement.

When it comes to confidence in international actions to address climate change, the survey respondents were more skeptical overall. Although the majority of people in Germany, the Netherlands, South Korea and Singapore felt confident that the international community can significantly reduce climate change, most respondents in the rest of the countries surveyed did not. France and Sweden had the lowest levels of confidence with more than 6 in 10 people being unconvinced.

Together, these results suggest that people generally believe climate change to be a problem that can be solved by individual people and governments. Most people say they are willing to change their lifestyles, but they may not have an accurate perception of the scale of actions needed to effectively address global climate change. Overall, people may be overly optimistic about their own country’s capability and commitment to reduce emissions and fight climate change, and at the same time, underestimate the value and effectiveness of international actions.

These perceptions may reflect the fact that the conversation surrounding climate change so far has been dominated by calls to change individual behaviors instead of emphasizing the necessity of collective and policy-level actions. Addressing these gaps is an important goal for people who are working in climate communication and trying to increase public support for stronger domestic policies and international collaborations.

Deep ideological divide in climate attitudes

As with most surveys about climate change attitudes, the new Pew report reveals a deep ideological divide in several countries.

Perhaps not surprisingly, the U.S. leads in ideological differences for all but one question. In the U.S., 87% of liberals are somewhat or very concerned about the personal harms from climate change, compared to only 28% of conservatives – a stark 59-point difference. This difference persists for willingness to change one’s lifestyle (49-point difference), evaluation of government’s handling of climate change (41-point difference), and perceived economic impacts of international actions (41-point difference).

And the U.S. is not alone; large ideological differences were also found in Canada, Australia and the Netherlands. In fact, only Australians were more divided than Americans on how their government is handling the climate crisis.

This ideological divide is not new, but the size of the gap between people on the two ends of the ideological spectrum is astounding. The differences lie not only in how to handle the issue or who should be responsible but also in the scope and severity of climate change in the first place. Such massive, entrenched differences in public understanding and acceptance of the scientific facts regarding climate change will present significant challenges in enacting much-needed policy changes.

Better understanding of the cultural, political and media dynamics that shape those differences might reveal helpful insights that could ease the path toward progress in slowing climate change.

Metaverse could be the new Internet and has become a priority for Big Tech (MIT Technology Review)

mittechreview.com.br

by Guilherme Ravache

September 10, 2021


In May, I argued here in MIT Technology Review Brasil that “Brazil has a chance to lead the race for the metaverse.” In just three months much has happened: the metaverse has become an increasingly common term in the media and, above all, a new strategy for tech giants. The term was mentioned by CEOs in several recent second-quarter earnings calls. Mark Zuckerberg of Facebook, Satya Nadella of Microsoft, David Baszucki of Roblox, and Shar Dubey of Match Group all said the metaverse would shape their companies’ strategies.

From Silicon Valley to Shenzhen, tech companies are raising their bets on the sector. For the uninitiated, “the metaverse is the term used to describe a kind of virtual world that attempts to replicate reality through digital devices. It is a shared, collective virtual space made up of the sum of ‘virtual reality,’ ‘augmented reality,’ and the ‘Internet,’” as the term’s Wikipedia page puts it. The expression was coined by the writer Neal Stephenson in his 1992 novel “Snow Crash.” Later, Ernest Cline used the same concept to create the Oasis in his novel “Ready Player One,” which became a Steven Spielberg film.

Mark Zuckerberg, fundador e CEO do Facebook, parece ter se tornado o mais recente convertido ao metaverso. O executivo deu uma série de entrevistas recentemente afirmando que o Facebook vai apostar o seu futuro no metaverso. “Nós vamos realizar uma transição de ser empresa vista primariamente como de redes sociais para sermos uma empresa de mertaverso”, disse Zuckerberg.

Em julho, o Facebook disse que estava criando uma equipe de produto para trabalhar no metaverso que faria parte de seu grupo de AR e VR, no Facebook Reality Labs. Dias atrás tivemos uma demonstração do que está por vir. O Facebook convidou um grupo de jornalistas para conhecer seu Horizon Workrooms. O app é a primeira tentativa da rede social de criar uma experiência de Realidade Virtual especificamente para as pessoas trabalharem juntas.

According to the journalist Alex Heath, who took part in the demonstration, up to 16 people in VR can share a workroom, while another 34 can join via video call without a headset. A companion desktop app lets you livestream your computer screen onto your virtual desk space. Thanks to hand tracking and front-facing cameras, a virtual representation of your physical keyboard sits below the screen for typing in a simple web app Facebook built for taking notes and managing calendars. In other words, you enter a virtual world to hold meetings with your colleagues.

Facebook is unlikely to lead the metaverse

Zuckerberg has been talking about virtual reality for years. Back in 2014, when Facebook bought Oculus for US$2 billion, he enthused that the purchase would enable immersive virtual experiences in which you would feel "present in another place with other people." In a sense, the metaverse is a sequel to plans Facebook set in motion nearly a decade ago.

Facebook is a giant player to be reckoned with, but my bet is that it will not win the metaverse race. Just as IBM did not become the leader in personal computers or in the cloud, just as Google never managed to build a solid presence in social networks or instant messaging, and just as neither Microsoft nor, much less, Nokia became leaders in smartphones, Facebook, for all its enthusiasm, is unlikely to lead this race.

Basically, because even when they have the will and the resources, incumbent leaders usually lack the culture to operate in these new markets. And I am not saying Facebook will be an irrelevant player, far from it. The billions of dollars the company has already invested in developing the Oculus Quest, and all the hardware technology created for virtual reality (and, by extension, the metaverse), are impressive and have produced undeniable advances.

"The metaverse, a technologist's dream, is Facebook's nightmare. It would render the social network irrelevant," said Scott Galloway, a professor of marketing. "Facebook's most valuable asset is its social graph, its dataset of users, links between users, and their shared content. In a future metaverse, we will all have metaverse identities, and anyone can open a virtual space to share photos of their kid's 10th birthday party or argue about vaccines," he concludes.

Who has potential in the metaverse?

From a Western standpoint, I would put my chips on Roblox and Epic Games as the new leaders of the metaverse at large. In enterprise applications, the advantage goes to Microsoft.

From a hardware/software perspective, Nvidia and Apple have an edge because they already design their own chips (Facebook buys off-the-shelf chips from Qualcomm). A vast library of AI chips, and the software needed to run them, are also essential pieces of the metaverse.

On the other side of the world, Tencent, ByteDance, and Sea are robust competitors, but the first two face growing Chinese regulation, and the third is focused on building a competitive e-commerce business in Asia.

Microsoft's big advantage lies not only in its gigantic community of developers building corporate solutions and its robust presence in the enterprise world. Microsoft is also bringing cloud gaming to its Xbox consoles. Soon, Xbox Game Pass Ultimate subscribers on Xbox Series X/S and Xbox One consoles will be able to stream more than 100 games without downloading them. According to Microsoft, the service will run at 1080p and 60 frames per second. Xbox Cloud Gaming became available on mobile devices and PC in June 2021. Microsoft also announced this week that the next chapter of the popular Halo series, Halo Infinite, will launch on December 8, 2021.

The power of community

Microsoft has been developing mixed-reality hardware for corporate applications for years; its HoloLens is one of the most widely used on the market. Mixed reality, or hybrid reality, is the technology that combines features of virtual reality and augmented reality: it inserts virtual objects into the real world and lets the user interact with them, producing new environments in which physical and virtual items coexist and interact in real time.

Last year, Nvidia launched its Omniverse platform "to connect 3D worlds into a shared virtual universe." CEO Jensen Huang used the company's biggest annual conference, in October, to publicly credit Stephenson's "Snow Crash" as the original inspiration for the concept of a virtual-reality successor to the Internet, declaring that "the metaverse is coming."

But what will define the winners of the metaverse is not just money, the will to lead this movement, or a company's intellectual property. What will create the winners is the capacity to engage communities, whether of people congregating in the metaverse or of developers building the experiences of that digital environment.

Games, Netflix, and where we spend our time

Games are an essential part of the metaverse, but the metaverse will not be limited to games. They are merely the gateway, a first step in that direction. Reed Hastings, CEO of Netflix, has said that Netflix "competes with (and loses to) Fortnite more than HBO." Recently, Netflix even announced that from 2022 it will enter the games segment, offering games in its app.

As the essayist Matthew Ball points out, the games market is huge and growing fast, but that is not the only reason for Netflix's move into games. "Though it's common to hear that 'gaming is now four times the size of the global box office,' the box office is less than 1/15 of total video revenue globally. In other words, games will probably gross around US$180 billion in 2021, while video will exceed US$650 billion," says Ball. That is, in the war for consumer attention, video games and the metaverse have enormous potential, and gaming revenue shows this is still a fairly nascent market compared with video as a whole.
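Ball's back-of-envelope arithmetic can be checked directly. A minimal sketch, using only the two figures he gives (US$180 billion for games, US$650 billion as a lower bound for video); the implied box-office ceiling is derived, not a reported number:

```python
# Sanity-checking Matthew Ball's figures (US$ billions, his 2021 estimates).
games = 180.0   # estimated global games revenue
video = 650.0   # lower bound on total global video revenue ("will exceed")

box_office_ceiling = video / 15          # box office is *less than* 1/15 of video
games_multiple = games / box_office_ceiling
games_share_of_video = games / video

print(round(box_office_ceiling, 1))      # ~43.3: implied upper bound on box office
print(round(games_multiple, 2))          # >4: consistent with "games are 4x the box office"
print(round(games_share_of_video, 2))    # ~0.28: games are barely a quarter of video
```

Both of Ball's claims hold together: even at the box-office ceiling, games are more than four times its size, while still grossing only about 28% of what video does overall.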

It is worth remembering that in 2021 alone Netflix is expected to invest US$19 billion in original content. Even so, the company has been losing subscribers in the United States and Canada. The arrival of HBO Max, Paramount+, and several new competitors helps explain the decline, but games are also a factor to consider. At the end of the day, Netflix is in the business of selling entertainment, and moving closer to the games industry is not a bad idea.

Our children, our future

But just like Facebook, while Netflix has plenty of money and the will (and need) to hold our attention, it lacks the developer-community element required for a meaningful entry into the metaverse. Looking at Roblox makes it easier to understand how this element applies.

Roblox is much more than a game; it is a platform where anyone can create a game (or experience). Today more than 8 million developers are building such experiences. There are over 20 million of them, ranging from adopting a pet in Adopt Me! to learning history on a virtual visit to the Colosseum.

Since 2008, when the platform launched, users have spent more than 30.6 billion hours engaged in it. In the second quarter, Roblox's revenue grew 127% over the second quarter of 2020, to US$454.1 million. Average daily active users (DAUs) reached 43.2 million, up 29% year over year.

Note the irony: while Facebook and Netflix have stagnated in user growth, Roblox keeps expanding its base even as the pandemic's easing reduces social isolation and lets many people return to their activities.

But the big numbers posted by Roblox and by Epic Games (owner of Fortnite, which is privately held and does not disclose figures the way Roblox does) are probably the least interesting aspect of what they offer.

The metaverse is the new third place

As I have written here in MIT Tech Review when discussing the impact of games on e-commerce, the growth of electronic games is directly linked to their transformation into a "third place." The term was coined by the sociologist Ray Oldenburg and refers to places where people spend time between home (the "first" place) and work (the "second" place): spaces where people exchange ideas, have fun, and build relationships. Churches, cafés, and parks are examples of third places. Having a third place to socialize outside home and work is crucial to well-being, since it brings a sense of connection and belonging. And video games are increasingly a third place. Historically, community activity and development happened offline, but thanks to advances in technology, video games have become social.

Not by chance, concerts and events inside Roblox and Fortnite are increasingly frequent (Travis Scott drew a massive crowd, and the director Christopher Nolan held a premiere). Brands have invested heavily to enter this universe. With an eye on the 43 million users who access Roblox daily, Netflix announced in July a new virtual hangout based on the series Stranger Things. More recently, Roblox announced the launch of Vans World, an interactive skateboarding metaverse from the Vans brand inside the gaming world. Inspired by brand locations such as the House of Vans and other skate destinations, Vans World is a continuous 3D space where fans can practice tricks with others and try out the brand's gear.

"Roblox is the new social hangout, much like the local mall of the 1980s where teenagers used to gather," says Christina Wootton, Roblox's vice president of brand partnerships. "The virtual Starcourt Mall is a similar setting reimagined inside Roblox, one that opens unique possibilities for engaging and growing the show's global audience."

It is worth watching the February presentation by David Baszucki, Roblox's CEO, in which the executive details the company's growth strategy and its potential to create experiences, including educational and commercial ones, with a growing community.

Brazil can be a protagonist in the metaverse

From time to time the stars align in a way that can benefit a market, and Brazil may now be facing such an opportunity. In China, the government is creating an ever more hostile environment for companies and developers. In the United States there is money and user scale, but engagement and labor are lacking; it is not easy to bet on the metaverse in a country where jobs outnumber candidates. In Europe there are developers, especially in Eastern Europe, but fragmentation is enormous.

Enrico Machado, a Brazilian Roblox developer, exemplifies the potential of thousands of users raised on the platform since childhood. He started playing Roblox at 11; by 15 he was a developer. Today, an information systems undergraduate, he works at a large Brazilian studio developing games exclusively for Roblox.

"Roblox is very popular. It runs on microtransactions: you can buy things in the games people create, and the developers make money from that. Today a lot of people are making absurd amounts of money. It's like the soccer market, where a few guys sit at the top of the pyramid: for every Neymar there are millions of people who would like to be Neymar. The relationship is similar. But anyone can earn a reasonable amount," Machado says.

He insists it is not very hard to earn reasonable money on the platform.

"There are lots of consumers wanting to play. So if you understand the basics of community, of design, of games, of programming, you can start from zero and, in a short time, begin earning some money if you focus on it."

Machado works at a studio with dozens of other developers. "At the studio we hold meetings and so on to apply best practices across all the games. I'm learning a lot from them. I know how to program, I can make a nice little game, but I don't know anything about game design. I don't know how to make a hit game. You know best practices exist, but with a bigger group it gets easier. Knowing those practices is as important as knowing how to code," he says.

Millions of developers coming together

His is not an isolated case. Like Machado, thousands of young people in Brazil work in large studios developing for Roblox. And unlike other languages, the one Roblox uses is accessible and easy to learn. Nor is it essential to have a super-powerful computer or an ultra-fast connection.

Not by chance, Brazil is already the world's fifth-largest games market, with one of the planet's biggest user communities, a growing streaming market, and electronic-games icons such as Nobru.

Wildlife, a Brazilian unicorn valued at more than US$1.3 billion, already has more than 800 employees in countries including Brazil, the United States, Argentina, and Ireland. Founded in 2011, the company has more than 60 mobile games.

The metaverse needs technology and software, but the decisive factor is an engaged community of developers and users. For these reasons, Roblox and Fortnite are out in front. Brazil, for its part, has all the elements to be a global leader in this sector. But nothing guarantees that will happen. Montreal, in Canada, offers clues about how to accelerate the process by creating incentives to attract and bring together companies, developers, and investment. But that is a subject for the next column.

The metaverse should become the next Internet, and many of today's giants will lose influence. But just as the Internet created a new industry, with new jobs and new billionaires, the metaverse will repeat that history, possibly on an even larger scale. It is ironic that Stephenson told Vanity Fair in 2017 that when he wrote "Snow Crash" and created the metaverse, he was "just making shit up." Decades later, CEOs take that "invention" ever more seriously.


This article was produced by Guilherme Ravache, a journalist, digital consultant, and columnist for MIT Technology Review Brasil.

Is everything in the world a little bit conscious? (MIT Technology Review)


Christof Koch – August 25, 2021

The idea that consciousness is widespread is attractive to many for intellectual and, perhaps, also emotional reasons. But can it be tested? Surprisingly, perhaps it can.

Panpsychism is the belief that consciousness is found throughout the universe—not only in people and animals, but also in trees, plants, and bacteria. Panpsychists hold that some aspect of mind is present even in elementary particles. The idea that consciousness is widespread is attractive to many for intellectual and, perhaps, also emotional reasons. But can it be empirically tested? Surprisingly, perhaps it can. That’s because one of the most popular scientific theories of consciousness, integrated information theory (IIT), shares many—though not all—features of panpsychism.

As the American philosopher Thomas Nagel has argued, something is conscious if there is “something that it is like to be” that thing in the state that it is in. A human brain in a state of wakefulness feels like something specific. 

IIT specifies a unique number, a system’s integrated information, labeled by the Greek letter φ (pronounced phi). If φ is zero, the system does not feel like anything; indeed, the system does not exist as a whole, as it is fully reducible to its constituent components. The larger φ, the more conscious a system is, and the more irreducible. Given an accurate and complete description of a system, IIT predicts both the quantity and the quality of its experience (if any). IIT predicts that because of the structure of the human brain, people have high values of φ, while animals have smaller (but positive) values and classical digital computers have almost none.

A person’s value of φ is not constant. It increases during early childhood with the development of the self and may decrease with onset of dementia and other cognitive impairments. φ will fluctuate during sleep, growing larger during dreams and smaller in deep, dreamless states. 

IIT starts by identifying five true and essential properties of any and every conceivable conscious experience. For example, experiences are definite (exclusion). This means that an experience is not less than it is (experiencing only the sensation of the color blue but not the moving ocean that brought the color to mind), nor is it more than it is (say, experiencing the ocean while also being aware of the canopy of trees behind one’s back). In a second step, IIT derives five associated physical properties that any system—brain, computer, pine tree, sand dune—has to exhibit in order to feel like something. A “mechanism” in IIT is anything that has a causal role in a system; this could be a logical gate in a computer or a neuron in the brain. IIT says that consciousness arises only in systems of mechanisms that have a particular structure. To simplify somewhat, that structure must be maximally integrated—not accurately describable by breaking it into its constituent parts. It must also have cause-and-effect power upon itself, which is to say the current state of a given mechanism must constrain the future states of not only that particular mechanism, but the system as a whole. 

Given a precise physical description of a system, the theory provides a way to calculate the φ of that system. The technical details of how this is done are complicated, but the upshot is that one can, in principle, objectively measure the φ of a system so long as one has such a precise description of it. (We can compute the φ of computers because, having built them, we understand them precisely. Computing the φ of a human brain is still an estimate.)

Debating the nature of consciousness might at first sound like an academic exercise, but it has real and important consequences.

Systems can be evaluated at different levels—one could measure the φ of a sugar-cube-size piece of my brain, or of my brain as a whole, or of me and you together. Similarly, one could measure the φ of a silicon atom, of a particular circuit on a microchip, or of an assemblage of microchips that make up a supercomputer. Consciousness, according to the theory, exists for systems for which φ is at a maximum. It exists for all such systems, and only for such systems. 

The φ of my brain is bigger than the φ values of any of its parts, however one sets out to subdivide it. So I am conscious. But the φ of me and you together is less than my φ or your φ, so we are not “jointly” conscious. If, however, a future technology could create a dense communication hub between my brain and your brain, then such brain-bridging would create a single mind, distributed across four cortical hemispheres. 
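IIT's exclusion rule described above — consciousness exists only at the level where φ is maximal among overlapping candidates — can be illustrated with a toy sketch. The φ values and system names below are invented for illustration; real φ comes from the theory's computationally heavy calculus, not from this code:

```python
# Toy illustration of IIT's exclusion principle: among overlapping candidate
# systems, only one with maximal integrated information (phi) is conscious.
# Phi values here are made up, not computed from the theory.

def conscious_systems(phi):
    """Return candidates whose phi exceeds that of every candidate
    sharing parts with them (a crude stand-in for exclusion)."""
    winners = []
    for name, (parts, value) in phi.items():
        rivals = [v for n, (p, v) in phi.items()
                  if n != name and parts & p]  # overlapping candidates
        if all(value > v for v in rivals):
            winners.append(name)
    return winners

# Hypothetical candidates: my brain, a chunk of it, your brain, and
# "me and you together" (whose phi is lower than either brain alone).
phi = {
    "my brain":       ({"A", "B", "C"}, 10.0),
    "chunk of brain": ({"A"}, 2.0),
    "your brain":     ({"D", "E"}, 9.0),
    "me and you":     ({"A", "B", "C", "D", "E"}, 1.5),
}
print(conscious_systems(phi))  # ['my brain', 'your brain']
```

The sketch reproduces the essay's two cases: each brain beats both its own parts and the "joint" system, so two separate minds exist, and neither the brain chunk nor "me and you" is conscious.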

Conversely, the φ of a supercomputer is less than the φs of any of the circuits composing it, so a supercomputer—however large and powerful—is not conscious. The theory predicts that even if some deep-learning system could pass the Turing test, it would be a so-called “zombie”—simulating consciousness, but not actually conscious. 

Like panpsychism, then, IIT considers consciousness an intrinsic, fundamental property of reality that is graded and most likely widespread in the tree of life, since any system with a non-zero amount of integrated information will feel like something. This does not imply that a bee feels obese or makes weekend plans. But a bee can feel a measure of happiness when returning pollen-laden in the sun to its hive. When a bee dies, it ceases to experience anything. Likewise, given the vast complexity of even a single cell, with millions of proteins interacting, it may feel a teeny-tiny bit like something. 

Debating the nature of consciousness might at first sound like an academic exercise, but it has real and important consequences. Most obviously, it matters to how we think about people in vegetative states. Such patients may groan or otherwise move unprovoked but fail to respond to commands to signal in a purposeful manner by moving their eyes or nodding. Are they conscious minds, trapped in their damaged body, able to perceive but unable to respond? Or are they without consciousness?

Evaluating such patients for the presence of consciousness is tricky. IIT proponents have developed a procedure that can test for consciousness in an unresponsive person. First they set up a network of EEG electrodes that can measure electrical activity in the brain. Then they stimulate the brain with a gentle magnetic pulse, and record the echoes of that pulse. They can then calculate a mathematical measure of the complexity of those echoes, called a perturbational complexity index (PCI).

In healthy, conscious individuals—or in people who have brain damage but are clearly conscious—the PCI is always above a particular threshold. Conversely, when healthy people are asleep, their PCI invariably falls below that threshold (0.31). So it is reasonable to take PCI as a proxy for the presence of a conscious mind. If the PCI of someone in a persistent vegetative state is always measured to be below this threshold, we can say with confidence that this person is not covertly conscious.
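As a decision rule, the proxy is a simple threshold test. A minimal sketch: the 0.31 cutoff is the one reported in the text, but the readings below are hypothetical illustrations, not clinical data, and the real PCI is derived from EEG responses to the magnetic pulses described above:

```python
# Minimal sketch of PCI as a proxy for consciousness: a single threshold test.
# The 0.31 cutoff is from the studies described in the text; the readings
# are hypothetical illustrations.

PCI_THRESHOLD = 0.31

def likely_conscious(pci: float) -> bool:
    """Proxy rule: PCI above the empirical threshold suggests a conscious mind."""
    return pci > PCI_THRESHOLD

readings = {"awake, healthy": 0.45, "asleep": 0.20, "unresponsive patient": 0.38}
for state, pci in readings.items():
    # prints True for "awake, healthy" and "unresponsive patient", False for "asleep"
    print(state, likely_conscious(pci))
```

A hypothetical unresponsive patient scoring 0.38 would, on this rule, be flagged as covertly conscious despite showing no behavioral response.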

This method is being investigated in a number of clinical centers across the US and Europe. Other tests seek to validate the predictions that IIT makes about the location and timing of the footprints of sensory consciousness in the brains of humans, nonhuman primates, and mice. 

Unlike panpsychism, the startling claims of IIT can be empirically tested. If they hold up, science may have found a way to cut through a knot that has puzzled philosophers for as long as philosophy has existed.

Christof Koch is the chief scientist of the MindScope program at the Allen Institute for Brain Science in Seattle.

The Mind issue

This story was part of our September 2021 issue

The remarkable timeliness of Animism (Outras Palavras)

It was seen by old-school anthropology as the "most primitive form" of religion. But, surprise: it suggests answers to crucial questions of today, such as the divorce between culture and nature and science's tendency to treat everything that is not "human" as an object.

Published September 2, 2021, at 5:46 pm – Updated September 2, 2021, at 6:00 pm

By Renato Sztutman, in Revista Cult, an editorial partner of Outras Palavras

There has been much talk these days about animism—and, more than that, about the need to reclaim animism as a way of responding to modernity's rationalist project, which turned the environment into something inert and opaque, a synonym for resource and commodity. In pandemic times, we find that something very important has been lost in the relation between human subjects and the world they inhabit, and that this loss lies at the root of the deep crisis we are living through.

Animism is, in principle, an anthropological concept, proposed by Edward Tylor in Primitive Culture (1871) to refer to the most "primitive" form of religion: the one that attributes a "soul" to all inhabitants of the cosmos and that would have preceded polytheism and monotheism. The word "soul" comes from the Latin anima—breath, vital principle. It would be the very cause of life, as well as something capable of detaching itself from the body and traveling to other planes and times. The evolutionist reasoning of authors like Tylor was refuted by different currents of anthropology over the course of the 20th century, though we might say it remains embedded in modern common sense. The idea of an embryonic religion, founded on beliefs devoid of logic, lost its place in anthropologists' discourse, as they began to look for the rationalities behind different magical-religious practices.

An important rehabilitation of the anthropological concept of animism comes with Philippe Descola, in his monograph La nature domestique (1986), on the Achuar of the Ecuadorian Amazon. Descola showed that when the Achuar say animals and plants have wakan ("soul" or, more precisely, intentionality, a faculty of communication or intelligence), this should not be read metaphorically or as symbolism. It means the Achuar's way of describing the world differs from that of naturalists (grounded in the dictates of modern science), because it does not presuppose an impassable line between what we usually call Nature and Culture. Animism would be not mere belief, symbolic representation, or a primitive form of religion but, above all, an ontology—a way of describing everything that exists—tied to practices. The Achuar engage in real relations with other species, so that, for example, women are regarded as the mothers of the plants they cultivate, and men as the brothers-in-law of the animals they hunt.

For Descola, the naturalist ontology cannot be taken as the only way of describing the world, as the ultimate source of truth. Three other ontological regimes should be considered symmetrically, animism among them. He develops this point exhaustively in Par-delà nature et culture (2005), where he embarks on a comparatist adventure crossing ethnographies from around the globe. Animism inverts the naturalist frame: if in the latter the identification between humans and nonhumans runs through the plane of physicality (what we call body, organism, or biology), in animism that same identification occurs on the plane of interiority (what we call soul, spirit, or subjectivity). For naturalists, the soul is the privilege of the human species; for animists, one and the same "human soul" is distributed among all beings of the cosmos.

The idea of perspectivism, which authors such as Eduardo Viveiros de Castro and Tânia Stolze Lima attribute to Amerindian cosmologies, extends and transforms that of animism. Perspectivism is, roughly, an indigenous theory or metaphysics holding that (ideally) different species see themselves as human but see the others as nonhuman. Everything that exists in the cosmos can be a subject, but not everything can be a subject at the same time, which implies a dispute. It is said, for example, that jaguars see themselves as human and see humans as prey; what humans see as blood is, for them, manioc beer, a festive drink. Jaguars and other animals (but also plants, stars, and meteorological phenomena) are, in short, human "for themselves." An Amerindian shaman would be capable of changing perspective, of putting himself in another's place and seeing as the other sees, and thus of understanding that the human condition is shared by other creatures.

As Viveiros de Castro insists in A inconstância da alma selvagem (2002), perspective resides in bodies, understood as bundles of affects more than as organisms. Changing perspective would thus be a somatic metamorphosis, anchored in the idea of a common fund of humanity, an animic potentiality distributed horizontally across the cosmos. If perspectivism is the reverse of anthropocentrism, it is inseparable from a certain anthropomorphism, whereby human prerogatives cease to be the exclusive property of the human species and assume the most diverse forms.

The book by Davi Kopenawa and Bruce Albert, A queda do céu (The Falling Sky, 2010), offers luminous examples of these Amazonian animisms and perspectivisms. Kopenawa's entire narrative is grounded in his formation as a Yanomami shaman, defined by his dealings with the xapiripë spirits, anthropomorphic beings that are nothing other than the "souls" or "images" (the translation Albert prefers for the term utupë) of the "animal ancestors" (yaroripë). According to Yanomami mythology, animals were human in primordial times but metamorphosed into their current bodies. What unites humans and animals is precisely utupë, and it is as utupë that the ancestors appear to shamans. When Yanomami shamans inhale yãkoana (a psychoactive powder), their eyes "die" and—changing perspective—they gain access to the invisible reality of the xapiripë, who present themselves in a great festival, dancing and singing, adorned and shining. Yanomami shamanism, resting on experiences of trance and dream, is a way of knowing and describing the world. It is in this sense that Kopenawa says of the whites, the "commodity people," that they do not know the forest-land (urihi), because they do not know how to see: where they identify an inert nature, the Yanomami apprehend a tangle of relations. Knowledge of this hidden reality is what would allow the shamans to prevent the falling of the sky, catalyzed by the whites' destructive action. And so, Kopenawa insists, this knowledge comes to concern not only the Yanomami but all the planet's inhabitants.

Though distinct, the proposals of Descola and Ingold look to animist experience for a counterpoint to naturalist and rationalist visions, which erect a barrier between the (human) subject and the world. As Viveiros de Castro proposes, this critique amounts to a "decolonization of thought," calling into question human exceptionalism and the moderns' claim to an exclusive ontology. Counterpoint and decolonization in no way end in the negation of modern science; rather, they demand that we imagine another science is possible, or that animism can be rediscovered within the sciences. Such has been the effort of authors like Bruno Latour and Isabelle Stengers, the most prominent exponents of science studies: to show that science in action belies the official discourse, for which to know is to de-animate (de-subjectify) the world, reducing it to its immutable, objective character.

In On the Modern Cult of the Factish Gods (1996), Latour brings the idea of the fetish in African religions close to the idea of the fact in modern science. A fetish is an object of worship (or even a divinity) made by humans which, at the same time, acts upon them. Through his ethnographic work in laboratories, Latour suggested that scientific facts are not merely "given" but depend on interactions and articulations in networks. In a laboratory, molecules and cells would not simply be objects but unpredictable actants, constantly interrogated by the researcher. In his pioneering We Have Never Been Modern (1991), Latour holds that scientific facts are, in a certain sense, made, and are accepted as facts only when they withstand the test of controversies—that is, when they manage to be stabilized as truths.

Isabelle Stengers goes beyond the analogy between facts ("factishes") and fetishes, seeking in the history of modern science its constitutive tension with so-called magical practices. According to her, the modern sciences establish themselves by disqualifying other practices, accused of error or charlatanism. She traces, for example, how chemistry divorced itself from alchemy, and psychoanalysis from magnetism and hypnosis. In short, the modern sciences disqualify what lies at their origin—and this, for Stengers, cannot be dissociated from the intertwined histories of science and capitalism. In La sorcellerie capitaliste (Capitalist Sorcery, 2005), in dialogue with the neopagan activist Starhawk, Stengers and Philippe Pignarre recall that the advent of modern science and capitalism in the 17th and 18th centuries is inseparable from the persecution of witchcraft practices led by women. If capitalism, anchored in private property and patriarchy, emerged with the politics of enclosure (the expulsion of peasants from the common lands), the scientific revolution was carried out at the cost of destroying magical practices. Stengers and Pignarre find in the activism of Starhawk and her group Reclaim, which emerged in California in the late 1980s, an example of anticapitalist resistance. For Starhawk, to resist capitalism is precisely to reclaim practices—in her case, the wicca tradition, of European origin—that were sacrificed so that capitalism could flourish.

Retomar a magia, retomar o animismo seria, para Stengers, uma forma de existência e de resistência. Como escreveu em Cosmopolíticas (1997), quando falamos de práticas desqualificadas pelas ciências modernas, não deveríamos apenas incorrer em um ato de tolerância. Não se trata de considerar a magia uma crença ou “cultura”, como fez-se na antropologia da época de Tylor e até pouco tempo atrás. Ir além da “maldição da tolerância” é levar a sério asserções indígenas, por exemplo, de que uma rocha tem vida ou uma árvore pensa. Stengers não está interessada no animismo como “outra” ontologia: isso o tornaria inteiramente exterior à experiência moderna. Ela tampouco se interessa em tomar o animismo como verdade única, nova ontologia que viria desbancar as demais. Mais importante seria experimentá-lo, seria fazê-lo funcionar no mundo moderno.

Que outra ciência seria capaz de retomar o animismo hoje? Eis uma questão propriamente stengersiana. Hoje vivemos mundialmente uma crise sanitária em proporções jamais vistas, que não pode ser dissociada da devastação ambiental e do compromisso estabelecido entre as ciências e o mercado. A outra ciência, diriam Latour e Stengers, seria a do sistema terra e do clima, que tem como marco a teoria de Gaia, elaborada por James Lovelock e Lynn Margulis nos anos 1970. Gaia é para esses cientistas a Terra como um organismo senciente, a Terra como resultante de um emaranhado de relações entre seres vivos e não vivos. Poderíamos dizer que Gaia é um conceito propriamente animista que irrompe no seio das ciências modernas, causando desconfortos e ceticismos. O que Stengers chama de “intrusão de Gaia”, em sua obra No tempo das catástrofes (2009), é uma reação ou resposta do planeta aos efeitos destruidores do capitalismo, é a ocorrência cada vez mais frequente de catástrofes ambientais e o alerta para um eventual colapso do globo. Mas é também, ou sobretudo, um chamado para a conexão entre práticas não hegemônicas – científicas, artísticas, políticas – e a possibilidade de recriar uma inteligência coletiva e imaginar novos mundos.

O chamado de Stengers nos obriga a pensar a urgência de uma conexão efetiva entre as ciências modernas e as ciências indígenas, uma conexão que retoma o animismo, reconhecendo nele um modo de engajar humanos ao mundo, contribuindo assim para evitar ou adiar a destruição do planeta. Como escreve Ailton Krenak, profeta de nosso tempo, em Ideias para adiar o fim do mundo (2019), “quando despersonalizamos o rio, a montanha, quando tiramos deles os seus sentidos, considerando que isso é atributo exclusivo de humanos, nós liberamos esses lugares para que se tornem resíduos da atividade industrial e extrativista”. Em outras palavras, quando desanimamos o mundo, o deixamos à mercê de um poder mortífero. Retomar o animismo surge como um chamado de sobrevivência, como uma chance para reconstruir a vida e o sentido no tempo pós-pandêmico que há de vir.

Understand the "Anti-Demarcation Opinion" and what is at stake at the STF (CIMI)

20/05/2020

Following a preliminary ruling, the STF will decide whether to uphold the suspension of AGU Legal Opinion (Parecer) 001/2017, which has been used to block the demarcation of Indigenous lands

Indigenous people demonstrate in front of the STF. Photo: Guilherme Cavalli/Cimi. By Mobilização Nacional Indígena

The justices of the Federal Supreme Court (STF) will hold an important trial with consequences for all Indigenous peoples in Brazil. On a date yet to be set, the full court will review, and either confirm or overturn, the preliminary ruling that suspended Legal Opinion (Parecer) 001/2017 of the Office of the Attorney General of the Union (AGU), which deals with the demarcation of Indigenous lands.

The request for suspension was granted by the case's rapporteur, Justice Edson Fachin, on 7 May. The AGU Opinion was issued in July 2017 and carries serious consequences for Indigenous peoples: it has been used to block and annul land demarcations.

The trial did begin, on 22 May, in a virtual plenary session, but was interrupted after Justice Alexandre de Moraes requested that the case be singled out for separate consideration. The vote is now to resume, still without a set date, by videoconference.

Understand what Opinion 001/2017 is and why it is so important that the STF uphold its suspension.

What is AGU Opinion 001/2017?
What is the origin of Opinion 001?
What consequences does the Opinion have for Indigenous peoples?
What is the "marco temporal" (cutoff-date) thesis?
Why is the marco temporal thesis so harmful to Indigenous peoples?
What is the "general repercussion" case before the STF?
How are Indigenous peoples taking part in this case?
How is civil society taking part in the case?
What is the relation between the AGU Opinion and the general repercussion case?
And what is at stake now?

What is AGU Opinion 001/2017?

Normative Opinion 001/2017, published by the AGU on 20 July 2017, directs the entire federal public administration to adopt a series of restrictions on the demarcation of Indigenous Lands (TIs). These include the conditions set in the 2009 Raposa Serra do Sol (RR) case and the so-called "marco temporal" (cutoff-date) thesis, according to which Indigenous peoples are entitled to the demarcation only of lands that were verifiably in their possession on 5 October 1988, the date the Constitution was promulgated.

In practice, Opinion 001/2017 serves to derail and reopen demarcations, even those already concluded or at an advanced stage. The thesis legitimizes the invasions, expulsions and violence that victimized Indigenous peoples before the promulgation of the Federal Constitution, when they were wards of the State and could not even assert their rights in court.

For this reason, many Indigenous peoples refer to it as the "Anti-Demarcation Opinion" or the "Genocide Opinion". The measure is considered unconstitutional even by the Federal Public Prosecutor's Office (MPF).

What is the origin of Opinion 001?

The Opinion was published by the AGU under the government of Michel Temer, amid the then-president's negotiations to prevent the corruption charges brought against him by the Prosecutor General's Office (PGR) from being accepted by the Chamber of Deputies. The negotiations involved releasing budget amendments to members of Congress and also meeting the demands of sectors and caucuses, such as the ruralist (agribusiness) caucus.

Days before the publication of Opinion 001/2017, the Parliamentary Agricultural Front (FPA) posted a video on its networks in which Congressman Luís Carlos Heinze (PP-RS) said he had discussed the measure with several ministers and had "settled on a binding opinion" with the then ministers of the Civil House, Eliseu Padilha, and of Justice, Osmar Serraglio, and with the attorney general of the Union, Grace Mendonça.

Heinze's celebratory video and the publication of AGU Opinion 001/2017 came shortly before the Chamber of Deputies voted, on 2 August, on the PGR's first set of charges against Michel Temer. The Chamber denied authorization for the investigation, and 134 of the 251 votes in Temer's favour came from the ruralist caucus. Heinze is the same congressman who, in 2014, said in a video that quilombolas, Indians, gays and lesbians were "everything that is worthless".

Since then, Indigenous peoples have been fighting to overturn the measure, with numerous demonstrations at the AGU, countless petitions to the agency and many meetings at which they laid out the measure's contradictions.

What consequences does the Opinion have for Indigenous peoples?

Since its publication, still under the Temer government, Opinion 001/2017 has been used to derail, delay and even reverse the demarcation of Indigenous lands. In 2018, Temer's own minister of Justice, Torquato Jardim, admitted to having "difficulties" working with the rule.

In January 2020, a news investigation found that at least 17 demarcation processes had been returned by the Ministry of Justice to Funai for review. According to the MPF, at least 27 processes are currently being revised on the basis of the measure.

Moreover, since 2019, Funai itself had been abandoning the defence of Indigenous communities in several lawsuits on the basis of the rule, leaving Indigenous people exposed to evictions and to the annulment of the demarcation of their lands. The agency did so in at least four cases. Under the law, Indigenous people must be defended by Funai's legal office when they do not retain their own lawyers.

The destructive potential of Opinion 001/2017 and of the cutoff-date thesis is therefore enormous: it can affect every Indigenous land whose demarcation process has not yet been concluded, and even lands whose demarcation was concluded after 1988 and is being challenged in court.

What is the "marco temporal" (cutoff-date) thesis?

The marco temporal is a thesis that seeks to narrow the scope of Indigenous peoples' constitutional right to land. Restrictive in character, it holds that these peoples are entitled to the demarcation of their traditional lands only if they can prove that they occupied them, or were claiming them before the Federal Courts, on the date the 1988 Federal Constitution was promulgated: 5 October.

As an argument in lawsuits against the demarcation of Indigenous lands, the restrictive cutoff-date thesis was first advanced in cases involving possession of the Caipe Farm, a traditional territory that is part of the Xukuru do Ororubá TI in Pernambuco, and of the Buriti TI of the Terena people, in Mato Grosso do Sul.

The current form of the thesis, which sets the promulgation of the 1988 Constitution as the cutoff, was first formulated within the STF itself, where it appeared alongside 19 conditions for the demarcation of Indigenous lands in the 2009 Raposa Serra do Sol judgment, in the opinion of rapporteur Justice Carlos Ayres Britto, who voted in favour of ratifying the Raposa demarcation.

That decision had no binding effect, that is, it does not oblige judges, courts or the public administration to apply the same understanding. Nevertheless, the marco temporal thesis and the Raposa Serra do Sol conditions came to be used to guide other Indigenous land demarcations. This improper and questionable use began while Dilma Rousseff was still in office, through the AGU's Ordinance (Portaria) 303/2012.

Why is the marco temporal thesis so harmful to Indigenous peoples?

Conditioning demarcations on Indigenous peoples' presence on the land on a specific date, together with the 19 conditions, became the anti-Indigenous strategy of choice in repossession suits and in actions to annul demarcations.

The restrictive cutoff-date thesis denies the historical vulnerability of Indigenous people in the face of the violence that pervaded the post-colonial process, the opening of expansion frontiers across Brazil and the rights violations of the civil-military dictatorship, as recently denounced in the report of the National Truth Commission.

Under the cutoff-date thesis, the Indigenous right to land is converted into a crime: traditional occupation, backed by the Constitution, becomes mere trespass on private property, subject to criminal liability and police repression.

The true milestone established in 1988 is the legal, scientific and social consensus that the physical and cultural survival of Indigenous peoples necessarily depends on possession of their traditional lands, as the Constitution itself provides. Annulling demarcation processes on the basis of the "marco temporal", besides being legally questionable, has the direct effect of condemning Indigenous peoples to the destitution of forced assimilation, a paradigm the Constitution set out to overcome.

What is the "general repercussion" case before the STF?

In April 2019, the STF unanimously recognized the "general repercussion" of Extraordinary Appeal (RE) 1,017,365. The case will serve as the reference, according to the court, for the "definition of the legal-constitutional status of possessory relations over areas of traditional Indigenous occupation in light of the rules set out in Article 231 of the constitutional text".

In other words, in this case the court will settle its interpretation of Article 231 of the Federal Constitution, which deals with the rights of Indigenous peoples, including the recognition of their lands.

In dispute, essentially, are the indigenato doctrine, which treats Indigenous peoples' right to the demarcation of their lands as an "originary" right, prior to the State itself, and the cutoff-date thesis, championed by the ruralist caucus and other economic sectors interested in exploiting Indigenous lands.

On the merits, the case concerns a repossession suit brought against the Xokleng people in Santa Catarina. Its general repercussion status, however, means the judgment will reach beyond the specific case and have consequences for all Indigenous peoples and lands in Brazil, since whatever is decided will bind the other instances of the judiciary and the public administration.

How are Indigenous peoples taking part in this case?

The dispute that gave rise to the case revolves around the revision of the boundaries of the Ibirama La-Klãnô TI (SC) and is directly bound up with the history of the Xokleng people. As late as the mid-20th century, the Xokleng were hunted by the so-called "bugreiros", Indian hunters tasked with clearing the land of "bugres" (a pejorative term for Indigenous people at the time) and freeing it for non-Indigenous occupation.

In May 2019, the Xokleng people were admitted as a party to the STF's general repercussion case. This is a right provided for in Article 232 of the Federal Constitution, and its recognition remains a struggle for Indigenous communities across the country.

Owing to the wardship to which Indigenous peoples were subjected until the 1988 Constitution, however, most cases are still judged, often with rulings extremely damaging to the communities, without Indigenous peoples taking part in, or even being aware of, the proceedings.

Besides the Xokleng, admitted as a party because the case concerns, on the merits, the demarcation of their traditional land, several other Indigenous peoples, through their organizations, participate in the general repercussion case as amici curiae, or "friends of the court", supplying information and input for the judgment.

How is civil society taking part in the case?

Various other civil society organizations, such as those defending Indigenous or human rights, have also applied to join the case as "friends of the court". Several of them have presented grounds for the unconstitutionality of Opinion 001 and its damage to the Indigenous rights guaranteed by the Federal Constitution.

What is the relation between the AGU Opinion and the general repercussion case?

The AGU's main argument for publishing Opinion 001/2017 was that the agency was merely applying definitions the STF had already established concerning the demarcation of Indigenous lands. This contradicts the guidance of the STF itself, which has already ruled, in several cases, that those definitions do not automatically apply elsewhere.

The MPF likewise lists a series of decisions showing that the theses adopted by the AGU are very far from constituting "settled case law" on the demarcation of Indigenous lands.

On the contrary: the STF's unanimous decision to recognize the general repercussion of the Xokleng case indicates that all 11 justices consider the matter still in need of definition. This is conclusive proof that the issue is not settled in the Brazilian judiciary and that, therefore, the premise of Opinion 001 (settled STF case law on the cutoff date and the 19 conditions) does not exist.

And what is at stake now?

Against this backdrop, in March 2020 the Xokleng and a group of organizations acting as amici curiae in the general repercussion case filed a request for incidental provisional relief, asking the rapporteur, Edson Fachin, to suspend the effects of AGU Opinion 001/2017 over all Indigenous lands in Brazil until the case is finally judged.

The Xokleng people and the Indigenous, indigenist and human rights organizations also asked that repossession actions against Indigenous people be suspended during the pandemic, to avoid exposing peoples and communities to covid-19 infection.

In a single-justice decision on 6 May, Fachin suspended all repossession actions against Indigenous people and all actions seeking to annul the demarcation of traditional lands. On 7 May, in a preliminary ruling, the justice also suspended the effects of AGU Opinion 001/2017 and ordered the full STF to decide whether or not to confirm that ruling.

This is the important decision the STF is set to take, in a judgment still without a scheduled date.

Extreme climate change affects even airline flights (Folha de S.Paulo)

Storms, wildfire smoke and heat, which reduces aircraft lift, are hurting airlines

23 Aug 2021, 10:15 pm

Claire Bushey, Philip Georgiadis – Financial Times

Some airlines and airports have begun planning for a future in which severe climate shocks disrupt flight schedules more often, now that climate change is making extreme heat and major storms more likely.

This month, storms forced the cancellation of more than 300 flights at Chicago's O'Hare airport and at Dallas/Fort Worth airport in Texas. In July, eight flights were cancelled in Denver and another 300 delayed because of the wildfires that struck the US Pacific Northwest. Extreme heat affected takeoffs in Las Vegas and in Colorado at the start of this summer [late June to late September in the northern hemisphere].

The disruptions fit a trend: weather-related flight cancellations and delays have become far more frequent in the United States and Europe over the past two decades, regulatory data show. Although it is difficult to link any individual storm or heatwave to climate change, scientific studies have determined that such events will become more frequent or more intense as the planet warms.

The International Civil Aviation Organization (ICAO), the UN body that sets standards for the industry, found in a 2019 survey of its member states that three quarters of respondents said their air transport sectors were already experiencing some impact from climate change.

"It is absolutely on our minds, in terms of whether we can keep maintaining our flight schedule, especially considering the growth we have planned for the future," said David Kensick, vice-president of global operations at United Airlines. "With climate change, we are seeing weather that is increasingly hard to predict, so we will have to get better at handling the situations it creates."

Airlines account for about 2% of global greenhouse gas emissions, although, if other substances emitted by aircraft are taken into account, some studies suggest their impact on the climate may be even greater.

The potential impact of climate change on the industry is wide-ranging. In the short term, severe weather creates operational headaches. Forced diversions and flight cancellations raise costs for an industry that lost billions of dollars during the pandemic.

Over the longer term, airlines believe shifting weather patterns will alter flight routes and fuel consumption. Flights between Europe and the United States are likely to take longer as the jet stream over the North Atlantic changes, for example.

"Aviation will be a victim of climate change, as well as being seen by many people as one of the villains," said Paul Williams, professor of atmospheric science at the University of Reading in the UK.

The number of delays attributed to bad weather in European airspace rose from 2.5 million in 2003 to a peak of 6.5 million in 2019, according to Eurocontrol data, although part of that increase can be attributed to the industry's growth. As a share of all causes of delay, weather problems rose from 23% to 27% over the same period.

The share of US flight cancellations caused by weather rose from roughly 35% of the total in 2004 to 54% in 2019, according to the US Federal Aviation Administration (FAA).

Mark Searle, global director of safety at the International Air Transport Association (IATA), said airlines had adapted to climate change over the years.

"There is an evolving situation, but it's not as if we are on the edge of a precipice," he said. "In fact, we are managing it very well."

For airports, this may mean preparing for higher sea levels. The new passenger terminal at Singapore's Changi airport was built just 5.5 metres above mean sea level. Avinor, which operates airports along Norway's coast, has determined that all new runways be built at least seven metres above sea level.

For airlines, technology will be part of the answer. American Airlines and United Airlines have improved their ability to predict approaching lightning, allowing ramp work to continue for longer ahead of an incoming storm without endangering ground staff.

At several of its hub airports, Chicago-based United Airlines has also set up automated taxiing systems that allow aircraft to be brought to the terminal even when storms prevent ramp agents from guiding them to the gates.

Severe weather demands extra staffing. Carriers are forced to pay overtime when gate and call-centre staff face the additional demand from passengers trying to rebook their trips. Companies will have to work out whether it pays better to cover overtime, create additional shifts or let passengers bear the consequences of the disruption.

"There will be an added cost either way if, and this is an open question, airlines decide they want to deal with it," said Jon Jager, an analyst at Cirium, an aviation research firm.

Although passengers typically blame airlines for the disruption they encounter, US, UK and EU rules do not require carriers to compensate passengers for weather-related problems. "Mother Nature serves as an excuse to get airlines off the hook," Jager said.

Disruption comes not only from storms but from extreme heat. Aircraft struggle to take off in very high temperatures because hot air is less dense, which means the wings generate less lift. The hotter the temperature, the lighter an aircraft needs to be to take off, especially at airports with short runways and in hot regions.

Williams, the atmospheric scientist, published a study finding that, for an Airbus A320 taking off from the Greek island of Chios, the usable payload has had to fall by about 130 kilograms per year over three decades, roughly equivalent to the weight of one passenger and their luggage.

IATA is negotiating with its members over the adoption of new climate targets this year. The industry's current targets, adopted in 2009, include halving 2005 emission levels by 2050 and making all growth carbon-neutral from 2020 onwards.

But in many parts of the industry, especially in Europe and the United States, there is a conviction that tougher targets, including a commitment to net-zero emissions, are needed.

"We think we probably should go further, and we are working on that," Alexandre de Juniac, who is stepping down as head of IATA, told the Financial Times a few months ago.

Williams said the aviation industry's approach to climate change appeared to be shifting.

"Historically, there were a lot of climate change sceptics in the aviation industry, but I have noticed a change," he said. "The industry is much more honest now."

Financial Times, translation by Paulo Migliacci

Climate Change Brings a Flood of Hyperbole/The Climate Has a Gun (The Wall Street Journal)

Opinion | The Climate Has a Gun (The Wall Street Journal)

Those who dismiss risk of climate change often appeal to uncertainty, but they have it backward.

Aug. 17, 2021 1:14 pm ET


In “Climate Change Brings a Flood of Hyperbole” (op-ed, Aug. 11), Steven Koonin put himself in the unenviable position of playing down climate change precisely while we are experiencing unprecedented heat waves, storms, fires, droughts, and floods that exceed model-based expectations.

Mr. Koonin claims that regional projections are “meant to scare people.” But the paper he cites for support addresses the “unfolding of what may become catastrophic changes to Earth’s climate” and argues that “being able to anticipate what would otherwise be surprises in extreme weather and climate variations” requires better models. In other words, our current models cannot rule out a catastrophic future.

Model uncertainty is two-edged. If we’d been lucky, we’d be discovering that we overestimated the danger. But all indicators suggest the opposite. Those who dismiss climate risk often appeal to uncertainty, but they have it backward. Climate uncertainty is like not knowing how many shots Dirty Harry fired from his .44-caliber Magnum. Now that it’s pointed at our head, it’s dawning on us that we’ve probably miscalculated. By the time we’re sure, it’s too late. We’ve got to ask ourselves one question: Do we feel lucky? Well, do we?

Adj. Prof. Mark Boslough, University of New Mexico


Opinion | Climate Change Brings a Flood of Hyperbole (The Wall Street Journal)

Despite constant warnings of catastrophe, things aren’t anywhere near as dire as the media say.

Steven E. Koonin – Aug. 10, 2021 6:33 pm ET


The Intergovernmental Panel on Climate Change has issued its latest report assessing the state of the climate and projecting its future. As usual, the media and politicians are exaggerating and distorting the evidence in the report. They lament an allegedly broken climate and proclaim, yet again, that we are facing the “last, best chance” to save the planet from a hellish future. In fact, things aren’t—and won’t be—anywhere near as dire.

The new report, titled AR6, is almost 4,000 pages, written by several hundred government-nominated scientists over the past four years. It should command our attention, especially because this report will be a crucial element of the coming United Nations Climate Change Conference in Glasgow. Leaders from 196 countries will come together there in November, likely to adopt more-aggressive nonbinding pledges to reduce greenhouse-gas emissions.

Previous climate-assessment reports have misrepresented scientific research in the “conclusions” presented to policy makers and the media. The summary of the most recent U.S. government climate report, for instance, said heat waves across the U.S. have become more frequent since 1960, but neglected to mention that the body of the report shows they are no more common today than they were in 1900. Knowledgeable independent scientists need to scrutinize the latest U.N. report because of the major societal and economic disruptions that would take place on the way to a “net zero” world, including the elimination of fossil-fueled electricity, transportation and heat, as well as complete transformation of agricultural methods.

It is already easy to see things in this report that you almost certainly won’t learn from the general media coverage. Most important, the model muddle continues. We are repeatedly told “the models say.” But the complicated computer models used to project future temperature, rainfall and so on remain deficient. Some models are far more sensitive to greenhouse gases than others. Many also disagree on the baseline temperature for the Earth’s surface.

The latest models also don’t reproduce the global climate of the past. The models fail to explain why rapid global warming occurred from 1910 to 1940, when human influences on the climate were less significant. The report also presents an extensive “atlas” of future regional climates based on the models. Sounds authoritative. But two experts, Tim Palmer and Bjorn Stevens, write in the Proceedings of the National Academy of Sciences that the lack of detail in current modeling approaches makes them “not fit” to describe regional climate. The atlas is mainly meant to scare people.

Climate finance: the numbers don't add up (Página22)

Bruno Toledo – 10 August 2021


In 2009, developed countries promised to direct at least US$100 billion a year to poor countries from 2020 onwards. With the deadline past, the target remains far from being met

Amid the rubble of the diplomatic failure of the Copenhagen Climate Conference (COP 15) in 2009, one of the few salvageable outcomes was developed countries' promise to scale up, over the course of the 2010s, the resources offered to poorer nations to finance action against climate change. By the end of that period, in 2020, those resources were to total at least US$100 billion a year, an amount that would then serve as a "floor" for climate finance from that point on.

Eight months past the deadline the rich countries set in Copenhagen, the US$100 billion climate finance pledge could not be further from reality. Data from the Organisation for Economic Co-operation and Development (OECD) indicate that the volume of resources mobilized in 2018, the most recent year with complete figures, was about US$80 billion.

Economists and climate finance specialists doubt that the figures for 2019 and 2020 will paint a different picture. Worse: the pandemic very likely hurt the availability of new financial resources for climate action in poor countries. Uncertainty about the post-pandemic economic recovery also clouds short-term expectations: with governments and companies treading carefully until economic activity effectively normalizes, additional resources for international climate action are unlikely.

The problem is that, with the climate crisis intensifying and the pandemic deepening the development gulf between rich and poor countries, external financing for climate action in developing nations has become a matter of life and death. Without the money, these countries will hardly be able to deliver the mitigation commitments they presented under the Paris Agreement. The rich countries' failure to signal that they will honour the pledge threatens to trigger a diplomatic crisis capable of souring the talks at the next Climate Conference (COP 26), scheduled for November in Glasgow, Scotland, and of placing an uncomfortable question mark over the future of the Paris Agreement.

Past stumbles reinforce the uncertainty

From the outset, there was considerable doubt about the practical viability of the financial commitment rich governments made in 2009. Even after the diplomatic success of 2015, when countries approved the Paris Agreement at COP 21, climate finance remained a significant political problem on the negotiating agenda.

The years after the Paris Conference did not help: the international political alignment that had made the Agreement's approval possible at COP 21, led by the United States, China and the European Union, fell apart after the election of the denialist Donald Trump to the White House. Besides withdrawing the US from the Paris Agreement, Trump also walked back the financial promises made by his predecessor, Barack Obama.

Without the US, the world's richest economy, any international financial commitment would be unviable, especially for the climate agenda. The European Union tried to take the lead on the issue, stepping up disbursements to the Green Climate Fund (GCF), established by the United Nations Framework Convention on Climate Change (UNFCCC) to receive and administer the resources promised in Copenhagen. In recent years, the European bloc has contributed about US$20 billion a year, consolidating its position as the GCF's main donor.

Meanwhile, Trump's US limited itself to honouring prior financing commitments totalling little more than US$2.5 billion. For perspective, the 2009 estimate was that the Americans would shoulder about 40% of the annual climate finance pie from 2020 onwards, that is, at least US$40 billion, counting public and private resources.

The mood shifted somewhat in 2020. Even amid the pandemic, the big news was the United States' return to the multilateral climate arena with Joe Biden's victory. Unlike Trump, Biden placed the climate question at the centre of his electoral platform and of the country's post-pandemic economic recovery efforts. Besides rejoining the Paris Agreement, the new US administration promised to make up for lost time with new financial commitments for climate action in poor countries.

In April, during the climate summit the White House convened with world leaders, Biden promised to double US climate finance to US$5.7 billion by 2024. The additional money is obviously welcome, but the pittance does not hide the reality: the US will remain far short of what ought to be its fair share of responsibility on this issue.

That reality became even clearer in recent weeks, with the failure of the G7 and the G20 to reach agreement on new financial commitments for climate action in developing countries. There had been great expectations that these "clubs", with COP 26 in November in view, would at least signal new money for the poorest nations to implement the national climate plans they submitted under the Paris Agreement. Instead, the disappointment was glaring.

Em xeque, o espírito do Acordo de Paris

Negociadores de países como Índia, Bangladesh e pequenas nações insulares do Pacífico não esconderam a irritação com a falta de novos compromissos financeiros por parte dos governos mais ricos. Ambientalistas também criticaram esse ponto, ressaltando o óbvio: sem recursos, a ação climática nos países pobres ficará inviabilizada, o que coloca em xeque o espírito do Acordo de Paris – por meio do qual todas as nações, ricas ou pobres, comprometeram-se a agir contra a mudança do clima. 

“A confiança [entre os países] está em jogo”, observou a negociadora Diann Black-Layne, de Antígua e Barbuda, ao Climate Home pouco após a cúpula do G-7, em junho passado. “O Acordo de Paris foi construído com base na confiança, e pode desmoronar se ela for quebrada”. Sem um compromisso renovado e ampliado para facilitar a ação climática no mundo em desenvolvimento, “só vai ficar mais difícil daqui em diante conseguir o tipo de consenso político necessário” para agir contra a crise climática em nível global. 

Without rain, Brazil may face economic stagnation and inflation, analyst says (Folha de S.Paulo)

www1.folha.uol.com.br

Energy crisis could drag down GDP and raise inflation next year, according to a report by RPS Capital

Douglas Gavras – August 19, 2021


Brazil could enter a scenario of stagflation (a combination of economic weakness and rising prices) if rains do not return in the fourth quarter of the year, according to analysts at RPS Capital.

In their view, the Brazilian economy has absorbed several shocks over the course of the year, with the disruption of global supply chains and, more recently, rising freight costs due to a new Covid outbreak in China.

"If the wet season is bad, we could face complications, and the risk is not small. The drought scenario needs to end by October, when the transition to the rainier period occurs," says Gabriel Barros of RPS.

For the analyst, the government has adopted some measures that go in the right direction but are not enough to avert a worrying scenario in the power plants' reservoirs.

"What the government has announced is focused mainly on large consumers, shifting the industry's peak load to smooth the curve," he says. Given how dramatic the situation is, however, a broader energy-saving plan should be adopted.

He notes that food inflation should still weigh on household budgets, combined with rising energy prices.

Inflation as measured by the IPCA (Broad National Consumer Price Index) rose 0.96% in July, the highest reading for the month since 2002, when the increase was 1.19%.

For the year, the index has accumulated a rise of 4.76% and, over 12 months, of 8.99%. According to the IBGE (Brazilian Institute of Geography and Statistics), eight of the nine groups surveyed rose in the month. The biggest pressure came from the 3.10% increase in housing, driven by a 7.88% jump in electricity prices.

In addition, the economy has benefited from progress in vaccination, which should boost the services sector in the second half of the year. "These businesses are at a moment of recomposing prices, and services inflation has shown it is alive," the analyst says.

As the sector reopens, overall inflation should also rise. "There are several shocks happening in sequence and at the same time, creating a perfect storm for the Central Bank," the economist says.

Against this backdrop, if the dry spell is prolonged and no rain comes at the end of the year, the likelihood grows that the economy cannot withstand yet another shock, Barros explains. "A more acute drought could generate a stagflation scenario."

Hydroelectric generation still accounts for the largest share of the country's generating capacity – a share that reached 90% at the time of the 2001 blackout and now stands at around 70%. With the historic drought, reservoirs have hit critical levels and the government has had to activate (more expensive) thermal plants to maintain generation.

"The reopening of the economy helps, but there has to be energy. Without energy, this will drag down GDP (Gross Domestic Product) and raise inflation next year."

Growth in 2022, which is already being revised downward, could be even weaker without rain. A mandatory load reduction would curb growth, directly affecting GDP.

According to the Central Bank's most recent Focus Bulletin, the growth outlook for the economy is 2.04% – down from 2.1% a month ago and 2.5% at the beginning of the year.

Deforestation may increase the chance of new pandemics, Harvard report says (Folha de S.Paulo)

www1.folha.uol.com.br

Scientists stress that little is invested against the emergence of diseases

Phillippe Watanabe – August 20, 2021


A report from Harvard University reinforces that human changes in land use and the destruction of tropical forests are factors that can increase the chances of the emergence of diseases with pandemic potential. The researchers point to environmental conservation as one of the strategies for preventing new diseases.

The Harvard Global Health Institute and the Center for Climate, Health, and the Global Environment, at Harvard's school of public health, brought together researchers to review the scientific literature available to date and point out paths for preventing new pandemics.

One of the points cited directly by the report is deforestation. As an example, the scientists point to the increase, following deforestation in Central America, of rodents serving as reservoirs of hantaviruses – which, if humans are infected, can lead to hantavirus pulmonary syndrome.

The expansion of agricultural land is also linked to the emergence of new diseases. This occurs, the researchers say, because such expansion can bring humans and their herds into closer contact with wild animals.

"About 22% of land area in biodiversity hotspots, which often overlap with emerging-disease hotspots, is threatened by agricultural expansion," the document states.

The report notes that uncontrolled, unplanned urbanization can play a role in the emergence of diseases, through changes in land use and the potential for large concentrations of people living in poor conditions.

Animal farms are another important factor in spillover events of zoonoses – basically, when a virus jumps from one species to a new one, such as humans. The researchers point to the low genetic diversity and the high number of animals kept in some of these facilities.

As an example, the scientists cite the transmission of the Nipah virus – initially among pigs and then to farm workers – in Malaysia, on pig farms with high concentrations of animals.

In addition, the hunting, consumption, and trade of wild animals can also trigger spillover.

The climate crisis is yet another factor expected to affect the risk of new diseases appearing around the world, given the changes that will occur in ecosystems. According to the researchers, suitable habitats for species may shrink, which could promote more encounters between wildlife and humans and, with that, more spillover events.

"The reduction of habitats and of nectar availability for bats, for example, has pushed these animals to seek alternative food sources in and around urban areas," the scientists state in the report.

Finally, the researchers convened by Harvard point to strategies for preventing spillover events. Environmental conservation is the first highlighted in the report.

Other strategies listed include restrictions on the consumption of wild animals, investigations of viruses in wildlife, and a global network for pathogen surveillance in humans, farmed animals, and wildlife, among other initiatives.

According to the document, investment aimed at preventing spillover is low. "No more than US$ 4 billion [R$ 21.5 billion] is spent each year worldwide on spillover prevention activities. Covid-19 alone resulted in an estimated global GDP loss of US$ 4 trillion [R$ 21.5 trillion], or about US$ 40 billion [R$ 215 billion] per year over a century," the report states.
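The report's comparison can be checked with quick arithmetic, using only the figures quoted above (a minimal sketch, not part of the report itself):

```python
covid_gdp_loss = 4e12                        # estimated global GDP loss from Covid-19, in US$
amortized_per_year = covid_gdp_loss / 100    # the report spreads that loss over a century
prevention_spending = 4e9                    # annual worldwide spillover-prevention spending

print(f"Covid loss per year over a century: ${amortized_per_year:,.0f}")
print(f"Prevention spending as a share:     {prevention_spending / amortized_per_year:.0%}")
```

In other words, current prevention spending amounts to about a tenth of the pandemic's amortized annual cost.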

The one number you need to know about climate change (MIT Technology Review)

technologyreview.com

David Rotman – April 24, 2019

The social cost of carbon could guide us toward intelligent policies – if only we knew what it was.

In contrast to the existential angst currently in fashion around climate change, there’s a cold-eyed calculation that its advocates, mostly economists, like to call the most important number you’ve never heard of.

It’s the social cost of carbon. It reflects the global damage of emitting one ton of carbon dioxide into the sky, accounting for its impact in the form of warming temperatures and rising sea levels. Economists, who have squabbled over the right number for a decade, see it as a powerful policy tool that could bring rationality to climate decisions. It’s what we should be willing to pay to avoid emitting that one more ton of carbon.

This story was part of our May 2019 issue

For most of us, it’s a way to grasp how much our carbon emissions will affect the world’s health, agriculture, and economy for the next several hundred years. Maximilian Auffhammer, an economist at the University of California, Berkeley, describes it this way: it’s approximately the damage done by driving from San Francisco to Chicago, assuming that about a ton of carbon dioxide spits out of the tailpipe over those 2,000 miles.

Common estimates of the social cost of that ton are $40 to $50. The cost of the fuel for the journey in an average car is currently around $225. In other words, you’d pay roughly 20% more to take the social cost of the trip into account.
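The 20% figure follows directly from the numbers above; a minimal check, using the midpoint of the quoted range:

```python
social_cost = 45.0   # midpoint of the $40–$50 social-cost estimates for the trip's ~1 ton of CO2
fuel_cost = 225.0    # quoted fuel cost for the San Francisco–Chicago drive

# The social cost adds about 20% on top of the fuel bill.
print(f"{social_cost / fuel_cost:.0%}")
```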

The number is contentious, however. A US federal working group in 2016, convened by President Barack Obama, calculated it at around $40, while the Trump administration has recently put it at $1 to $7. Some academic researchers cite numbers as high as $400 or more.

Why so wide a range? It depends on how you value future damages. And there are uncertainties over how the climate will respond to emissions. But another reason is that we actually have very little insight into just how climate change will affect us over time. Yes, we know there’ll be fiercer storms and deadly wildfires, heat waves, droughts, and floods. We know the glaciers are melting rapidly and fragile ocean ecosystems are being destroyed. But what does that mean for the livelihood or life expectancy of someone in Ames, Iowa, or Bangalore, India, or Chelyabinsk, Russia?

For the first time, vast amounts of data on the economic and social effects of climate change are becoming available, and so is the computational power to make sense of it. Taking this opportunity to compute a precise social cost of carbon could help us decide how much to invest and which problems to tackle first.

“It is the single most important number in the global economy,” says Solomon Hsiang, a climate policy expert at Berkeley. “Getting it right is incredibly important. But right now, we have almost no idea what it is.”

That could soon change.

The cost of death

In the past, calculating the social cost of carbon typically meant estimating how climate change would slow worldwide economic growth. Computer models split the world into at most a dozen or so regions and then averaged the predicted effects of climate change to get the impact on global GDP over time. It was at best a crude number.

Over the last several years, economists, data scientists, and climate scientists have worked together to create far more detailed and localized maps of impacts by examining how temperatures, sea levels, and precipitation patterns have historically affected things like mortality, crop yields, violence, and labor productivity. This data can then be plugged into increasingly sophisticated climate models to see what happens as the planet continues to warm.

The wealth of high-resolution data makes a far more precise number possible—at least in theory. Hsiang is co-director of the Climate Impact Lab, a team of some 35 scientists from institutions including the University of Chicago, Berkeley, Rutgers, and the Rhodium Group, an economic research organization. Their goal is to come up with a number by looking at about 24,000 different regions and adding together the diverse effects that each will experience over the coming hundreds of years in health, human behavior, and economic activity.

It’s a huge technical and computational challenge, and it will take a few years to come up with a single number. But along the way, the efforts to better understand localized damages are creating a nuanced and disturbing picture of our future.

So far, the researchers have found that climate change will kill far more people than once thought. Michael Greenstone, a University of Chicago economist who co-directs the Climate Impact Lab with Hsiang, says that previous mortality estimates had looked at seven wealthy cities, most in relatively cool climates. His group looked at data gleaned from 56% of the world’s population. It found that the social cost of carbon due to increased mortality alone is $30, nearly as high as the Obama administration’s estimate for the social cost of all climate impacts. An additional 9.1 million people will die every year by 2100, the group estimates, if climate change is left unchecked (assuming a global population of 12.7 billion people).

Unfairly Distributed

However, while the Climate Impact Lab’s analysis showed that 76% of the world’s population would suffer from higher mortality rates, it found that warming temperatures would actually save lives in a number of northern regions. That’s consistent with other recent research; the impacts of climate change will be remarkably uneven.

The variations are significant even within some countries. In 2017, Hsiang and his collaborators calculated climate impacts county by county in the United States. They found that every degree of warming would cut the country’s GDP by about 1.2%, but the worst-hit counties could see a drop of around 20%.

If climate change is left to run unchecked through the end of the century, the southern and southwestern US will be devastated by rising rates of mortality and crop failure. Labor productivity will slow, and energy costs (especially due to air-conditioning) will rise. In contrast, the northwestern and parts of the northeastern US will benefit.

“It is a massive restructuring of wealth,” says Hsiang. This is the most important finding of the last several years of climate economics, he adds. By examining ever smaller regions, you can see “the incredible winners and losers.” Many in the climate community have been reluctant to talk about such findings, he says. “But we have to look [the inequality] right in the eye.”

The social cost of carbon is typically calculated as a single global number. That makes sense, since the damage of a ton of carbon emitted in one place is spread throughout the world. But last year Katharine Ricke, a climate scientist at UC San Diego and the Scripps Institution of Oceanography, published the social costs of carbon for specific countries to help parse out regional differences.

India is the big loser. Not only does it have a fast-growing economy that will be slowed, but it’s already a hot country that will suffer greatly from getting even hotter. “India bears a huge share of the global social cost of carbon—more than 20%,” says Ricke. It also stands out for how little it has actually contributed to the world’s carbon emissions. “It’s a serious equity issue,” she says.

Estimating the global social cost of carbon also raises a vexing question: How do you put a value on future damages? We should invest now to help our children and grandchildren avoid suffering, but how much? This is hotly and often angrily debated among economists.

A standard tool in economics is the discount rate, used to calculate how much we should invest now for a payoff years from now. The higher the discount rate, the less you value the future benefit. William Nordhaus, who won the 2018 Nobel Prize in economics for pioneering the use of models to show the macroeconomic effects of climate change, has used a discount rate of around 4%. The relatively high rate suggests we should invest conservatively now. In sharp contrast, a landmark 2006 report by British economist Nicholas Stern used a discount rate of 1.4%, concluding that we should begin investing much more heavily to slow climate change. 
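The gulf between Nordhaus's and Stern's conclusions comes almost entirely from compounding these rates over long horizons; a minimal sketch of the standard present-value formula:

```python
def present_value(future_damage, rate, years):
    """Discount a damage occurring `years` from now back to today's dollars."""
    return future_damage / (1 + rate) ** years

# $100 of climate damage occurring a century from now, under each discount rate:
print(f"at 4%   (Nordhaus): ${present_value(100, 0.04, 100):.2f}")
print(f"at 1.4% (Stern):    ${present_value(100, 0.014, 100):.2f}")
```

At 4%, a dollar of damage a century out is worth about two cents today; at 1.4%, about twenty-five cents – an order-of-magnitude gap that translates directly into how much mitigation each rate justifies spending on now.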

There’s an ethical dimension to these calculations. Wealthy countries whose prosperity has been built on fossil fuels have an obligation to help poorer countries. The climate winners can’t abandon the losers. Likewise, we owe future generations more than just financial considerations. What’s the value of a world free from the threat of catastrophic climate events—one with healthy and thriving natural ecosystems?

Outrage

Enter the Green New Deal (GND). It’s the sweeping proposal issued earlier this year by Representative Alexandria Ocasio-Cortez and other US progressives to address everything from climate change to inequality. It cites the dangers of temperature increases beyond the UN goal of 1.5 °C and makes a long list of recommendations. Energy experts immediately began to bicker over its details: Is achieving 100% renewables in the next 12 years really feasible? (Probably not.) Should it include nuclear power, which many climate activists now argue is essential for reducing emissions?

In reality, the GND has little to say about actual policies and there’s barely a hint of how it will attack its grand challenges, from providing a secure retirement for all to fostering family farms to ensuring access to nature. But that’s not the point. The GND is a cry of outrage against what it calls “the twin crises of climate change and worsening income inequality.” It’s a political attempt to make climate change part of the wider discussion about social justice. And, at least from the perspective of climate policy, it’s right in arguing that we can’t tackle global warming without considering broader social and economic issues.

The work of researchers like Ricke, Hsiang, and Greenstone supports that stance. Not only do their findings show that global warming can worsen inequality and other social ills; they provide evidence that aggressive action is worth it. Last year, researchers at Stanford calculated that limiting warming to 1.5 °C would save upwards of $20 trillion worldwide by the end of the century. Again, the impacts were mixed—the GDPs of some countries would be harmed by aggressive climate action. But the conclusion was overwhelming: more than 90% of the world’s population would benefit. Moreover, the cost of keeping temperature increases limited to 1.5 °C would be dwarfed by the long-term savings.

Nevertheless, the investments will take decades to pay for themselves. Renewables and new clean technologies may lead to a boom in manufacturing and a robust economy, but the Green New Deal is wrong to paper over the financial sacrifices we’ll need to make in the near term.

That is why climate remedies are such a hard sell. We need a global policy—but, as we’re always reminded, all politics is local. Adding 20% to the cost of that San Francisco–Chicago trip might not seem like much, but try to convince a truck driver in a poor county in Florida that raising the price of fuel is wise economic policy. A much smaller increase sparked the gilets jaunes riots in France last winter. That is the dilemma, both political and ethical, that we all face with climate change.

Study: Evolution now accepted by majority of Americans (EurekaAlert!)

News Release 20-Aug-2021

Peer-Reviewed Publication

University of Michigan

The level of public acceptance of evolution in the United States is now solidly above the halfway mark, according to a new study based on a series of national public opinion surveys conducted over the last 35 years.

“From 1985 to 2010, there was a statistical dead heat between acceptance and rejection of evolution,” said lead researcher Jon D. Miller of the Institute for Social Research at the University of Michigan. “But acceptance then surged, becoming the majority position in 2016.”

Examining data over 35 years, the study consistently identified aspects of education—civic science literacy, taking college courses in science and having a college degree—as the strongest factors leading to the acceptance of evolution.

“Almost twice as many Americans held a college degree in 2018 as in 1988,” said co-author Mark Ackerman, a researcher at Michigan Engineering, the U-M School of Information and Michigan Medicine. “It’s hard to earn a college degree without acquiring at least a little respect for the success of science.”

The researchers analyzed a collection of biennial surveys from the National Science Board, several national surveys funded by units of the National Science Foundations, and a series focused on adult civic literacy funded by NASA. Beginning in 1985, these national samples of U.S. adults were asked to agree or disagree with this statement: “Human beings, as we know them today, developed from earlier species of animals.”

The series of surveys showed that Americans were evenly divided on the question of evolution from 1985 to 2007. According to a 2005 study of the acceptance of evolution in 34 developed nations, led by Miller, only Turkey, at 27%, scored lower than the United States. But over the last decade, until 2019, the percentage of American adults who agreed with this statement increased from 40% to 54%.

The current study consistently identified religious fundamentalism as the strongest factor leading to the rejection of evolution. While their numbers declined slightly in the last decade, approximately 30% of Americans continue to be religious fundamentalists as defined in the study. But even those who scored highest on the scale of religious fundamentalism shifted toward acceptance of evolution, rising from 8% in 1988 to 32% in 2019.

Miller predicted that religious fundamentalism would continue to impede the public acceptance of evolution. 

“Such beliefs are not only tenacious but also, increasingly, politicized,” he said, citing a widening gap between Republican and Democratic acceptance of evolution. 

As of 2019, 34% of conservative Republicans accepted evolution compared to 83% of liberal Democrats.

The study is published in the journal Public Understanding of Science.

Besides Miller and Ackerman, the authors are Eugenie Scott and Glenn Branch of the National Center for Science Education; Belén Laspra of the University of Oviedo in Spain; and Carmelo Polino of the University of Oviedo and Centre Redes in Argentina; and Jordan Huffaker of U-M.

Study abstract: Public acceptance of evolution in the United States, 1985-2020 


Wildfires, Heatwaves, And The IPCC Report: Yet Climate Policy Is Losing Steam (Forbes)

forbes.com

Nives Dolsak and Aseem Prakash – Aug 14, 2021,08:29pm EDT


The recent IPCC report is a grim reminder of the seriousness of the climate crisis. The wildfires in the Western United States and Canada, the zombie fires in Siberia, heatwaves in Southern Europe and the Pacific Northwest, and floods in Germany and China should motivate aggressive climate action.

Disasters are supposed to focus policy attention, which political scientist John Kingdon described as opening the "policy window." As "focusing events," drastic weather episodes could create opportunities to enact new climate policies. But, of course, a lot depends on the skill of policy entrepreneurs. As Rahm Emanuel famously noted, politicians should not allow a serious crisis to go to waste.

And yet, climate policy seems to be losing steam. The U.S. Senate has substantially slashed Biden’s proposal for new climate spending. China continues to build coal-fired electricity plants. Brazil has announced a plan to support its coal industry.

And to top it all, Jake Sullivan, U.S. National Security Advisor, is imploring OPEC countries to pump more oil! The White House press release notes: “President Biden has made clear that he wants Americans to have access to affordable and reliable energy, including at the pump.” Yes, one can smell 2022 mid-term elections because Democrats do not want to be held responsible for high gas prices, a highly emotive pocketbook issue. However, these statements cause enormous policy confusion about Biden’s commitment to making tough choices on climate issues. If zero emissions are to be achieved by 2050, the White House should allow the prices to rise. Moreover, if Biden supports increasing oil supply abroad, why is he opposing it in the U.S., as Texas Governor Greg Abbott noted?

Models of Policy Change

There are different pathways to policy change. The “information deficit” model suggests that policy change is hampered when policy elites do not have sufficient information. Once these elites are “educated” and there is an “epistemic consensus,” policy change takes place. With easy accessibility to well-written and carefully crafted IPCC reports, it is difficult to accept that policy elites lack information about climate change. Perhaps, what is taking place is “motivated reasoning”: individuals seek information that coheres with their prior beliefs and leads them to their desired conclusions. This means that policy elites are not empty vessels waiting to be nourished by the nectar of new knowledge. Instead, they seek information that they want to hear. Information deficit explanations do not work well in highly polarized political contexts.

Political explanations begin with the premise that most policy institutions favor the status quo. This is partly due to the institutional design (such as the Senate Filibuster) that many democracies deliberately adopt to prevent concentration of power. But sometimes, dramatic events can shatter the status quo, as elites begin to rethink their priorities. If political entrepreneurs can stitch together a coalition, policy change can happen. And sometimes, even without policy windows opening up, these entrepreneurs can create policies that can appeal to multiple constituencies. After all, Baptists and Bootleggers came together to push for prohibition. Politics, rather than the lack of scientific information, is probably leading to policy sluggishness.  

Why is Climate Policy Stalling?

Additional issues are also contributing to climate policy lethargy. Humans have a limited attention span. Climate issues are getting neglected because the policy space is getting crowded by new and sensational non-climate issues. The Taliban's rapid advance in Afghanistan is stunning, and its aftermath is most disturbing. Western countries are in panic mode to evacuate embassies in a "Saigon-type" exit from Kabul. The Afghanistan crisis is creating a new wave of refugees seeking safety in Europe, abetting a nationalist backlash. The debate on "who lost Afghanistan" will probably dominate the U.S. policy discourse with the usual blame game.

Closer to home, the resurgence of COVID and the debate about masks and vaccines are igniting political passions. School and college reopening controversy will probably take a chunk of policy space and attention span.

Other dramatic issues will make demands on the attention span as well: crime waves in many cities (the top issue in the New York Mayoral race), the Cuomo scandal, and Newsom’s recall.

Is there Hope on the Climate Front?

The good news is that the renewable energy industry is growing despite the COVID-induced recession. A key reason is that the prices of both solar and wind are now competitive with coal. This means that electric utilities will deploy their political muscle to get favorable renewable policies at the state level. For example, the legislature in a Red state such as Indiana has prohibited county governments from using zoning ordinances against renewable energy.

The automobile industry seems to be pushing EVs as well. Although the Senate's $1.2 trillion infrastructure plan has provided only $7.5 billion for EV charging stations (as opposed to the $15 billion Biden had asked for), the automobile industry and electric utilities (with their massive new investments in renewables) are now getting locked into a new technological trajectory. This means that they have strong incentives to create a national charging station network.

Although the federal government may be underperforming on climate issues, the private sector has embraced them. Wall Street also seems to be keeping pace with Main Street and Silicon Valley. Of course, one might view industry's newfound love for Environmental-Social-Governance (ESG) issues as hype, simply replacing Corporate Social Responsibility (CSR). It remains to be seen if climate leaders such as BlackRock can bring about measurable change in corporate policies on climate issues.

In sum, the climate policy optimism of the first 100 days of the Biden administration seems to be wearing off. This is disturbing because Republicans are expected to retake the House (and possibly the Senate as well) in the 2022 midterm elections. Thus, the window of opportunity to enact aggressive federal climate policy is slowly closing. Climate policy requires vigorous political entrepreneurship to bring about policy change in the next 12 months.

Climate, gender and intersectionality (Clima Info)

climainfo.org.br


By Tatiane Matheus*

Planet Earth is living through a climate emergency, and the need for solutions and action has become even more urgent with the global crisis caused by the COVID-19 pandemic. A new green social, political and economic pact needs to be debated, and many things must be put into practice to prevent an even greater collapse.

Women are not (and should not be seen as) victims, nor heroines. Rather, they are among the groups most vulnerable to the climate emergency, in different respects, and not only for reasons of gender. Race, ethnicity, social class and region, for example, can cause these impacts to be experienced in distinct ways.

Even though they represent half of the world’s population and are the most affected by the effects of global warming, women do not have proportional representation in the main spheres of decision-making; nor are they included in the potential solutions from which they stand to benefit.

Even in the jobs created by investments in the Green Economy – the sectors that help reduce the effects of the climate emergency – women lack equality. Important decision-making arenas, such as the UN Climate Conferences (COPs), also lack due gender balance in their leadership positions, as reported at the end of last year: “Women call for gender equality in the leadership of COP26.”

The Gender & Climate Working Group of the Observatório do Clima likewise understands that women’s lives are significantly affected by climate change, and that many of these problems are amplified by structural injustices related to gender.

Intersectionality is the study of overlapping or intersecting social identities and the related systems of oppression, domination or discrimination; it allows us to better understand the inequalities in our society and how these oppressions and discriminations compound one another. Taking intersection into account, Indigenous women, quilombola women, Black women, women from the urban peripheries, farmers, single mothers and female heads of household are each affected in distinct ways. Even in trying to answer the question of why scientific output by women fell so sharply during the pandemic, one of the likely answers is that the sexual division of domestic and care work in our society ends up falling on women.

“There is no hierarchy of oppression, as we have learned. Identities leap out to Western eyes, but intersectionality is about what we will do politically with the matrix of oppression responsible for producing differences, once we have seen them as identities. Once within the flow of these structures, the dynamism of identity produces new ways of living, thinking and feeling, which may be subsumed under certain insurgent identities, resignified by oppression,” explains Carla Akotirene, who holds a doctorate in Interdisciplinary Studies of Gender, Women and Feminisms from the Universidade Federal da Bahia, in her book Interseccionalidade.

An emblematic example of how many important questions can become “imperceptible” without an intersectional lens is the 1851 speech by the intellectual Sojourner Truth, “Ain’t I a Woman?”, delivered at a Women’s Rights convention, in which she questioned the concept of the “universal woman” from her standpoint as a formerly enslaved woman. To pursue an inclusive green recovery – a new social and economic pact that is truly inclusive – the concept of intersectionality must be taken into account, so that many people can be brought out of invisibility.

According to the United Nations Development Programme (UNDP), poor countries suffer the immediate consequences of climate change most severely because of pre-existing unfavorable conditions. Although every region of the planet is being affected, the damage will be greater for those with greater socioeconomic vulnerability and disadvantageous geographic locations.

According to European Parliament estimates, 70% of the 1.3 billion people living in poverty worldwide are women. Yet the UN Women report (2020) shows that only five of the 75 member states of the United Nations recognized gender considerations as important for responding to climate-related security risks. The pandemic caused by the novel coronavirus (Sars-CoV-2) exposed many social differences that were already obvious but went unseen by many – perhaps because they did not want to see them, or because these differences had been naturalized into invisibility.

As pointed out in the article “Why economic investment in ‘green sectors’ alone is not enough,” reducing the structural inequalities of gender and race present in Brazil and bringing sustainable development to an inclusive green recovery requires: debating the issue and properly naming cases of environmental racism, lack of gender equity and other inequalities; coordinating actions that pursue macroeconomic and development policies, as well as industrial and sectoral policies that take social, environmental and climate dimensions into account; supporting micro and small businesses so they can operate in the new sectors of the low-carbon economy; developing professional skills and competencies; prioritizing occupational health and safety; expanding the provision of social protection; defending universal rights and public services; and creating public policies and actions that guarantee the Fundamental Rights of Labor.

______

*Tatiane Matheus is a researcher at ClimaInfo and a member of the Gender & Climate Working Group of the Observatório do Clima.

ClimaInfo, March 8, 2021.

How the world already prevented far worse warming this century (MIT Technology Review)

technologyreview.com

The Montreal Protocol was designed to heal the ozone layer. It may have also fended off several degrees of warming—and a collapse of forests and croplands.

James Temple – August 18, 2021


The world has already banded together to enact an international treaty that prevented significant global warming this century—even though that wasn’t the driving goal.

In 1987, dozens of nations adopted the Montreal Protocol, agreeing to phase out the use of chlorofluorocarbons and other chemicals used in refrigerants, solvents, and other industrial products that were breaking down Earth’s protective ozone layer.

It was a landmark achievement, the most successful example of nations pulling together in the face of a complex, collective threat to the environment. Three decades later, the atmospheric ozone layer is slowly recovering, preventing additional levels of ultraviolet radiation that cause cancer, eye damage, and other health problems.

But the virtues of the agreement, ultimately ratified by every country, are more widespread than its impact on the ozone hole. Many of those chemicals are also powerful greenhouse gases. So as a major side benefit, their reduction over the last three decades has already eased warming and could cut as much as 1 ˚C off worldwide average temperatures by 2050.

Now, a new study in Nature highlights yet another crucial, if inadvertent, bonus: reducing the strain that ultraviolet radiation from the sun puts on plants, inhibiting photosynthesis and slowing growth. The Montreal Protocol avoided “a catastrophic collapse of forests and croplands” that would have added hundreds of billions of tons of carbon to the atmosphere, Anna Harper, a senior lecturer in climate science at the University of Exeter and a coauthor of the paper, said in an email.

The Nature paper, published August 18, found that if production of ozone-depleting substances had continued ticking up 3% each year, the additional UV radiation would have curtailed the growth of trees, grasses, ferns, flowers, and crops across the globe.

The world’s plants would absorb less carbon dioxide, releasing as much as 645 billion tons of carbon from the land to the atmosphere this century. That could drive global warming up to 1 ˚C higher over the same period. It would also have devastating effects on agricultural yields and food supplies around the globe.

The impact of rising CFC levels on plants, plus their direct warming effect in the atmosphere, could have pushed temperatures around 2.5 ˚C higher this century, the researchers found. That would all come on top of the already dire warming projections for 2100.

“While it was originally intended as an ozone protection treaty, the Montreal Protocol has been a very successful climate treaty,” says Paul Young, a climate scientist at Lancaster University and another author of the paper.

All of which poses a question: Why can’t the world enact a similarly aggressive and effective international treaty designed explicitly to address climate change? At least some scholars think there are crucial but largely overlooked lessons in the success of the Montreal Protocol, which are becoming newly relevant as global warming accelerates and the next UN climate conference approaches.

A fresh look

At this point, the planet will continue warming for the next several decades no matter what, as the dire UN climate report warned last week. But how much worse it gets still depends heavily on how aggressively the world cuts climate pollution in the coming decades.

To date, nations have failed, both through the Kyoto Treaty and the Paris climate accord, to pull together an agreement with sufficiently ambitious and binding commitments to phase out greenhouse-gas emissions. Countries will assemble at the next UN conference in Glasgow in early November, with the explicit goal of stepping up those targets under the Paris agreement.

Scholars have written lengthy papers and entire books examining lessons from the Montreal Protocol, and the commonalities and differences between the respective efforts on CFCs and greenhouse gases.

A common view is that the relevance is limited. CFCs were a far simpler problem to solve because they were produced by a single sector—mostly by a few major companies like DuPont—and used in a limited set of applications.

On the other hand, nearly every component of every sector of every nation pumps out greenhouse gases. Fossil fuels are the energy source that drives the global economy, and most of our machines and physical infrastructure are designed around them.

But Edward Parson, a professor of environmental law at the University of California, Los Angeles, says it’s time to take a fresh look at the lessons from the Montreal Protocol.

That’s because as the dangers of climate change become more evident and dire, more and more countries are pushing for stricter rules, and companies are increasingly approaching the stage that those like DuPont did: switching from steadfastly disputing the scientific findings to grudgingly accepting that new rules were inevitable, so they had better figure out how to operate and profit under them.

In other words, we’re reaching a point where enacting more prescriptive rules may be feasible, so it’s crucial to use the opportunity to create effective ones.

Strict rules, consistently enforced

Parson is the author of Protecting the Ozone Layer: Science and Strategy, an in-depth history of the Montreal Protocol published in 2003. He stresses that phasing out ozone-depleting compounds was a more complex problem than is often appreciated, because a sizable fraction of the worldwide economy relied on them in one way or another.

He adds that one of the most persistent misunderstandings about the deal is the notion that the industry had already developed commercially comparable alternative products and therefore was more willing to go along with the agreement in the end.

On the contrary, the development of alternatives happened after the regulations were in place. Rapid innovation continued as the rules tightened, and industry, experts, and technical bodies hashed out how much progress could be achieved and how quickly. That produced ever more and better alternatives “in a repeated positive feedback,” Parson says.

To be sure, the prospect of lucrative new markets also helped.

“DuPont’s decision to support a CFC ban was based on a belief that it could obtain a significant competitive advantage through the sale of new chemical substitutes because of its proven research and development capabilities to develop chemicals, its (limited) progress already made in developing substitutes and the potential for higher profits in selling new speciality chemicals,” a pair of MIT researchers wrote in an analysis in the late 1990s.

All of this suggests the world shouldn’t wait around for innovations that will make it cheaper and easier to address climate change. Countries need to implement rules that increasingly ratchet down emissions, forcing industries to figure out cleaner ways of generating energy, growing food, producing products, and moving things and people around the world.

Another lesson is to adopt sector-wide rules that force all companies in all countries to abide by the same regulations, avoiding the so-called free-rider problem. This could be especially key for high-emitting companies with stiff international competition. For steel, cement, and other industrial sectors, developing and switching to new products will almost inevitably increase costs at first.

Still, Parson says, there are limits to the comparisons here. The oil and gas sector isn’t in the same position as DuPont, able to reengineer substitutable products and largely keep its businesses and markets intact.

The fossil-fuel sector is certainly making the case that it can carry on in climate-friendly ways, talking up means of capturing emissions from power plants, balancing out pollution through reforestation projects and other sorts of offsets, or sucking carbon out of the atmosphere.

But as studies and articles continually show, it’s difficult to ensure that companies are doing these things in reliable, verifiable, long-lasting, and credible ways. Those tensions are likely to continue complicating international efforts to enact the firm rules required and ensure we’re making the progress that we must.

Still, the Montreal Protocol offers a reminder that international rules binding the global behavior of companies and regulating their products do work, if strictly and consistently enforced. Companies will adapt to survive—even to thrive.