Monthly archive: September 2021

A theory of my own mind (AEON)

Knowing the content of one’s own mind might seem straightforward but in fact it’s much more like mindreading other people

Tokyo, 1996. Photo by Harry Gruyaert/Magnum

Stephen M Fleming is professor of cognitive neuroscience at University College London, where he leads the Metacognition Group. He is author of Know Thyself: The Science of Self-awareness (2021). Edited by Pam Weintraub

23 September 2021

In 1978, David Premack and Guy Woodruff published a paper that would go on to become famous in the world of academic psychology. Its title posed a simple question: does the chimpanzee have a theory of mind?

In coining the term ‘theory of mind’, Premack and Woodruff were referring to the ability to keep track of what someone else thinks, feels or knows, even if this is not immediately obvious from their behaviour. We use theory of mind when checking whether our colleagues have noticed us zoning out on a Zoom call – did they just see that? A defining feature of theory of mind is that it entails second-order representations, which might or might not be true. I might think that someone else thinks that I was not paying attention but, actually, they might not be thinking that at all. And the success or failure of theory of mind often turns on an ability to appropriately represent another person’s outlook on a situation. For instance, I can text my wife and say: ‘I’m on my way,’ and she will know that by this I mean that I’m on my way to collect our son from nursery, not on my way home, to the zoo, or to Mars. Sometimes this can be difficult to do, as captured by a New Yorker cartoon caption of a couple at loggerheads: ‘Of course I care about how you imagined I thought you perceived I wanted you to feel.’

Premack and Woodruff’s article sparked a deluge of innovative research into the origins of theory of mind. We now know that a fluency in reading minds is not something humans are born with, nor is it something guaranteed to emerge in development. In one classic experiment, children were told stories such as the following:

Maxi has put his chocolate in the cupboard. While Maxi is away, his mother moves the chocolate from the cupboard to the drawer. When Maxi comes back, where will he look for the chocolate?

Until the age of four, children often fail this test, saying that Maxi will look for the chocolate where it actually is (the drawer), rather than where he thinks it is (in the cupboard). They are using their knowledge of reality to answer the question, rather than what they know about where Maxi put the chocolate before he left. Autistic children also tend to give the wrong answer, suggesting problems with tracking the mental states of others. This test is known as a ‘false belief’ test – passing it requires one to realise that Maxi has a different (and false) belief about the world.

Many researchers now believe that the answer to Premack and Woodruff’s question is, in part, ‘no’ – suggesting that fully fledged theory of mind might be unique to humans. If chimpanzees are given an ape equivalent of the Maxi test, they don’t use the fact that another chimpanzee has a false belief about the location of the food to sneak in and grab it. Chimpanzees can track knowledge states – for instance, being aware of what others see or do not see, and knowing that, when someone is blindfolded, they won’t be able to catch them stealing food. There is also evidence that they track the difference between true and false beliefs in the pattern of their eye movements, similar to findings in human infants. Dogs also have similarly sophisticated perspective-taking abilities, preferring to choose toys that are in their owner’s line of sight when asked to fetch. But so far, at least, only adult humans have been found to act on an understanding that other minds can hold different beliefs about the world to their own.

Research on theory of mind has rapidly become a cornerstone of modern psychology. But there is an underappreciated aspect of Premack and Woodruff’s paper that is only now causing ripples in the pond of psychological science. Theory of mind as it was originally defined identified a capacity to impute mental states not only to others but also to ourselves. The implication is that thinking about others is just one manifestation of a rich – and perhaps much broader – capacity to build what philosophers call metarepresentations, or representations of representations. When I wonder whether you know that it’s raining, and that our plans need to change, I am metarepresenting the state of your knowledge about the weather.

Intriguingly, metarepresentations are – at least in theory – symmetric with respect to self and other: I can think about your mind, and I can think about my own mind too. The field of metacognition research, which is what my lab at University College London works on, is interested in the latter – people’s judgments about their own cognitive processes. The beguiling question, then – and one we don’t yet have an answer to – is whether these two types of ‘meta’ are related. A potential symmetry between self-knowledge and other-knowledge – and the idea that humans, in some sense, have learned to turn theory of mind on themselves – remains largely an elegant hypothesis. But an answer to this question has profound consequences. If self-awareness is ‘just’ theory of mind directed at ourselves, perhaps it is less special than we like to believe. And if we learn about ourselves in the same way as we learn about others, perhaps we can also learn to know ourselves better.

A common view is that self-knowledge is special, and immune to error, because it is gained through introspection – literally, ‘looking within’. While we might be mistaken about things we perceive in the outside world (such as thinking a bird is a plane), it seems odd to say that we are wrong about our own minds. If I think that I’m feeling sad or anxious, then there is a sense in which I am feeling sad or anxious. We have untrammelled access to our own minds, so the argument goes, and this immediacy of introspection means that we are rarely wrong about ourselves.

This is known as the ‘privileged access’ view of self-knowledge, and has been dominant in philosophy in various guises for much of the 20th century. René Descartes relied on self-reflection in this way to reach his conclusion ‘I think, therefore I am,’ noting along the way that: ‘I know clearly that there is nothing that can be perceived by me more easily or more clearly than my own mind.’

An alternative view suggests that we infer what we think or believe from a variety of cues – just as we infer what others think or feel from observing their behaviour. This suggests that self-knowledge is not as immediate as it seems. For instance, I might infer that I am anxious about an upcoming presentation because my heart is racing and my breathing is heavier. But I might be wrong about this – perhaps I am just feeling excited. This kind of psychological reframing is often used by sports coaches to help athletes maintain composure under pressure.

The philosopher most often associated with the inferential view is Gilbert Ryle, who proposed in The Concept of Mind (1949) that we gain self-knowledge by applying the tools we use to understand other minds to ourselves: ‘The sorts of things that I can find out about myself are the same as the sorts of things that I can find out about other people, and the methods of finding them out are much the same.’ Ryle’s idea is neatly summarised by another New Yorker cartoon in which a husband says to his wife: ‘How should I know what I’m thinking? I’m not a mind reader.’

Many philosophers since Ryle have dismissed the strong inferential view as somewhat crazy, writing it off before it could even get going. The philosopher Quassim Cassam, author of Self-knowledge for Humans (2014), describes the situation:

Philosophers who defend inferentialism – Ryle is usually mentioned in this context – are then berated for defending a patently absurd view. The assumption that intentional self-knowledge is normally immediate … is rarely defended; it’s just seen as obviously correct.

But if we take a longer view of history, the idea that we have some sort of special, direct access to our minds is the exception, rather than the rule. For the ancient Greeks, self-knowledge was not all-encompassing, but a work in progress, and something to be striven toward, as captured by the exhortation to ‘know thyself’ carved on the Temple of Apollo at Delphi. The implication is that most of us don’t know ourselves very well. This view persisted into medieval religious traditions: the Italian priest and philosopher Saint Thomas Aquinas suggested that, while God knows himself by default, we need to put in time and effort to know our own minds. And a similar notion of striving toward self-awareness is found in Eastern traditions, with the founder of Chinese Taoism, Lao Tzu, endorsing the same goal: ‘To know that one does not know is best; not to know but to believe that one knows is a disease.’

Self-awareness is something that can be cultivated

Other aspects of the mind – most famously, perception – also appear to operate on the principles of (often unconscious) inference. The idea is that the brain isn’t directly in touch with the outside world (it’s locked up in a dark skull, after all) – and instead has to ‘infer’ what is really out there by constructing and updating an internal model of the environment, based on noisy sensory data. For instance, you might know that your friend owns a Labrador, and so you expect to see a dog when you walk into her house, but don’t know exactly where in your visual field the dog will appear. This higher-level expectation – the spatially invariant concept of ‘dog’ – provides the relevant context for lower levels of the visual system to easily interpret dog-shaped blurs that rush toward you as you open the door.
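
To make the inferential picture concrete, here is a minimal sketch (my own illustration, not taken from the essay) of the standard Bayesian arithmetic that ‘perception as inference’ models rely on: a prior expectation is combined with a noisy observation, and the resulting estimate is a precision-weighted compromise between the two. All numbers are invented for illustration.

```python
# A toy sketch of 'perception as inference' (illustrative only, not from the essay):
# a Gaussian prior expectation is combined with one noisy Gaussian observation,
# and the posterior estimate is a precision-weighted average of the two.

def combine(prior_mean, prior_sd, obs, obs_sd):
    """Return the posterior mean and sd after seeing one noisy observation."""
    prior_precision = 1.0 / prior_sd ** 2
    obs_precision = 1.0 / obs_sd ** 2
    post_var = 1.0 / (prior_precision + obs_precision)
    post_mean = post_var * (prior_precision * prior_mean + obs_precision * obs)
    return post_mean, post_var ** 0.5

# A strong prior ('my friend owns a Labrador') and a noisy sensory signal:
# the estimate is pulled firmly towards what the observer expected to see.
print(combine(prior_mean=1.0, prior_sd=0.2, obs=2.0, obs_sd=1.0))  # ~ (1.04, 0.20)
```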

Adelson’s checkerboard. Courtesy Wikipedia

Elegant evidence for this perception-as-inference view comes from a range of striking visual illusions. In one called Adelson’s checkerboard, two patches with the same objective luminance are perceived as lighter and darker because the brain assumes that, to reflect the same amount of light, the one in shadow must have started out brighter. Another powerful illusion is the ‘light from above’ effect – we have an automatic tendency to assume that natural light falls from above, whereas uplighting – such as when light from a fire illuminates the side of a cliff – is less common. This can lead the brain to interpret the same image as either bumps or dips in a surface, depending on whether the shadows are consistent with light falling from above. Other classic experiments show that information from one sensory modality, such as sight, can act as a constraint on how we perceive another, such as sound – an illusion used to great effect in ventriloquism. The real skill of ventriloquists is being able to talk without moving the mouth. Once this is achieved, the brains of the audience do the rest, pulling the sound to its next most likely source, the puppet.

These striking illusions are simply clever ways of exposing the workings of a system finely tuned for perceptual inference. And a powerful idea is that self-knowledge relies on similar principles – whereas perceiving the outside world relies on building a model of what is out there, we are also continuously building and updating a similar model of ourselves – our skills, abilities and characteristics. And just as we can sometimes be mistaken about what we perceive, sometimes the model of ourselves can also be wrong.

Let’s see how this might work in practice. If I need to remember something complicated, such as a shopping list, I might judge I will fail unless I write it down somewhere. This is a metacognitive judgment about how good my memory is. And this model can be updated – as I grow older, I might think to myself that my recall is not as good as it used to be (perhaps after experiencing myself forgetting things at the supermarket), and so I lean more heavily on list-writing. In extreme cases, this self-model can become completely decoupled from reality: in functional memory disorders, patients believe their memory is poor (and might worry they have dementia) when it is actually perfectly fine when assessed with objective tests.

We now know from laboratory research that metacognition, just like perception, is also subject to powerful illusions and distortions – lending credence to the inferential view. A standard measure here is whether people’s confidence tracks their performance on simple tests of perception, memory and decision-making. Even in otherwise healthy people, judgments of confidence are subject to systematic illusions – we might feel more confident about our decisions when we act more quickly, even if faster decisions are not associated with greater accuracy. In our research, we have also found surprisingly large and consistent differences between individuals on these measures – one person might have limited insight into how well they are doing from one moment to the next, while another might have good awareness of whether they are likely to be right or wrong.
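
As a rough illustration of what ‘confidence tracking performance’ can mean in practice, here is a toy calculation (my own sketch, not the specific measures, such as meta-d′, used in metacognition research): score how well trial-by-trial confidence ratings discriminate correct from incorrect responses.

```python
# Toy index of metacognitive sensitivity (illustrative only): the probability that
# a randomly chosen correct trial received higher confidence than a randomly
# chosen incorrect one (ties count half) -- a simple type-2 AUROC.

def confidence_auroc(confidence, correct):
    hits = [c for c, ok in zip(confidence, correct) if ok]
    errors = [c for c, ok in zip(confidence, correct) if not ok]
    if not hits or not errors:
        return float("nan")  # undefined without both correct and incorrect trials
    wins = sum(1.0 if h > e else 0.5 if h == e else 0.0 for h in hits for e in errors)
    return wins / (len(hits) * len(errors))

# An observer whose confidence tracks accuracy scores well above chance (0.5):
print(confidence_auroc([0.9, 0.8, 0.7, 0.4, 0.3, 0.2],
                       [True, True, True, False, False, False]))  # -> 1.0
```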

This metacognitive prowess is independent of general cognitive ability, and correlated with differences in the structure and function of the prefrontal and parietal cortex. In turn, people with disease or damage to these brain regions can suffer from what neurologists refer to as anosognosia – literally, the absence of knowing. For instance, in Alzheimer’s disease, patients can suffer a cruel double hit – the disease attacks not only brain regions supporting memory, but also those involved in metacognition, leaving people unable to understand what they have lost.

This all suggests – more in line with Socrates than Descartes – that self-awareness is something that can be cultivated, that it is not a given, and that it can fail in myriad interesting ways. And it also provides newfound impetus to seek to understand the computations that might support self-awareness. This is where Premack and Woodruff’s more expansive notion of theory of mind might be overdue for another look.

Saying that self-awareness depends on similar machinery to theory of mind is all well and good, but it raises the question – what is this machinery? What do we mean by a ‘model’ of a mind, exactly?

Some intriguing insights come from an unlikely quarter – spatial navigation. In classic studies, the psychologist Edward Tolman realised that the rats running in mazes were building a ‘map’ of the maze, rather than just learning which turns to make when. If the shortest route from a starting point towards the cheese is suddenly blocked, then rats readily take the next quickest route – without having to try all the remaining alternatives. This suggests that they have not just rote-learned the quickest path through the maze, but instead know something about its overall layout.

A few decades later, the neuroscientist John O’Keefe found that cells in the rodent hippocampus encoded this internal knowledge about physical space. Cells that fired in different locations became known as ‘place’ cells. Each place cell would have a preference for a specific position in the maze but, when combined together, could provide an internal ‘map’ or model of the maze as a whole. And then, in the early 2000s, the neuroscientists May-Britt Moser, Edvard Moser and their colleagues in Norway found an additional type of cell – ‘grid’ cells, which fire in multiple locations, in a way that tiles the environment with a hexagonal grid. The idea is that grid cells support a metric, or coordinate system, for space – their firing patterns tell the animal how far it has moved in different directions, a bit like an in-built GPS system.

There is now tantalising evidence that similar types of brain cell also encode abstract conceptual spaces. For instance, if I am thinking about buying a new car, then I might think about how environmentally friendly the car is, and how much it costs. These two properties map out a two-dimensional ‘space’ on which I can place different cars – for instance, a cheap diesel car will occupy one part of the space, and an expensive electric car another part of the space. The idea is that, when I am comparing these different options, my brain is relying on the same kind of systems that I use to navigate through physical space. In one experiment by Timothy Behrens and his team at the University of Oxford, people were asked to imagine morphing images of birds that could have different neck and leg lengths – forming a two-dimensional bird space. A grid-like signature was found in the fMRI data when people were thinking about the birds, even though they never saw them presented in 2D.

Clear overlap between brain activations involved in metacognition and mindreading was observed

So far, these lines of work – on abstract conceptual models of the world, and on how we think about other minds – have remained relatively disconnected, but they are coming together in fascinating ways. For instance, grid-like codes are also found for conceptual maps of the social world – whether other individuals are more or less competent or popular – suggesting that our thoughts about others are derived from an internal model similar to those used to navigate physical space. And one of the brain regions involved in maintaining these models of other minds – the medial prefrontal cortex (PFC) – is also implicated in metacognition about our own beliefs and decisions. For instance, research in my group has discovered that medial prefrontal regions not only track confidence in individual decisions, but also ‘global’ metacognitive estimates of our abilities over longer timescales – exactly the kind of self-estimates that were distorted in the patients with functional memory problems.

Recently, the psychologist Anthony G Vaccaro and I surveyed the accumulating literature on theory of mind and metacognition, and created a brain map that aggregated the patterns of activations reported across multiple papers. Clear overlap between brain activations involved in metacognition and mindreading was observed in the medial PFC. This is what we would expect if there was a common system building models not only about other people, but also of ourselves – and perhaps about ourselves in relation to other people. Tantalisingly, this very same region has been shown to carry grid-like signatures of abstract, conceptual spaces.

At the same time, computational models are being built that can mimic features of both theory of mind and metacognition. These models suggest that a key part of the solution is the learning of second-order parameters – those that encode information about how our minds are working, for instance whether our percepts or memories tend to be more or less accurate. Sometimes, this system can become confused. In work led by the neuroscientist Marco Wittmann at the University of Oxford, people were asked to play a game involving tracking the colour or duration of simple stimuli. They were then given feedback about both their own performance and that of other people. Strikingly, people tended to ‘merge’ their feedback with that of others – if others were performing better, they tended to think they themselves were performing a bit better too, and vice versa. This intertwining of our models of self-performance and other-performance was associated with differences in activity in the dorsomedial PFC. Disrupting activity in this area using transcranial magnetic stimulation (TMS) led to more self-other mergence – suggesting that one function of this brain region is not only to create models of ourselves and others, but also to keep these models apart.

Another implication of a symmetry between metacognition and mindreading is that both abilities should emerge around the same time in childhood. By the time that children become adept at solving false-belief tasks – around the age of four – they are also more likely to engage in self-doubt, and recognise when they themselves were wrong about something. In one study, children were first presented with ‘trick’ objects: a rock that turned out to be a sponge, or a box of Smarties that actually contained not sweets but pencils. When asked what they first thought the object was, three-year-olds said that they knew all along that the rock was a sponge and that the Smarties box was full of pencils. But by the age of five, most children recognised that their first impression of the object was false – they could recognise they had been in error.

Indeed, when Simon Baron-Cohen, Alan Leslie and Uta Frith outlined their influential theory of autism in the 1980s, they proposed that theory of mind was only ‘one of the manifestations of a basic metarepresentational capacity’. The implication is that there should also be noticeable differences in metacognition that are linked to changes in theory of mind. In line with this idea, several recent studies have shown that autistic individuals also show differences in metacognition. And in a recent study of more than 450 people, Elisa van der Plas, a PhD student in my group, has shown that theory of mind ability (measured by people’s ability to track the feelings of characters in simple animations) and metacognition (measured by the degree to which their confidence tracks their task performance) are significantly correlated with each other. People who were better at theory of mind also formed their confidence differently – they were more sensitive to subtle cues, such as their response times, that indicated whether they had made a good or bad decision.

Recognising a symmetry between self-awareness and theory of mind might even help us understand why human self-awareness emerged in the first place. The need to coordinate and collaborate with others in large social groups is likely to have prized the abilities for metacognition and mindreading. The neuroscientist Suzana Herculano-Houzel has proposed that primates have unusually efficient ways of cramming neurons into a given brain volume – meaning there is simply more processing power devoted to so-called higher-order functions – those that, like theory of mind, go above and beyond the maintenance of homeostasis, perception and action. This idea fits with what we know about the areas of the brain involved in theory of mind, which tend to be the most distant in terms of their connections to primary sensory and motor areas.

A symmetry between self-awareness and other-awareness also offers a subversive take on what it means for other agents such as animals and robots to be self-aware. In the film Her (2013), Joaquin Phoenix’s character Theodore falls in love with his virtual assistant, Samantha, who is so human-like that he is convinced she is conscious. If the inferential view of self-awareness is correct, there is a sense in which Theodore’s belief that Samantha is aware is sufficient to make her aware, in his eyes at least. This is not quite true, of course, because the ultimate test is if she is able to also recursively model Theodore’s mind, and create a similar model of herself. But being convincing enough to share an intimate connection with another conscious agent (as Theodore does with Samantha), replete with mindreading and reciprocal modelling, might be possible only if both agents have similar recursive capabilities firmly in place. In other words, attributing awareness to ourselves and to others might be what makes them, and us, conscious.

A simple route for improving self-awareness is to take a third-person perspective on ourselves

Finally, a symmetry between self-awareness and other-awareness also suggests novel routes towards boosting our own self-awareness. In a clever experiment conducted by the psychologists and metacognition experts Rakefet Ackerman and Asher Koriat in Israel, students were asked to judge both how well they had learned a topic, and how well other students had learned the same material, by watching a video of them studying. When judging themselves, they fell into a trap – they believed that spending less time studying was a signal of being confident in knowing the material. But when judging others, this relationship was reversed: they (correctly) judged that spending longer on a topic would lead to better learning. These results suggest that a simple route for improving self-awareness is to take a third-person perspective on ourselves. In a similar way, literary novels (and soap operas) encourage us to think about the minds of others, and in turn might shed light on our own lives.

There is still much to learn about the relationship between theory of mind and metacognition. Most current research on metacognition focuses on the ability to think about our experiences and mental states – such as being confident in what we see or hear. But this aspect of metacognition might be distinct from how we come to know our own, or others’, character and preferences – aspects that are often the focus of research on theory of mind. New and creative experiments will be needed to cross this divide. But it seems safe to say that Descartes’s classical notion of introspection is increasingly at odds with what we know of how the brain works. Instead, our knowledge of ourselves is (meta)knowledge like any other – hard-won, and always subject to revision. Realising this is perhaps particularly useful in an online world deluged with information and opinion, when it’s often hard to gain a check and balance on what we think and believe. In such situations, the benefits of accurate metacognition are myriad – helping us recognise our faults and collaborate effectively with others. As the poet Robert Burns tells us:

O wad some Power the giftie gie us
To see oursels as ithers see us!
It wad frae mony a blunder free us…

(Oh, would some Power give us the gift
To see ourselves as others see us!
It would from many a blunder free us…)

Is There a Secularocene? (Political Theology Network)

A Snapshot of Sea Ice by NASA Goddard Space Flight Center CC BY-NC 2.0

By Mohamad Amer Meziane – September 17, 2021


Why is secularization never connected to climate change? And why is climate change not connected to secularization? If modernity is the Anthropocene and if secularization is a defining feature of modernity’s birth, then it is natural to ask: did secularization engender climate change?

I aim to open a new space in the study of both secularism and the Anthropocene, of religion and climate change. Further, I aim to create a philosophical bridge between influential currents in anthropology and the humanities. I build this bridge through the critique of Orientalism and the anthropology of secularism and Islam, respectively founded by Edward Said and Talal Asad, on one hand, and the literature on the Anthropocene influenced by scholars such as Donna Haraway and Bruno Latour, on the other.

I argue that secularization should be re-conceptualized not only as an imperial and racial but also as an ecological set of processes.

My perspective stems from a philosophical engagement with both the project and the concept of secularization. It therefore presupposes a critical understanding of what has been called ‘the secular’ as a name given to the result of the destruction of nature: the transformation of the earth itself by industrial and colonial powers. I propose an alternative definition of secularization, secularism, and secularity. As I argue fully in my first book, Des empires sous la terre, the Anthropocene is an outcome of secularization understood as a set of processes engendered by the imperial relations of power between Europe and the rest of the world.

Thinking Through the Secularocene

What is secularization? Neither a supposed decline of religion nor a simple continuation of Christianity by other means, secularization should be seen as a transformation of the earth itself by virtue of its connection with fossil empires and capitalism.

This perspective differs from scholars who have been engaged in criticizing the idea of secularization as a mythology of progress and privatization – a mythology that 9/11 proved false. I argue that the concept of secularization should be redefined instead of being dissolved. It is only if one presupposes that secularization is reducible to the privatization of religion that the existence of political religion can be construed as testifying against the reality of secularization. When one opposes the permanence of religion or of Christianity to the reality of secularization, one is in fact reactivating the secularization thesis in its primitive, Hegelian version (developed by Marcel Gauchet) – that modernity is the secular realization of Christianity on earth – and, therefore, of all religions in the world.

In other words, before it can be seen as a process, secularization should be approached as an order which articulates philosophy and politics, discourse and practices throughout the 19th century in Western Europe. Secularization is the order which claims that the other-worldliness of religion and the divine must be abolished by virtue of its realization in this world. The first instance of this demand is Hegel’s absolute knowledge and his interpretation of the French Revolution as the realization of heaven on earth. The so-called ‘end of history’ is indeed the accomplishment of a secularizing process by which the divine becomes the institution of freedom through the modern state.

The first way in which secularization manifests its reality is discursive. As a discourse, it asserts that the modern West must be and therefore is Christianity itself, Christianity as the secular. Before it can become an analytical concept, the concept of secularization formulates a demand: Christianity and religions realize heaven and all forms of transcendence in this world. 

Is the reality of secularization solely discursive? No. The reality of the secular is the earth itself as it is transformed by industrial capitalism. This redefinition of the secular and of secularization allows us to think alternatively about this ‘global’ event called climate change. I argue that the Anthropocene should be seen as an effect of secularization, and that one might use the word Secularocene to describe this dimension of ‘colonial modernity.’

How did secularization lead to climate change, one might ask? By authorizing the extraction of coal through expropriating lands that belonged to the Church, and dismissing the reality of demons in the underground as superstitious, secularization allowed fossil industrialism to transform the planet. For this reason, secularization should be seen as a crucial aspect of what Marx calls the primitive accumulation of capital: an extra-economic process of expropriation structured by state violence deploying itself through racial, gender, class, and religious hierarchies.

The critique of secularism is more than the critique of a political doctrine demanding the privatization of religion. It is the critique of how the earth itself has been transformed. As such, philosophical secularism refers to an ontology that posits this world as the sole reality. It defines immanence, or earth, as the reality which must be opposed to transcendence, or “heaven”. Contrary to Marx’s famous claim, the critique of heaven is not the condition of all critique. It is part of how capitalism operates. Hence, the critique of heaven has transformed the earth itself through the secularization of both empire and capital.

While genealogy authorizes us to think about the categories of religion and secularity critically, it should be integrated within a larger perspective if we are to rethink secularization by constructing an alternative narrative of its deployment beyond the tropes of religion’s decline. A post-genealogical philosophy of history is a theory, not of progress, but of how the earth has been transformed through imperial and capitalist processes of globalization. The very existence of climate change invites us to think past Foucault’s legacies in postcolonial thought. Beyond genealogy, the hypothesis of the Anthropocene – or of the Secularocene for that matter – might require that we integrate genealogical inquiries into a radically new form of philosophical history. After the genealogy of religion and the secular, a philosophy of global history might help us understand imperial secularization as the birth of the Anthropocene.

By Mohamad Amer Meziane

Mohamad Amer Meziane holds a PhD from the University of Paris 1 Panthéon-Sorbonne. He is currently a Postdoctoral Research Fellow and Lecturer at Columbia University. He is affiliated with the Institute for Religion, Culture, and Public Life, the Institute of African Studies and the Department of Religion.

We’re Finally Catching a Break in the Climate Fight (The Crucial Years/Bill McKibben)

As a new Oxford paper shows, the incredibly rapid fall in the cost of renewables offers hope–but only if movements can push banks and politicians hard enough

Bill McKibben – Sep 19, 2021

This is one of the first solar panels and batteries ever installed, in the state of Georgia in 1955. At the time it was the most expensive power on earth; now it’s the cheapest, and still falling fast.

So far in the global warming era, we’ve caught precious few breaks. Certainly not from physics: the temperature has increased at the alarming pace that scientists predicted thirty years ago, and the effects of that warming have increased even faster than expected. (“Faster Than Expected” is probably the right title for a history of climate change so far; if you’re a connoisseur of disaster, there is already a blog by that name). The Arctic is melting decades ahead of schedule, and the sea rising on an accelerated schedule, and the forest fires of the science fiction future are burning this autumn. And we haven’t caught any breaks from our politics either: it’s moved with the lumbering defensiveness one would expect from a system ruled by inertia and vested interest. And so it is easy, and completely plausible, to despair: we are on the bleeding edge of existential destruction.

But one trend is, finally, breaking in the right direction, and perhaps decisively. The price of renewable energy is now falling nearly as fast as heat and rainfall records, and in the process perhaps offering us one possible way out. The public debate hasn’t caught up to the new reality—Bill Gates, in his recent bestseller on energy and climate, laments the “green premium” that must be paid for clean energy. But he (and virtually every other mainstream energy observer) is already wrong—and they’re all about to be spectacularly wrong, if the latest evidence turns out to be right.

Last Wednesday, a team at Oxford University released a fascinating paper that I haven’t seen covered anywhere. Stirringly titled “Empirically grounded technology forecasts and the energy transition,” it makes the following argument: “compared to continuing with a fossil-fuel-based system, a rapid green energy transition will likely result in overall net savings of many trillions of dollars–even without accounting for climate damages or co-benefits of climate policy.” Short and muscular, the paper begins by pointing out that at the moment most energy technologies, from gas to solar, have converged on a price point of about $100 per megawatt hour. In the case of coal, gas, and oil, however, “after adjusting for inflation, prices now are very similar to what they were 140 years ago, and there is no obvious long-range trend.” Sun, wind, and batteries, however, have dropped exponentially at roughly ten percent a year for three decades. Solar power didn’t exist until the late 1950s; since that time it has dropped in price about three orders of magnitude.

They note that all the forecasts over those years about how fast prices would drop were uniformly wrong, invariably underestimating by almost comic margins the drop in costs for renewable energy. This is a massive problem: “failing to appreciate cost improvement trajectories of renewables relative to fossil fuels not only leads to under-investment in critical emission reduction technologies, it also locks in higher cost energy infrastructure for decades to come.” That is, if economists don’t figure out that solar is going to get steadily cheaper, you’re going to waste big bucks building gas plants designed to last for decades. And indeed we have (and of course the cost of them is not the biggest problem; that would be the destruction of the planet).

Happily, the Oxford team demonstrates that there’s a much easier and more effective way to estimate future costs than the complicated calculations used in the past: basically, if you just figure out the historic rates of fall in the costs of renewable energy, you can project them forward into the future because the learning curve seems to keep on going. In their model, validated by thousands of runs using past data, by far the cheapest path for the future is a very fast transition to renewable energy: if you replace almost all fossil fuel use over the next twenty years, you save tens of trillions of dollars. (They also model the costs of using lots of nuclear power: it’s low in carbon but high in price).
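
As a back-of-the-envelope illustration of the kind of projection the Oxford team describes (my own sketch, not their actual model), extrapolating a roughly ten-per-cent annual cost decline forward looks like this:

```python
# Toy extrapolation of a technology learning curve (illustrative only, not the
# Oxford model): if a cost has fallen ~10% a year historically, project that
# rate forward and compare it with a technology whose cost stays flat.

def projected_cost(cost_today, annual_decline, years):
    """Cost after `years` if the historical exponential decline simply continues."""
    return cost_today * (1.0 - annual_decline) ** years

solar_today = 100.0   # $/MWh, roughly the price point many technologies share today
decline = 0.10        # ~10% per year, the historical rate cited for solar, wind, batteries

for year in (5, 10, 20):
    print(year, round(projected_cost(solar_today, decline, year), 1))
# -> 5 59.0, 10 34.9, 20 12.2: an order-of-magnitude fall in roughly 22 years,
# while a fossil-fuel cost with "no obvious long-range trend" stays near $100/MWh.
```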

To repeat: the cost of fossil fuels is not falling; any technological learning curve for oil and gas is offset by the fact that we’ve already found the easy stuff, and now you must dig deeper. But the more solar and wind power you build, the more the price falls—because the price is only the cost of setting up the equipment, which we get better at all the time. The actual energy arrives every morning when the sun rises. This doesn’t mean it’s a miracle: you have to mine lithium and cobalt, you have to site windmills, and you have to try to do those things with as little damage as possible. But if it’s not a miracle, it’s something like a deus ex machina—and the point is that these machines are cheap.

If we made policy with this fact in mind—if we pushed, as the new $3.5 trillion Senate bill does, for dramatic increases in renewable usage in short order, then we would not only be saving the planet, we’d be saving tons of money. That money would end up in our pockets—but it would be removed from the wallets of people who own oil wells and coal mines, which is precisely why the fossil fuel industry is working so hard to gum up the works, trying to slow down everything from electric cars to induction cooktops and using all their economic and political muscle to prolong the transition. Their economically outmoded system of energy generation can only be saved by political corruption, which sadly is the fossil fuel industry’s remaining specialty. So far the learning curve of their influence-peddling has been steep enough to keep carbon levels climbing.

That’s why we need to pay attention to the only other piece of good news, the only other virtuous thing that’s happened faster than expected. And that’s been the growth of movements to take on the fossil fuel industry and push for change. If those keep growing—if enough of us divest and boycott and vote and march and go to jail—we may be able to push our politicians and our banks hard enough that they actually let us benefit from the remarkable fall in the price of renewable energy. Activists and engineers are often very different kinds of people—but their mostly unconscious alliance offers the only hope of even beginning to catch up with the runaway pace of global warming.

So if you’re a solar engineer working to drop the price of power ten percent a year, don’t you dare leave the lab—the rest of us will chip in to get you pizza and caffeine so you can keep on working. But if you’re not a solar engineer, then see you in the streets (perhaps at October’s ‘People vs Fossil Fuels’ demonstrations in DC). Because you’re the other half of this equation.

Battery-free electronics breakthrough allows devices to run forever without charging (The Independent)

independent.co.uk

Anthony Cuthbertson – Sept. 23, 2021


Researchers have unveiled a ground-breaking system that allows electronic devices to run without batteries for “an infinite lifetime”.

Computer engineers from Northwestern University and Delft University of Technology developed the BFree energy-harvesting technology in order to enable battery-free devices capable of running perpetually with only intermittent energy input.

The same team previously introduced the world’s first battery-free Game Boy last year, which is powered by energy harvested from the user pushing the buttons.

The engineers hope the innovative BFree system will help cut the vast amounts of dead batteries that end up as e-waste in landfills around the world.

It will also allow amateur hobbyists and those within the Maker Movement to create their own battery-free electronic devices.

“Right now, it’s virtually impossible for hobbyists to develop devices with battery-free hardware, so we wanted to democratise our battery-free platform,” said Josiah Hester, an assistant professor of electrical and computer engineering at Northwestern University, who led the research.

“Makers all over the internet are asking how to extend their device’s battery life. They are asking the wrong question. We want them to forget about the battery and instead think about more sustainable ways to generate energy.”

In order to run perpetually with only intermittent energy – for example, when the sun goes behind a cloud and stops powering the device’s solar panel – the BFree system simply pauses the calculations it is running without losing memory or needing to run through a long list of operations before restarting when power returns.
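
The general idea behind this kind of intermittent computing can be sketched as follows (a minimal illustration in ordinary Python, not the BFree toolchain itself; the file name and state layout are invented): program state is checkpointed to non-volatile storage often enough that a power failure costs only the work done since the last checkpoint.

```python
# Toy sketch of checkpoint-based intermittent computing (not the BFree API):
# periodically save program state to non-volatile storage so that, when power
# disappears and later returns, the computation resumes where it left off.

import json
import os

CHECKPOINT = "state.json"  # stands in for non-volatile memory (e.g. FRAM or flash)

def load_state():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"i": 0, "total": 0}  # fresh start if no checkpoint exists yet

def save_state(state):
    with open(CHECKPOINT, "w") as f:  # on real hardware: a small atomic write
        json.dump(state, f)

state = load_state()
while state["i"] < 1_000_000:
    state["total"] += state["i"]   # the 'useful work' of the program
    state["i"] += 1
    if state["i"] % 10_000 == 0:   # checkpoint often enough to survive outages
        save_state(state)          # losing power here costs at most 10,000 steps
print(state["total"])
```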

The technology is part of a new trend known as ubiquitous computing, which aims to make computing available at any time and in any place through smart devices and the Internet of Things (IoT).

The research represents a significant advancement in this field by circumventing the need for a battery, and the associated charging and replacing that batteries require.

“Many people predict that we’re going to have a trillion devices in this IoT,” Dr Hester said.

“That means a trillion dead batteries or 100 million people replacing a dead battery every few minutes. That presents a terrible ecological cost to the environment.

“What we’re doing, instead, is truly giving power to the people. We want everyone to be able to effortlessly program devices in a more sustainable way.”

The research will be presented at the UbiComp 2021 conference on 22 September.

5 Economists Redefining… Everything. Oh Yes, And They’re Women (Forbes)

forbes.com

Avivah Wittenberg-Cox

May 31, 2020, 09:56am EDT


Five female economists.
From top left: Mariana Mazzucato, Carlota Perez, Kate Raworth, Stephanie Kelton, Esther Duflo. 20-first

Few economists become household names. Last century, it was John Maynard Keynes or Milton Friedman. Today, Thomas Piketty has become the economists’ poster-boy. Yet listen to the buzz, and it is five female economists who deserve our attention. They are revolutionising their field by questioning the meaning of everything from ‘value’ and ‘debt’ to ‘growth’ and ‘GDP.’ Esther Duflo, Stephanie Kelton, Mariana Mazzucato, Carlota Perez and Kate Raworth are united in one thing: their amazement at the way economics has been defined and debated to date. Their incredulity is palpable.

It reminds me of many women I’ve seen emerge into power over the past decade. Like Rebecca Henderson, a Management and Strategy professor at Harvard Business School and author of the new Reimagining Capitalism in a World on Fire. “It’s odd to finally make it to the inner circle,” she says, “and discover just how strangely the world is being run.” When women finally make it to the pinnacle of many professions, they often discover a world more wart-covered frog than handsome prince. Like Dorothy in The Wizard of Oz, when they get a glimpse behind the curtain, they discover the machinery of power can be more bluster than substance. As newcomers to the game, they can often see this more clearly than the long-term players. Henderson cites Tom Toro’s cartoon as her mantra. A group in rags sit around a fire with the ruins of civilisation in the background. “Yes, the planet got destroyed,” says a man in a disheveled suit, “but for a beautiful moment in time we created a lot of value for shareholders.”

You get the same sense when you listen to the female economists throwing themselves into the still very male-dominated economics field. A kind of collective ‘you’re kidding me, right?’ These five female economists are letting the secret out – and inviting people to flip the priorities. A growing number are listening – even the Pope (see below).

All question concepts long considered sacrosanct. Here are four messages they share:

Get Over It – Challenge the Orthodoxy

Described as “one of the most forward-thinking economists of our times,” Mariana Mazzucato is foremost among the flame throwers. A professor at University College London and the Founder/Director of the UCL Institute for Innovation and Public Purpose, she asks fundamental questions about how ‘value’ has been defined, who decides what that means, and who gets to measure it. Her TED talk, provocatively titled “What is economic value? And who creates it?”, lays down the gauntlet. “If some people are value creators,” she asks, “what does that make everyone else? The couch potatoes? The value extractors? The value destroyers?” She wants to make economics explicitly serve the people, rather than explain their servitude.

Stephanie Kelton takes on our approach to debt and spoofs the simplistic metaphors, like comparing national income and expenditure to ‘family budgets’ in an attempt to prove how dangerous debt is. In her upcoming book, The Deficit Myth (June 2020), she argues they are not at all similar; what household can print additional money, or set interest rates? Debt should be rebranded as a strategic investment in the future. Deficits can be used in ways good or bad but are themselves a neutral and powerful policy tool. “They can fund unjust wars that destabilize the world and cost millions their lives,” she writes, “or they can be used to sustain life and build a more just economy that works for the many and not just the few.” Like all the economists profiled here, she’s pointing at the mind and the meaning behind the money.

Get Green Growth – Reshaping Growth Beyond GDP

Kate Raworth, a Senior Research Associate at Oxford University’s Environmental Change Institute, is the author of Doughnut Economics. She challenges our obsession with growth, and its outdated measures. The concept of Gross Domestic Product (GDP) was created in the 1930s and is being applied in the 21st century to an economy ten times larger. GDP’s limited scope (e.g. ignoring the value of unpaid labour like housework and parenting, or making no distinction between revenues from weapons or water) has kept us “financially, politically and socially addicted to growth” without integrating its costs on people and planet. She is pushing for new visual maps and metaphors to represent sustainable growth that doesn’t compromise future generations. What this means is moving away from the linear, upward-moving line of ‘progress’ ingrained in us all, to a “regenerative and distributive” model designed to engage everyone and shaped like … a doughnut (food and babies figure prominently in these women’s metaphors).

Carlota Perez doesn’t want to stop or slow growth; she wants to dematerialize it. “Green won’t spread by guilt and fear, we need aspiration and desire,” she says. Her push is towards a redefinition of the ‘good life’ and the need for “smart green growth” to be fuelled by a desire for new, attractive and aspirational lifestyles. Lives will be built on a circular economy that multiplies services and intangibles which offer limitless (and less environmentally harmful) growth. She points to every technological revolution creating new lifestyles. She says we can see it emerging, as it has in the past, among the educated, the wealthy and the young: more services rather than more things, active and creative work, a focus on health and care, a move to solar power, intense use of the internet, a preference for customisation over conformity, renting vs owning, and recycling over waste. As these new lifestyles become widespread, they offer immense opportunities for innovation and new jobs to service them.

Get Good Government – The Strategic Role of the State

All these economists want the state to play a major role. Women understand viscerally how reliant the underdogs of any system are on the inclusivity of the rules of the game. “It shapes the context to create a positive sum game” for both the public and business, says Perez. You need an active state to “tilt the playing field toward social good.” Perez outlines five technological revolutions, starting with the industrial one. She suggests we’re halfway through the fifth, the age of Tech & Information. Studying the repetitive arcs of each revolution enables us to see the opportunity of the extraordinary moment we are in. It’s the moment to shape the future for centuries to come. But she balances economic sustainability with the need for social sustainability, warning that one without the other is asking for trouble.

Mariana Mazzucato challenges governments to be more ambitious. They gain confidence and public trust by remembering and communicating what they are there to do. In her mind that is ensuring the public good. This takes vision and strategy, two ingredients she says are too often sorely lacking. Especially post-COVID, purpose needs to be the driver determining the ‘directionality’ of focus, investments and public/private partnerships. Governments should be using their power – both of investment and procurement – to orient efforts towards the big challenges on our horizon, not just the immediate short-term recovery. They should be putting conditions on the massive financial bailouts they are currently handing out. She points to the contrast in imagination and impact between airline bailouts in Austria and the UK. Austria’s airlines are getting government aid on the condition that they meet agreed emissions targets. The UK is supporting airlines without any conditionality, a huge missed opportunity to move towards larger, broader goals of building a better and greener economy out of the crisis.

Get Real – Beyond the Formulae and Into the Field

All of these economists also argue for getting out of the theories and into the field. They reject the idea of nerdy theoretical calculations done within the confines of a university tower and challenge economists to experiment and test their formulae in the real world.

Esther Duflo, Professor of Poverty Alleviation and Development Economics at MIT, is the major proponent of bringing what is accepted practice in medicine to the field of economics: field trials with randomised control groups. She rails against the billions poured into aid without any actual understanding or measurement of the returns. She gently accuses us of being no better with our 21st century approaches to problems like immunisation, education or malaria than any medieval doctor, throwing money and solutions at things with no idea of their impact. She and her husband, Abhijit Banerjee, have pioneered randomised control trials across hundreds of locations in different countries of the world, winning a Nobel Prize for Economics in 2019 for the insights.

They test, for example, how to get people to use bed nets against malaria. Nets are a highly effective preventive measure but getting people to acquire and use them has been a hard nut to crack. Duflo set up experiments to answer the conundrums: If people have to pay for nets, will they value them more? If they are free, will they use them? If they get them free once, will this discourage future purchases? As it turns out, based on these comparisons, take-up is best if nets are initially given, “people don’t get used to handouts, they get used to nets,” and will buy them – and use them – once they understand their effectiveness. Hence, she concludes, we can target policy and money towards impact.

Mazzucato is also hands-on with a number of governments around the world, including Denmark, the UK, Austria, South Africa and even the Vatican, where she has just signed up for weekly calls contributing to a post-Covid policy. ‘I believe [her vision] can help to think about the future,’ Pope Francis said after reading her book, The Value of Everything: Making and Taking in the Global Economy. No one can accuse her of being stuck in an ivory tower. Like Duflo, she is elbow-deep in creating new answers to seemingly intractable problems.

She warns that we don’t want to go back to normal after Covid-19. Normal was what got us here. Instead, she invites governments to use the crisis to embed ‘directionality’ towards more equitable public good into their recovery strategies and investments. Her approach is to define ambitious ‘missions’ which can focus minds and bring together broad coalitions of stakeholders to create solutions to support them. The original NASA mission to the moon is an obvious precursor model. Why, anyone listening to her comes away thinking, did we forget purpose in our public spending? And why, when so much commercial innovation and profit has grown out of government basic research spending, don’t a greater share of the fruits of success return to promote the greater good?

Economics has long remained a stubbornly male domain and men continue to dominate mainstream thinking. Yet, over time, ideas once considered without value become increasingly visible. The move from outlandish to acceptable to policy is often accelerated by crisis. Emerging from this crisis, five smart economists are offering an innovative range of new ideas about a greener, healthier and more inclusive way forward. Oh, and they happen to be women.

Soon, satellites will be able to watch you everywhere all the time (MIT Technology Review)

Can privacy survive?

Christopher Beam

June 26, 2019


In 2013, police in Grants Pass, Oregon, got a tip that a man named Curtis W. Croft had been illegally growing marijuana in his backyard. So they checked Google Earth. Indeed, the four-month-old satellite image showed neat rows of plants growing on Croft’s property. The cops raided his place and seized 94 plants.

In 2018, Brazilian police in the state of Amapá used real-time satellite imagery to detect a spot where trees had been ripped out of the ground. When they showed up, they discovered that the site was being used to illegally produce charcoal, and arrested eight people in connection with the scheme.

Chinese government officials have denied or downplayed the existence of Uighur reeducation camps in Xinjiang province, portraying them as “vocational schools.” But human rights activists have used satellite imagery to show that many of the “schools” are surrounded by watchtowers and razor wire.

Every year, commercially available satellite images are becoming sharper and are being taken more frequently. In 2008, there were 150 Earth observation satellites in orbit; by now there are 768. Satellite companies don’t offer 24-hour real-time surveillance, but if the hype is to be believed, they’re getting close. Privacy advocates warn that innovation in satellite imagery is outpacing the US government’s (to say nothing of the rest of the world’s) ability to regulate the technology. Unless we impose stricter limits now, they say, one day everyone from ad companies to suspicious spouses to terrorist organizations will have access to tools previously reserved for government spy agencies. Which would mean that at any given moment, anyone could be watching anyone else.

The images keep getting clearer

Commercial satellite imagery is currently in a sweet spot: powerful enough to see a car, but not enough to tell the make and model; collected frequently enough for a farmer to keep tabs on crops’ health, but not so often that people could track the comings and goings of a neighbor. This anonymity is deliberate. US federal regulations limit images taken by commercial satellites to a resolution of 25 centimeters, or about the length of a man’s shoe. (Military spy satellites can capture images far more granular, although just how much more is classified.)

Ever since 2014, when the National Oceanic and Atmospheric Administration (NOAA) relaxed the limit from 50 to 25 cm, that resolution has been fine enough to satisfy most customers. Investors can predict oil supply from the shadows cast inside oil storage tanks. Farmers can monitor flooding to protect their crops. Human rights organizations have tracked the flows of refugees from Myanmar and Syria.

But satellite imagery is improving in a way that investors and businesses will inevitably want to exploit. The imaging company Planet Labs currently maintains 140 satellites, enough to pass over every place on Earth once a day. Maxar, formerly DigitalGlobe, which launched the first commercial Earth observation satellite in 1997, is building a constellation that will be able to revisit spots 15 times a day. BlackSky Global promises to revisit most major cities up to 70 times a day. That might not be enough to track an individual’s every move, but it would show what times of day someone’s car is typically in the driveway, for instance.

Some companies are even offering live video from space. As early as 2014, a Silicon Valley startup called SkyBox (later renamed Terra Bella and purchased by Google and then Planet) began touting HD video clips up to 90 seconds long. And a company called EarthNow says it will offer “continuous real-time” monitoring “with a delay as short as about one second,” though some think it is overstating its abilities. Everyone is trying to get closer to a “living map,” says Charlie Loyd of Mapbox, which creates custom maps for companies like Snapchat and the Weather Channel. But it won’t arrive tomorrow, or the next day: “We’re an extremely long way from high-res, full-time video of the Earth.”

Some of the most radical developments in Earth observation involve not traditional photography but rather radar sensing and hyperspectral images, which capture electromagnetic wavelengths outside the visible spectrum. Clouds can hide the ground in visible light, but satellites can penetrate them using synthetic aperture radar, which emits a signal that bounces off the sensed object and back to the satellite. It can determine the height of an object down to a millimeter. NASA has used synthetic aperture radar since the 1970s, but the fact that the US approved it for commercial use only last year is testament to its power—and political sensitivity. (In 1978, military officials supposedly blocked the release of radar satellite images that revealed the location of American nuclear submarines.)

Meanwhile, farmers can use hyperspectral sensing to tell where a crop is in its growth cycle, and geologists can use it to detect the texture of rock that might be favorable to excavation. But it could also be used, whether by military agencies or terrorists, to identify underground bunkers or nuclear materials. 

The resolution of commercially available imagery, too, is likely to improve further. NOAA’s 25-centimeter cap will come under pressure as competition from international satellite companies increases. And even if it doesn’t, there’s nothing to stop, say, a Chinese company from capturing and selling 10 cm images to American customers. “Other companies internationally are going to start providing higher-resolution imagery than we legally allow,” says Therese Jones, senior director of policy for the Satellite Industry Association. “Our companies would want to push the limit down as far as they possibly could.”

What will make the imagery even more powerful is the ability to process it in large quantities. Analytics companies like Orbital Insight and SpaceKnow feed visual data into algorithms designed to let anyone with an internet connection understand the pictures en masse. Investors use this analysis to, for example, estimate the true GDP of China’s Guangdong province on the basis of the light it emits at night. But burglars could also scan a city to determine which families are out of town most often and for how long.
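
To make that concrete, analyses of this kind usually boil each image down to a simple regional statistic and then track it over time. The snippet below is a minimal sketch of that approach, using synthetic stand-in data in place of real night-light rasters; the array names, the region mask and the trend fit are illustrative assumptions, not any company's actual pipeline.

```python
import numpy as np

# Hypothetical inputs: a stack of monthly night-light composites (months x rows x cols)
# and a boolean mask marking the pixels inside the region of interest. In practice these
# would come from a nighttime-lights product (e.g. VIIRS rasters), not random numbers.
rng = np.random.default_rng(0)
monthly_radiance = rng.gamma(shape=2.0, scale=5.0, size=(24, 400, 400))  # stand-in data
region_mask = np.zeros((400, 400), dtype=bool)
region_mask[100:300, 150:350] = True

# Sum of lit-pixel radiance inside the region, month by month: a crude activity index.
activity_index = monthly_radiance[:, region_mask].sum(axis=1)

# A simple least-squares slope over time gives the direction of the trend, which an
# analyst would then compare against reported economic indicators for the same region.
months = np.arange(len(activity_index))
slope, intercept = np.polyfit(months, activity_index, deg=1)
print(f"mean monthly index: {activity_index.mean():.1f}, trend per month: {slope:+.1f}")
```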

Satellite and analytics companies say they’re careful to anonymize their data, scrubbing it of identifying characteristics. But even if satellites aren’t recognizing faces, those images combined with other data streams—GPS, security cameras, social-media posts—could pose a threat to privacy. “People’s movements, what kinds of shops do you go to, where do your kids go to school, what kind of religious institutions do you visit, what are your social patterns,” says Peter Martinez, of the Secure World Foundation. “All of these kinds of questions could in principle be interrogated, should someone be interested.”

Like all tools, satellite imagery is subject to misuse. Its apparent objectivity can lead to false conclusions, as when the George W. Bush administration used it to make the case that Saddam Hussein was stockpiling chemical weapons in Iraq. Attempts to protect privacy can also backfire: in 2018, a Russian mapping firm blurred out the sites of sensitive military operations in Turkey and Israel—inadvertently revealing their existence, and prompting web users to locate the sites on other open-source maps.

Capturing satellite imagery with good intentions can have unintended consequences too. In 2012, as conflict raged on the border between Sudan and South Sudan, the Harvard-based Satellite Sentinel Project released an image that showed a construction crew building a tank-capable road leading toward an area occupied by the Sudan People’s Liberation Army (SPLA). The idea was to warn citizens about the approaching tanks so they could evacuate. But the SPLA saw the images too, and within 36 hours it attacked the road crew (which turned out to consist of Chinese civilians hired by the Sudanese government), killed some of them, and kidnapped the rest. An activist’s instinct is often to release more information, says Nathaniel Raymond, a human rights expert who led the Sentinel project. But he’s learned that you have to take into account who else might be watching.

It’s expensive to watch you all the time

One thing that might save us from celestial scrutiny is the price. Some satellite entrepreneurs argue that there isn’t enough demand to pay for a constellation of satellites capable of round-the-clock monitoring at resolutions below 25 cm. “It becomes a question of economics,” says Walter Scott, founder of DigitalGlobe, now Maxar. While some companies are launching relatively cheap “nanosatellites” the size of toasters—the 120 Dove satellites launched by Planet, for example, are “orders of magnitude” cheaper than traditional satellites, according to a spokesperson—there’s a limit to how small they can get and still capture hyper-detailed images. “It is a fundamental fact of physics that aperture size determines the limit on the resolution you can get,” says Scott. “At a given altitude, you need a certain size telescope.” That is, in Maxar’s case, an aperture of about a meter across, mounted on a satellite the size of a small school bus. (While there are ways around this limit—interferometry, for example, uses multiple mirrors to simulate a much larger mirror—they’re complex and pricey.) Bigger satellites mean costlier launches, so companies would need a financial incentive to collect such granular data.
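
Scott's point about aperture follows from the diffraction limit. As a rough illustration (assuming visible light and a 600 km orbit, and ignoring atmosphere and sensor effects, which matter a great deal in practice), the Rayleigh criterion gives a back-of-envelope ground resolution for a few aperture sizes:

```python
import math

# Back-of-envelope diffraction limit (Rayleigh criterion): the smallest angle a telescope
# can resolve is roughly 1.22 * wavelength / aperture_diameter. Multiplying that angle by
# the orbital altitude gives an approximate ground resolution.
wavelength_m = 550e-9      # visible green light (assumed)
altitude_m = 600e3         # a typical low-Earth-orbit imaging altitude (assumed)

def ground_resolution(aperture_m: float) -> float:
    """Approximate resolvable ground distance, ignoring atmosphere and sensor effects."""
    return 1.22 * wavelength_m / aperture_m * altitude_m

for aperture in (0.3, 1.0, 2.4):   # toaster-size nanosat optics, meter-class, Hubble-class
    print(f"{aperture:.1f} m aperture -> ~{ground_resolution(aperture) * 100:.0f} cm")
```

On these crude numbers, a meter-class telescope in low Earth orbit lands in the tens-of-centimeters range, roughly where the commercial limit sits today, while a toaster-size aperture cannot get there no matter how cheap the satellite is.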

That said, there’s already demand for imagery with sub–25 cm resolution—and a supply of it. For example, some insurance underwriters need that level of detail to spot trees overhanging a roof, or to distinguish a skylight from a solar panel, and they can get it from airplanes and drones. But if the cost of satellite images came down far enough, insurance companies would presumably switch over.

Of course, drones can already collect better images than satellites ever will. But drones are limited in where they can go. In the US, the Federal Aviation Administration forbids flying commercial drones over groups of people, and you have to register a drone that weighs more than half a pound (227 grams) or so. There are no such restrictions in space. The Outer Space Treaty, signed in 1967 by the US, the Soviet Union, and dozens of UN member states, gives all states free access to space, and subsequent agreements on remote sensing have enshrined the principle of “open skies.” During the Cold War this made sense, as it allowed superpowers to monitor other countries to verify that they were sticking to arms agreements. But the treaty didn’t anticipate that it would one day be possible for anyone to get detailed images of almost any location.

And then there are the tracking devices we carry around in our pockets, a.k.a. smartphones. But while the GPS data from cell phones is a legitimate privacy threat, you can at least decide to leave your phone at home. It’s harder to hide from a satellite camera. “There’s some element of ground truth—no pun intended—that satellites have that maybe your cell phone or digital record or what happens on Twitter [doesn’t],” says Abraham Thomas, chief data officer at the analytics company Quandl. “The data itself tends to be innately more accurate.”

The future of human freedom

American privacy laws are vague when it comes to satellites. Courts have generally allowed aerial surveillance, though in 2015 the New Mexico Supreme Court ruled that an “aerial search” by police without a warrant was unconstitutional. Cases often come down to whether an act of surveillance violates someone’s “reasonable expectation of privacy.” A picture taken on a public sidewalk: fair game. A photo shot by a drone through someone’s bedroom window: probably not. A satellite orbiting hundreds of miles up, capturing video of a car pulling into the driveway? Unclear.

That doesn’t mean the US government is powerless. It has no jurisdiction over Chinese or Russian satellites, but it can regulate how American customers use foreign imagery. If US companies are profiting from it in a way that violates the privacy of US citizens, the government could step in.

Raymond argues that protecting ourselves will mean rethinking privacy itself. Current privacy laws, he says, focus on threats to the rights of individuals. But those protections “are anachronistic in the face of AI, geospatial technologies, and mobile technologies, which not only use group data, they run on group data as gas in the tank,” Raymond says. Regulating these technologies will mean conceiving of privacy as applying not just to individuals, but to groups as well. “You can be entirely ethical about personally identifiable information and still kill people,” he says.

Until we can all agree on data privacy norms, Raymond says, it will be hard to create lasting rules around satellite imagery. “We’re all trying to figure this out,” he says. “It’s not like anything’s riding on it except the future of human freedom.”

Christopher Beam is a writer based in Los Angeles.


The dark future US intelligence agencies foresee for the world in 2040 (BBC Brasil)

Gordon Corera

20 April 2021

CIA logo at the agency’s headquarters
The forecasts include growing uncertainty and instability, and more polarization and populism

The US Intelligence Community (IC), a federation of 17 independent government agencies that carry out intelligence work, has released a study on the state of the world in 2040.

And the future is grim: the study warns of political volatility and growing international competition, or even conflict.

The report, titled “Global Trends 2040 – A More Contested World”, is an attempt to analyze the main trends and describes a series of possible scenarios.

It is the seventh report of its kind, published every four years by the National Intelligence Council since 1997.

It is not relaxing reading for anyone who is a political leader or international diplomat – or who hopes to become one in the coming years.

First, the report focuses on the key factors that will drive change.

One of them is political volatility.

“In many countries, people are pessimistic about the future and increasingly distrustful of leaders and institutions that they see as unable or unwilling to deal with disruptive economic, technological and demographic trends,” the report warns.

US and Chinese flags flying side by side
Tension between the US and China could divide the world, the report says

Vulnerable democracies

The study argues that people are gravitating toward like-minded groups and making greater and more varied demands of governments at a time when those same governments are increasingly constrained in what they can do.

“This mismatch between governments’ abilities and the public’s expectations is likely to expand and lead to more political volatility, including growing polarization and populism within political systems, waves of activism and protest movements, and, in the most extreme cases, violence, internal conflict, or even state collapse,” the report says.

Unmet expectations, fueled by social media and technology, may create risks for democracy.

“Looking ahead, many democracies are likely to be vulnerable to erosion and even collapse,” the text warns, adding that these pressures will also affect authoritarian regimes.

The pandemic, a ‘great global disruption’

The report calls the current pandemic the “most significant and singular global disruption since World War II,” one that has fueled divisions, accelerated existing changes, and challenged assumptions, including about how well governments can cope.

A padlocked shop displays a sign reading ‘sorry, we are closed until further government notice, sorry for any inconvenience, see you soon’
Analysts had predicted a ‘great pandemic of 2023’ but did not link it to covid

The previous report, from 2017, anticipated the possibility of a “global pandemic in 2023” drastically reducing global travel in order to contain its spread.

The authors acknowledge, however, that they did not expect the emergence of covid-19, which they say has “shaken long-held assumptions about resilience and adaptation and created new uncertainties about the economy, governance, geopolitics and technology.”

Climate and demographic change will also have a major impact on the world’s future, as will technology, which can be disruptive but can also bring opportunities to those who use it effectively and first.

Geopolitical competition

Internationally, the analysts expect the intensity of competition for global influence to reach its highest level since the Cold War over the next two decades, amid the continued weakening of the old order and as institutions such as the United Nations struggle.

Hands holding a placard reading ‘we the people means everyone’
People are gravitating toward like-minded groups and making greater and more varied demands of governments at a time when those same governments are increasingly constrained in what they can do, the report says

Non-governmental organizations, including religious groups and so-called “superstar technology companies,” may also be able to build networks that compete with – or even bypass – states.

The risk of conflict may increase, and it may become harder to prevent the use of new weapons.

Jihadist terrorism is likely to continue, but there is a warning that far-right and far-left terrorists promoting causes such as racism, environmentalism and anti-government extremism could resurge in Europe, Latin America and North America.

Such groups could use artificial intelligence to become more dangerous, or use augmented reality to create “virtual terrorist training camps.”

Competition between the US and China is at the heart of many of the differences between the scenarios – whether one of them becomes more successful, or whether the two compete on equal terms or divide the world into separate spheres of influence.

A 2004 report also predicted a caliphate emerging from the Middle East, like the one the self-styled Islamic State tried to create over the past decade, although the same study – looking ahead to 2020 – failed to capture the competition with China that now dominates US security concerns.

The overall aim is to analyze possible futures rather than to get predictions right.

Stronger democracies or a ‘world adrift’?

There are some optimistic scenarios for 2040 – one of them is called “renaissance of democracies.”

It involves the US and its allies harnessing technology and economic growth to deal with domestic and international challenges, while crackdowns by China and Russia (including in Hong Kong) stifle innovation and strengthen the appeal of democracy.

But others are bleaker.

The “world adrift” scenario imagines market economies never recovering from the Covid pandemic, becoming deeply divided internally and living in an international system that is “directionless, chaotic and volatile,” as international rules and institutions are ignored by countries, companies and other groups.

One scenario, however, manages to combine pessimism with optimism.

“Tragedy and mobilization” envisions a world in the midst of global catastrophe in the early 2030s, brought on by climate change, famine and unrest – but this, in turn, leads to a new global coalition, driven in part by social movements, to solve those problems.

Of course, none of the scenarios may come to pass, or – more likely – some combination of them, or something entirely new, may emerge. The aim, the authors say, is to prepare for a range of possible futures – even if many of them seem far from optimistic.

How big science failed to unlock the mysteries of the human brain (MIT Technology Review)

technologyreview.com

Large, expensive efforts to map the brain started a decade ago but have largely fallen short. It’s a good reminder of just how complex this organ is.

Emily Mullin

August 25, 2021


In September 2011, a group of neuroscientists and nanoscientists gathered at a picturesque estate in the English countryside for a symposium meant to bring their two fields together. 

At the meeting, Columbia University neurobiologist Rafael Yuste and Harvard geneticist George Church made a not-so-modest proposal: to map the activity of the entire human brain at the level of individual neurons and detail how those cells form circuits. That knowledge could be harnessed to treat brain disorders like Alzheimer’s, autism, schizophrenia, depression, and traumatic brain injury. And it would help answer one of the great questions of science: How does the brain bring about consciousness? 

Yuste, Church, and their colleagues drafted a proposal that would later be published in the journal Neuron. Their ambition was extreme: “a large-scale, international public effort, the Brain Activity Map Project, aimed at reconstructing the full record of neural activity across complete neural circuits.” Like the Human Genome Project a decade earlier, they wrote, the brain project would lead to “entirely new industries and commercial ventures.” 

New technologies would be needed to achieve that goal, and that’s where the nanoscientists came in. At the time, researchers could record activity from just a few hundred neurons at once—but with around 86 billion neurons in the human brain, it was akin to “watching a TV one pixel at a time,” Yuste recalled in 2017. The researchers proposed tools to measure “every spike from every neuron” in an attempt to understand how the firing of these neurons produced complex thoughts. 

The audacious proposal intrigued the Obama administration and laid the foundation for the multi-year Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, announced in April 2013. President Obama called it the “next great American project.” 

But it wasn’t the first audacious brain venture. In fact, a few years earlier, Henry Markram, a neuroscientist at the École Polytechnique Fédérale de Lausanne in Switzerland, had set an even loftier goal: to make a computer simulation of a living human brain. Markram wanted to build a fully digital, three-dimensional model at the resolution of the individual cell, tracing all of those cells’ many connections. “We can do it within 10 years,” he boasted during a 2009 TED talk.

In January 2013, a few months before the American project was announced, the EU awarded Markram $1.3 billion to build his brain model. The US and EU projects sparked similar large-scale research efforts in countries including Japan, Australia, Canada, China, South Korea, and Israel. A new era of neuroscience had begun. 

An impossible dream?

A decade later, the US project is winding down, and the EU project faces its deadline to build a digital brain. So how did it go? Have we begun to unwrap the secrets of the human brain? Or have we spent a decade and billions of dollars chasing a vision that remains as elusive as ever? 

From the beginning, both projects had critics.

EU scientists worried about the costs of the Markram scheme and thought it would squeeze out other neuroscience research. And even at the original 2011 meeting in which Yuste and Church presented their ambitious vision, many of their colleagues argued it simply wasn’t possible to map the complex firings of billions of human neurons. Others said it was feasible but would cost too much money and generate more data than researchers would know what to do with. 

In a blistering article appearing in Scientific American in 2013, Partha Mitra, a neuroscientist at the Cold Spring Harbor Laboratory, warned against the “irrational exuberance” behind the Brain Activity Map and questioned whether its overall goal was meaningful. 

Even if it were possible to record all spikes from all neurons at once, he argued, a brain doesn’t exist in isolation: in order to properly connect the dots, you’d need to simultaneously record external stimuli that the brain is exposed to, as well as the behavior of the organism. And he reasoned that we need to understand the brain at a macroscopic level before trying to decode what the firings of individual neurons mean.  

Others had concerns about the impact of centralizing control over these fields. Cornelia Bargmann, a neuroscientist at Rockefeller University, worried that it would crowd out research spearheaded by individual investigators. (Bargmann was soon tapped to co-lead the BRAIN Initiative’s working group.)

While the US initiative sought input from scientists to guide its direction, the EU project was decidedly more top-down, with Markram at the helm. But as Noah Hutton documents in his 2020 film In Silico, Markram’s grand plans soon unraveled. As an undergraduate studying neuroscience, Hutton had been assigned to read Markram’s papers and was impressed by his proposal to simulate the human brain; when he started making documentary films, he decided to chronicle the effort. He soon realized, however, that the billion-dollar enterprise was characterized more by infighting and shifting goals than by breakthrough science.

In Silico shows Markram as a charismatic leader who needed to make bold claims about the future of neuroscience to attract the funding to carry out his particular vision. But the project was troubled from the outset by a major issue: there isn’t a single, agreed-upon theory of how the brain works, and not everyone in the field agreed that building a simulated brain was the best way to study it. It didn’t take long for those differences to arise in the EU project. 

In 2014, hundreds of experts across Europe penned a letter citing concerns about oversight, funding mechanisms, and transparency in the Human Brain Project. The scientists felt Markram’s aim was premature and too narrow and would exclude funding for researchers who sought other ways to study the brain. 

“What struck me was, if he was successful and turned it on and the simulated brain worked, what have you learned?” Terry Sejnowski, a computational neuroscientist at the Salk Institute who served on the advisory committee for the BRAIN Initiative, told me. “The simulation is just as complicated as the brain.” 

The Human Brain Project’s board of directors voted to change its organization and leadership in early 2015, replacing a three-member executive committee led by Markram with a 22-member governing board. Christoph Ebell, a Swiss entrepreneur with a background in science diplomacy, was appointed executive director. “When I took over, the project was at a crisis point,” he says. “People were openly wondering if the project was going to go forward.”

But a few years later he was out too, after a “strategic disagreement” with the project’s host institution. The project is now focused on providing a new computational research infrastructure to help neuroscientists store, process, and analyze large amounts of data—unsystematic data collection has been an issue for the field—and develop 3D brain atlases and software for creating simulations.

The US BRAIN Initiative, meanwhile, underwent its own changes. Early on, in 2014, responding to the concerns of scientists and acknowledging the limits of what was possible, it evolved into something more pragmatic, focusing on developing technologies to probe the brain. 

New day

Those changes have finally started to produce results—even if they weren’t the ones that the founders of each of the large brain projects had originally envisaged. 

Last year, the Human Brain Project released a 3D digital map that integrates different aspects of human brain organization at the millimeter and micrometer level. It’s essentially a Google Earth for the brain. 

And earlier this year Alipasha Vaziri, a neuroscientist funded by the BRAIN Initiative, and his team at Rockefeller University reported in a preprint paper that they’d simultaneously recorded the activity of more than a million neurons across the mouse cortex. It’s the largest recording of animal cortical activity yet made, if far from listening to all 86 billion neurons in the human brain as the original Brain Activity Map hoped.

The US effort has also shown some progress in its attempt to build new tools to study the brain. It has speeded the development of optogenetics, an approach that uses light to control neurons, and its funding has led to new high-density silicon electrodes capable of recording from hundreds of neurons simultaneously. And it has arguably accelerated the development of single-cell sequencing. In September, researchers using these advances will publish a detailed classification of cell types in the mouse and human motor cortexes—the biggest single output from the BRAIN Initiative to date.

While these are all important steps forward, though, they’re far from the initial grand ambitions. 

Lasting legacy

We are now heading into the last phase of these projects—the EU effort will conclude in 2023, while the US initiative is expected to have funding through 2026. What happens in these next years will determine just how much impact they’ll have on the field of neuroscience.

When I asked Ebell what he sees as the biggest accomplishment of the Human Brain Project, he didn’t name any one scientific achievement. Instead, he pointed to EBRAINS, a platform launched in April of this year to help neuroscientists work with neurological data, perform modeling, and simulate brain function. It offers researchers a wide range of data and connects many of the most advanced European lab facilities, supercomputing centers, clinics, and technology hubs in one system. 

“If you ask me ‘Are you happy with how it turned out?’ I would say yes,” Ebell said. “Has it led to the breakthroughs that some have expected in terms of gaining a completely new understanding of the brain? Perhaps not.” 

Katrin Amunts, a neuroscientist at the University of Düsseldorf, who has been the Human Brain Project’s scientific research director since 2016, says that while Markram’s dream of simulating the human brain hasn’t been realized yet, it is getting closer. “We will use the last three years to make such simulations happen,” she says. But it won’t be a big, single model—instead, several simulation approaches will be needed to understand the brain in all its complexity. 

Meanwhile, the BRAIN Initiative has provided more than 900 grants to researchers so far, totaling around $2 billion. The National Institutes of Health is projected to spend nearly $6 billion on the project by the time it concludes. 

For the final phase of the BRAIN Initiative, scientists will attempt to understand how brain circuits work by diagramming connected neurons. But claims for what can be achieved are far more restrained than in the project’s early days. The researchers now realize that understanding the brain will be an ongoing task—it’s not something that can be finalized by a project’s deadline, even if that project meets its specific goals.

“With a brand-new tool or a fabulous new microscope, you know when you’ve got it. If you’re talking about understanding how a piece of the brain works or how the brain actually does a task, it’s much more difficult to know what success is,” says Eve Marder, a neuroscientist at Brandeis University. “And success for one person would be just the beginning of the story for another person.” 

Yuste and his colleagues were right that new tools and techniques would be needed to study the brain in a more meaningful way. Now, scientists will have to figure out how to use them. But instead of answering the question of consciousness, developing these methods has, if anything, only opened up more questions about the brain—and shown just how complex it is. 

“I have to be honest,” says Yuste. “We had higher hopes.”

Emily Mullin is a freelance journalist based in Pittsburgh who focuses on biotechnology.

Pew’s new global survey of climate change attitudes finds promising trends but deep divides (The Conversation)

theconversation.com

September 14, 2021 10.00am EDT

By Kate T. Luong (Postdoctoral Research Fellow, George Mason University), Ed Maibach (Director of Center for Climate Communication, George Mason University), and John Kotcher (Assistant Professor of Communications, George Mason University)


People’s views about climate change, from how worried they are about it affecting them to how willing they are to do something about it, have shifted in developed countries around the world in recent years, a new survey by the Pew Research Center finds.

The study polled more than 16,000 adults in 17 countries considered to be advanced economies. Many of these countries have been large contributors to climate change and will be expected to lead the way in fixing it.

In general, the survey found that a majority of people are concerned about global climate change and are willing to make lifestyle changes to reduce its effects.

However, underneath this broad pattern lie more complicated trends, such as doubt that the international community can effectively reduce climate change and deep ideological divides that can hinder the transition to cleaner energy and a climate-friendly world. The survey also reveals an important disconnect between people’s attitudes and the enormity of the challenge climate change poses.

Here’s what stood out to us as professionals who study the public’s response to climate change.

Strong concern and willingness to take action

In all the countries surveyed in early 2021 except Sweden, between 60% and 90% of the citizens reported feeling somewhat or very concerned about the harm they would personally face from climate change. While there was a clear increase in concern in several countries between 2015, when Pew conducted the same survey, and 2021, this number did not change significantly in the U.S.

Chart: responses to the question on concern about climate change harming the people surveyed personally, by country (CC BY-ND)

Similarly, in all countries except Japan, at least 7 out of 10 people said they are willing to make some or a lot of changes in how they live and work to help address global climate change.

Across most countries, young people were much more likely than older generations to report higher levels of both concern about climate change and willingness to change their behaviors.

Perceptions about government responses

Clearly, on a global level, people are highly concerned about this existential threat and are willing to change their everyday behaviors to mitigate its impacts. However, focusing on changing individual behaviors alone will not stop global warming.

In the U.S., for example, about 74% of greenhouse gas emissions are from fossil fuel combustion. People can switch to driving electric vehicles or taking electric buses and trains, but those still need power. To pressure utilities to shift to renewable energy requires policy-level changes, both domestically and internationally.

When we look at people’s attitudes regarding how their own country is handling climate change and how effective international actions would be, the results painted a more complex picture.

On average, most people evaluated their own government’s handling of climate change as “somewhat good,” with the highest approval numbers in Sweden, the United Kingdom, Singapore and New Zealand. However, data shows that such positive evaluations are not actually warranted. The 2020 U.N. Emissions Gap Report found that greenhouse gas emissions have continued to rise. Many countries, including the U.S., are projected to miss their target commitments to reduce emissions by 2030; and even if all countries achieve their targets, annual emissions need to be reduced much further to reach the goals set by the Paris climate agreement.

When it comes to confidence in international actions to address climate change, the survey respondents were more skeptical overall. Although the majority of people in Germany, the Netherlands, South Korea and Singapore felt confident that the international community can significantly reduce climate change, most respondents in the rest of the countries surveyed did not. France and Sweden had the lowest levels of confidence with more than 6 in 10 people being unconvinced.

Together, these results suggest that people generally believe climate change to be a problem that can be solved by individual people and governments. Most people say they are willing to change their lifestyles, but they may not have an accurate perception of the scale of actions needed to effectively address global climate change. Overall, people may be overly optimistic about their own country’s capability and commitment to reduce emissions and fight climate change, and at the same time, underestimate the value and effectiveness of international actions.

These perceptions may reflect the fact that the conversation surrounding climate change so far has been dominated by calls to change individual behaviors instead of emphasizing the necessity of collective and policy-level actions. Addressing these gaps is an important goal for people who are working in climate communication and trying to increase public support for stronger domestic policies and international collaborations.

Deep ideological divide in climate attitudes

As with most surveys about climate change attitudes, the new Pew report reveals a deep ideological divide in several countries.

Perhaps not surprisingly, the U.S. leads in ideological differences for all but one question. In the U.S., 87% of liberals are somewhat or very concerned about the personal harms from climate change, compared to only 28% of conservatives – a stark 59-point difference. This difference persists for willingness to change one’s lifestyle (49-point difference), evaluation of government’s handling of climate change (41-point difference), and perceived economic impacts of international actions (41-point difference).

And the U.S. is not alone; large ideological differences were also found in Canada, Australia and the Netherlands. In fact, only Australians were more divided than Americans on how their government is handling the climate crisis.

This ideological divide is not new, but the size of the gap between people on the two ends of the ideological spectrum is astounding. The differences lie not only in how to handle the issue or who should be responsible but also in the scope and severity of climate change in the first place. Such massive, entrenched differences in public understanding and acceptance of the scientific facts regarding climate change will present significant challenges in enacting much-needed policy changes.

Better understanding of the cultural, political and media dynamics that shape those differences might reveal helpful insights that could ease the path toward progress in slowing climate change.

The metaverse could be the new Internet and is becoming a priority for Big Tech (MIT Technology Review)

mittechreview.com.br

by Guilherme Ravache

September 10, 2021


In May, I argued here at MIT Technology Review Brasil that “Brazil has a chance to lead the race for the metaverse.” In just three months a great deal has happened: the metaverse has become an increasingly common term in the media and, above all, a new strategy for the tech giants. The term came up in several recent second-quarter earnings calls. Mark Zuckerberg of Facebook, Satya Nadella of Microsoft, David Baszucki of Roblox and Shar Dubey of Match Group all said the metaverse would shape their companies’ strategy.

From Silicon Valley to Shenzhen, technology companies are raising their bets on the sector. For the uninitiated, “the metaverse is the term used to describe a kind of virtual world that tries to replicate reality through digital devices. It is a shared, collective virtual space made up of the sum of ‘virtual reality,’ ‘augmented reality’ and the ‘Internet,’” as the term’s Wikipedia page puts it. The expression was coined by the writer Neal Stephenson in his 1992 novel “Snow Crash.” Later, Ernest Cline used the same concept to create the Oasis in his novel “Ready Player One,” which became a Steven Spielberg film.

Mark Zuckerberg, founder and CEO of Facebook, appears to be the latest convert to the metaverse. He has given a series of recent interviews saying that Facebook will bet its future on it. “We will transition from being seen primarily as a social media company to being a metaverse company,” Zuckerberg said.

In July, Facebook said it was creating a product team to work on the metaverse as part of its AR and VR group, Facebook Reality Labs. A few days ago we got a demonstration of what is coming, when Facebook invited a group of journalists to try its Horizon Workrooms. The app is the social network’s first attempt to create a virtual reality experience built specifically for people to work together.

According to the journalist Alex Heath, who took part in the demo, up to 16 people in VR can share a workroom, while another 34 people can join via video call without wearing a headset. A companion desktop app lets you live-stream your computer screen onto your virtual desk. Thanks to hand tracking and front-facing cameras, a virtual representation of your physical keyboard sits below the screen so you can type into a simple web app Facebook built for taking notes and managing calendars. In other words, you step into a virtual world to hold a meeting with your colleagues.

Facebook is unlikely to lead the metaverse

Zuckerberg has been talking about virtual reality for years. Back in 2014, when Facebook bought Oculus for US$ 2 billion, he said enthusiastically that the acquisition would enable immersive virtual experiences in which you would feel “present in another place with other people.” In a way, the metaverse is a sequel to plans Facebook set in motion almost a decade ago.

Facebook is a giant player to be reckoned with, but my bet is that it will not win the race for the metaverse. Just as IBM did not become the leader in personal computers or the cloud, Google never managed to build a solid presence in social networks or instant messaging, and neither Microsoft nor, much less, Nokia became the leader in smartphones, Facebook, despite its enthusiasm, is unlikely to lead this race.

Basically, this is because even when they have the will and the resources, incumbent leaders usually lack the culture to operate in these new markets. And I am not saying Facebook will be an irrelevant player, far from it. The billions of dollars the company has already invested in developing the Oculus Quest, and all the hardware technology created for virtual reality (and, by extension, the metaverse), are impressive and have led to undeniable advances.

“The metaverse, a technologist’s dream, is Facebook’s nightmare. It would make the social network irrelevant,” says Scott Galloway, a marketing professor. “Facebook’s most valuable asset is its social graph, its dataset of users, the links between users, and their shared content. In a future metaverse, we will all have metaverse identities, and anyone will be able to open a virtual space to share photos from their 10-year-old’s birthday party or argue about vaccines,” he concludes.

Who has potential in the metaverse?

From a Western point of view, I would put my chips on Roblox and Epic Games as the new leaders of the metaverse in the broad sense. In enterprise applications, the advantage would go to Microsoft.

From a hardware/software perspective, Nvidia and Apple have an edge because they can already design their own chips (Facebook buys off-the-shelf chips from Qualcomm). A vast library of artificial intelligence chips, and the software needed to run them, are also essential pieces of the metaverse.

On the other side of the world, Tencent, Bytedance and Sea are robust competitors, but the first two face growing Chinese regulation and the third is focused on building a competitive e-commerce business in Asia.

Microsoft has a big advantage, and not only because of its enormous community of developers building corporate solutions and its robust presence in the corporate world. Microsoft is also bringing cloud gaming to its Xbox consoles. Soon, Xbox Game Pass Ultimate subscribers on Xbox Series X/S and Xbox One consoles will be able to stream more than 100 games without downloading them. According to Microsoft, the service will run at 1080p and 60 frames per second. Xbox Cloud Gaming became available on mobile devices and PC in June 2021. Microsoft also announced this week that the next chapter of the popular Halo series, Halo Infinite, will launch on December 8, 2021.

The power of community

Microsoft has been developing mixed reality hardware for corporate applications for years, and its HoloLens is one of the most widely used on the market. Mixed reality, or hybrid reality, is the technology that combines features of virtual reality and augmented reality. It places virtual objects in the real world and lets the user interact with them, producing new environments in which physical and virtual items coexist and interact in real time.

Last year, Nvidia launched its Omniverse platform “to connect 3D worlds in a shared virtual universe.” CEO Jensen Huang used the company’s biggest annual conference, in October, to publicly credit Stephenson’s “Snow Crash” as the original inspiration for the concept of a virtual reality successor to the Internet, declaring that “the metaverse is coming.”

But what will define the winners of the metaverse is not just money, the will to lead this movement, or a company’s intellectual property. It is the ability to engage communities – whether people gathering in the metaverse or developers building the experiences of this digital environment – that will create the winners.

Games, Netflix and where we spend our time

Games are an essential part of the metaverse, but the metaverse will not be limited to gaming. Games are just the gateway, a first step in that direction. Reed Hastings, CEO of Netflix, has said that Netflix “competes with (and loses to) Fortnite more than HBO.” Recently, Netflix even announced that from 2022 it will enter the gaming segment, offering games in its app.

As the essayist Matthew Ball points out, the games market is huge and growing fast, but that is not the only reason for Netflix’s move into games. “While it’s common to hear that ‘gaming is now four times the size of the global box office,’ the box office is less than 1/15th of total video revenue globally. In other words, games will probably bring in about US$ 180 billion in 2021, while video will exceed US$ 650 billion,” Ball says. In the war for consumer attention, then, video games and the metaverse have enormous potential, and gaming revenue shows that this is still a fairly nascent market compared with video as a whole.

It is worth remembering that in 2021 alone Netflix is expected to invest US$ 19 billion in producing original content. Even so, the company has been losing subscribers in the United States and Canada. The arrival of HBO Max, Paramount+ and several new competitors helps explain the drop, but games are also a factor to consider. At the end of the day, Netflix is in the business of selling entertainment, and staying close to the games industry is not a bad idea.

Our children, our future

But just like Facebook, even with plenty of money and the will (or need) to hold our attention, Netflix lacks the community of developers it would need for a meaningful entry into the metaverse. Looking at Roblox makes it easier to understand how this element applies.

Roblox is much more than a game: it is a platform where anyone can create a game (or experience). Today there are more than 8 million developers creating these experiences, and more than 20 million experiences to choose from, ranging from adopting a pet in Adopt Me! to learning about history on a virtual visit to the Colosseum.

Since 2008, when the platform launched, users have spent more than 30.6 billion hours engaged in it. In the second quarter, Roblox’s revenue grew 127% over the second quarter of 2020, to US$ 454.1 million. Average daily active users (DAUs) reached 43.2 million, up 29% year over year.

Note the irony: while Facebook and Netflix have stalled in user growth, Roblox keeps expanding its base even as the pandemic eases, social isolation declines and many people return to their activities.

But the big numbers at Roblox and at Epic Games (the owner of Fortnite, which is privately held and does not disclose figures the way Roblox does) are probably the least interesting aspect of the possibilities they offer.

The metaverse is the new third place

As I have written here at MIT Tech Review when discussing the impact of games on e-commerce, the growth of video games is directly linked to their transformation into a “third place.” The term was coined by the sociologist Ray Oldenburg and refers to places where people spend time between home (the “first” place) and work (the “second” place). They are spaces where people exchange ideas, have fun and build relationships. Churches, cafés and parks are examples of third places. Having a third place to socialize outside home and work is crucial to well-being, because it brings a sense of connection and belonging. And video games are increasingly a third place. Historically, community activities and community-building happened offline, but thanks to advances in technology video games have become social.

Not by chance, concerts and events inside Roblox and Fortnite are increasingly frequent (Travis Scott drew thousands of people, and the director Christopher Nolan held a premiere there). Brands have invested heavily to enter this universe. With an eye on the 43 million users who access Roblox every day, Netflix announced in July a new virtual hangout based on the series Stranger Things. More recently, Roblox announced the launch of Vans World, an interactive skateboarding metaverse from the Vans brand inside the gaming world. Inspired by the brand’s locations, such as the House of Vans and other skate destinations, Vans World is a persistent 3D space where fans can practice their tricks with other people and try out the brand’s gear.

“Roblox is the new social hangout, much like the local mall in the 1980s, where teenagers got together,” says Christina Wootton, Roblox’s vice president of brand partnerships. “The virtual Starcourt Mall is a similar setting reimagined inside Roblox that opens up unique possibilities to engage and grow the show’s global audience.”

It is worth watching the February presentation by David Baszucki, Roblox’s CEO. In it, the executive details the company’s growth strategy and its potential to create experiences – including educational and commercial ones – with a growing community.

Brazil can be a protagonist in the metaverse

From time to time the stars align in a way that can benefit a market, and Brazil may now be facing such an opportunity. In China, the government is creating an increasingly inhospitable environment for companies and developers. In the United States, there is money and a large user base, but engagement and labor are in short supply; it is not easy to bet on the metaverse in a country with more job openings than candidates. In Europe there are developers, especially in Eastern Europe, but the fragmentation is enormous.

Enrico Machado, a Brazilian who develops for Roblox, is an example of the potential of thousands of users who have grown up on the platform since childhood. He started playing Roblox at age 11. By 15 he was already a developer. Today he is in college studying information systems and works at a large Brazilian studio that develops games exclusively for Roblox.

“Roblox is very popular. It runs on microtransactions. You can buy things in the games people create, and the developers make money from that. Today there are a lot of people making absurd amounts of money. It’s like the football market: you have a few guys at the top of the pyramid. For every Neymar there are millions of people who would like to be Neymar – the relationship is similar. But anyone can make a reasonable amount of money,” says Machado.

He insists it is not very hard to earn a decent income on the platform.

“There are a lot of consumers wanting to play. So if you understand the basics of community, design, games and programming, you can start from zero and, in a short space of time, begin to earn a little money if you focus on it.”

Machado works at a studio with dozens of other developers. “At the studio we hold meetings and so on to apply best practices across all the games. I’m learning a lot from them. I know how to program, I can make a nice little game, but I don’t know anything about game design. I don’t know how to make a hit game. You know best practices exist, but with a bigger group it gets easier. Knowing those practices is as important as knowing how to program,” he says.

Millions of developers coming together

He is not an isolated case. Like Machado, there are thousands of young people in Brazil working at huge studios that develop for Roblox. And unlike other languages, the one Roblox uses is accessible and easy to learn. On top of that, you don’t need a super-powerful computer or an ultra-fast connection.

Not by chance, Brazil is already the fifth-largest games market in the world, with one of the biggest user communities on the planet, a growing streaming market, and esports icons such as Nobru.

Wildlife, a Brazilian unicorn valued at more than US$ 1.3 billion, already has more than 800 employees in countries including Brazil, the United States, Argentina and Ireland. Founded in 2011, the company has more than 60 mobile games.

The metaverse needs technology and software, but the decisive factor is an engaged community of developers and users. For those reasons, Roblox and Fortnite are in the lead. Brazil, for its part, has all the elements needed to become a global leader in this sector. But nothing guarantees that it will happen. Montreal, in Canada, offers clues about how to speed up the process by creating incentives to attract and bring together companies, developers and investment. But that will be the subject of the next column.

The metaverse should become the next Internet, and many of today’s giants will lose influence. But just as the Internet created a new industry, with new jobs and new billionaires, the metaverse will repeat that history, possibly on an even larger scale. It is ironic that Stephenson told Vanity Fair in 2017 that when he was writing “Snow Crash” and inventing the metaverse, he was “just making shit up.” Decades later, CEOs are taking that “invention” ever more seriously.


This article was written by Guilherme Ravache, a journalist, digital consultant and columnist at MIT Technology Review Brasil.

Is everything in the world a little bit conscious? (MIT Technology Review)

technologyreview.com

Christof Koch – August 25, 2021

The idea that consciousness is widespread is attractive to many for intellectual and, perhaps, also emotional reasons. But can it be tested? Surprisingly, perhaps it can.

Panpsychism is the belief that consciousness is found throughout the universe—not only in people and animals, but also in trees, plants, and bacteria. Panpsychists hold that some aspect of mind is present even in elementary particles. The idea that consciousness is widespread is attractive to many for intellectual and, perhaps, also emotional reasons. But can it be empirically tested? Surprisingly, perhaps it can. That’s because one of the most popular scientific theories of consciousness, integrated information theory (IIT), shares many—though not all—features of panpsychism.

As the American philosopher Thomas Nagel has argued, something is conscious if there is “something that it is like to be” that thing in the state that it is in. A human brain in a state of wakefulness feels like something specific. 

IIT specifies a unique number, a system’s integrated information, labeled by the Greek letter φ (pronounced phi). If φ is zero, the system does not feel like anything; indeed, the system does not exist as a whole, as it is fully reducible to its constituent components. The larger φ, the more conscious a system is, and the more irreducible. Given an accurate and complete description of a system, IIT predicts both the quantity and the quality of its experience (if any). IIT predicts that because of the structure of the human brain, people have high values of φ, while animals have smaller (but positive) values and classical digital computers have almost none.

A person’s value of φ is not constant. It increases during early childhood with the development of the self and may decrease with onset of dementia and other cognitive impairments. φ will fluctuate during sleep, growing larger during dreams and smaller in deep, dreamless states. 

IIT starts by identifying five true and essential properties of any and every conceivable conscious experience. For example, experiences are definite (exclusion). This means that an experience is not less than it is (experiencing only the sensation of the color blue but not the moving ocean that brought the color to mind), nor is it more than it is (say, experiencing the ocean while also being aware of the canopy of trees behind one’s back). In a second step, IIT derives five associated physical properties that any system—brain, computer, pine tree, sand dune—has to exhibit in order to feel like something. A “mechanism” in IIT is anything that has a causal role in a system; this could be a logical gate in a computer or a neuron in the brain. IIT says that consciousness arises only in systems of mechanisms that have a particular structure. To simplify somewhat, that structure must be maximally integrated—not accurately describable by breaking it into its constituent parts. It must also have cause-and-effect power upon itself, which is to say the current state of a given mechanism must constrain the future states of not only that particular mechanism, but the system as a whole. 

Given a precise physical description of a system, the theory provides a way to calculate the φ of that system. The technical details of how this is done are complicated, but the upshot is that one can, in principle, objectively measure the φ of a system so long as one has such a precise description of it. (We can compute the φ of computers because, having built them, we understand them precisely. Computing the φ of a human brain is still an estimate.)
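The full φ calculus of IIT is far more involved than anything that fits in a few lines, but a toy "whole versus sum of parts" comparison can convey the flavour of an integration measure. The sketch below is a hedged illustration only: the two-node boolean network and its transition rule are invented for the example, and the quantity computed is closer to early "effective information"-style measures than to the φ of current IIT.

```python
import itertools
from collections import Counter
from math import log2

# Hypothetical two-node boolean network, invented purely for illustration
# (this is NOT the IIT 3.0 algorithm): node A copies node B, and node B
# becomes A XOR B.
def step(state):
    a, b = state
    return (b, a ^ b)

STATES = list(itertools.product((0, 1), repeat=2))  # uniform prior over the 4 states


def entropy(samples):
    """Shannon entropy (in bits) of the empirical distribution of `samples`."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in counts.values())


def mutual_information(pairs):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from (x, y) samples."""
    xs = [x for x, _ in pairs]
    ys = [y for _, y in pairs]
    return entropy(xs) + entropy(ys) - entropy(pairs)


# Information the whole system's present state carries about its next state.
whole = mutual_information([(s, step(s)) for s in STATES])

# The same question asked of each node in isolation (its marginal dynamics).
part_a = mutual_information([(s[0], step(s)[0]) for s in STATES])
part_b = mutual_information([(s[1], step(s)[1]) for s in STATES])

# A crude stand-in for "integration": predictive information visible only
# to the whole, not to any single part.
phi_like = whole - (part_a + part_b)

print(f"whole = {whole:.1f} bits, parts = {part_a + part_b:.1f} bits, "
      f"phi-like = {phi_like:.1f} bits")
```

Run as written, the whole network carries 2 bits about its own next state while each node alone carries none, which is the sense in which such a system is irreducible to its parts; the real theory replaces this crude difference with a search over all partitions and a much richer cause-effect analysis.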

Systems can be evaluated at different levels—one could measure the φ of a sugar-cube-size piece of my brain, or of my brain as a whole, or of me and you together. Similarly, one could measure the φ of a silicon atom, of a particular circuit on a microchip, or of an assemblage of microchips that make up a supercomputer. Consciousness, according to the theory, exists for systems for which φ is at a maximum. It exists for all such systems, and only for such systems. 

The φ of my brain is bigger than the φ values of any of its parts, however one sets out to subdivide it. So I am conscious. But the φ of me and you together is less than my φ or your φ, so we are not “jointly” conscious. If, however, a future technology could create a dense communication hub between my brain and your brain, then such brain-bridging would create a single mind, distributed across four cortical hemispheres. 

Conversely, the φ of a supercomputer is less than the φs of any of the circuits composing it, so a supercomputer—however large and powerful—is not conscious. The theory predicts that even if some deep-learning system could pass the Turing test, it would be a so-called “zombie”—simulating consciousness, but not actually conscious. 

Like panpsychism, then, IIT considers consciousness an intrinsic, fundamental property of reality that is graded and most likely widespread in the tree of life, since any system with a non-zero amount of integrated information will feel like something. This does not imply that a bee feels obese or makes weekend plans. But a bee can feel a measure of happiness when returning pollen-laden in the sun to its hive. When a bee dies, it ceases to experience anything. Likewise, given the vast complexity of even a single cell, with millions of proteins interacting, it may feel a teeny-tiny bit like something. 

Debating the nature of consciousness might at first sound like an academic exercise, but it has real and important consequences. Most obviously, it matters to how we think about people in vegetative states. Such patients may groan or otherwise move unprovoked but fail to respond to commands to signal in a purposeful manner by moving their eyes or nodding. Are they conscious minds, trapped in their damaged body, able to perceive but unable to respond? Or are they without consciousness?

Evaluating such patients for the presence of consciousness is tricky. IIT proponents have developed a procedure that can test for consciousness in an unresponsive person. First they set up a network of EEG electrodes that can measure electrical activity in the brain. Then they stimulate the brain with a gentle magnetic pulse, and record the echoes of that pulse. They can then calculate a mathematical measure of the complexity of those echoes, called a perturbational complexity index (PCI).

In healthy, conscious individuals—or in people who have brain damage but are clearly conscious—the PCI is always above a particular threshold. Conversely, when healthy people are asleep, their PCI is invariably below that threshold (0.31). So it is reasonable to take PCI as a proxy for the presence of a conscious mind. If the PCI of someone in a persistent vegetative state is always measured to be below this threshold, we can with confidence say that this person is not covertly conscious.
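The published PCI pipeline involves source modelling, statistical thresholding of the TMS-evoked response, and a normalized Lempel-Ziv compression step, none of which is reproduced faithfully here. The following is a rough sketch under stated assumptions: the median-split binarization, the `pci_like` helper, and the synthetic arrays are all illustrative stand-ins, not the clinical procedure or its 0.31 threshold.

```python
import numpy as np

def lz_phrase_count(bits):
    """Number of phrases in a simple LZ78-style parse of a binary string
    (a common stand-in for Lempel-Ziv complexity)."""
    phrases, current = set(), ""
    for b in bits:
        current += b
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)


def pci_like(evoked):
    """PCI-flavoured index for a (channels x time) array of evoked amplitudes:
    binarize by a median split, flatten, and normalize the Lempel-Ziv phrase
    count by the value expected for a random string of the same length and bias."""
    binary = (np.abs(evoked) > np.median(np.abs(evoked))).astype(int).ravel()
    s = "".join(map(str, binary))
    n, p = len(s), binary.mean()
    if p in (0.0, 1.0):          # a flat response has no complexity at all
        return 0.0
    source_entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    expected_random = n * source_entropy / np.log2(n)
    return lz_phrase_count(s) / expected_random


# Purely synthetic "echoes" (not real EEG): the same waveform on every channel
# versus a spatially differentiated response.
rng = np.random.default_rng(0)
uniform_echo = np.tile(rng.random(50), (32, 1))
differentiated_echo = rng.random((32, 50))
print(pci_like(uniform_echo), pci_like(differentiated_echo))
```

A response that merely repeats the same waveform across the scalp compresses well and scores low, while one that is both widespread and differentiated scores higher—the intuition behind using compressibility as a proxy for the combination of integration and differentiation.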

This method is being investigated in a number of clinical centers across the US and Europe. Other tests seek to validate the predictions that IIT makes about the location and timing of the footprints of sensory consciousness in the brains of humans, nonhuman primates, and mice. 

Unlike panpsychism, the startling claims of IIT can be empirically tested. If they hold up, science may have found a way to cut through a knot that has puzzled philosophers for as long as philosophy has existed.

Christof Koch is the chief scientist of the MindScope program at the Allen Institute for Brain Science in Seattle.

The Mind issue

This story was part of our September 2021 issue

The remarkable timeliness of Animism (Outras Palavras)

Old anthropology saw it as the "most primitive form" of religion. But, surprisingly, it suggests answers to crucial questions of our time: the divorce between culture and nature, and science's tendency to treat everything that is not "human" as an object

Published 2 September 2021 at 17:46 – Updated at 18:00

By Renato Sztutman, in Revista Cult, editorial partner of Outras Palavras

Much has been said these days about animism. More than that, much has been said about a need to reclaim animism – a way of responding to modernity's rationalist project, which turned the environment into something inert and opaque, a synonym for resource and commodity. In pandemic times, we have found that something very important has been lost in the relationship between human subjects and the world they inhabit, and that this may lie at the root of the deep crisis we are living through.

Animism is, in principle, an anthropological concept, proposed by Edward Tylor in Primitive Culture (1871) to refer to the most "primitive" form of religion, the one that attributes a "soul" to all inhabitants of the cosmos and that would have preceded polytheism and monotheism. The term "soul" comes from the Latin anima – breath, vital principle. It would be the very cause of life, as well as something capable of detaching itself from the body and travelling to other planes and times. The evolutionist reasoning of authors such as Tylor was refuted by different currents of anthropology over the course of the 20th century, although we might say it remains embedded in modern common sense. The idea of an embryonic religion, founded on beliefs devoid of logic, lost its place in the discourse of anthropologists, who came to look for the rationalities behind different magico-religious practices.

An important rehabilitation of the anthropological concept of animism appears with Philippe Descola, in his monograph "La nature domestique" (1986), on the Achuar of the Ecuadorian Amazon. Descola showed that when the Achuar say that animals and plants have wakan ("soul" or, more precisely, intentionality, the faculty of communication, or intelligence), this should not be interpreted metaphorically or as symbolism. It means that the way the Achuar describe the world differs from the way naturalists do (guided by the dictates of modern Science), because they do not presuppose an impassable line between what we usually call Nature and Culture. Animism would not be mere belief, symbolic representation, or a primitive form of religion but, above all, an ontology, a way of describing everything that exists, tied to practices. The Achuar engage in effective relationships with other species, so that, for example, women are regarded as mothers of the plants they cultivate, and men as brothers-in-law of the animals they hunt.

For Descola, the naturalist ontology cannot be taken as the only way of describing the world, as the ultimate source of truth. Three other ontological regimes should be considered symmetrically, among them animism. This point was developed exhaustively in Par-delà nature et culture (2005), in which the author embarks on a comparative adventure crossing ethnographies from around the globe. Animism inverts the frame of naturalism: if in the latter the identification between humans and non-humans takes place on the plane of physicality (what we call body, organism, or biology), in animism that same identification takes place on the plane of interiority (what we call soul, spirit, or subjectivity). For naturalists, the soul would be the privilege of the human species, whereas for animists it is one and the same "human soul" that is distributed among all the beings of the cosmos.

The idea of perspectivism, which authors such as Eduardo Viveiros de Castro and Tânia Stolze Lima attribute to Amerindian cosmologies, extends and transforms that of animism. Perspectivism would be, roughly speaking, an indigenous theory or metaphysics which holds that (ideally) different species regard themselves as human but regard the others as non-human. Everything that exists in the cosmos can be a subject, but not all can be subjects at the same time, which implies a dispute. It is said, for example, that jaguars see themselves as human and see humans as prey. What humans see as blood is, for them, manioc beer, a festive drink. Jaguars and other animals (but also plants, stars, and meteorological phenomena) are, in short, human "for themselves". An Amerindian shaman would be capable of changing perspective, of putting himself in another's place and seeing as the other sees him, and thus of understanding that the human condition is shared by other creatures.

As Viveiros de Castro insists in A inconstância da alma selvagem (2002), perspective lies in bodies, understood as bundles of affects more than as organisms. A change of perspective would thus be a somatic metamorphosis, anchored in the idea of a common ground of humanity, an animic potentiality distributed horizontally across the cosmos. If perspectivism is the reverse of anthropocentrism, it is inseparable from a certain anthropomorphism, so that human prerogatives cease to be the exclusive property of the human species and take on the most diverse forms.

The book by Davi Kopenawa and Bruce Albert, A queda do céu (2010), offers luminous examples of these Amazonian animisms and perspectivisms. Kopenawa's whole narrative is grounded in his training as a Yanomami shaman, which is defined by dealings with the xapiripë spirits, anthropomorphic beings that are nothing other than the "souls" or "images" (the translation Albert prefers for the term utupë) of the "animal ancestors" (yaroripë). According to Yanomami mythology, animals were human in primordial times but metamorphosed into their current bodies. What unites humans and animals is precisely utupë, and it is as utupë that their ancestors appear to the shamans. When Yanomami shamans inhale yãkoana (a psychoactive powder), their eyes "die" and – changing perspective – they gain access to the invisible reality of the xapiripë, who present themselves at a great festival, dancing and singing, adorned and shining. Yanomami shamanism – drawing on experiences of trance and dream – is a way of knowing and describing the world. It is in this sense that Kopenawa says of the whites, the "people of merchandise", that they do not know the forest-land (urihi), because they do not know how to see. Where they identify an inert nature, the Yanomami apprehend a tangle of relations. Knowledge of this hidden reality is what would allow these shamans to prevent the falling of the sky, catalysed by the destructive action of the whites. And so, Kopenawa insists, this knowledge comes to concern not only the Yanomami but all the inhabitants of the planet.

Though distinct, the proposals of Descola and Ingold both look to animist experience for a counterpoint to naturalist and rationalist views, which impose a barrier between the (human) subject and the world. As Viveiros de Castro proposes, this critique amounts to a "decolonisation of thought", calling into question human exceptionalism and the claim to an exclusive ontology held by the moderns. This counterpoint and decolonisation by no means end in a negation of the modern sciences; rather, they demand that we imagine that another science is possible, or that animism can be rediscovered within the sciences. Such has been the effort of authors like Bruno Latour and Isabelle Stengers, the most prominent exponents of science studies: to show that science in action belies the official discourse, for which to know is to de-animate (de-subjectify) the world, reducing it to its immutable, objective character.

In the book Sobre o culto moderno dos deuses "fatiches" (1996), Latour brings the idea of the fetish in African religions close to the idea of the fact in the modern sciences. A fetish is an object of worship (or even a divinity) made by humans which, at the same time, acts upon them. Through his ethnographic work in laboratories, Latour suggested that scientific facts are not merely "given" but depend on interactions and articulations within networks. In a laboratory, molecules and cells would not simply be objects but unpredictable actants, constantly interrogated by the researcher. In his pioneering Jamais fomos modernos (1991), Latour holds that scientific facts are in a certain sense made, and that they are only accepted as facts once put to the test of controversy, that is, once they manage to be stabilised as truths.

Isabelle Stengers goes beyond the analogy between facts ("factishes") and fetishes to seek, in the history of the modern sciences, their constitutive tension with so-called magical practices. According to her, the modern sciences establish themselves through the disqualification of other practices, accused of error or charlatanism. She traces, for example, how chemistry divorced itself from alchemy, and psychoanalysis from magnetism and hypnosis. In short, the modern sciences disqualify what lies at their origin. And this, according to Stengers, cannot be dissociated from the entanglement between the history of the sciences and that of capitalism. In La sorcellerie capitaliste (Capitalist Sorcery, 2005), in dialogue with the neopagan activist Starhawk, Stengers and Philippe Pignarre recall that the advent of modern science and of capitalism in the 17th and 18th centuries cannot be separated from the persecution of witchcraft practices led by women. If capitalism, anchored in private property and patriarchy, emerged with the politics of enclosure (the expulsion of peasants from the commons), the scientific revolution was carried out at the cost of destroying magical practices. Stengers and Pignarre find in the activism of Starhawk and her group Reclaim, which emerged in California at the end of the 1980s, an example of anti-capitalist resistance. For Starhawk, to resist capitalism is precisely to reclaim practices – in this case the Wiccan tradition, of European origin – that were sacrificed so that it could flourish.

To reclaim magic, to reclaim animism, would be, for Stengers, a form of existence and of resistance. As she wrote in Cosmopolíticas (1997), when we speak of practices disqualified by the modern sciences, we should not settle for a mere act of tolerance. It is not a matter of regarding magic as a belief or as "culture", as anthropology did in Tylor's time and until not long ago. To go beyond the "curse of tolerance" is to take seriously indigenous assertions, for example, that a rock is alive or that a tree thinks. Stengers is not interested in animism as an "other" ontology: that would make it entirely external to modern experience. Nor is she interested in taking animism as the sole truth, a new ontology that would come to dethrone the others. More important would be to experiment with it, to make it work in the modern world.

What other science would be capable of reclaiming animism today? Here is a properly Stengersian question. Today we are living, worldwide, through a health crisis of unprecedented proportions, one that cannot be dissociated from environmental devastation and from the pact established between the sciences and the market. The other science, Latour and Stengers would say, is that of the Earth system and the climate, whose landmark is the Gaia theory developed by James Lovelock and Lynn Margulis in the 1970s. For these scientists, Gaia is the Earth as a sentient organism, the Earth as the outcome of a tangle of relations between living and non-living beings. We might say that Gaia is a properly animist concept that erupts within the modern sciences, provoking discomfort and scepticism. What Stengers calls the "intrusion of Gaia", in her book No tempo das catástrofes (2009), is a reaction or response of the planet to the destructive effects of capitalism; it is the ever more frequent occurrence of environmental catastrophes and the warning of a possible collapse of the globe. But it is also, or above all, a call for connection among non-hegemonic practices – scientific, artistic, political – and for the possibility of recreating a collective intelligence and imagining new worlds.

Stengers's call forces us to think about the urgency of an effective connection between the modern sciences and indigenous sciences, a connection that reclaims animism, recognising in it a way of engaging humans with the world and thereby helping to avoid or postpone the destruction of the planet. As Ailton Krenak, a prophet of our time, writes in Ideias para adiar o fim do mundo (2019), "when we depersonalise the river and the mountain, when we strip them of their meanings, holding that such meaning is the exclusive attribute of humans, we free those places up to become the residue of industrial and extractive activity". In other words, when we de-animate the world, we leave it at the mercy of a deadly power. Reclaiming animism emerges as a call for survival, as a chance to rebuild life and meaning in the post-pandemic time to come.