Category archive: Uncategorized

Our brains exist in a state of “controlled hallucination” (MIT Technology Review)

technologyreview.com

Matthew Hutson – August 25, 2021

Three new books lay bare the weirdness of how our brains process the world around us.

Remember the photo of the dress that went viral in 2015, which some people saw as blue and black and others as white and gold? Eventually, vision scientists figured out what was happening. It wasn’t our computer screens or our eyes. It was the mental calculations that brains make when we see. Some people unconsciously inferred that the dress was in direct light and mentally subtracted yellow from the image, so they saw blue and black stripes. Others saw it as being in shadow, where bluish light dominates. Their brains mentally subtracted blue from the image, and came up with a white and gold dress.
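
To make the mechanism concrete, here is a minimal sketch, in Python, of that illuminant-discounting idea. The pixel values and color casts are illustrative assumptions, not measurements from the photograph; the point is only that subtracting different assumed casts from the same ambiguous input yields different percepts.

```python
import numpy as np

# A toy sketch of "discounting the illuminant": perceived color is roughly
# the observed color minus the color cast the viewer attributes to the light.
# All RGB values here are illustrative assumptions, not measurements from
# the actual photograph of the dress.

observed_stripe = np.array([140.0, 120.0, 180.0])  # an ambiguous, bluish pixel

yellow_cast = np.array([60.0, 50.0, 0.0])  # assumed if the dress is in direct, warm light
blue_cast = np.array([0.0, 10.0, 70.0])    # assumed if the dress is in bluish shadow

def discount(observed, assumed_cast):
    """Subtract the assumed illumination cast, clipped to the valid RGB range."""
    return np.clip(observed - assumed_cast, 0, 255)

print(discount(observed_stripe, yellow_cast))  # bluer result: the "blue and black" reading
print(discount(observed_stripe, blue_cast))    # warmer, paler result: the "white and gold" reading
```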

Not only does thinking filter reality; it constructs it, inferring an outside world from ambiguous input. In Being You, Anil Seth, a neuroscientist at the University of Sussex, relates his explanation for how the “inner universe of subjective experience relates to, and can be explained in terms of, biological and physical processes unfolding in brains and bodies.” He contends that “experiences of being you, or of being me, emerge from the way the brain predicts and controls the internal state of the body.” 

Prediction has come into vogue in academic circles in recent years. Seth and the philosopher Andy Clark, a colleague at Sussex, refer to predictions made by the brain as “controlled hallucinations.” The idea is that the brain is always constructing models of the world to explain and predict incoming information; it updates these models when prediction and the experience we get from our sensory inputs diverge. 
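
The loop is simple enough to caricature in a few lines. The sketch below is a toy, one-number world model with an error-driven update; it is an assumption-laden stand-in for the Bayesian machinery Seth and Clark have in mind, not their actual formalism.

```python
# A one-dimensional caricature of the predictive loop: the "percept" is the
# model's prediction, and the model changes only as far as needed to explain
# away prediction error. This is a toy illustration, not Seth's or Clark's
# actual formalism.

def controlled_hallucination(sensory_stream, learning_rate=0.3):
    belief = 0.0  # the model's current best guess about the world
    for observation in sensory_stream:
        prediction = belief                # top-down: predict the incoming signal
        error = observation - prediction   # bottom-up: the prediction error
        belief += learning_rate * error    # update the model to reduce the error
        yield prediction, error, belief

stream = [1.0, 1.1, 0.9, 5.0, 5.1, 4.9]  # the world changes abruptly midway
for prediction, error, belief in controlled_hallucination(stream):
    print(f"prediction={prediction:.2f} error={error:+.2f} belief={belief:.2f}")
```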

“Chairs aren’t red,” Seth writes, “just as they aren’t ugly or old-fashioned or avant-garde … When I look at a red chair, the redness I experience depends both on properties of the chair and on properties of my brain. It corresponds to the content of a set of perceptual predictions about the ways in which a specific kind of surface reflects light.” 

Seth is not particularly interested in redness, or even in color more generally. Rather his larger claim is that this same process applies to all of perception: “The entirety of perceptual experience is a neuronal fantasy that remains yoked to the world through a continuous making and remaking of perceptual best guesses, of controlled hallucinations. You could even say that we’re all hallucinating all the time. It’s just that when we agree about our hallucinations, that’s what we call reality.”

Cognitive scientists often rely on atypical examples to gain understanding of what’s really happening. Seth takes the reader through a fun litany of optical illusions and demonstrations, some quite familiar and others less so. Squares that are in fact the same shade appear to be different; spirals printed on paper appear to spontaneously rotate; an obscure image turns out to be a woman kissing a horse; a face shows up in a bathroom sink. Re-creating the mind’s psychedelic powers in silicon, an artificial-intelligence-powered virtual-reality setup that he and his colleagues built produces a Hunter Thompson–esque menagerie of animal parts emerging piecemeal from other objects in a square on the University of Sussex campus. This series of examples, in Seth’s telling, “chips away at the beguiling but unhelpful intuition that consciousness is one thing—one big scary mystery in search of one big scary solution.” Seth’s perspective might be unsettling to those who prefer to believe that things are as they seem to be: “Experiences of free will are perceptions. The flow of time is a perception.”

Seth is on comparatively solid ground when he describes how the brain shapes experience, what philosophers call the “easy” problems of consciousness. They’re easy only in comparison to the “hard” problem: why subjective experience exists at all as a feature of the universe. Here he treads awkwardly, introducing the “real” problem, which is to “explain, predict, and control the phenomenological properties of conscious experience.” It’s not clear how the real problem differs from the easy problems, but somehow, he says, tackling it will get us some way toward resolving the hard problem. Now that would be a neat trick.

Where Seth relates, for the most part, the experiences of people with typical brains wrestling with atypical stimuli, in Coming to Our Senses, Susan Barry, an emeritus professor of neurobiology at Mount Holyoke College, tells the stories of two people who acquired new senses later in life than is usual. Liam McCoy, who had been nearly blind since he was an infant, was able to see almost clearly after a series of operations when he was 15 years old. Zohra Damji was profoundly deaf until she was given a cochlear implant at the unusually late age of 12. As Barry explains, Damji’s surgeon “told her aunt that, had he known the length and degree of Zohra’s deafness, he would not have performed the operation.” Barry’s compassionate, nuanced, and observant exposition is informed by her own experience:

At age forty-eight, I experienced a dramatic improvement in my vision, a change that repeatedly brought me moments of childlike glee. Cross-eyed from early infancy, I had seen the world primarily through one eye. Then, in mid-life, I learned, through a program of vision therapy, to use my eyes together. With each glance, everything I saw took on a new look. I could see the volume and 3D shape of the empty space between things. Tree branches reached out toward me; light fixtures floated. A visit to the produce section of the supermarket, with all its colors and 3D shapes, could send me into a sort of ecstasy. 

Barry was overwhelmed with joy at her new capacities, which she describes as “seeing in a new way.” She takes pains to point out how different this is from “seeing for the first time.” A person who has grown up with eyesight can grasp a scene in a single glance. “But where we perceive a three-dimensional landscape full of objects and people, a newly sighted adult sees a hodgepodge of lines and patches of colors appearing on one flat plane.” As McCoy described his experience of walking up and down stairs to Barry: 

The upstairs are large alternating bars of light and dark and the downstairs are a series of small lines. My main focus is to balance and step IN BETWEEN lines, never on one … Of course going downstairs you step in between every line but upstairs you skip every other bar. All the while, when I move, the stairs are skewing and changing.

Even a sidewalk was tricky, at first, to navigate. He had to judge whether a line “indicated the junction between flat sidewalk blocks, a crack in the cement, the outline of a stick, a shadow cast by an upright pole, or the presence of a sidewalk step,” Barry explains. “Should he step up, down, or over the line, or should he ignore it entirely?” As McCoy says, the complexity of his perceptual confusion probably cannot be fully explained in terms that sighted people are used to.

The same, of course, is true of hearing. Raw audio can be hard to untangle. Barry describes her own ability to listen to the radio while working, effortlessly distinguishing the background sounds in the room from her own typing and from the flute and violin music coming over the radio. “Like object recognition, sound recognition depends upon communication between lower and higher sensory areas in the brain … This neural attention to frequency helps with sound source recognition. Drop a spoon on a tiled kitchen floor, and you know immediately whether the spoon is metal or wood by the high- or low-frequency sound waves it produces upon impact.” Most people acquire such capacities in infancy. Damji didn’t. She would often ask others what she was hearing, but had an easier time learning to distinguish sounds that she made herself. She was surprised by how noisy eating potato chips was, telling Barry: “To me, potato chips were always such a delicate thing, the way they were so lightweight, and so fragile that you could break them easily, and I expected them to be soft-sounding. But the amount of noise they make when you crunch them was something out of place. So loud.” 
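
Barry’s spoon example amounts to classifying a sound source by its dominant frequency, which a few lines of signal processing can mimic. In the sketch below, the synthetic impact sounds and the 2,000 Hz decision threshold are assumptions for demonstration, not acoustic measurements.

```python
import numpy as np

# A toy version of frequency-based source recognition: find the dominant
# frequency of an impact sound and guess the material. The synthetic
# "impacts" and the 2,000 Hz threshold are illustrative assumptions, not
# acoustic measurements.

def dominant_frequency(signal, sample_rate):
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

sample_rate = 44100
t = np.linspace(0, 0.05, int(sample_rate * 0.05), endpoint=False)
metal_clang = np.sin(2 * np.pi * 6000 * t) * np.exp(-60 * t)  # bright, ringing impact
wood_thunk = np.sin(2 * np.pi * 300 * t) * np.exp(-60 * t)    # dull, low-pitched impact

for name, clip in [("metal spoon", metal_clang), ("wooden spoon", wood_thunk)]:
    f = dominant_frequency(clip, sample_rate)
    guess = "metal" if f > 2000 else "wood"
    print(f"{name}: dominant frequency {f:.0f} Hz -> guessed {guess}")
```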

As Barry recounts, at first Damji was frightened by all sounds, “because they were meaningless.” But as she grew accustomed to her new capabilities, Damji found that “a sound is not a noise anymore but more like a story or an event.” The sound of laughter came to her as a complete surprise, and she told Barry it was her favorite. As Barry writes, “Although we may be hardly conscious of background sounds, we are also dependent upon them for our emotional well-being.” One strength of the book is in the depth of her connection with both McCoy and Damji. She spent years speaking with them and corresponding as they progressed through their careers: McCoy is now an ophthalmology researcher at Washington University in St. Louis, while Damji is a doctor. From the details of how they learned to see and hear, Barry concludes, convincingly, that “since the world and everything in it is constantly changing, it’s surprising that we can recognize anything at all.”

In What Makes Us Smart, Samuel Gershman, a psychology professor at Harvard, says that there are “two fundamental principles governing the organization of human intelligence.” Gershman’s book is not particularly accessible; it lacks connective tissue and is peppered with equations that are incompletely explained. He writes that intelligence is governed by “inductive bias,” meaning we prefer certain hypotheses before making observations, and “approximation bias,” which means we take mental shortcuts when faced with limited resources. Gershman uses these ideas to explain everything from visual illusions to conspiracy theories to the development of language, asserting that what looks dumb is often “smart.”
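
A toy calculation suggests why a “dumb-looking” shortcut can be smart. In the hypothetical sketch below, an estimator with an inductive bias, shrinking toward a prior guess, beats the unbiased sample average when only five observations are available; every number in it is illustrative.

```python
import random

# A toy demonstration that a biased shortcut can beat an unbiased method
# when data are scarce: shrinking an estimate toward a prior guess trades
# a little bias for much less variance. All numbers are illustrative.

rng = random.Random(0)
true_rate = 0.6    # the quantity being estimated
prior_guess = 0.5  # the inductive bias: expect middling rates before seeing data

def biased_estimate(samples, prior_weight=5):
    # Average the data with `prior_weight` imaginary observations at the prior.
    return (sum(samples) + prior_weight * prior_guess) / (len(samples) + prior_weight)

mse_unbiased = mse_biased = 0.0
trials = 10_000
for _ in range(trials):
    samples = [rng.random() < true_rate for _ in range(5)]  # only five observations
    mse_unbiased += (sum(samples) / 5 - true_rate) ** 2
    mse_biased += (biased_estimate(samples) - true_rate) ** 2

print(f"unbiased mean squared error: {mse_unbiased / trials:.4f}")
print(f"biased   mean squared error: {mse_biased / trials:.4f}")  # smaller
```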

“The brain is evolution’s solution to the twin problems of limited data and limited computation,” he writes. 

He portrays the mind as a raucous committee of modules that somehow helps us fumble our way through the day. “Our mind consists of multiple systems for learning and decision making that only exchange limited amounts of information with one another,” he writes. If he’s correct, it’s impossible for even the most introspective and insightful among us to fully grasp what’s going on inside our own head. As Damji wrote in a letter to Barry:

When I had no choice but to learn Swahili in medical school in order to be able to talk to the patients—that is when I realized how much potential we have—especially when we are pushed out of our comfort zone. The brain learns it somehow.

Matthew Hutson is a contributing writer at The New Yorker and a freelance science and tech writer.

The Mind issue

This story was part of our September 2021 issue

A real-time revolution will up-end the practice of macroeconomics (The Economist)

economist.com

The Economist Oct 23rd 2021


DOES ANYONE really understand what is going on in the world economy? The pandemic has made plenty of observers look clueless. Few predicted $80 oil, let alone fleets of container ships waiting outside Californian and Chinese ports. As covid-19 let rip in 2020, forecasters overestimated how high unemployment would be by the end of the year. Today prices are rising faster than expected and nobody is sure if inflation and wages will spiral upward. For all their equations and theories, economists are often fumbling in the dark, with too little information to pick the policies that would maximise jobs and growth.

Yet, as we report this week, the age of bewilderment is starting to give way to greater enlightenment. The world is on the brink of a real-time revolution in economics, as the quality and timeliness of information are transformed. Big firms from Amazon to Netflix already use instant data to monitor grocery deliveries and how many people are glued to “Squid Game”. The pandemic has led governments and central banks to experiment, from monitoring restaurant bookings to tracking card payments. The results are still rudimentary, but as digital devices, sensors and fast payments become ubiquitous, the ability to observe the economy accurately and speedily will improve. That holds open the promise of better public-sector decision-making—as well as the temptation for governments to meddle.

The desire for better economic data is hardly new. America’s GNP estimates date to 1934 and initially came with a 13-month time lag. In the 1950s a young Alan Greenspan monitored freight-car traffic to arrive at early estimates of steel production. Ever since Walmart pioneered supply-chain management in the 1980s, private-sector bosses have seen timely data as a source of competitive advantage. But the public sector has been slow to reform how it works. The official figures that economists track—think of GDP or employment—come with lags of weeks or months and are often revised dramatically. Productivity takes years to calculate accurately. It is only a slight exaggeration to say that central banks are flying blind.

Bad and late data can lead to policy errors that cost millions of jobs and trillions of dollars in lost output. The financial crisis would have been a lot less harmful had the Federal Reserve cut interest rates to near zero in December 2007, when America entered recession, rather than in December 2008, when economists at last saw it in the numbers. Patchy data about a vast informal economy and rotten banks have made it harder for India’s policymakers to end their country’s lost decade of low growth. The European Central Bank wrongly raised interest rates in 2011 amid a temporary burst of inflation, sending the euro area back into recession. The Bank of England may be about to make a similar mistake today.

The pandemic has, however, become a catalyst for change. Without the time to wait for official surveys to reveal the effects of the virus or lockdowns, governments and central banks have experimented, tracking mobile phones, contactless payments and the real-time use of aircraft engines. Instead of locking themselves in their studies for years writing the next “General Theory”, today’s star economists, such as Raj Chetty at Harvard University, run well-staffed labs that crunch numbers. Firms such as JPMorgan Chase have opened up treasure chests of data on bank balances and credit-card bills, helping reveal whether people are spending cash or hoarding it.

These trends will intensify as technology permeates the economy. A larger share of spending is shifting online and transactions are being processed faster. Real-time payments grew by 41% in 2020, according to McKinsey, a consultancy (India registered 25.6bn such transactions). More machines and objects are being fitted with sensors, including individual shipping containers that could make sense of supply-chain blockages. Govcoins, or central-bank digital currencies (CBDCs), which China is already piloting and over 50 other countries are considering, might soon provide a goldmine of real-time detail about how the economy works.

Timely data would cut the risk of policy cock-ups—it would be easier to judge, say, if a dip in activity was becoming a slump. And the levers governments can pull will improve, too. Central bankers reckon it takes 18 months or more for a change in interest rates to take full effect. But Hong Kong is trying out cash handouts in digital wallets that expire if they are not spent quickly. CBDCs might allow interest rates to fall deeply negative. Good data during crises could let support be precisely targeted; imagine loans only for firms with robust balance-sheets but a temporary liquidity problem. Instead of wasteful universal welfare payments made through social-security bureaucracies, the poor could enjoy instant income top-ups if they lost their job, paid into digital wallets without any paperwork.

The real-time revolution promises to make economic decisions more accurate, transparent and rules-based. But it also brings dangers. New indicators may be misinterpreted: is a global recession starting or is Uber just losing market share? They are not as representative or free from bias as the painstaking surveys by statistical agencies. Big firms could hoard data, giving them an undue advantage. Private firms such as Facebook, which launched a digital wallet this week, may one day have more insight into consumer spending than the Fed does.

Know thyself

The biggest danger is hubris. With a panopticon of the economy, it will be tempting for politicians and officials to imagine they can see far into the future, or to mould society according to their preferences and favour particular groups. This is the dream of the Chinese Communist Party, which seeks to engage in a form of digital central planning.

In fact no amount of data can reliably predict the future. Unfathomably complex, dynamic economies rely not on Big Brother but on the spontaneous behaviour of millions of independent firms and consumers. Instant economics isn’t about clairvoyance or omniscience. Instead its promise is prosaic but transformative: better, timelier and more rational decision-making. ■

economist.com

Enter third-wave economics

Oct 23rd 2021


AS PART OF his plan for socialism in the early 1970s, Salvador Allende created Project Cybersyn. The Chilean president’s idea was to offer bureaucrats unprecedented insight into the country’s economy. Managers would feed information from factories and fields into a central database. In an operations room bureaucrats could see if production was rising in the metals sector but falling on farms, or what was happening to wages in mining. They would quickly be able to analyse the impact of a tweak to regulations or production quotas.

Cybersyn never got off the ground. But something curiously similar has emerged in Salina, a small city in Kansas. Salina311, a local paper, has started publishing a “community dashboard” for the area, with rapid-fire data on local retail prices, the number of job vacancies and more—in effect, an electrocardiogram of the economy.

What is true in Salina is true for a growing number of national governments. When the pandemic started last year bureaucrats began studying dashboards of “high-frequency” data, such as daily airport passengers and hour-by-hour credit-card spending. In recent weeks they have turned to new high-frequency sources, to get a better sense of where labour shortages are worst or to estimate which commodity price is next in line to soar. Economists have seized on these new data sets, producing a research boom (see chart 1). In the process, they are influencing policy as never before.

This fast-paced economics involves three big changes. First, it draws on data that are not only abundant but also directly relevant to real-world problems. When policymakers are trying to understand what lockdowns do to leisure spending they look at live restaurant reservations; when they want to get a handle on supply-chain bottlenecks they look at day-by-day movements of ships. Troves of timely, granular data are to economics what the microscope was to biology, opening a new way of looking at the world.

Second, the economists using the data are keener on influencing public policy. More of them do quick-and-dirty research in response to new policies. Academics have flocked to Twitter to engage in debate.

And, third, this new type of economics involves little theory. Practitioners claim to let the information speak for itself. Raj Chetty, a Harvard professor and one of the pioneers, has suggested that controversies between economists should be little different from disagreements among doctors about whether coffee is bad for you: a matter purely of evidence. All this is causing controversy among dismal scientists, not least because some, such as Mr Chetty, have done better from the shift than others: a few superstars dominate the field.

Their emerging discipline might be called “third wave” economics. The first wave emerged with Adam Smith and the “Wealth of Nations”, published in 1776. Economics mainly involved books or papers written by one person, focusing on some big theoretical question. Smith sought to tear down the monopolistic habits of 18th-century Europe. In the 20th century John Maynard Keynes wanted people to think differently about the government’s role in managing the economic cycle. Milton Friedman aimed to eliminate many of the responsibilities that politicians, following Keynes’s ideas, had arrogated to themselves.

All three men had a big impact on policies—as late as 1850 Smith was quoted 30 times in Parliament—but in a diffuse way. Data were scarce. Even by the 1970s more than half of economics papers focused on theory alone, suggests a study published in 2012 by Daniel Hamermesh, an economist.

That changed with the second wave of economics. By 2011 purely theoretical papers accounted for only 19% of publications. The growth of official statistics gave wonks more data to work with. More powerful computers made it easier to spot patterns and ascribe causality (this year’s Nobel prize was awarded for the practice of identifying cause and effect). The average number of authors per paper rose, as the complexity of the analysis increased (see chart 2). Economists had greater involvement in policy: rich-world governments began using cost-benefit analysis for infrastructure decisions from the 1950s.

Second-wave economics nonetheless remained constrained by data. Most national statistics are published with lags of months or years. “The traditional government statistics weren’t really all that helpful—by the time they came out, the data were stale,” says Michael Faulkender, an assistant treasury secretary in Washington at the start of the pandemic. The quality of official local economic data is mixed, at best; they do a poor job of covering the housing market and consumer spending. National statistics came into being at a time when the average economy looked more industrial, and less service-based, than it does now. The Standard Industrial Classification, introduced in 1937-38 and still in use with updates, divides manufacturing into 24 subsections, but the entire financial industry into just three.

The mists of time

Especially in times of rapid change, policymakers have operated in a fog. “If you look at the data right now…we are not in what would normally be characterised as a recession,” argued Edward Lazear, then chairman of the White House Council of Economic Advisers, in May 2008. Five months later, after Lehman Brothers had collapsed, the IMF noted that America was “not necessarily” heading for a deep recession. In fact America had entered a recession in December 2007. In 2007-09 there was no surge in economics publications. Economists’ recommendations for policy were mostly based on judgment, theory and a cursory reading of national statistics.

The gap between official data and what is happening in the real economy can still be glaring. Walk around a Walmart in Kansas and many items, from pet food to bottled water, are in short supply. Yet some national statistics fail to show such problems. Dean Baker of the Centre for Economic and Policy Research, using official data, points out that American real inventories, excluding cars and farm products, are barely lower than before the pandemic.

There were hints of an economics third wave before the pandemic. Some economists were finding new, extremely detailed streams of data, such as anonymised tax records and location information from mobile phones. The analysis of these giant data sets requires the creation of what are in effect industrial labs, teams of economists who clean and probe the numbers. Susan Athey, a trailblazer in applying modern computational methods in economics, has 20 or so non-faculty researchers at her Stanford lab (Mr Chetty’s team boasts similar numbers). Of the 20 economists with the most cited new work during the pandemic, three run industrial labs.

More data sprouted from firms. Visa and Square record spending patterns, Apple and Google track movements, and security companies know when people go in and out of buildings. “Computers are in the middle of every economic arrangement, so naturally things are recorded,” says Jon Levin of Stanford’s Graduate School of Business. Jamie Dimon, the boss of JPMorgan Chase, a bank, is an unlikely hero of the emergence of third-wave economics. In 2015 he helped set up an institute at his bank which tapped into data from its network to analyse questions about consumer finances and small businesses.

The Brexit referendum of June 2016 was the first big event when real-time data were put to the test. The British government and investors needed to get a sense of this unusual shock long before Britain’s official GDP numbers came out. They scraped web pages for telltale signs such as restaurant reservations and the number of supermarkets offering discounts—and concluded, correctly, that though the economy was slowing, it was far from the catastrophe that many forecasters had predicted.

Real-time data might have remained a niche pursuit for longer were it not for the pandemic. Chinese firms have long produced granular high-frequency data on everything from cinema visits to the number of glasses of beer that people are drinking daily. Beer-and-movie statistics are a useful cross-check against sometimes dodgy official figures. China-watchers turned to them in January 2020, when lockdowns began in Hubei province. The numbers showed that the world’s second-largest economy was heading for a slump. And they made it clear to economists elsewhere how useful such data could be.

Vast and fast

In the early days of the pandemic Google started releasing anonymised data on people’s physical movements; this has helped researchers produce a day-by-day measure of the severity of lockdowns (see chart 3). OpenTable, a booking platform, started publishing daily information on restaurant reservations. America’s Census Bureau quickly introduced a weekly survey of households, asking them questions ranging from their employment status to whether they could afford to pay the rent.

In May 2020 Jose Maria Barrero, Nick Bloom and Steven Davis, three economists, began a monthly survey of American business practices and work habits. Working-age Americans are paid to answer questions on how often they plan to visit the office, say, or how they would prefer to greet a work colleague. “People often complete a survey during their lunch break,” says Mr Bloom, of Stanford University. “They sit there with a sandwich, answer some questions, and that pays for their lunch.”

Demand for research to understand a confusing economic situation jumped. The first analysis of America’s $600 weekly boost to unemployment insurance, implemented in March 2020, was published in weeks. The British government knew by October 2020 that a scheme to subsidise restaurant attendance in August 2020 had probably boosted covid infections. Many apparently self-evident things about the pandemic—that the economy collapsed in March 2020, that the poor have suffered more than the rich, or that the shift to working from home is turning out better than expected—only seem obvious because of rapid-fire economic research.

It is harder to quantify the policy impact. Some economists scoff at the notion that their research has influenced politicians’ pandemic response. Many studies using real-time data suggested that the Paycheck Protection Programme, an effort to channel money to American small firms, was doing less good than hoped. Yet small-business lobbyists ensured that politicians did not get rid of it for months. Tyler Cowen, of George Mason University, points out that the most significant contribution of economists during the pandemic involved recommending early pledges to buy vaccines—based on older research, not real-time data.

Still, Mr Faulkender says that the special support for restaurants that was included in America’s stimulus was influenced by a weak recovery in the industry seen in the OpenTable data. Research by Mr Chetty in early 2021 found that stimulus cheques sent in December boosted spending by lower-income households, but not much for richer households. He claims this informed the decision to place stronger income limits on the stimulus cheques sent in March.

Shaping the economic conversation

As for the Federal Reserve, in May 2020 the Dallas and New York regional Feds and James Stock, a Harvard economist, created an activity index using data from SafeGraph, a data provider that tracks mobility using mobile-phone pings. The St Louis Fed used data from Homebase to track employment numbers daily. Both showed shortfalls of economic activity in advance of official data. This led the Fed to communicate its doveish policy stance faster.

Speedy data also helped frame debate. Everyone realised the world was in a deep recession much sooner than they had in 2007-09. In the IMF’s overviews of the global economy in 2009, 40% of the papers cited had been published in 2008-09. In the overview published in October 2020, by contrast, over half the citations were for papers published that year.

The third wave of economics has been better for some practitioners than others. As lockdowns began, many male economists found themselves at home with no teaching responsibilities and more time to do research. Female ones often picked up the slack of child care. A paper in Covid Economics, a rapid-fire journal, finds that female authors accounted for 12% of economics working-paper submissions during the pandemic, compared with 20% before. Economists lucky enough to have researched topics before the pandemic which became hot, from home-working to welfare policy, were suddenly in demand.

There are also deeper shifts in the value placed on different sorts of research. The Economist has examined rankings of economists from IDEAS RePEC, a database of research, and citation data from Google Scholar. We divided economists into three groups: “lone wolves” (who publish with fewer than one unique co-author per paper on average); “collaborators” (those who tend to work with more than one unique co-author per paper, usually two to four people); and “lab leaders” (researchers who run a large team of dedicated assistants). We then looked at the top ten economists in each group as measured by RePEC author rankings for the past ten years.
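
In code, the grouping rule might look something like the sketch below. The sample records are invented, and flagging lab leaders directly (since they are defined by their teams of assistants rather than by a co-author count) is an assumption for illustration; this is not The Economist’s actual pipeline.

```python
# A toy version of the grouping rule: a publication record is a list of
# papers, each paper a list of co-author names. Identifying lab leaders by
# a flag (rather than by co-author counts) and the sample records are
# assumptions for illustration; the actual analysis drew on IDEAS RePEc
# rankings and Google Scholar citations.

def classify_economist(papers, runs_large_lab=False):
    if runs_large_lab:
        return "lab leader"  # defined by a large team of dedicated assistants
    unique_per_paper = [len(set(coauthors)) for coauthors in papers]
    average = sum(unique_per_paper) / len(unique_per_paper)
    if average < 1:
        return "lone wolf"
    return "collaborator"  # usually two to four unique co-authors per paper

print(classify_economist([["Ng"], [], ["Ng"]]))                     # lone wolf
print(classify_economist([["Ng", "Ortiz"], ["Paz", "Qi", "Roy"]]))  # collaborator
print(classify_economist([], runs_large_lab=True))                  # lab leader
```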

Collaborators performed far ahead of the other two groups during the pandemic (see chart 4). Lone wolves did worst: working with large data sets benefits from a division of labour. Why collaborators did better than lab leaders is less clear. They may have been more nimble in working with those best suited for the problems at hand; lab leaders are stuck with a fixed group of co-authors and assistants.

The most popular types of research highlight another aspect of the third wave: its usefulness for business. Scott Baker, another economist, and Messrs Bloom and Davis—three of the top four authors during the pandemic compared with the year before—are all “collaborators” and use daily newspaper data to study markets. Their uncertainty index has been used by hedge funds to understand the drivers of asset prices. The research by Messrs Bloom and Davis on working from home has also gained attention from businesses seeking insight on the transition to remote work.

But does it work in theory?

Not everyone likes where the discipline is going. When economists say that their fellows are turning into data scientists, it is not meant as a compliment. A kinder interpretation is that the shift to data-heavy work is correcting a historical imbalance. “The most important problem with macro over the past few decades has been that it has been too theoretical,” says Jón Steinsson of the University of California, Berkeley, in an essay published in July. A better balance with data improves theory. Half of the recent Nobel prize went for the application of new empirical methods to labour economics; the other half was for the statistical theory around such methods.

Some critics question the quality of many real-time sources. High-frequency data are less accurate at estimating levels (for example, the total value of GDP) than they are at estimating changes, and in particular turning-points (such as when growth turns into recession). In a recent review of real-time indicators Samuel Tombs of Pantheon Macroeconomics, a consultancy, pointed out that OpenTable data tended to exaggerate the rebound in restaurant attendance last year.

Others have worries about the new incentives facing economists. Researchers now race to post a working paper with America’s National Bureau of Economic Research in order to stake their claim to an area of study or to influence policymakers. The downside is that consumers of fast-food academic research often treat it as if it is as rigorous as the slow-cooked sort—papers which comply with the old-fashioned publication process involving endless seminars and peer review. A number of papers using high-frequency data which generated lots of clicks, including one which claimed that a motorcycle rally in South Dakota had caused a spike in covid cases, have since been called into question.

Whatever the concerns, the pandemic has given economists a new lease of life. During the Chilean coup of 1973 members of the armed forces broke into Cybersyn’s operations room and smashed up the slides of graphs—not only because it was Allende’s creation, but because the idea of an electrocardiogram of the economy just seemed a bit weird. Third-wave economics is still unusual, but ever less odd. ■

Can Skeletons Have a Racial Identity? (New York Times)

nytimes.com

Sabrina Imbler


A growing number of forensic researchers are questioning how the field interprets the geographic ancestry of human remains.
Forensic anthropologists have relied on features of face and skull bones, known as morphoscopic traits, such as the post-bregmatic depression — a dip on the top of the skull — to estimate ancestry.
Credit: John M. Daugherty/Science Source

Oct. 19, 2021, 2:30 a.m. ET

Racial reckonings were happening everywhere in the summer of 2020, after George Floyd was killed in Minneapolis by the police. The time felt right, two forensic anthropologists reasoned, to reignite a conversation about the role of race in their own field, where specialists help solve crimes by analyzing skeletons to determine who those people were and how they died.

Dr. Elizabeth DiGangi of Binghamton University and Jonathan Bethard of the University of South Florida published a letter in the Journal of Forensic Sciences that questioned the longstanding practice of estimating ancestry, or a person’s geographic origin, as a proxy for estimating race. Ancestry, along with height, age at death and assigned sex, is one of the key details that many forensic anthropologists try to determine.

That fall, they published a longer paper with a more ambitious call to action: “We urge all forensic anthropologists to abolish the practice of ancestry estimation.”

In recent years, a growing number of forensic anthropologists have grown critical of ancestry estimation and want to replace it with something more nuanced.

Criminal cases in which the victim’s identity is entirely unknown are rare. But in these instances, some forensic anthropologists argue, a tool like ancestry estimation can be crucial.

The assessment of race has been a part of forensic anthropology since the field’s inception a century ago. The earliest scholars were white men who studied human skulls to support racist beliefs. Ales Hrdlicka, a physical anthropologist who joined the Smithsonian Institution in 1903, was a eugenicist who looted human remains for his collections and sought to classify humans into different races based on certain appearances and traits.

An expert on skeletons, Dr. Hrdlicka helped law enforcement identify human remains, laying the blueprint for the professional field. Forensic anthropologists thereafter were expected to produce a profile with the “Big Four” — age at death, sex, height and race.

In the 1990s, as more scientists debunked the myth of biological race — the notion that the human species is divided into distinct races — anthropologists grew sharply divided over the issue. One survey found that 50 percent of physical anthropologists accepted the idea of a biological concept of race, while 42 percent rejected it. At the time, some researchers still used terms like “Caucasoid,” “Mongoloid” and “Negroid” to describe skeletons, and DNA as a forensic tool was still many years away. Today in the U.S., the field of forensic anthropology is 87 percent white.

The anthropologist Ales Hrdlicka, right, in 1925.
Credit: Sueddeutsche Zeitung Photo/Alamy

In 1992, Norman Sauer, an anthropologist at Michigan State University, suggested dropping the term “race,” which he considered loaded, and replacing it with “ancestry.” The term became universal. But some researchers contend that little changed about the practice.

When Shanna Williams, a forensic anthropologist at the University of South Carolina School of Medicine Greenville, was in graduate school around a decade ago, it was still customary to sort skeletons into one of the “Big Three” possible populations — African, Asian or European.

But Dr. Williams grew suspicious of the idea and the way ancestry was often assigned. She saw skulls designated as “Hispanic,” a term that refers to a language group and has no biological meaning. She considered how the field might try, and fail, to sort her own skull. “My mom is white, and my dad is Black,” she said. “Do I fit that mold? Am I perfectly one thing or the other?”

The body of a skeleton can provide a person’s age or height. But the question of ancestry is reserved for the skull — specifically, features of face and skull bones, known as morphoscopic traits, that vary across different groups of humans and can occur more frequently in certain populations.

One trait, called the post-bregmatic depression, is a small indentation located on top of some people’s heads. For a long time, forensic anthropologists assumed that if the skull was indented, the person may be Black.

But forensic anthropologists know little else about the post-bregmatic depression. “There’s not been any understanding as to why this trait exists, what causes it, and what it means,” Dr. Bethard said.

Moreover, the science linking the trait and African ancestry was flawed. In 2003, Joe Hefner, a forensic anthropologist at Michigan State University, used trait lists from a key textbook, “Skeletal Attribution of Race,” to examine more than 700 skulls for his master’s thesis. He found that the post-bregmatic depression was present in only 40 percent of people with African ancestry, and is actually more common in many other populations.

Of the 17 morphoscopic traits typically used to estimate ancestry, only five have been studied for whether they are heritable, making it unclear why the unstudied traits would correspond with specific populations. “There’s been this use and reuse of these traits without a fundamental understanding of what they even are,” Dr. Bethard said.

Nonetheless, Dr. Hefner said, if nothing is known about a victim beyond the shape of their skull, ancestry might hold the key to their identity.

He cited a recent example in Michigan in which the police had a skull that they believed belonged to a missing woman, one of two who were reported missing in the county at the time. When Dr. Hefner examined it and searched the list of missing people in the area, he concluded that the skull might have come from a missing Southeast Asian male. “They sent us his dental records over and five minutes later we had identified this person,” Dr. Hefner said.

Dr. DiGangi worries that these estimations could suggest to the police that biological race is real and increase racial bias. “When I say to the police, ‘OK, I took these measurements, I looked at these things on the skull and this person is African-American,’ of course they’re going to think it’s biological,” Dr. DiGangi said. “Why would they not?”

To what extent this concern plays out in the real world is hard to measure, however.

Dr. Shanna Williams, a forensic anthropologist and professor in South Carolina, grew suspicious of the way ancestry was assigned when she was in graduate school.
Credit: Juan Diego Reyes for The New York Times

For the past two years, Ann Ross, a forensic anthropologist at North Carolina State University, has pushed the American Academy of Forensic Sciences Standards Board to replace ancestry estimation with something new: population affinity.

Whereas ancestry aims to trace back to a continent of origin, population affinity aims to align someone with a population, such as Panamanian. This more nuanced framework looks at how the larger history of a place or community can lead to significant differences between populations that are otherwise geographically close.

A recent paper by Dr. Ross and Dr. Williams, who are close friends, examines Panama and Colombia as a test case. An ancestry estimation might suggest people from both countries would have similarly shaped skulls. But population affinity acknowledges that the trans-Atlantic slave trade and colonization by Spain resulted in new communities living in Panama that changed the makeup of the country’s population. “Because of those historical events, individuals from Panama are very, very different from those from Colombia,” said Dr. Ross, who is Panamanian.

Dr. Ross even designed her own software, 3D-ID, in place of Fordisc, the most commonly used forensic software that categorizes skulls into inconsistent terms: White. Black. Hispanic. Guatemalan. Japanese.

Other anthropologists say that, for all practical purposes, their own ancestry estimations have become affinity estimations. Kate Spradley, a forensic anthropologist at Texas State University, works with the unidentified remains of migrants found near the U.S.-Mexico border. “When we reference data that uses local population groups, that’s really affinity, not ancestry,” Dr. Spradley said.

In her work, Dr. Spradley uses missing persons’ databases from multiple countries that do not always share DNA data. The bones are often weathered, fragmenting the DNA. Estimating affinity can “help to provide a preponderance of evidence,” Dr. Spradley said.

Still, Dr. DiGangi said that switching to affinity may not address racial biases in law enforcement. Until she sees evidence that bias does not preclude people from being identified, she says, she does not want a “checkbox” that gets at ancestry or affinity.

As of mid-October, Dr. Ross is waiting for the American Academy of Forensic Sciences Standards Board to set a vote to determine whether ancestry estimation should be replaced with population affinity. But the larger debate — over how to bridge the gap between a person’s bones and identity in real life — is far from settled.

“In 10 or 20 years, we might find a better way to do it,” Dr. Williams said. “I hope that’s the case.”

Government meeting with Fundação Cacique Cobra Coral irritates business leaders (Painel S.A./Folha de S.Paulo)

Painel S.A.

Tourism-sector representative says the government cannot count on luck

Joana Cunha – Oct. 19, 2021, 3:06 p.m.

The recent meeting between the Ministry of Mines and Energy and the esoteric entity Fundação Cacique Cobra Coral, which claims to control the weather, displeased representatives of the business community who have spent months trying to convince the government that there would be an economic benefit in bringing back daylight saving time to address the energy problem aggravated by the lack of rain.

Fabio Aguayo, director of CNTur, one of the tourism entities advocating the clock change to extend opening hours in retail and leisure activities, says the ministry’s meeting with Cobra Coral shows that the government is worried, but that it cannot count on luck and wait for a deluge to solve the energy question.

For Aguayo, minister Bento Albuquerque is “intransigent and hard-headed.” He says it must be difficult for the government to accept the return of daylight saving time because the debate has taken an ideological turn comparable to chloroquine and early covid treatment, when it should be more economic, scientific and strategic.

The pro-daylight-saving group started by Aguayo, which has the support of bar and restaurant associations, argues that the measure would yield some energy savings. It would also allow leisure-related businesses to stay open longer and would help the businesses hit hardest by the pandemic.

“They are at a critical moment. They cannot count on luck. They cannot count on the luck that there will be a deluge, a tsunami of rain in Brazil. There won’t be. They got so shut up in that little ideological world of theirs, and now they are turning to the esoteric side. It’s what they have left,” says Aguayo.

The ministry released a statement on Sunday (17) saying that its meeting with the Fundação Cacique Cobra Coral had not been requested by the ministry.

with Mariana Grazini and Andressa Motter

Over ‘energy tragedy,’ ministry meets with NGO Cacique Cobra Coral (UOL)

noticias.uol.com.br

Eduardo Militão, UOL, in Brasília – Oct. 17, 2021, 3:05 p.m.


Ruins from the 1970s revealed by the drought of the Paraná River in São Paulo. Image: Reprodução/TV TEM

To talk with an institution that had been warning of an “economic and energy tragedy” resulting from the lack of rain, officials of the Ministry of Mines and Energy (MME) met with representatives of the Fundação Cacique Cobra Coral (FCCC). On its website, the NGO states that it is presided over by a “medium who channels the spirit and mentor Cacique Cobra Coral, which is also said to have been the spirit of Galileo Galilei and Abraham Lincoln.”

The foundation had requested an audience with minister Bento Albuquerque because it predicted a “blackout in the Center-South [of the country] starting on 10/16/21 if urgent measures are not adopted,” according to the transcription of a September 2 email sent to UOL by the ministry’s press office on Sunday (17).

The dry weather is already affecting the environment, electricity and food prices, and the water supply in some regions; specialists say that if there is not a great deal of rain in the coming months, the situation will tend to worsen.

“We hereby request an off-agenda audience for for [sic] yesterday, in order to deal with the economic x energy tragedy above and the means to recover such irregular precipitation in the right place while still in the now-ending winter season and in spring, whose summer will need to be brought forward as early as spring,” says the email released by the federal government (errors in the original).

Minister did not take part in the meeting

The sender of the correspondence was Osmar Santos, who used his professional email at the “Cacique Cobra Coral Foundation” and signed the text as the person responsible for “governmental relations” at the insurer Tunikito, “official maintainer of http://www.fccc.org.br,” the website of the Fundação Cacique Cobra Coral.

Oct. 17, 2021 – The calendar of a Ministry of Mines and Energy official shows a meeting with the Fundação Cacique Cobra Coral. Image: Reprodução/MME

The meeting was held by videoconference last Thursday (14), with officials of the ministry’s Secretariat of Electric Energy, according to the press office.

“The Minister of Mines and Energy was not even informed of the aforementioned request for an audience and likewise did not take part in the meeting in question,” the ministry stated, even though the government’s own website indicates that the secretariat is part of the ministry.

One of the officials who took part in the meeting was the director of the Department of Electric System Monitoring, Guilherme Silva de Godoi. His official calendar lists a meeting with “FCCC,” the foundation’s initials.

Foundation announced that it makes “predictions,” ministry says

According to the press office of the ministry headed by Albuquerque, the foundation announced that it makes predictions of all kinds about nature.

“During the audience, Mr. Osmar told MME technical staff that the institute performs prediction services of the most varied kinds,” says the text sent to UOL. “It should be stressed that the work at the MME is guided strictly by technical grounding, by the public interest and by transparency in the actions carried out.”

“As public servants, the MME officials only and solely listened to the information from Mr. Osmar, as happens with every request for an audience that the ministry receives, valuing dialogue with all of society”
Press office of the Ministry of Mines and Energy

The foundation’s website states that its mission is to “minimize catastrophes that may occur because of the imbalances caused by man in nature.” The institution did not respond to UOL’s requests for clarification on Sunday.

But Osmar Santos told the magazine Veja that the foundation’s medium, Adelaide Scritori, would bring “a lot of rain” to Minas Gerais starting in November.

The institution often announces contracts with local governments, such as the São Paulo city government, the Federal District in 2017 and the Rio city government. The FCCC has claimed to have advised ministers in the Bolsonaro government and even to have struck partnerships to help refloat a ship stuck in the Suez Canal, in Egypt.

Cobra Coral hastily summoned by Bolsonaro minister: “Make it rain” (Fórum)

revistaforum.com.br

“It’s the deadline. Something had to be done urgently,” said Osmar Santos, spokesman for the medium who channels the entity. The summons came from admiral Bento Albuquerque, minister of Mines and Energy

By Plinio Teodoro – Oct. 15, 2021, 2:12 p.m.


After gaining notoriety in the 1990s, during the FHC administrations, and being sidelined in the Lula/PT era, the Cacique Cobra Coral, the entity invoked by the foundation that bears its name in order to control the rains, has returned to the Planalto in a hurry at the request of the minister of Mines and Energy, admiral Bento Albuquerque.

According to reporting by Cleo Guimarães in the magazine Veja, relayed by Osmar Santos, spokesman for Adelaide Scritori, the medium who channels the cacique, the military minister gave the entity an order: “Make it rain!”

The meeting reportedly took place this Thursday (14). In August, when the country was already deep in a water crisis, the foundation says it sent the Jair Bolsonaro (no party) government an alert about the risk of a blackout starting this Saturday (16).

“It’s the deadline. Something had to be done urgently,” said Osmar, stressing the importance of the meeting for putting an end to the water crisis the country is going through as a result of the lack of rain over the hydroelectric reservoirs.

The foundation guarantees that the “cacique’s” intervention will bring results starting in November, when rains are expected to fall over Minas Gerais and the south of the country.

The foundation is also said to be arranging a meeting with São Paulo governor João Doria (PSDB) to end the drought in the state.

Inauguration

The foundation’s last job for the federal government was at Jair Bolsonaro’s inauguration, in January 2019, when it reportedly kept the rain away during the event.

“Although the day dawned rainy, conditions began to improve after 1 p.m. and kept clearing. Wherever the president and his entourage went, the weather would clear, and it held steady,” Osmar Santos said at the time.


Genomes Show the History and Travels of Indigenous Peoples (Scientific American)

Scientific American

A new study demonstrates “I ka wā mamua, ka wā ma hope,” or “the future is in the past”

October 13, 2021

Keolu Fox is an assistant professor at the University of California, San Diego, where he is affiliated with the department of anthropology, the Global Health Program, the Halıcıoğlu Data Science Institute, the Climate Action Lab, the Design Lab and the Indigenous Futures Institute. His work focuses on designing and engineering genome sequencing and editing technologies to advance precision medicine for Indigenous communities.

Wa’a Kiakahi in Keaukaha, Hawaii. Credit: Keolu Fox

I am the proud descendant of people who, at least 1,000 years ago, made one of the riskiest decisions in human history: to leave behind their homeland and set sail into the world’s largest ocean. As the first Native Hawaiian to be awarded a Ph.D. in genome sciences, I realized in graduate school that there is another possible line of evidence that can give insights into my ancestors’ voyaging history: our moʻokuʻauhau, our genome. Our ancestors’ genomes were shaped by evolutionary and cultural factors, including our migration and the ebb and flow of the Pacific Ocean. They were also shaped by the devastating history of colonialism.

Through analyzing genomes from present-day peoples, we can do incredible things like determine the approximate number of wa‘a (voyaging canoes) that arrived when my ancestors landed on the island of Hawaii or even reconstruct the genomes of some of the legendary chiefs and navigators that discovered the islands of the Pacific. And beyond these scientific and historical discoveries, genomics research can also help us understand and rectify the injustices of the past. For instance, genomics might clarify how colonialism affected things like genetic susceptibility to illness—information crucial for developing population-specific medical interventions. It can also help us reconstruct the history of land use, which might offer new evidence in court cases over disputed territories and land repatriation.

First, let’s examine what we already know from oral tradition and experimental archeology about our incredible voyaging history in the Pacific. Using complex observational science and nature as their guide, my ancestors drew on bird migration patterns, wind and weather systems, ocean currents, the turquoise glint on the bottom of a cloud reflecting a lagoon, and a complex understanding of stars, constellations and physics to find the most remote places in the world. These intrepid voyagers were the first people to launch what Kanaka Maoli (Hawaiian) master navigator Nainoa Thompson refers to as the original “moonshot.”

This unbelievably risky adventure paid off: in less than 50 generations (1,000 years), my ancestors mastered the art of sailing in both hemispheres, traveling back and forth along an oceanic superhighway the size of Eurasia in double-hulled catamarans filled to the brim with taro, sweet potatoes, pigs and chickens, using the stars at night to navigate along with other advanced techniques and technologies, iteratively perfected over time. It was humankind’s most impressive migratory feat: no other culture in human history has covered so much distance in such a short amount of time.

The history of my voyaging ancestors and their legacy has been passed to us traditionally through our ʻōlelo (language), mo‘olelo (oral history) and hula. As a Kanaka Maoli, I have grown up knowing them: of how Maui pulled the Hawaiian Islands from the sea and how Herb Kāne, Ben Finney, Tommy Holmes, Mau Piailug and many other members of the Polynesian Voyaging Society enabled the first noninstrumental voyage from Tahiti to Hawaii in over 600 years onboard the wa‘a Hōkūle‘a.

Genomes from modern Pacific Islanders have enabled us to reconstruct precise timings, paths and branching patterns, or bifurcations, of these ancient voyages, giving a refined understanding of the order in which many archipelagoes in the Pacific were settled. By working collaboratively with communities, our approach has directly challenged colonial science’s legacy of taking artifacts and genetic materials without consent. Tools like those of the new genomics have no doubt been misused in the past to justify racist and social Darwinist ends. Yet by using genetic data graciously provided by multiple communities across the Pacific, and by allowing them to shape research priorities, my colleagues and I have been able to “I ka wā mamua, ka wā ma hope,” or “walk backward into the future.”

So how can our knowledge of the genomic past allow us to walk toward this better future? Genome sequence data are not just helpful in providing refined historical information, they also help us understand and treat important contemporary matters such as population-specific disease. The time frame of these ancestors’ arrival in the Pacific, and the order in which the most remote islands in the world were settled, matters for understanding the incidence and severity among Islander populations of many complex diseases today.

Think of our genetic history as a tree, with present-day populations at the tips of branches and older ones closer to the trunk. Moving backward in time, from the tips toward the trunk, you encounter points where two branches, or populations, descend from a common ancestor. Each branching point represents an event in settlement history in which one population split in two, often because of a migration to a new place.

These events provide key insights into what geneticists call “founder effects” and “population bottlenecks,” which are extremely important for understanding disease susceptibility. For example, if a particular genetic condition is present in the population at the base of a branching event, the populations on islands settled later, each founded by a small subset of that source population, have a higher chance of presenting the same condition. Founder populations have provided key insights into rare population-specific diseases: think of Ashkenazi Jews and susceptibility to Tay-Sachs disease, or Mennonite communities and susceptibility to maple syrup urine disease (MSUD).
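To make the founder-effect logic concrete, here is a minimal sketch in Python of how founding a new island population from a small number of voyagers, followed by ordinary genetic drift, can shift the frequency of a rare disease allele. Every number in it (the allele frequency, the number of founders, the island population size) is an illustrative assumption, not an estimate from this research.

```python
import random

def found_new_population(source_freq, n_founders, n_generations=40, pop_size=500):
    """Simulate a founder event followed by genetic drift.

    source_freq: frequency of a disease allele in the source population.
    n_founders: individuals aboard the founding voyage (2 gene copies each).
    Returns the allele frequency in the descendant island population.
    """
    # Founder event: sample 2 * n_founders gene copies from the source population.
    copies = sum(random.random() < source_freq for _ in range(2 * n_founders))
    freq = copies / (2 * n_founders)

    # Later generations: binomial resampling of gene copies models drift.
    for _ in range(n_generations):
        copies = sum(random.random() < freq for _ in range(2 * pop_size))
        freq = copies / (2 * pop_size)
    return freq

random.seed(1)
# Ten daughter populations founded from the same source (allele at 1%).
outcomes = [found_new_population(source_freq=0.01, n_founders=30) for _ in range(10)]
print(", ".join(f"{f:.3f}" for f in outcomes))
```

Running the sketch a few times shows the point: the same source population can seed islands where the allele has vanished and others where it is several times more frequent, through sampling chance alone.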

This research also sheds important light on colonialism. When European settlers arrived in Pacific places such as Hawaii, Tahiti and Aotearoa (New Zealand), they didn’t just bring the printing press, the Bible and gunpowder; they brought deadly pathogens. For many Indigenous peoples, historical contact with Europeans resulted in population collapse (a loss of approximately 80 percent of an Indigenous population’s size), mostly as a result of virgin-soil epidemics of diseases such as smallpox. From Hernán Cortés to James Cook, these bottlenecks have shaped the contemporary genetics of Indigenous peoples in ways that directly affect our susceptibility to disease.

By integrating digital sequence information (DSI) from both modern and ancient Indigenous genomes in genetic regions such as the human leukocyte antigen (HLA) system, we can observe a reduction in human genetic variation in contemporary populations, as compared with ancient ones. In this way, we can observe empirically how colonialism has shaped the genomes of modern Indigenous populations.
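One standard way to quantify the reduction in variation described above is expected heterozygosity, a textbook diversity statistic. The sketch below compares it for two small allele samples; the HLA-style allele names and counts are invented for illustration and are not data from any real ancient or modern cohort.

```python
from collections import Counter

def expected_heterozygosity(alleles):
    """Expected heterozygosity: 1 minus the sum of squared allele frequencies.

    Higher values mean more genetic variation at the locus.
    """
    counts = Counter(alleles)
    total = sum(counts.values())
    return 1.0 - sum((n / total) ** 2 for n in counts.values())

# Hypothetical HLA-A allele samples (invented for illustration only).
ancient = ["A*02", "A*24", "A*11", "A*33", "A*02", "A*26", "A*24", "A*68"]
modern  = ["A*02", "A*24", "A*02", "A*24", "A*02", "A*02", "A*24", "A*02"]

print(f"ancient sample H_e = {expected_heterozygosity(ancient):.2f}")  # ~0.81
print(f"modern sample  H_e = {expected_heterozygosity(modern):.2f}")   # ~0.47
```

A markedly lower value in the modern sample is exactly the signature a post-contact bottleneck would leave.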

Today fewer than 1 percent of genome-wide association studies, which identify associations between diseases and genetic variants, and fewer than 5 percent of clinical trials include Indigenous peoples. Meanwhile, we have just begun to develop mRNA vaccine-based therapies, which have already shown their ability to “save the world.” Given their success and potential, why not design treatments, such as gene therapies, that are population specific and reflect the local complexity of Indigenous peoples’ unique migratory histories and experiences with colonialism?

Finally, genomics also has the potential to shape the politics of Indigenous rights, specifically how we think about the history of land stewardship and belonging. For instance, emerging genomic evidence can empirically verify who first lived on contested territories. Indigenous groups could demonstrate how many generations before colonists they arrived, evidence that could be used in a court of law to settle land and resource repatriation claims.

Genetics gives us insights into the impact of both our peoples’ proud history of migration and the shameful legacy of colonialism. We need to encourage the use of these data to design treatments for the least, the last, the looked over and the left out, and to generate policies and legal decisions that can rectify the history of injustice. In this way, genomics can connect where we come from to where we will go. Once used to make claims about Indigenous peoples’ inferiority, today the science of the genome can be part of an Indigenous future we can all believe in.

The Cacique Cobra Coral’s meeting with the Ministry of Mines and Energy (Veja)

veja.abril.com.br

A representative of the medium said to have the power to divert rain and control the weather holds a virtual meeting with Bento Albuquerque’s technical team

By Cleo Guimarães. Updated Oct 15, 2021, 5:30 p.m.; published Oct 15, 2021, 1:24 p.m.


Bento Albuquerque, minister of Mines and Energy: his team heard tips and advice from the representative of the medium who channels the Cacique Cobra Coral. Marcelo Camargo/Agência Brasil

Anything goes in the fight against the worst drought in 91 years, including turning to the paranormal. Osmar Santos, spokesman for Adelaide Scritori, the medium who, when channeling the Cacique Cobra Coral, is said to have the power to divert rain and control the weather, took part this Thursday (14) in a meeting with three members of the technical team of the minister of Mines and Energy, Bento Albuquerque. There was a single topic: the country’s water crisis.

[Image: reproduction of the Ministry of Mines and Energy email]

Osmar says that in August the foundation sent the federal government an alert warning of the risk of a blackout if the dry spell lasted more than a month. The virtual meeting took place this Thursday (14) and, according to Guilherme Godoi, one of the ministry staffers present, produced no progress. “We simply listened to what he had to say. Our work is technical.” Osmar, for his part, guarantees that the medium will bring “plenty of rain” to Minas Gerais starting next month.

The lack of rain, as is well known, has drained the reservoirs of the country’s hydroelectric plants to critical levels. As a result, thermoelectric plants, which run on costlier fossil fuels, have been switched on. The extra cost is passed on to residential, commercial and industrial consumers, putting pressure on both producer and consumer inflation.

Ministry of Mines and Energy orders up rain from the Cacique Cobra Coral (Metrópoles)

metropoles.com

The medium had warned the government of the risk of a blackout this Saturday

Ricardo Noblat

Oct 16, 2021, 9:00 a.m.; updated Oct 16, 2021, 5:55 a.m.


So great was the certainty expressed yesterday by President Bolsonaro that recent rains in some regions of the country had warded off the risk of an electricity blackout that, last Thursday, at the invitation of Bento Albuquerque, the minister of Mines and Energy, one Osmar Santos hurriedly flew into Brasília.

Osmar is the spokesman for Adelaide Scritori, the São Paulo medium who claims to channel the spirit of the entity Cacique Cobra Coral, holder of the power to divert rain and control the weather. Osmar told VEJA that he heard the plea of the minister’s technical team: “Make it rain.” And that he answered that the Cacique will make it rain.

The meeting came about because the medium had warned the federal government, last August, about the risk of a nationwide blackout beginning this Saturday, October 16. “That would be the cutoff date,” according to Osmar. “Something had to be done urgently.” Hence the urgent meeting, albeit at the eleventh hour.

Now it is all in God’s hands. Or rather: in the Cacique Cobra Coral’s.

Government talks to representatives of esoteric entity for help with the energy crisis (Folha de S.Paulo)

www1.folha.uol.com.br

Ministry of Mines and Energy confirms meeting with the Cacique Cobra Coral Foundation’s team

Oct 17, 2021, 5:02 p.m.; updated Oct 17, 2021, 7:35 p.m.


The Ministry of Mines and Energy recently met with representatives of the Cacique Cobra Coral Foundation to discuss the water crisis that has dried up the country’s hydroelectric reservoirs this year. The meeting was first reported by the magazine Veja on Friday (15).

The ministry confirmed the meeting with representatives of the esoteric entity, which is credited with powers of intervention in the weather, in a statement released to the press this Sunday (17) in response to news reports about the encounter.

According to the statement, which does not say when the meeting took place, the encounter was not requested by the ministry but was held in keeping with “principles of transparency and frank dialogue.”

The ministry, which says it receives hundreds of requests for meetings, stated that it merely agreed to see the entity’s representatives, and it reproduced an email received in early September in which a representative of the entity named Osmar Santos asked for a meeting with minister Bento Albuquerque.

The subject of the meeting would be “to address the economic-cum-energy tragedy… and the means to recover such irregular rainfall in the right place while the waning winter season and spring still allow,” according to the message reproduced in the ministry’s statement, which stresses that Albuquerque did not take part in the meeting.

“During the audience, Mr. Osmar (the institute’s director of government relations) told MME technical staff that the institute performs forecasting services of the most varied kinds,” the ministry said. “As public servants, MME staff only and solely listened to Mr. Osmar’s information,” it added.

Last week the National Electric System Operator (ONS) said the country’s hydroelectric reservoirs in the Southeast/Center-West region are projected to end the month at 16.7% of capacity, up from the 15.2% projected the week before.

The ONS also said it still sees a “very worrying” scenario for 2022 and recommended that the country remain mobilized to face the next dry season.

Byung-Chul Han: the smartphone and the “hell of the same” (Outras Palavras)

outraspalavras.net

By El País Brasil – Published Oct 14, 2021, at 5:13 p.m. – Updated Oct 14, 2021, at 6:46 p.m.


By Byung-Chul Han, in an interview with Sergio C. Fanjul for El País

With a certain vertigo, the material world, made of atoms and molecules, of things we can touch and smell, is dissolving into a world of information, of non-things, observes the German philosopher of Korean origin Byung-Chul Han. Non-things that we nonetheless keep desiring, buying and selling, and that keep influencing us. The digital world hybridizes ever more conspicuously with what we still consider the real world, to the point where the two blur together, making existence ever more intangible and fleeting. The thinker’s latest book, Não-coisas (Non-things), joins a series of short essays in which the best-selling philosopher (he has been called the rock star of philosophy) minutely dissects the anxieties that neoliberal capitalism produces in us.

Weaving frequent quotations from the great philosophers together with elements of popular culture, Han’s texts range from what he called “the burnout society,” in which we live exhausted and depressed by the relentless demands of existence, to an analysis of the new forms of entertainment we are offered. From psychopolitics, which leads people to surrender meekly to the seductions of the system, to the disappearance of eroticism, which Han attributes to today’s narcissism and exhibitionism, proliferating, for example, on social networks: obsession with oneself makes others disappear, and the world becomes a reflection of our own person. The thinker calls for recovering intimate contact with the everyday; indeed, he is known to enjoy slowly tending a garden, manual work and silence. And he rebels against “the disappearance of rituals,” which dissolves community and turns us into individuals lost in sick and cruel societies.

Byung-Chul Han agreed to give EL PAÍS this interview, but only by means of an emailed questionnaire, which the philosopher answered in German; his replies were later translated and edited.

QUESTION. How is it possible that, in a world obsessed with hyperproduction and hyperconsumption, objects are at the same time dissolving, and we are heading toward a world of non-things?

ANSWER. There is, without a doubt, a hyperinflation of objects that leads to their explosive proliferation. But these are disposable objects with which we form no emotional bonds. Today we are obsessed not with things but with information and data, that is, with non-things. Today we are all infomaniacs. There is even talk of datasexuals [people who obsessively compile and share information about their personal lives].

Q. In this world you describe, of hyperconsumption and lost bonds, why is it important to have “cherished things” and to establish rituals?

A. Things are the supports that lend calm to life. Today they are altogether overshadowed by information. The smartphone is not a thing. I characterize it as the infomat, a device that produces and processes information. Information is the very opposite of the supports that lend calm to life. It lives off the stimulus of surprise and plunges us into a whirlwind of the present. Rituals, too, as temporal architectures, give stability to life. The pandemic destroyed those temporal structures. Think of remote work. When time loses its structure, depression sets in.

Q. Your book argues that, with digitalization, we will become homo ludens, focused more on play than on work. But given precarious employment and the destruction of jobs, will everyone have access to that condition?

A. I spoke of a digital unemployment that is not determined by the business cycle. Digitalization will lead to massive unemployment, and that unemployment will be a very serious problem in the future. Will the human future consist of basic income and computer games? A bleak prospect. With panem et circenses (bread and circuses), Juvenal was describing a Roman society in which political action had become impossible. People are kept content with free food and spectacular games. Domination is total when people do nothing but play. The recent, over-the-top Korean Netflix series Round 6 (Squid Game), in which everyone does nothing but play, points in that direction.

Q. In what sense?

A. Those people are deeply in debt and give themselves over to a deadly game that promises enormous winnings. Round 6 presents a central aspect of capitalism in an extreme format. Walter Benjamin said that capitalism is the first case of a cult that does not expiate guilt but creates debt. At the dawn of digitalization, the dream was that it would replace work with play. In reality, digital capitalism ruthlessly exploits the human drive to play. Think of social networks, which build in game-like elements to make users addicted.

Q. Indeed, the smartphone promised us a certain freedom… Hasn’t it become a long chain that shackles us wherever we are?

A. The smartphone is today a digital workplace and a digital confessional. Every device, every technique of domination, generates objects of cult that are used for subjugation. That is how domination consolidates itself. The smartphone is the cult object of digital domination. As an apparatus of subjugation it works like a rosary and its beads; that is why we keep the phone constantly in hand. The like is the digital amen. We go on confessing. Of our own accord, we lay ourselves bare. But we are not asking for forgiveness; we are asking to be noticed.

Q. Some fear that the internet of things could amount to something like a revolt of objects against human beings.

A. Not exactly. The smarthome [smart house] with its interconnected things represents a digital prison. The smartbed [smart bed] with its sensors extends surveillance into the hours of sleep. Surveillance imposes itself on everyday life, growing and surreptitious, in the guise of convenience. Informatized things, that is, infomats, turn out to be efficient informers that constantly monitor and steer us.

Q. You have described how work takes on the character of play, how social networks paradoxically make us feel freer, how capitalism seduces us. Has the system managed to get inside us and dominate us in a way we actually find pleasurable?

A. Only a repressive regime provokes resistance. The neoliberal regime, by contrast, does not oppress freedom; it exploits it, and so meets no resistance. It is not repressive but seductive. Domination becomes complete the moment it presents itself as freedom.

Q. Why, despite growing precariousness and inequality, despite existential risks and so on, does everyday life in Western countries look so pretty, so hyperplanned and optimistic? Why doesn’t it look like a dystopian cyberpunk film?

A. George Orwell’s novel 1984 recently became a worldwide best seller. People sense that something is not right with our digital comfort zone. But our society more closely resembles Aldous Huxley’s Brave New World. In 1984 people are controlled by the threat of harm; in Brave New World, by the administration of pleasure. The state hands out a drug called “soma” so that everyone feels happy. That is our future.

Q. You suggest that artificial intelligence and big data are not the astonishing forms of knowledge we are led to believe, but rather “rudimentary” ones. Why?

A. Big data commands only a very primitive form of knowledge, namely correlation: A happens, then B occurs. There is no understanding. Artificial intelligence does not think. Artificial intelligence does not feel fear.

Q. Blaise Pascal said that the great tragedy of the human being is the inability to sit quietly doing nothing. We live in a cult of productivity, even in the time we call “free.” You have called this, to great acclaim, the burnout society. Should we make recovering our own time a political goal?

A. Human existence today is wholly absorbed by activity, and this makes it wholly exploitable. Inactivity reappears within the capitalist system of domination as an incorporation of something external: it is called leisure time. Since it serves for recovery from work, it remains bound to work; as a derivative of work it is a functional element within production. What we need is a politics of inactivity, one that could free time from the obligations of production and make genuine leisure possible.

Q. How does a society that tries to homogenize us and erase differences square with people’s growing desire to be different from others, to be in some way unique?

A. Everyone today wants to be authentic, that is, different from others. And so we are constantly comparing ourselves with others. It is precisely this comparison that makes us all the same. In other words: the obligation to be authentic leads to the hell of the same.

Q. Do we need more silence? To be more willing to listen to the other?

A. We need information to fall silent; otherwise it exploits our brains. Today we understand the world through information, and so lived, firsthand experience is lost. We are increasingly disconnected from the world; we are losing the world. The world is more than information, and the screen is a poor representation of it. We circle around ourselves. The smartphone contributes decisively to this impoverished perception of the world. A fundamental symptom of depression is the absence of world.

Q. Depression is one of the most alarming contemporary health problems. How does this absence of world operate?

A. In depression we lose our relation to the world, to the other, and sink into a diffuse ego. I think digitalization, and with it the smartphone, is making us depressive. There are stories of dentists whose patients cling to their phones when the treatment is painful. Why do they do it? Thanks to the phone, I am conscious of myself. The phone helps me feel certain that I am alive, that I exist. So we cling to the phone in critical situations such as dental treatment. As a child, I remember squeezing my mother’s hand at the dentist. Today the mother does not give the child her hand but a phone to hold on to. Support comes not from others but from oneself. That makes us ill. We have to recover the other.

Q. According to the philosopher Fredric Jameson, it is easier to imagine the end of the world than the end of capitalism. Have you imagined some form of post-capitalism, now that the system seems to be in decline?

A. Capitalism really does correspond to the instinctual structures of man. But man is not only an instinctual being. We must tame, civilize and humanize capitalism. That is possible too; the social market economy is proof of it. But our economy is entering a new epoch, the epoch of sustainability.

Q. You earned your doctorate with a thesis on Heidegger, who explored the most abstract forms of thought and whose texts are obscure even to the educated layperson. You, however, manage to apply that abstract thinking to matters anyone can experience. Should philosophy concern itself more with the world most people actually live in?

A. Michel Foucault defined philosophy as a kind of radical journalism and considered himself a journalist. Philosophers should deal squarely with the today, with what is happening now. In that I follow Foucault. I try to interpret the today in thoughts. And it is precisely those thoughts that set us free.

Trust in meteorology has saved lives. The same is possible for climate science. (Washington Post)

washingtonpost.com

Placing our faith in forecasting and science could save lives and money

Oliver Uberti

October 14, 2021


2021 is shaping up to be a historically busy hurricane season. And while damage and destruction have been serious, there has been one saving grace — that the National Weather Service has been mostly correct in its predictions.

Thanks to remote sensing, Gulf Coast residents knew to prepare for the “life-threatening inundation,” “urban flooding” and “potentially catastrophic wind damage” that the Weather Service predicted for Hurricane Ida. Meteorologists nailed Ida’s strength, surge and location of landfall while anticipating that a warm eddy would make her intensify too quickly to evacuate New Orleans safely. Then, as her remnants swirled northeast, reports warned of tornadoes and torrential rain. Millions took heed, and lives were saved. While many people died, their deaths resulted from failures of infrastructure and policy, not forecasting.

The long history of weather forecasting and weather mapping shows that having access to good data can help us make better choices in our own lives. Trust in meteorology has made our communities, commutes and commerce safer — and the same is possible for climate science.

Two hundred years ago, the few who studied weather deemed any atmospheric phenomenon a “meteor.” The term, referencing Aristotle’s “Meteorologica,” essentially meant “strange thing in the sky.” There were wet things (hail), windy things (tornadoes), luminous things (auroras) and fiery things (comets). In fact, the naturalist Elias Loomis, who was among the first to spot Halley’s comet upon its return in 1835, thought storms behaved as cyclically as comets. So to understand “the laws of storms,” Loomis and the era’s other leading weatherheads began gathering observations. Master the elements, they reasoned, and you could safely sail the seas, settle the American West, plant crops with confidence and ward off disease.

In 1856, Joseph Henry, the Smithsonian Institution’s first director, hung a map of the United States in the lobby of its Washington headquarters. Every morning, he would affix small colored discs to show the nation’s weather: white for places with clear skies, blue for snow, black for rain and brown for cloud cover. An arrow on each disc allowed him to note wind direction, too. For the first time, visitors could see weather across the expanding country.

Although simple by today’s standards, the map belied the effort and expense needed to select the correct colors each day. Henry persuaded telegraph companies to transmit weather reports every morning at 10. Then he equipped each station with thermometers, barometers, weathervanes and rain gauges — no small task by horse and rail, as instruments often broke in transit.

For longer-term studies of the North American climate, Henry enlisted academics, farmers and volunteers from Maine to the Caribbean. Eager to contribute, “Smithsonian observers” took readings three times a day and posted them to Washington each month. At its peak in 1860, the Smithsonian Meteorological Project had more than 500 observers. Then the Civil War broke out.

Henry’s ranks thinned by 40 percent as men traded barometers for bayonets. Severed telegraph lines and the priority of war messages crippled his network. Then in January 1865, a fire in Henry’s office landed the fatal blow to the project. All of his efforts turned to salvaging what survived. With a vacuum of leadership in Washington, citizen scientists picked up the slack.

Although the Chicago Tribune lampooned the Milwaukee naturalist Increase Lapham, who had petitioned Congress for a storm-warning service, wondering “what practical value” such a service would provide “if it takes 10 years to calculate the progress of a storm,” Rep. Halbert E. Paine (Wis.), who had studied storms under Loomis, rushed a bill into Congress before the winter recess. In early 1870, a joint resolution establishing a storm-warning service under the U.S. Army Signal Office passed without debate. President Ulysses S. Grant signed it into law the following week.

Despite the mandate for an early-warning system, an aversion to predictions remained. Fiscal hawks could not justify an investment in erroneous forecasts, religious zealots could not stomach the hubris, and politicians wary of a skeptical public could not bear the fallout. In 1893, Agriculture Secretary J. Sterling Morton cut the salary of one of the country’s top weather scientists, Cleveland Abbe, by 25 percent, making an example out of him.

While Willis Moore, the Weather Bureau chief whose agency had dismissed warnings ahead of the deadly 1900 Galveston hurricane, didn’t face consequences for his dereliction of duty, the Weather Bureau’s hurricane-forecasting methods gradually improved as the network expanded and technologies like radio emerged. The advent of aviation increased insight into the upper atmosphere; military research led to civilian weather radar, first deployed at Washington National Airport in 1947. By the 1950s, computers were ushering in the future of numerical forecasting. Meanwhile, public skepticism thawed as more people and businesses saw it in their best interests to trust experts.

In September 1961, a local news team decided to broadcast live from the Weather Bureau office in Galveston, Tex., as Hurricane Carla angled across the Gulf of Mexico. Leading the coverage was a young reporter named Dan Rather. “There is the eye of the hurricane right there,” he told his audience as the radar sweep brought the invisible into view. At the time, no one had seen a radar weather map televised before.

Rather realized that for viewers to comprehend the storm’s size, location and imminent danger, people needed a sense of scale. So he had a meteorologist draw the Texas coast on a transparent sheet of plastic, which Rather laid over the radarscope. Years later, he recalled that when he said “one inch equals 50 miles,” you could hear people in the studio gasp. The sight of the approaching buzz saw persuaded 350,000 Texans to evacuate their homes in what was then the largest weather-related evacuation in U.S. history. Ultimately, Carla inflicted twice as much damage as the Galveston hurricane 60 years earlier. But with the aid of Rather’s impromptu visualization, fewer than 50 lives were lost.

In other words, weather forecasting wasn’t only about good science, but about good communication and visuals.

Data visualization helped the public better understand the weather shaping their lives, enabling them to take action. It also gives us the power to see deadly storms not as freak occurrences but as part of something larger: a pattern.

A modified version of a chart that appears in “Atlas of the Invisible: Maps and Graphics That Will Change How You See the World.” Copyright © 2021 by James Cheshire and Oliver Uberti. With permission of the publisher, W.W. Norton & Co. All rights reserved.

Two hundred years ago, a 10-day forecast would have seemed preposterous. Now we can predict if we’ll need an umbrella tomorrow or a snowplow next week. Imagine if we planned careers, bought homes, built infrastructure and passed policy based on 50-year forecasts as routinely as we plan our weeks by five-day ones.

Unlike our predecessors of the 19th or even 20th centuries, we have access to ample climate data and data visualization that give us the knowledge to take bold actions. What we do with that knowledge is a matter of political will. It may be too late to stop the coming storm, but we still have time to board our windows.

Geoengineering: We should not play dice with the planet (The Hill)

thehill.com

Kim Cobb and Michael E. Mann, opinion contributors

10/12/21 11:30 AM EDT


The fate of the Biden administration’s agenda on climate remains uncertain, captive to today’s toxic atmosphere in Washington, DC. But the headlines of 2021 leave little in the way of ambiguity — the era of dangerous climate change is already upon us, in the form of wildfires, hurricanes, droughts and flooding that have upended lives across America. A recent UN report on climate is clear that these impacts will worsen in the coming two decades if we fail to halt the continued accumulation of greenhouse gases in the atmosphere.

To avert disaster, we must chart a different climate course, beginning this year, to achieve steep emissions reductions this decade. Meeting this moment demands an all-hands-on-deck approach. And no stone should be left unturned in our quest for meaningful options for decarbonizing our economy.

But while it is tempting to pin our hopes on future technology that might reduce the scope of future climate damages, we must pursue such strategies based on sound science, with a keen eye for potential false leads and dead ends. And we must not allow ourselves to be distracted from the task at hand — reducing fossil fuel emissions — by technofixes that, at best, may not pan out and, at worst, may open the door to potentially disastrous unintended consequences.

So-called “geoengineering,” the intentional manipulation of our planetary environment in a dubious effort to offset the warming from carbon pollution, is the poster child for such potentially dangerous gambits. As the threat of climate change becomes more apparent, an increasingly desperate public — and the policymakers who represent them — seem willing to entertain geoengineering schemes. And some prominent individuals, such as former Microsoft CEO Bill Gates, have been willing to advocate for this risky path forward.

The New York Times recently injected momentum into the push for geoengineering strategies with a recent op-ed by Harvard scientist and geoengineering advocate David Keith. Keith argues that even in a world where emissions cuts are quick enough and large enough to limit warming to 1.5 degrees Celsius by 2050, we would face centuries of elevated atmospheric CO2 concentrations and global temperatures combined with rising sea levels.

The solution proposed by geoengineering proponents? A combination of slow but steady CO2 removal factories (including Keith’s own for-profit company) and a quick-acting temperature fix — likened to a “band-aid” — delivered by a fleet of airplanes dumping vast quantities of chemicals into the upper atmosphere.

This latter scheme is sometimes called “solar geoengineering” or “solar radiation management,” but those terms are really euphemisms for injecting chemicals into the stratosphere, with potentially disastrous side effects including more widespread drought, reduced agricultural productivity, and unpredictable shifts in regional climate patterns. Solar geoengineering also does nothing to slow the pace of ocean acidification, which will increase with emissions.

On top of that is the risk of “termination shock” (a scenario in which we suffer the cumulative warming from decades of increasing emissions in a matter of several years, should we abruptly end solar geoengineering efforts). Herein lies the moral hazard of this scheme: It could well be used to justify delays in reducing carbon emissions, addicting human civilization writ large to these dangerous regular chemical injections into the atmosphere. 

While this is the time to apply bold, creative thinking to accelerate progress toward climate stability, this is not the time to play fast and loose with the planet, in service of any agenda, be it political or scientific in nature. As the recent UN climate report makes clear, any emissions trajectory consistent with peak warming of 1.5 degrees Celsius by mid-century will pave the way for substantial drawdown of atmospheric CO2 thereafter. Such drawdown prevents further increases in surface temperatures once net emissions decline to zero, followed by global-scale cooling shortly after emissions go negative.
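The logic of that claim can be illustrated with a toy model. The sketch below assumes warming is roughly proportional to cumulative CO2 emissions (the TCRE relationship used in IPCC reports, set here to an illustrative 0.45 degrees C per 1,000 GtCO2). It is a deliberately simplified cartoon of our own, not the earth-system modeling behind the UN report.

```python
# Toy cumulative-emissions model: warming tracks cumulative CO2 (TCRE).
# Illustrative parameters only; real projections come from earth-system models.
TCRE = 0.45 / 1000.0  # deg C of warming per GtCO2 of cumulative emissions (assumed)

def warming_path(annual_emissions_gtco2):
    """Return warming each year for a list of annual emissions (GtCO2)."""
    cumulative, path = 0.0, []
    for e in annual_emissions_gtco2:
        cumulative += e
        path.append(TCRE * cumulative)
    return path

# Scenario: emissions fall from 40 GtCO2/yr to zero over 30 years,
# hold at net zero for 10 years, then go net negative (-5 GtCO2/yr).
emissions = [40 - (40 / 30) * t for t in range(30)] + [0.0] * 10 + [-5.0] * 10
temps = warming_path(emissions)

print(f"peak warming:        {max(temps):.2f} C above the model baseline")
print(f"at net zero (yr 40): {temps[39]:.2f} C (no further increase)")
print(f"after drawdown:      {temps[-1]:.2f} C (cooling once emissions go negative)")
```

Even in this cartoon, the qualitative behavior the authors describe appears: warming stops rising once net emissions reach zero and declines once they go negative.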

Natural carbon sinks — over land as well as the ocean — play a critical role in this scenario. They have sequestered half of our historic CO2 emissions, and are projected to continue to do so in coming decades. Their buffering capacity may be reduced with further warming, however, which is yet another reason to limit warming to 1.5 degrees Celsius this century. But if we are to achieve negative emissions this century — manifest as steady reductions of atmospheric CO2 concentrations — it will be because we reduce emissions below the level of uptake by natural carbon sinks. So, carbon removal technology trumpeted as a scalable solution to our emissions challenge is unlikely to make a meaningful dent in atmospheric CO2 concentrations.

As to the issue of climate reversibility, it’s naïve to think that we could reverse nearly two centuries of cumulative emissions and associated warming in a matter of decades. Nonetheless, the latest science tells us that surface warming responds immediately to reductions in carbon emissions. Land responds the fastest, so we can expect a rapid halt to the worsening of heatwaves, droughts, wildfires and floods once we reach net-zero emissions. Climate impacts tied to the ocean, such as marine heat waves and hurricanes, would respond somewhat more slowly. And the polar ice sheets may continue to lose mass and contribute to sea-level rise for centuries, but coastal communities can more easily adapt to sea-level rise if warming is limited to 1.5 degrees Celsius. 

While it’s appealing to think that a climate “band-aid” could protect us from the worst climate impacts, solar geoengineering is more like risky elective surgery than preventive medicine. This supposed “climate fix” might very well be worse than the disease, drying the continents, reducing crop yields and potentially causing other unforeseen harms. The notion that such an intervention might somehow aid the plight of the global poor seems misguided at best.

When considering how to advance climate justice in the world, it is critical to ask, “Who wins — and who loses?” in a geoengineered future. If the winners are petrostates and large corporations who, if history is any guide, will likely be granted preferred access to the planetary thermostat, and the losers are the global poor — who already suffer disproportionately from dirty fossil fuels and climate impacts — then we might simply be adding insult to injury.

To be clear, the world should continue to invest in research and development of science and technology that might hasten societal decarbonization and climate stabilization, and eventually the return to a cooler climate. But those technologies must be measured, in both efficacy and safety, against the least risky and most surefire path to a net-zero world: the path from a fossil fuel-driven to a clean energy-driven society.

Kim Cobb is the director of the Global Change Program at the Georgia Institute of Technology and professor in the School of Earth and Atmospheric Sciences. She was a lead author on the recent UN Intergovernmental Panel on Climate Change (IPCC) Sixth Assessment Report. Follow her on Twitter: @coralsncaves

Michael E. Mann is distinguished professor of atmospheric science and director of the Earth System Science Center at Penn State University. He is author of the recently released book, “The New Climate War: The Fight to Take Back our Planet.” Follow him on Twitter: @MichaelEMann

Physics meets democracy in this modeling study (Science Daily)

A new paper explores how the opinions of an electorate may be reflected in a mathematical model ‘inspired by models of simple magnetic systems’

Date: October 8, 2021

Source: University at Buffalo

Summary: A study leverages concepts from physics to model how campaign strategies influence the opinions of an electorate in a two-party system.


A study in the journal Physica A leverages concepts from physics to model how campaign strategies influence the opinions of an electorate in a two-party system.

Researchers created a numerical model that describes how external influences, modeled as a random field, shift the views of potential voters as they interact with each other in different political environments.

The model accounts for the behavior of conformists (people whose views align with the views of the majority in a social network); contrarians (people whose views oppose the views of the majority); and inflexibles (people who will not change their opinions).

“The interplay between these behaviors allows us to create electorates with diverse behaviors interacting in environments with different levels of dominance by political parties,” says first author Mukesh Tiwari, PhD, associate professor at the Dhirubhai Ambani Institute of Information and Communication Technology.

“We are able to model the behavior and conflicts of democracies, and capture different types of behavior that we see in elections,” says senior author Surajit Sen, PhD, professor of physics in the University at Buffalo College of Arts and Sciences.

Sen and Tiwari conducted the study with Xiguang Yang, a former UB physics student. Jacob Neiheisel, PhD, associate professor of political science at UB, provided feedback to the team, but was not an author of the research. The study was published online in Physica A in July and will appear in the journal’s Nov. 15 volume.

The model described in the paper has broad similarities to the random field Ising model, and “is inspired by models of simple magnetic systems,” Sen says.

The team used this model to explore a variety of scenarios involving different types of political environments and electorates.

Among key findings, as the authors write in the abstract: “In an electorate with only conformist agents, short-duration high-impact campaigns are highly effective. … In electorates with both conformist and contrarian agents and varying level(s) of dominance due to local factors, short-term campaigns are effective only in the case of fragile dominance of a single party. Strong local dominance is relatively difficult to influence and long-term campaigns with strategies aimed to impact local level politics are seen to be more effective.”
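For readers curious what such a model looks like in practice, here is a minimal sketch of a random-field Ising-style opinion dynamics simulation with conformist, contrarian and inflexible agents. The update rule, the agent proportions and the parameter values are illustrative assumptions of ours; the paper’s actual dynamics and parameters may differ.

```python
import random

random.seed(0)

N = 400               # number of voters
FIELD_STRENGTH = 0.5  # strength of the campaign, playing the role of a random field

# Each agent holds an opinion in {-1, +1} and has a fixed behavioral type.
agents = []
for _ in range(N):
    r = random.random()
    kind = "conformist" if r < 0.80 else ("contrarian" if r < 0.95 else "inflexible")
    agents.append({"s": random.choice([-1, 1]), "kind": kind})

def step(agents, campaign_bias):
    """One sweep: each agent reacts to sampled peers plus the external field."""
    for a in agents:
        if a["kind"] == "inflexible":
            continue                        # never changes opinion
        peers = random.sample(agents, 10)
        local = sum(p["s"] for p in peers)  # local majority signal
        field = campaign_bias * FIELD_STRENGTH * random.uniform(0.0, 1.0)
        drive = local + field
        if a["kind"] == "conformist":       # aligns with the majority
            a["s"] = 1 if drive > 0 else -1
        else:                               # contrarian: opposes the majority
            a["s"] = -1 if drive > 0 else 1

# A short, intense campaign pushing opinions toward +1.
for _ in range(30):
    step(agents, campaign_bias=+1)

support = sum(a["s"] == 1 for a in agents) / N
print(f"share supporting +1 after the campaign: {support:.2%}")
```

Varying the field strength, the campaign length and the mix of agent types reproduces, in miniature, the kind of questions the study asks about short versus long campaigns.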

“I think it’s exciting that physicists are thinking about social dynamics. I love the big tent,” Neiheisel says, noting that one advantage of modeling is that it could enable researchers to explore how opinions might change over many election cycles — the type of longitudinal data that’s very difficult to collect.

Mathematical modeling has some limitations: “The real world is messy, and I think we should embrace that to the extent that we can, and models don’t capture all of this messiness,” Neiheisel says.

But Neiheisel was excited when the physicists approached him to talk about the new paper. He says the model provides “an interesting window” into processes associated with opinion dynamics and campaign effects, accurately capturing a number of effects in a “neat way.”

“The complex dynamics of strongly interacting, nonlinear and disordered systems have been a topic of interest for a long time,” Tiwari says. “There is a lot of merit in studying social systems through mathematical and computational models. These models provide insight into short- and long-term behavior. However, such endeavors can only be successful when social scientists and physicists come together to collaborate.”



Journal Reference:

  1. Mukesh Tiwari, Xiguang Yang, Surajit Sen. Modeling the nonlinear effects of opinion kinematics in elections: A simple Ising model with random field based study. Physica A: Statistical Mechanics and its Applications, 2021; 582: 126287. DOI: 10.1016/j.physa.2021.126287

One policy accounts for a lot of the decarbonisation in Joe Biden’s climate plans (The Economist)

economist.com

As Democrats trim the legislation, they should focus on keeping it

Oct 12th 2021


TAKE A ROAD TRIP to Indianapolis, home to a certain two-and-a-half-mile race track, and you will find yourself in good company. A survey carried out before the pandemic found that about 85% of local commuters drive to work, alone. Standing on a bridge over 38th Street, which runs by the state fairground, you cannot escape the roar of six lanes of petrol-fired traffic below—and, reports a local, this is quiet compared with the noise on pre-virus days. Getting Americans to kick their addiction to fossil fuels will require many of these drivers to find another way of getting to work, and to move on from the flaming hydrocarbons celebrated at the city’s famous oval.

Joe Biden hopes to use what looks like a narrow window of Democratic control of Congress to encourage this transition. The last time lawmakers came close to writing climate legislation on anything like this scale was in 2009, when the Waxman-Markey bill, which would have established a trading system for greenhouse-gas emissions, was passed by the House. Since then, a Democratic White House has tried to nudge America to reduce emissions, by issuing new regulations, and a Republican White House has tried to undo them. That record illustrates what a delicate operation this is. Yet despite having a much weaker grip on Congress than Barack Obama had in the first year of his presidency, Mr Biden and his legislative allies have put forward a sweeping set of proposals for decarbonising America’s economy. These would promote everything from clean energy on the grid and electric vehicles on the road, to union jobs making green technologies and climate justice for left-behind communities.

Were this wish list passed in its entirety, which is unlikely, it would give a boost to Mr Biden’s pledge to reduce America’s emissions by roughly half from their 2005 level by 2030. A chart released by the office of Chuck Schumer, the Senate’s majority leader, suggests that implementing all of these provisions could reduce America’s emissions by 45% below 2005 levels by 2030, thus achieving almost all of Mr Biden’s goal of cutting them by roughly half in that period (see chart 1). Passing a law, even a less expansive one, would allow Mr Biden to travel to the UN climate summit in Glasgow in November representing a country that is making progress towards internationally agreed goals, rather than asking for the patience of poorer, less technologically sophisticated countries while America sorts itself out.

Some of the Democratic proposals are in a $1trn infrastructure bill with bipartisan support. But most are found in a $3.5trn budget bill that, on account of Senate rules, can only pass through a partisan parliamentary manoeuvre known as reconciliation. This requires the assent of all 50 Democratic senators. The likeliest outcome is a compromise between Democratic progressives and moderates that yokes together the agreed infrastructure bill with a much slimmer version of the $3.5trn proposal. Yet it is possible that neither bill will become law.

This raises two questions. First, how good on climate can a salami-sliced version of Mr Biden’s agenda, the result of a negotiation between 270 Democratic members of Congress each angling for their constituents’ interests, really be? Second, how bad would it be for America’s decarbonisation efforts were both bills to fail?

Happily even reconciliation-lite could bring meaningful progress if key bits of the current proposals survive the negotiations. Paul Bledsoe of the Progressive Policy Institute, a think-tank, is confident a deal “likely a bit under $2trn” will happen this month. The Rhodium Group, an analysis firm, reckons that just six proposals would cut America’s emissions by nearly 1bn tonnes in 2030 compared with no new policies (see chart 2), about a sixth of America’s total net emissions per year. That is roughly equivalent to the annual emissions from all cars and pickup trucks on American roads, or the emissions of Florida and Texas combined. The six include proposals related to “natural carbon removal” (which involves spending on forests and soil), fossil fuels (making it more expensive to emit methane) and transport (a generous credit for buyers of electric vehicles).

The big prize, though, is the power sector. Two proposals for decarbonising the grid account for the lion’s share of likely emissions reductions: a new Clean Electricity Performance Programme (CEPP) and more mundane reforms to the tax credits received by clean energy. The CEPP has been touted by Mr Biden’s cabinet officials and leading progressives as a linchpin of the climate effort. It is loosely based on the mandatory clean electricity standards imposed by over two dozen states which have successfully boosted adoption of low-carbon energy.

The CEPP is flawed in a couple of ways, though. Because it has to be primarily a fiscal measure in order to squeeze through the reconciliation process it does not involve mandatory regulation, unlike those successful state energy standards. Rather, it uses (biggish) subsidies and (rather punier) penalty fees to try to nudge utilities to build more clean energy. It is politically vulnerable because it is seen as unfriendly to natural gas and coal (unless they have expensive add-on kit to capture and store related emissions). That has incurred the hostility of Senator Joe Manchin, a Democrat who represents coal-rich West Virginia, without whose approval the bill will fail. Some influential utility companies with coal assets, including Ohio-based American Electric Power, do not like it either.

Despite the attention paid to it, CEPP is actually less potent as a greenhouse-gas slayer than those boring tax credits, which are less controversial because they do not overtly penalise coal or gas. Two energy veterans, one at a top renewables lobbying outfit and the other at a fossil-heavy utility, agree that the tax credits would sharply boost investment in low-carbon technologies. That is because they improve the current set-up by replacing stop-go uncertainty with a predictable long-term tax regime, and make tax breaks “refundable” rather than needing to be offset against tax liabilities, meaning even utilities that do not have such tax liabilities can enjoy them as freely as cash in the bank.
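A stylized example of why refundability matters, with invented numbers: under a nonrefundable credit, a utility with little tax liability captures only a fraction of the credit’s face value, while a refundable credit pays out in full.

```python
def credit_value(credit_m, tax_liability_m, refundable):
    """Cash value (in $m) of a tax credit to a firm in a single year."""
    return credit_m if refundable else min(credit_m, tax_liability_m)

# Hypothetical utility: $100m of clean-energy credits, only $20m of tax due.
for refundable in (False, True):
    value = credit_value(100, 20, refundable)
    print(f"refundable={refundable}: credit worth ${value}m")
```

In the nonrefundable case the remaining $80m is stranded or must be monetized through third-party deals; with refundability the full $100m is, as the article puts it, as good as cash in the bank.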

Thus the obsession over the CEPP is overshadowing the real star proposal. The tax credits have “a huge impact potentially”, reckons Rhodium, accounting for over one-quarter of the greenhouse-gas emissions reductions in the legislation, at a cost of roughly $150bn over ten years. A former administrator of the Environmental Protection Agency (EPA) puts it bluntly: “Take the wind and solar tax credits at ten years if you had to choose—and let everything else go.”

What if Democrats fail, the negotiations fall apart and Mr Biden is left empty handed? That would be embarrassing. And it would perhaps make it difficult to pursue ambitious federal climate policies through Congress for years, just as the failure of Waxman-Markey in 2009 haunted lawmakers. However it would not mean America can do nothing at all about climate change.

First of all, as Mr Biden’s officials have already made clear, they stand ready to use regulations to push ahead on decarbonisation efforts, just as the Obama administration did. Last month the EPA issued rules cracking down on emissions of hydrofluorocarbons, an especially powerful greenhouse gas. The administration also has plans for loan guarantees for energy innovations and for speeding up approvals for offshore wind farms. Yet this is tinkering compared with the federal law being discussed, especially as new regulations will likely encounter legal challenges.

Even if the federal government fails again, states and cities have climate policies too. Drawing on analysis funded by Bloomberg Philanthropies, Leon Clarke of the University of Maryland calculates that decentralised policies emulating the current best efforts of states like California could achieve roughly one-quarter of Mr Biden’s objective. But this is a bad deal: such efforts would fall a long way short of the federal proposal in terms of emissions reduction, and what reductions they achieve would be more expensive than if done at the federal level. Still, it is not nothing. Last month, Illinois passed the country’s boldest climate-change law. Democratic states such as New York and California have green policies, but Republican states such as Texas and Indiana have big wind industries too.

While Mr Clarke says Congress has to act if America is to achieve Mr Biden’s targets, he believes that progress will continue even if Congress falters, because there is now a deeper sense of ownership of climate policy among local and state governments. “The Trump years really changed the way that subnationals in the US view climate action,” he says. “They can’t rely on the federal government.”

Change is happening in surprising places. Take that flyover in Indianapolis. The city’s officials have made it into a bike path that will be connected to 55 miles of commuter-friendly trails traversing the city. $100m has been allocated for building a bus-rapid transit system, which is a cheap and efficient substitute for underground rail, with more such rapid bus lines on the cards. Bloated 38th Street will undergo a “lane diet” with car and lorry traffic yielding two lanes to the buses. Come back in a few years and the view from the bridge will be quieter.

The Death of the Bering Strait Theory (Indian Country Today)

indiancountrytoday.com

Alexander Ewen


Updated: Sep 13, 2018. Original: Aug 12, 2016

Two new studies have finally put an end to the theory that the Americas were populated by ancient peoples who walked across the Bering Strait.

Two new studies have now, finally, put an end to the long-held theory that the Americas were populated by ancient peoples who walked across the Bering Strait land bridge from Asia approximately 15,000 years ago. Because much of Canada was then under a sheet of ice, it had long been hypothesized that an “ice-free corridor” might have allowed small groups through from Beringia, some of which was ice-free. One study published in the journal Nature, entitled “Postglacial Viability and Colonization in North America’s Ice-Free Corridor,” found that the corridor was incapable of sustaining human life until about 12,600 years ago, well after the continent had already been settled.

An international team of researchers “obtained radiocarbon dates, pollen, macrofossils and metagenomic DNA from lake sediment cores” from nine former lake beds in British Columbia, where the Laurentide and Cordellian ice sheets split apart. Using a technique called “shotgun sequencing,” the team had to sequence every bit of DNA in a clump of organic matter in order to distinguish between the jumbled strands of DNA. They then matched the results to a database of known genomes to differentiate the organisms. Using this data they reconstructed how and when different flora and fauna emerged from the once ice-covered landscape. According to Mikkel Pedersen, a Ph.D. student at the Center for Geogenetics, University of Copenhagen, in the deepest layers, from 13,000 years ago, “the land was completely naked and barren.”
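As a rough illustration of the classification step described above, the sketch below assigns short DNA fragments to reference genomes by counting shared k-mers (substrings of length k). The tiny “genomes” and reads are invented toy data; real shotgun-metagenomics pipelines match against vast reference databases with far more sophisticated statistics.

```python
def kmers(seq, k=4):
    """All overlapping substrings of length k in a DNA sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Toy reference "genomes" (invented for illustration).
references = {
    "sagebrush": "ATGGCGTACGTTAGCCGTATGCAA",
    "bison":     "TTGACCGGTATACCGGTTAACGGA",
    "mammoth":   "CCGTAAGGCTTACGGATCCGTTAA",
}
ref_kmers = {name: kmers(seq) for name, seq in references.items()}

def classify(read):
    """Assign a read to the reference sharing the most k-mers (None if no hit)."""
    read_k = kmers(read)
    scores = {name: len(read_k & rk) for name, rk in ref_kmers.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

# Short reads recovered from a (toy) sediment core layer.
for read in ["GCGTACGTTAGC", "ACCGGTATACCG", "TTACGGATCCGT"]:
    print(read, "->", classify(read))
```

Tallying which organisms each sediment layer’s reads map to, layer by layer, is what lets researchers reconstruct when plants and animals first appeared in the corridor.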

“What nobody has looked at is when the corridor became biologically viable,” noted study co-author, Professor Eske Willerslev, an evolutionary geneticist at the Centre for GeoGenetics and also the Department of Zoology, the University of Cambridge. “The bottom line is that even though the physical corridor was open by 13,000 years ago, it was several hundred years before it was possible to use it.” In Willerslev’s view, “that means that the first people entering what is now the U.S., Central and South America must have taken a different route.”

A second study, “Bison Phylogeography Constrains Dispersal and Viability of the Ice Free Corridor in Western Canada,” published in the Proceedings of the National Academy of Sciences, examined ancient mitochondrial DNA from bison fossils to “determine the chronology for when the corridor was open and viable for biotic dispersals” and found that the corridor was potentially a viable route for bison to travel through about 13,000 years ago, or slightly earlier than the Nature study.

Geologists had long known that the towering ice caps were a formidable barrier to migration from Asia to the Americas between 26,000 and 10,000 years ago. Thus the discovery in 1932 of the Clovis spear points, believed at that time to be about 10,000 years old, presented a problem, given the overwhelming presumption of the day that the ancient Indians had walked over from Asia around that time. In 1933, the Canadian geologist William Alfred Johnston proposed that when the glaciers began melting, they broke into two massive sheets long before disappearing completely, and that between these two ice sheets people might have been able to walk through, an idea dubbed the “ice-free corridor” by the Swedish-American geologist Ernst Antevs two years later.

Archaeologists then seized on the idea of a passageway to uphold the tenuous notion that Indians had arrived on the continent relatively recently, until that belief became a matter of faith. Given the recent discoveries that place Indians in the Americas at least 14,000 years ago, both studies now finally lay the ice-free-corridor theory to rest. As Willerslev points out, “The school book story that most of us are used to doesn’t seem to be supported.” The new schoolbook story is that the Indians migrated in boats down the Pacific coast around 15,000 years ago. How long that theory will hold up remains to be seen.

Friend or Foe? Crows Never Forget a Face, It Seems (New York Times)

nytimes.com

Michelle Nijhuis


Aug. 25, 2008

Crows and their relatives — among them ravens, magpies and jays — are renowned for their intelligence and for their ability to flourish in human-dominated landscapes. That ability may have to do with cross-species social skills. In the Seattle area, where rapid suburban growth has attracted a thriving crow population, researchers have found that the birds can recognize individual human faces.

John M. Marzluff, a wildlife biologist at the University of Washington, has studied crows and ravens for more than 20 years and has long wondered if the birds could identify individual researchers. Previously trapped birds seemed more wary of particular scientists, and often were harder to catch. “I thought, ‘Well, it’s an annoyance, but it’s not really hampering our work,’ ” Dr. Marzluff said. “But then I thought we should test it directly.”

To test the birds’ recognition of faces separately from that of clothing, gait and other individual human characteristics, Dr. Marzluff and two students wore rubber masks. He designated a caveman mask as “dangerous” and, in a deliberate gesture of civic generosity, a Dick Cheney mask as “neutral.” Researchers in the dangerous mask then trapped and banded seven crows on the university’s campus in Seattle.

In the months that followed, the researchers and volunteers donned the masks on campus, this time walking prescribed routes and not bothering crows.

The crows had not forgotten. They scolded people in the dangerous mask significantly more than they did before they were trapped, even when the mask was disguised with a hat or worn upside down. The neutral mask provoked little reaction. The effect has not only persisted, but also multiplied over the past two years. Wearing the dangerous mask on one recent walk through campus, Dr. Marzluff said, he was scolded by 47 of the 53 crows he encountered, many more than had experienced or witnessed the initial trapping. The researchers hypothesize that crows learn to recognize threatening humans from both parents and others in their flock.

After their experiments on campus, Dr. Marzluff and his students tested the effect with more realistic masks. Using a half-dozen students as models, they enlisted a professional mask maker, then wore the new masks while trapping crows at several sites in and around Seattle. The researchers then gave a mix of neutral and dangerous masks to volunteer observers who, unaware of the masks’ histories, wore them at the trapping sites and recorded the crows’ responses.

The reaction to one of the dangerous masks was “quite spectacular,” said one volunteer, Bill Pochmerski, a retired telephone company manager who lives near Snohomish, Wash. “The birds were really raucous, screaming persistently,” he said, “and it was clear they weren’t upset about something in general. They were upset with me.”

Again, crows were significantly more likely to scold observers who wore a dangerous mask, and when confronted simultaneously by observers in dangerous and neutral masks, the birds almost unerringly chose to persecute the dangerous face. In downtown Seattle, where most passersby ignore crows, angry birds nearly touched their human foes. In rural areas, where crows are more likely to be viewed as noisy “flying rats” and shot, the birds expressed their displeasure from a distance.

Though Dr. Marzluff’s is the first formal study of human face recognition in wild birds, his preliminary findings confirm the suspicions of many other researchers who have observed similar abilities in crows, ravens, gulls and other species. The pioneering animal behaviorist Konrad Lorenz was so convinced of the perceptive capacities of crows and their relatives that he wore a devil costume when handling jackdaws. Stacia Backensto, a master’s student at the University of Alaska Fairbanks who studies ravens in the oil fields on Alaska’s North Slope, has assembled an elaborate costume — including a fake beard and a potbelly made of pillows — because she believes her face and body are familiar to previously captured birds.

Kevin J. McGowan, an ornithologist at the Cornell Laboratory of Ornithology who has trapped and banded crows in upstate New York for 20 years, said he was regularly followed by birds that had benefited from his handouts of peanuts — and harassed by others he had trapped in the past.

Why crows and similar species are so closely attuned to humans is a matter of debate. Bernd Heinrich, a professor emeritus at the University of Vermont known for his books on raven behavior, suggested that crows’ apparent ability to distinguish among human faces is a “byproduct of their acuity,” an outgrowth of their unusually keen ability to recognize one another, even after many months of separation.

Dr. McGowan and Dr. Marzluff believe that this ability gives crows and their brethren an evolutionary edge. “If you can learn who to avoid and who to seek out, that’s a lot easier than continually getting hurt,” Dr. Marzluff said. “I think it allows these animals to survive with us — and take advantage of us — in a much safer, more effective way.”

Crows are self-aware just like us, says new study (Big Think)

Neuropsych — September 29, 2020

Crows have their own version of the human cerebral cortex.

Robby Berman

• Crows and the rest of the corvid family keep turning out to be smarter and smarter.
• New research observes them thinking about what they’ve just seen and associating it with an appropriate response.
• A corvid’s pallium is packed with more neurons than a great ape’s.


It’s no surprise that corvids — the “crow family” of birds that also includes ravens, jays, magpies, and nutcrackers — are smart. They use tools, recognize faces, leave gifts for people they like, and there’s even a video on Facebook showing a crow nudging a stubborn little hedgehog out of traffic. Corvids will also drop rocks into water to raise its level and bring floating food within reach.

What is perhaps surprising is what the authors of a new study published last week in the journal Science have found: Crows are capable of thinking about their own thoughts as they work out problems. This is a level of self-awareness previously believed to signify the kind of higher intelligence that only humans and possibly a few other mammals possess. A crow knows what a crow knows, and if this brings the word sentience to your mind, you may be right.


It’s long been assumed that higher intellectual functioning is strictly the product of a layered cerebral cortex. But bird brains are different. The authors of the study found that crows’ unlayered but neuron-dense pallium may play a similar role for birds. Supporting this possibility, another study published last week in Science finds that the neuroanatomy of pigeons and barn owls may also support higher intelligence.

“It has been a good week for bird brains!” crow expert John Marzluff of the University of Washington tells Stat. (He was not involved in either study.)

Corvids are known to be as mentally capable as monkeys and great apes. However, bird neurons are so much smaller that their palliums actually contain more of them than would be found in an equivalent-sized primate cortex. This may constitute a clue regarding their expansive mental capabilities.

In any event, there appears to be a general correspondence between the number of neurons an animal has in its pallium and its intelligence, says Suzana Herculano-Houzel in her commentary on both new studies for Science. Humans, she says, sit “satisfyingly” atop this comparative chart, having even more neurons there than elephants, despite our much smaller body size. It’s estimated that crow brains have about 1.5 billion neurons.


The kind of higher intelligence crows exhibited in the new research is similar to the way we solve problems. We catalog relevant knowledge and then explore different combinations of what we know to arrive at an action or solution.

The researchers, led by neurobiologist Andreas Nieder of the University of Tübingen in Germany, trained two carrion crows (Corvus corone), Ozzie and Glenn.

The crows were trained to watch for a flash — which didn’t always appear — and then peck at a red or blue target to register whether or not a flash of light was seen. Ozzie and Glenn were also taught to understand a changing “rule key” that specified whether red or blue signified the presence of a flash with the other color signifying that no flash occurred.

In each round of a test, after a flash did or didn’t appear, the crows were presented a rule key describing the current meaning of the red and blue targets, after which they pecked their response.

This sequence prevented the crows from simply rehearsing their response on auto-pilot, so to speak. In each test, they had to take the entire process from the top, seeing a flash or no flash, and then figuring out which target to peck.
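
A minimal sketch in Python of why that matters, with invented names and a randomized rule key standing in for the actual trial schedule: a strategy that commits to a color at stimulus time (auto-pilot) can only guess, while one that stores the percept itself and waits for the rule key always responds correctly.

```python
import random

# The two possible rule keys: which color reports "flash seen" vs. "no flash".
RULES = ({"flash": "red", "no_flash": "blue"},
         {"flash": "blue", "no_flash": "red"})

def trial(strategy):
    flash = random.random() < 0.5          # the flash doesn't always appear
    plan = strategy(flash)                 # what the bird stores at stimulus time
    rule = random.choice(RULES)            # rule key is revealed only afterwards
    correct = rule["flash"] if flash else rule["no_flash"]
    # A string plan is a pre-committed peck; a stored percept is resolved now.
    peck = plan if isinstance(plan, str) else (rule["flash"] if plan else rule["no_flash"])
    return peck == correct

autopilot = lambda flash: "red" if flash else "blue"   # motor plan chosen too early
remember = lambda flash: flash                         # hold the experience itself

for name, strategy in (("auto-pilot", autopilot), ("remember percept", remember)):
    accuracy = sum(trial(strategy) for _ in range(10_000)) / 10_000
    print(f"{name}: {accuracy:.2f}")       # ~0.50 vs. 1.00
```

Only the percept-storing strategy survives a rule key that arrives after the stimulus, which is the behavioral signature the neuronal recordings were designed to probe.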

As all this occurred, the researchers monitored the birds’ neuronal activity. When Ozzie or Glenn saw a flash, sensory neurons fired and then stopped as the bird worked out which target to peck. When there was no flash, no sensory-neuron firing was observed before the crow paused to figure out the correct target.

Nieder’s interpretation of this sequence is that Ozzie or Glenn had to see or not see a flash, deliberately note that there had or hadn’t been a flash — exhibiting self-awareness of what had just been experienced — and then, in a few moments, connect that recollection to their knowledge of the current rule key before pecking the correct target.

During those few moments after the sensory neuron activity had died down, Nieder reported activity among a large population of neurons as the crows put the pieces together preparing to report what they’d seen. Among the busy areas in the crows’ brains during this phase of the sequence was, not surprisingly, the pallium.

Overall, the study may eliminate the layered cerebral cortex as a requirement for higher intelligence. As we learn more about the intelligence of crows, we can at least say with some certainty that it would be wise to avoid angering one.

Climate change: Voices from global south muted by climate science (BBC)

By Matt McGrath
Environment correspondent

October 6, 2021


Climate change academics from some of the regions worst hit by warming are struggling to be published, according to a new analysis.

The study looked at 100 of the most highly cited climate research papers over the past five years.

Less than 1% of the authors were based in Africa, while only 12 of the papers had a female lead researcher.

The lack of diverse voices means key perspectives are being ignored, says the study’s author.

Researchers from the Carbon Brief website examined the backgrounds of around 1,300 authors involved in the 100 most cited climate change research papers from 2016-2020.

They found that some 90% of these scientists were affiliated with academic institutions from North America, Europe or Australia.

Issues of concern to African climate researchers were in danger of being ignored

The African continent, home to around 16% of the world’s population, accounted for less than 1% of the authors, according to the analysis.

There were also huge differences within regions – of the 10 authors from Africa, eight of them were from South Africa.

When it comes to lead authors, not one of the top 100 papers was led by a scientist from Africa or South America. Of the seven papers led by Asian authors, five were from China.
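
The tally behind these figures is simple aggregation; a toy sketch in Python of the kind of breakdown Carbon Brief describes, with a handful of invented rows standing in for the real dataset of roughly 1,300 authors:

```python
import pandas as pd

# Invented illustrative rows; the real analysis covered ~1,300 authors
# drawn from the 100 most cited climate papers of 2016-2020.
authors = pd.DataFrame({
    "region":  ["Europe", "North America", "Africa", "Europe", "Asia", "Australia"],
    "is_lead": [True, False, False, True, True, False],
})

# Share of all authors by region, as a percentage.
print(authors["region"].value_counts(normalize=True).mul(100).round(1))

# Regions of lead authors only.
print(authors.loc[authors["is_lead"], "region"].value_counts())
```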

“If the vast majority of research around climate change is coming from a group of people with a very similar background, for example, male scientists from the global north, then the body of knowledge that we’re going to have around climate change is going to be skewed towards their interests, knowledge and scientific training,” said Ayesha Tandon from Carbon Brief, who carried out the analysis and says that “systemic bias” is at play here.

“One study noted that a lot of our understanding of climate change is biased towards cooler climates, because it’s mainly carried out by scientists who live in the global north in cold climates,” she added.

There are a number of other factors at play that limit the opportunities for researchers from the global south. These include a lack of funding for expensive computers to run the computer models, or simulations, that are the bedrock of much climate research.

Other issues include a different academic culture where teaching is prioritised over research, as well as language barriers and a lack of access to expensive libraries and databases.

Most of the leading papers on climate change were published by institutions in the global north

Even where researchers from better-off countries seek to collaborate with colleagues in the developing world, the efforts don’t always work out well.

One researcher originally from Tanzania but now working in Mexico explained what can happen.

“The northern scientist often brings his or her own grad students from the north, and they tend to view their local partners as facilitators – logistic, cultural, language, admin – rather than science collaborators,” Dr Tuyeni Mwampamba from the Institute of Ecosystems and Sustainability Research in Mexico told Carbon Brief.

Researchers from the north are often seen as wanting to extract resources and data from developing nations without making any contribution to local research, a practice sometimes known as “helicopter science”.

For women involved in research in the global south, there are added challenges in getting their names on a scientific paper.

Women in science

“Women tend to have a much higher dropout rate than men as they progress through academia,” said Ayesha Tandon.

“But then women also have to contend with stereotypes and sexism, and even just cultural norms in their country or from the upbringing that might prevent them from spending as much time on their science or from pursuing it in the way that men do.”

The analysis suggests that the lack of voices from women and from the global south is hampering the global understanding of climate change.

Solving the problem is not going to be easy, according to the author.

“This is a systemic problem and it will progress and keep getting worse, because people in positions of power will continue to have those privileges,” said Ayesha Tandon.

“It’s a problem that will not just go away on its own unless people really work at it.”

Bill regulating the use of artificial intelligence is positive, but the topic still needs further discussion, says expert (Rota Jurídica)

rotajuridica.com.br

October 5, 2021

After being approved in the Chamber of Deputies on September 29, the bill regulating the use of artificial intelligence (AI) in Brazil (PL 21/20) now moves to the Senate for review. In the meantime, the bill, which establishes a Civil Framework for AI (“Marco Civil da IA”), remains the subject of debate.

The proposal, authored by federal deputy Eduardo Bismarck (PDT-CE), was approved in the Chamber in the form of a substitute text presented by federal deputy Luisa Canziani (PTB-PR). The text defines artificial intelligence systems as technological representations arising from the fields of informatics and computer science. Legislating and issuing rules on the matter will fall exclusively to the federal government.

In an interview with Portal Rota Jurídica, neuroscientist Álvaro Machado Dias noted that the intentions behind the bill point in a positive direction. However, its generic definitions give the impression that, while the bill makes its way through the Senate, it will be important to deepen engagement with the field.

The neuroscientist, a professor at UNIFESP and a partner at the WeMind innovation office, the Instituto Locomotiva research institute, and Rhizom Blockchain, also points out that, in social terms, the Civil Framework for Artificial Intelligence promises to raise awareness of the risks posed by biased algorithms and to encourage self-regulation.

This, he says, should increase the “net fairness” of these systems, which so strongly influence life in society. In economic terms, he notes, interoperability (the equivalent of every outlet having the same number of pins) will somewhat strengthen the market.

“But, truth be told, these impacts will not be that large, since the bill neither treats AI as a strategic issue for the country nor points to greater support for scientific progress in the field,” he adds.

Risks

For the neuroscientist, the risks are the usual ones: stifling innovation; assigning responsibility to the wrong targets; and externalities opened up by strategies that will question, with some justification, the epistemological foundations of the concept (the familiar move: given definition X, this here is not artificial intelligence).

Still, the expert says it is important to keep in mind that regulating this industry is absolutely essential, and that its end point is the singularity. “That is, the creation of devices capable of doing everything we do, interactively and productively, only faster and more precisely. It is a very complex debate. And, as always, in practice the theory is different,” he concluded.

Objectives

Álvaro Machado Dias explains that the bill’s main objective is to define obligations for the federal government, the states, and municipalities, especially governance rules, civil liability, and social-impact parameters related to the deployment and commercialization of artificial intelligence platforms. There is also a more technical part focused on interoperability, that is, the ability of systems to exchange information.

He also observes that the bill’s main premise is that the implementation of these technologies should be guided by principles such as the absence of intent to do harm, anchored in transparency and accountability for the so-called agents of artificial intelligence.

The Facebook whistleblower says its algorithms are dangerous. Here’s why. (MIT Technology Review)

technologyreview.com

Frances Haugen’s testimony at the Senate hearing today raised serious questions about how Facebook’s algorithms work—and echoed many findings from our previous investigation.

October 5, 2021

Karen Hao


Facebook whistleblower Frances Haugen testifies during a Senate committee hearing on October 5. Drew Angerer/Getty Images

On Sunday night, the primary source for the Wall Street Journal’s Facebook Files, an investigative series based on internal Facebook documents, revealed her identity in an episode of 60 Minutes.

Frances Haugen, a former product manager at the company, says she came forward after she saw Facebook’s leadership repeatedly prioritize profit over safety.

Before quitting in May of this year, she combed through Facebook Workplace, the company’s internal employee social media network, and gathered a wide swath of internal reports and research in an attempt to conclusively demonstrate that Facebook had willfully chosen not to fix the problems on its platform.

Today she testified in front of the Senate on the impact of Facebook on society. She reiterated many of the findings from the internal research and implored Congress to act.

“I’m here today because I believe Facebook’s products harm children, stoke division, and weaken our democracy,” she said in her opening statement to lawmakers. “These problems are solvable. A safer, free-speech respecting, more enjoyable social media is possible. But there is one thing that I hope everyone takes away from these disclosures: it is that Facebook can change, but is clearly not going to do so on its own.”

During her testimony, Haugen particularly blamed Facebook’s algorithm and platform design decisions for many of its issues. This is a notable shift from the existing focus of policymakers on Facebook’s content policy and censorship—what does and doesn’t belong on Facebook. Many experts believe that this narrow view leads to a whack-a-mole strategy that misses the bigger picture.

“I’m a strong advocate for non-content-based solutions, because those solutions will protect the most vulnerable people in the world,” Haugen said, pointing to Facebook’s uneven ability to enforce its content policy in languages other than English.

Haugen’s testimony echoes many of the findings from an MIT Technology Review investigation published earlier this year, which drew upon dozens of interviews with Facebook executives, current and former employees, industry peers, and external experts. We pulled together the most relevant parts of our investigation and other reporting to give more context to Haugen’s testimony.

How does Facebook’s algorithm work?

Colloquially, we use the term “Facebook’s algorithm” as though there’s only one. In fact, Facebook decides how to target ads and rank content based on hundreds, perhaps thousands, of algorithms. Some of those algorithms tease out a user’s preferences and boost that kind of content in the user’s news feed. Others detect specific types of bad content, like nudity, spam, or clickbait headlines, and delete it or push it down the feed.

All of these algorithms are known as machine-learning algorithms. As I wrote earlier this year:

Unlike traditional algorithms, which are hard-coded by engineers, machine-learning algorithms “train” on input data to learn the correlations within it. The trained algorithm, known as a machine-learning model, can then automate future decisions. An algorithm trained on ad click data, for example, might learn that women click on ads for yoga leggings more often than men. The resultant model will then serve more of those ads to women.

And because of Facebook’s enormous amounts of user data, it can

develop models that learned to infer the existence not only of broad categories like “women” and “men,” but of very fine-grained categories like “women between 25 and 34 who liked Facebook pages related to yoga,” and [target] ads to them. The finer-grained the targeting, the better the chance of a click, which would give advertisers more bang for their buck.
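
As a rough illustration of that pattern (toy data and invented features, not Facebook’s actual models), a click-prediction model can be sketched in a few lines:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row describes a user: [is_woman, liked_yoga_pages].
# Each label records whether that user clicked the leggings ad.
X = np.array([[1, 1], [1, 0], [0, 1], [0, 0], [1, 1], [0, 0]])
y = np.array([1, 1, 0, 0, 1, 0])

# "Training" here just means learning the correlations in the logged clicks.
model = LogisticRegression().fit(X, y)

# The trained model then decides who sees the ad next: higher predicted
# click probability means more impressions for that group.
candidates = np.array([[1, 1], [0, 1]])
print(model.predict_proba(candidates)[:, 1])
```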

The same principles apply for ranking content in news feed:

Just as algorithms [can] be trained to predict who would click what ad, they [can] also be trained to predict who would like or share what post, and then give those posts more prominence. If the model determined that a person really liked dogs, for instance, friends’ posts about dogs would appear higher up on that user’s news feed.

Before Facebook began using machine-learning algorithms, teams used design tactics to increase engagement. They’d experiment with things like the color of a button or the frequency of notifications to keep users coming back to the platform. But machine-learning algorithms create a much more powerful feedback loop. Not only can they personalize what each user sees, they will also continue to evolve with a user’s shifting preferences, perpetually showing each person what will keep them most engaged.
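
A minimal sketch of that feedback loop, with invented post names and scores: rank by predicted engagement, then fold each new interaction back into the prediction so the feed drifts toward whatever the user responds to.

```python
# Invented posts with predicted engagement scores in [0, 1].
posts = [
    {"id": "dog_photo",    "predicted_engagement": 0.92},
    {"id": "news_article", "predicted_engagement": 0.40},
    {"id": "baby_photo",   "predicted_engagement": 0.75},
]

# Ranking: highest predicted engagement goes to the top of the feed.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)
print([p["id"] for p in feed])    # ['dog_photo', 'baby_photo', 'news_article']

def update(predicted, engaged, lr=0.1):
    """Nudge the prediction toward what the user actually did."""
    return predicted + lr * ((1.0 if engaged else 0.0) - predicted)

# Each like feeds back into training data, so favored content scores
# even higher on the next pass: the loop that personalizes the feed.
print(update(0.92, engaged=True))   # 0.928
```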

Who runs Facebook’s algorithm?

Within Facebook, there’s no one team in charge of this content-ranking system in its entirety. Engineers develop and add their own machine-learning models into the mix, based on their team’s objectives. For example, teams focused on removing or demoting bad content, known as the integrity teams, will only train models for detecting different types of bad content.

This was a decision Facebook made early on as part of its “move fast and break things” culture. It developed an internal tool known as FBLearner Flow that made it easy for engineers without machine-learning experience to develop whatever models they needed. By one data point, it was already in use by more than a quarter of Facebook’s engineering team in 2016.

Many of the current and former Facebook employees I’ve spoken to say that this is part of why Facebook can’t seem to get a handle on what it serves up to users in the news feed. Different teams can have competing objectives, and the system has grown so complex and unwieldy that no one can keep track anymore of all of its different components.

As a result, the company’s main process for quality control is through experimentation and measurement. As I wrote:

Teams train up a new machine-learning model on FBLearner, whether to change the ranking order of posts or to better catch content that violates Facebook’s community standards (its rules on what is and isn’t allowed on the platform). Then they test the new model on a small subset of Facebook’s users to measure how it changes engagement metrics, such as the number of likes, comments, and shares, says Krishna Gade, who served as the engineering manager for news feed from 2016 to 2018.

If a model reduces engagement too much, it’s discarded. Otherwise, it’s deployed and continually monitored. On Twitter, Gade explained that his engineers would get notifications every few days when metrics such as likes or comments were down. Then they’d decipher what had caused the problem and whether any models needed retraining.
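
In code terms, the gate described here might look like the sketch below; the metric names and the 1% tolerance are illustrative assumptions, not Facebook’s actual criteria.

```python
def should_deploy(baseline, candidate, max_drop=0.01):
    """Keep a new ranking model only if no engagement metric falls more
    than max_drop (here 1%) relative to the currently deployed model."""
    for metric in ("likes", "comments", "shares"):
        if candidate[metric] < baseline[metric] * (1 - max_drop):
            return False   # too much engagement lost: discard the model
    return True            # deploy, then keep monitoring

current = {"likes": 1000, "comments": 300, "shares": 150}
new_model = {"likes": 995, "comments": 290, "shares": 151}
print(should_deploy(current, new_model))   # False: comments fell ~3.3%
```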

How has Facebook’s content ranking led to the spread of misinformation and hate speech?

During her testimony, Haugen repeatedly came back to the idea that Facebook’s algorithm incites misinformation, hate speech, and even ethnic violence. 

“Facebook … knows—they have admitted in public—that engagement-based ranking is dangerous without integrity and security systems but then [has] not rolled out those integrity and security systems in most of the languages in the world,” she told the Senate today. “It is pulling families apart. And in places like Ethiopia it is literally fanning ethnic violence.”

Here’s what I’ve written about this previously:

The machine-learning models that maximize engagement also favor controversy, misinformation, and extremism: put simply, people just like outrageous stuff.

Sometimes this inflames existing political tensions. The most devastating example to date is the case of Myanmar, where viral fake news and hate speech about the Rohingya Muslim minority escalated the country’s religious conflict into a full-blown genocide. Facebook admitted in 2018, after years of downplaying its role, that it had not done enough “to help prevent our platform from being used to foment division and incite offline violence.”

As Haugen mentioned, Facebook has also known this for a while. Previous reporting has found that it’s been studying the phenomenon since at least 2016.

In an internal presentation from that year, reviewed by the Wall Street Journal, a company researcher, Monica Lee, found that Facebook was not only hosting a large number of extremist groups but also promoting them to its users: “64% of all extremist group joins are due to our recommendation tools,” the presentation said, predominantly thanks to the models behind the “Groups You Should Join” and “Discover” features.

In 2017, Chris Cox, Facebook’s longtime chief product officer, formed a new task force to understand whether maximizing user engagement on Facebook was contributing to political polarization. It found that there was indeed a correlation, and that reducing polarization would mean taking a hit on engagement. In a mid-2018 document reviewed by the Journal, the task force proposed several potential fixes, such as tweaking the recommendation algorithms to suggest a more diverse range of groups for people to join. But it acknowledged that some of the ideas were “antigrowth.” Most of the proposals didn’t move forward, and the task force disbanded.

In my own conversations, Facebook employees also corroborated these findings.

A former Facebook AI researcher who joined in 2018 says he and his team conducted “study after study” confirming the same basic idea: models that maximize engagement increase polarization. They could easily track how strongly users agreed or disagreed on different issues, what content they liked to engage with, and how their stances changed as a result. Regardless of the issue, the models learned to feed users increasingly extreme viewpoints. “Over time they measurably become more polarized,” he says.

In her testimony, Haugen also repeatedly emphasized how these phenomena are far worse in regions that don’t speak English because of Facebook’s uneven coverage of different languages.

“In the case of Ethiopia there are 100 million people and six languages. Facebook only supports two of those languages for integrity systems,” she said. “This strategy of focusing on language-specific, content-specific systems for AI to save us is doomed to fail.”

She continued: “So investing in non-content-based ways to slow the platform down not only protects our freedom of speech, it protects people’s lives.”

I explore this more in a different article from earlier this year on the limitations of large language models, or LLMs:

Despite LLMs having these linguistic deficiencies, Facebook relies heavily on them to automate its content moderation globally. When the war in Tigray[, Ethiopia] first broke out in November, [AI ethics researcher Timnit] Gebru saw the platform flounder to get a handle on the flurry of misinformation. This is emblematic of a persistent pattern that researchers have observed in content moderation. Communities that speak languages not prioritized by Silicon Valley suffer the most hostile digital environments.

Gebru noted that this isn’t where the harm ends, either. When fake news, hate speech, and even death threats aren’t moderated out, they are then scraped as training data to build the next generation of LLMs. And those models, parroting back what they’re trained on, end up regurgitating these toxic linguistic patterns on the internet.

How does Facebook’s content ranking relate to teen mental health?

One of the more shocking revelations from the Journal’s Facebook Files was Instagram’s internal research, which found that its platform is worsening mental health among teenage girls. “Thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse,” researchers wrote in a slide presentation from March 2020.

Haugen connects this phenomenon to engagement-based ranking systems as well, which she told the Senate today “is causing teenagers to be exposed to more anorexia content.”

“If Instagram is such a positive force, have we seen a golden age of teenage mental health in the last 10 years? No, we have seen escalating rates of suicide and depression amongst teenagers,” she continued. “There’s a broad swath of research that supports the idea that the usage of social media amplifies the risk of these mental health harms.”

In my own reporting, I heard from a former AI researcher who also saw this effect extend to Facebook.

The researcher’s team…found that users with a tendency to post or engage with melancholy content—a possible sign of depression—could easily spiral into consuming increasingly negative material that risked further worsening their mental health.

But as with Haugen, the researcher found that leadership wasn’t interested in making fundamental algorithmic changes.

The team proposed tweaking the content-ranking models for these users to stop maximizing engagement alone, so they would be shown less of the depressing stuff. “The question for leadership was: Should we be optimizing for engagement if you find that somebody is in a vulnerable state of mind?” he remembers.

But anything that reduced engagement, even for reasons such as not exacerbating someone’s depression, led to a lot of hemming and hawing among leadership. With their performance reviews and salaries tied to the successful completion of projects, employees quickly learned to drop those that received pushback and continue working on those dictated from the top down….

That former employee, meanwhile, no longer lets his daughter use Facebook.

How do we fix this?

Haugen is against breaking up Facebook or repealing Section 230 of the US Communications Decency Act, which protects tech platforms from taking responsibility for the content they distribute.

Instead, she recommends carving out a more targeted exemption in Section 230 for algorithmic ranking, which she argues would “get rid of the engagement-based ranking.” She also advocates for a return to Facebook’s chronological news feed.

Ellery Roberts Biddle, a projects director at Ranking Digital Rights, a nonprofit that studies social media ranking systems and their impact on human rights, says a Section 230 carve-out would need to be vetted carefully: “I think it would have a narrow implication. I don’t think it would quite achieve what we might hope for.”

In order for such a carve-out to be actionable, she says, policymakers and the public would need to have a much greater level of transparency into how Facebook’s ad-targeting and content-ranking systems even work. “I understand Haugen’s intention—it makes sense,” she says. “But it’s tough. We haven’t actually answered the question of transparency around algorithms yet. There’s a lot more to do.”

Nonetheless, Haugen’s revelations and testimony have brought renewed attention to what many experts and Facebook employees have been saying for years: that unless Facebook changes the fundamental design of its algorithms, it will not make a meaningful dent in the platform’s issues. 

Her intervention also raises the prospect that if Facebook cannot put its own house in order, policymakers may force the issue.

“Congress can change the rules that Facebook plays by and stop the many harms it is now causing,” Haugen told the Senate. “I came forward at great personal risk because I believe we still have time to act, but we must act now.”

‘Women like you need to be strong,’ psychiatrist tells Black patient (Yahoo! Notícias)

br.noticias.yahoo.com

Alma Preta – Mon., October 4, 2021, 1:17 PM

A Universidade Federal de São Paulo facility. (Photo: Divulgação)
  • The university student sought psychiatric care at Unifesp, an institution that offers free medical services to its students
  • Thayná Alexandrino says she has long noticed symptoms associated with depression and anxiety
  • According to the 24-year-old, the doctor who saw her judged her by her physical appearance; the university has not commented

Text: Letícia Fialho | Editing: Nadine Nascimento

Thayná Alexandrino, 24, a geography student at the Universidade Federal de São Paulo (Unifesp), sought psychiatric help about a month ago at the free care unit the institution offers its students. She says she was judged by her physical appearance during the appointment, when the professional treating her said: “You don’t look like a psychiatric patient. Women like you need to be strong.”

“I got into the university and had the chance to take care of my health through the free services it offers. But when I got there, I encountered something completely different from what I expected. I was treated badly by the psychiatrist, who judged me from beginning to end,” Thayná says.

The student says she has long noticed symptoms associated with depression and anxiety and that, because of the stigma around mental illness, she took a long time to seek help. During the pandemic, she lost people close to her and felt too fragile to cope with the grief.

“Even after telling her about the grief I am going through, about my family history and predispositions, I heard the worst justification, ‘you are too well dressed to have any mental health problem,’ and also that I ‘cannot afford the luxury of being weak,’” says the student, who gave up on the appointment when the professional said: “Women like you know how to handle pain very well.”

The student says she felt powerless and neglected in the care provided by the university’s clinic. According to her, the professional who treated her was a white woman in her forties with professional and academic credentials.

“It seems the only alternative white professionals can suggest is that we Black women need to be strong all the time. In her view, I personally was not allowed to suffer. I remember a teacher in my childhood saying that life would be hard on the weak. And now I have heard almost the same thing from a mental health professional,” Thayná reflects.

The student’s insecurity

Seeking adequate care, the student followed medical advice and went to a psychologist at another clinic. Once again, the approach was far from welcoming.

“When I recounted the episode in which I was a victim of racism, I was taken aback by the remark of yet another white professional. He said I was not Black but rather ‘mulata,’ compared with other Black patients he sees. Since when can a white guy judge other people’s Blackness?” she says.

The student says that, so far, she has not sought out another professional because of the high costs and because she feels unsafe. “I love the health field, and being treated by professionals who lacked the sensitivity to see my pain affects me deeply. Another issue is the lack of representation. The absence of Black people in these spaces perpetuates structural racism,” Thayná adds.

Alma Preta Jornalismo contacted Unifesp to request a statement on the case but had not received a response by publication time. If the institution responds, this article will be updated.

Lawyers need to become co-creators of artificial intelligence, says author of book on the subject (Folha de S.Paulo)

www1.folha.uol.com.br

Géssica Brandino, October 5, 2021

American Joshua Walker argues that judicial decisions should never be automated

Identifying best practices and the factors that influenced judicial decisions are some examples of how artificial intelligence can benefit the justice system and, by extension, the population, says American lawyer Joshua Walker.

A co-founder of CodeX, Stanford University’s center for legal informatics, where he also taught, and founder of Lex Machina, a pioneering company in the legal technology segment, Walker began his career in the world of data more than 20 years ago, working with cases from the 1994 genocide in Rwanda, which killed at least 800,000 people in a hundred days.

Author of the book “On Legal AI” (published in Brazil by Revista dos Tribunais, 2021), in which he discusses how analytical software can be used to find solutions in law, Walker spoke on the subject at this year’s edition of Fenalaw, an event on lawyers’ use of technology.

In an email interview with Folha, he argues that lawyers should not only learn to use artificial intelligence tools but also take the lead in developing technologies for the law.

“We [lawyers] need to start becoming co-creators because, while software engineers remember the data, we remember the story and the histories,” he says.

Over your career, which taboos have been overcome and which remain when the subject is artificial intelligence? How should these ideas be confronted? Taboos exist in abundance. There are new ones every day. You have to ask yourself two things: what do my clients need? And how can I be among the best, or the best, at what I do to help my clients? That is all you need to worry about in order to “innovate.”

The legal tradition demands that we adapt, and adapt quickly, because we have: a) a duty of loyalty, to help our clients with the best available means; b) a duty to improve the practice and administration of the law and of the system itself.

Legal AI and basic techniques from other fields can massively advance both. To that end, the duty of professional competence demands operational and platform knowledge that is far too useful to ignore. That does not mean you should adopt everything. Be skeptical.

We are learning to sort complex human challenges into procedural structures that optimize outcomes for all citizens, of any background. We are learning how different local rules correlate with different classes of case outcomes. We are only getting started.


You began working with data analysis because of the Rwandan genocide. What did that experience teach you about the possibilities and limits of working with databases? It taught me that information architecture matters more than the number of PhDs, consultants, or millions of dollars of IT (information technology) budget you have at your disposal.

You have to match the IT infrastructure, the data design, to the goal of the team and the enterprise. The human enterprise, your client (and for us it was the dead), comes first. Everything else is a dependent variable.

Talent, budget, and so on are very important. But you do not necessarily need money to get serious results.

How do you assess the term artificial intelligence? How can the strangeness it generates be overcome? It is basically a marketing meme that was used to inspire funders to invest in computer science projects, starting many decades ago. A good business description of artificial intelligence, more practical and less jargon-laden, is: software that does analysis. Technically speaking, artificial intelligence is: data plus math.

If your data is terrible, the resulting AI will be too. If it is biased, or contains abusive communication, the output will be as well.

That is one reason so many engineering-dominated legal tech and legal operations companies fail so spectacularly. You need highly skilled lawyers, technicians, mathematicians, and skeptical lawyers to develop the best technology/math.

Defining AI more simply also implies, precisely, that every artificial intelligence is unique, like a child. It is always developing, changing, and so on. That is the way to think about it. And, as with children, you can teach, but no parent can operationally control a child beyond a certain limit.

How can the use of data broaden access to justice and speed it up? I have never quite understood what the term “access to justice” means. Perhaps because most people, of all socioeconomic and ethnic backgrounds, share the common experience of not having that access.

I can draw analogies with other fields, though. A piece of software has a marginal cost of approximately zero. Each time one of us uses a search tool, it does not cost us the investment that was needed to build that software and make it sophisticated. There are large fixed costs, but a low cost per user.

That is why software is a great business. Well governed, it can become an even better modus operandi for a modern country. Assuming we can avoid all the nightmares that could happen!

We can create legal AI software that helps every person in an entire country. That software can be perfectly personalized and remain loyal to each individual. It can cost nearly zero per incremental operation.

I created a package of methodologies called Citizen’s AI Lab that will be taken to many countries around the world, including Brazil, if people want to put it to work. It will do exactly that. Again, these systems can be used not only for each operation (use) by each individual, but also for each country.


In what situations is the use of AI by the justice system not advisable? Never for the decision-making itself. At this moment, in any case, and/or in my opinion, it is neither possible nor desirable to automate judicial decision-making.

On the other hand, judges can always benefit from artificial intelligence. What are the best practices? How many cases does a given judge have on the docket? Or across the whole court? How does that compare with other courts, and how might outcomes differ because of the cases themselves or because of economic, political, or other factors?

Are there protocols that help the parties reach early resolution of disputes? Are those outcomes fair? (A human question made tractable by an empirical, AI-assisted foundation or platform.) Or are outcomes simply driven by the litigants’ relative access to litigation funding?

How do we structure things so that we have fewer pointless disputes in the courts? Which lawyers exhibit the most malignant and abusive filing behavior across the courts? How should the law be regulated?

These are questions we cannot even begin to ask without AI — read: math — to help us analyze large quantities of data.
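
As a hedged illustration of the docket-level questions listed above (caseloads per judge, comparisons across courts), here are a few lines of analysis over a hypothetical case table:

```python
import pandas as pd

# A hypothetical docket; court, judge, and timing columns are invented.
cases = pd.DataFrame({
    "court": ["Court A", "Court A", "Court B", "Court B", "Court B"],
    "judge": ["Silva", "Silva", "Souza", "Souza", "Lima"],
    "days_to_resolution": [120, 300, 90, 60, 410],
})

# How many cases does a given judge carry?
print(cases.groupby("judge").size())

# How does resolution speed compare across courts?
print(cases.groupby("court")["days_to_resolution"].median())
```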

What are the ethical limits on the use of databases? How can abuses be avoided? Good legal review is essential for every artificial intelligence and data project that has a material impact on humanity. But to do that at scale, we lawyers also need legal mechanisms for AI review.

I strongly support the current work on ethical artificial intelligence. Unfortunately, in the United States, and perhaps elsewhere, “ethical AI” is something of a decoy to keep lawyers from meddling in lucrative and fun engineering projects. That has been a political, operational, and commercial disaster in many cases.

We [lawyers] need to start becoming co-creators because, while software engineers remember the data, we remember the story and the histories. We are the readers. Our AIs are imbued with a different kind of sense; they evolved from a different kind of education. Computer scientists and lawyers/legal scholars are closely aligned, but our job must be that of guardians of social memory.

A Datafolha survey of Brazilian lawyers showed that only 29% of the 303 respondents used AI tools in their day-to-day work. What is the situation in the US? What is needed to advance further? What I observed in São Paulo’s legal tech “microclimate” was that the “taboo” against using legal technology has been practically eliminated. Of course, that is a microclimate and may not be representative, or may even be counter-representative. But people may be using AI every day in practice without being aware of it. Search engines are a very simple example. We have to know what something is before we know how much we really use it.

In the US: I suspect adoption is still in the first “quarter” of the game when it comes to AI applications for law. Litigation and contracts are reasonably well-established use cases. In fact, I do not think you can be a national-level intellectual property lawyer without the boost of some form of empirical data.

Data analysis courses for law students are still rare in Brazil. Given this gap, what should professionals do to adapt to this new reality? What is the risk for those who do nothing? I would start by teaching civil procedure with data. Here is the rule; here is how people apply the rule (what they file); and here is what happens when they do (the consequences). That would be revolutionary. Students, professors, and researchers could develop all kinds of studies and social utilities.

There are countless other examples. Academics need to drive this in partnership with judges, regulators, the press, and the Ordem dos Advogados (the Brazilian bar).

PROFILE

Joshua Walker

Author of “On Legal AI” (published in Brazil as “On Legal AI – Um Rápido Tratado sobre a Inteligência Artificial no Direito,” Revista dos Tribunais, 2021) and a director at Aon IPS. A Harvard graduate with a doctorate from the University of Chicago Law School, he co-founded CodeX, Stanford University’s center for legal informatics, and founded Lex Machina, a pioneering legal technology company. He has also taught at Stanford and Berkeley.

2021 Nobel Prize in Physics goes to research on complex systems, with a spotlight on predicting global warming (Folha de S.Paulo)

www1.folha.uol.com.br

Salvador Nogueira, October 5, 2021

Researchers Syukuro Manabe, Klaus Hasselmann, and Giorgio Parisi will share the prize of 10 million Swedish kronor

This year’s Nobel Prize in Physics was dedicated to the study of complex systems, among them those that allow us to understand the climate changes affecting our planet. The choice puts a definitive stamp of consensus on climate science.

Researchers Syukuro Manabe, of the United States, and Klaus Hasselmann, of Germany, were honored specifically for modeling Earth’s climate and making predictions about global warming. The other half of the prize went to Giorgio Parisi, of Italy, who revealed hidden patterns in disordered complex materials, from the atomic scale to the planetary one, an essential contribution to the theory of complex systems that is also relevant to the study of climate.

“Many people think that physics deals with simple phenomena, such as Earth’s perfectly elliptical orbit around the Sun or atoms in crystalline structures,” said Thors Hans Hansson, a member of the Nobel selection committee, at the press conference announcing the choice.

“But physics is much more than that. One of the basic tasks of physics is to use fundamental theories of matter to explain complex phenomena and processes, such as the behavior of materials and the development of Earth’s climate. This demands deep intuition about which structures and which progressions are essential, as well as the mathematical ingenuity to develop the models and theories that describe them, things at which this year’s laureates are formidable.”

“I think it is urgent that we take very strong decisions and move at a strong pace, because we are in a situation where positive feedback may kick in and accelerate the rise in temperature,” said Giorgio Parisi, one of the winners, at the announcement. “It is clear that for future generations we have to act now, and very quickly.”

HOW THE NOBEL WINNER IS CHOSEN

The traditional Nobel Prize began with the death of Swedish chemist Alfred Nobel (1833-1896), the inventor of dynamite. In 1895, in his final will, Nobel stipulated that his fortune should go toward creating a prize, which his family contested. The first prize was awarded only in 1901.

The process of choosing the winner of the physics prize begins the year before the award. In September, the Nobel Committee for Physics sends out invitations (around 3,000) for nominations of names deserving the honor. Responses must be submitted by January 31.

Nominations may be made by members of the Royal Swedish Academy of Sciences; members of the Nobel Committee for Physics; past Nobel physics laureates; professors of physics at universities and institutes of technology in Sweden, Denmark, Finland, Iceland, and Norway, and at the Karolinska Institute in Stockholm; holders of similar chairs at at least six other universities (though normally hundreds of them) chosen by the Academy of Sciences so as to ensure an adequate distribution across continents and fields of knowledge; and other scientists the Academy deems fit to receive invitations.

Self-nominations are not accepted.

A process of vetting the hundreds of nominated names then begins, with consultations with specialists and the drafting of reports to narrow the field. Finally, in October, the Academy decides by majority vote who will receive the recognition.

RECENT HISTORY OF THE PHYSICS NOBEL

The discovery of black holes and its impact on our understanding of the universe took the 2020 Nobel Prize in Physics. The honor was shared by Roger Penrose, Reinhard Genzel, and Andrea Ghez.

Ghez is only the fourth woman to win the physics Nobel, out of 216 honorees.

In 2019, the prize went to James Peebles, Michel Mayor, and Didier Queloz, once again for cosmic research that helped better explain how the universe works.

Peebles helped explain how the universe evolved after the Big Bang, while Mayor and Queloz discovered an exoplanet (a planet outside the Solar System) orbiting a Sun-like star.

Laser research was honored in 2018, with the prize going to Arthur Ashkin, Donna Strickland, and Gérard Mourou.

Going further back, the prize has been held by Max Planck (1918), for laying the foundations of quantum physics, and Albert Einstein (1921), for the discovery of the photoelectric effect. Niels Bohr (1922), for his contributions to the understanding of atomic structure, and Paul Dirac and Erwin Schrödinger (1933), for developing new formulations of quantum theory, were also honored.