Tag archive: Technological mediation

The Problem With Weather Apps (The Atlantic)

theatlantic.com

Charlie Warzel

April 10, 2023


How are we still getting caught in the rain?

Illustration by Daniel Zender. Source: Getty.

Technologically speaking, we live in a time of plenty. Today, I can ask a chatbot to render The Canterbury Tales as if written by Taylor Swift or to help me write a factually inaccurate autobiography. With three swipes, I can summon almost everyone listed in my phone and see their confused faces via an impromptu video chat. My life is a gluttonous smorgasbord of information, and I am on the all-you-can-eat plan. But there is one specific corner where technological advances haven’t kept up: weather apps.

Weather forecasts are always a game of prediction and probabilities, but these apps seem to fail more often than they should. At best, they perform about as well as meteorologists, but some of the most popular ones fare much worse. The cult favorite Dark Sky, for example, which shut down earlier this year and was rolled into the Apple Weather app, accurately predicted the high temperature in my zip code only 39 percent of the time, according to ForecastAdvisor, which evaluates online weather providers. The Weather Channel’s app, by comparison, comes in at 83 percent. The Apple app, although not rated by ForecastAdvisor, has a reputation for off-the-mark forecasts and has been consistently criticized for presenting faulty radar screens, mixing up precipitation totals, or, as it did last week, breaking altogether. Dozens of times, the Apple Weather app has lulled me into a false sense of security, leaving me wet and betrayed after a run, bike ride, or round of golf.

People love to complain about weather forecasts, dating back to when local-news meteorologists were the primary source for those planning their morning commutes. But the apps have produced a new level of frustration, at least judging by hundreds of cranky tweets over the past decade. Nearly two decades into the smartphone era—when anyone can theoretically harness the power of government weather data and dissect dozens of complex, real-time charts and models—we are still getting caught in the rain.


Weather apps are not all the same. There are tens of thousands of them, from the simply designed Apple Weather to the expensive, complex, data-rich Windy.App. But all of these forecasts are working off of similar data, which are pulled from places such as the National Oceanic and Atmospheric Administration (NOAA) and the European Centre for Medium-Range Weather Forecasts. Traditional meteorologists interpret these models based on their training as well as their gut instinct and past regional weather patterns, and different weather apps and services tend to use their own secret sauce of algorithms to divine their predictions. On an average day, you’re probably going to see a similar forecast from app to app and on television. But when it comes to how people feel about weather apps, these edge cases—which usually take place during severe weather events—are what stick in a person’s mind. “Eighty percent of the year, a weather app is going to work fine,” Matt Lanza, a forecaster who runs Houston’s Space City Weather, told me. “But it’s that 20 percent where people get burned that’s a problem.”
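
Every one of these apps starts from the same step: pulling a public, model-driven forecast. As a minimal sketch, assuming Python with the requests library, here is roughly what that first step looks like against NOAA's National Weather Service API; the coordinates and User-Agent string are illustrative placeholders:

```python
import requests

def hourly_forecast(lat: float, lon: float) -> list:
    # NWS asks clients to identify themselves; this User-Agent is a placeholder.
    headers = {"User-Agent": "example-weather-app (contact@example.com)"}
    # Step 1: resolve the coordinates to a forecast-grid endpoint.
    point = requests.get(
        f"https://api.weather.gov/points/{lat},{lon}", headers=headers
    ).json()
    # Step 2: fetch the hour-by-hour forecast for that grid cell.
    hourly = requests.get(
        point["properties"]["forecastHourly"], headers=headers
    ).json()
    return hourly["properties"]["periods"]

# Print the next six hours for downtown Washington, DC.
for period in hourly_forecast(38.9072, -77.0369)[:6]:
    pop = period["probabilityOfPrecipitation"]["value"] or 0
    print(period["startTime"], f"{period['temperature']}°F", f"{pop}% precip")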

No people on the planet have a more tortured and conflicted relationship with weather apps than those who interpret forecasting models for a living. “My wife is married to a meteorologist, and she will straight up question me if her favorite weather app says something different than my forecast,” Lanza told me. “That’s how ingrained these services have become in most people’s lives.” The basic issue with weather apps, he argues, is that many of them remove a crucial component of a good, reliable forecast: a human interpreter who can relay caveats about models or offer a range of outcomes instead of a definitive forecast.

Lanza explained the human touch of a meteorologist using the example of a so-called high-resolution forecasting model that can predict only 18 hours out. It is generally quite good, he told me, at predicting rain and thunderstorms—“but every so often it runs too hot and over-indexes the chances of a bad storm.” This model, if left to its own devices, will project showers and thunderstorms blanketing the region for hours when, in reality, the storm might only cause 30 minutes of rain in an isolated area of the mapped region. “The problem is when you take the model data and push it directly into the app with no human interpretation,” he said. “Because you’re not going to get nuance from these apps at all. And that can mean a difference between a chance of rain all day and it’s going to rain all day.”
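
To make that distinction concrete, here is a toy illustration, with invented numbers rather than any vendor's real pipeline, of how the same hourly model output reads with and without interpretation:

```python
# Hypothetical hourly rain probabilities from a convective model that
# "runs hot": modest chances all afternoon, not a guaranteed washout.
hourly_pop = [0.35, 0.40, 0.45, 0.40, 0.35, 0.30]

# Uncurated app logic: any hour over a threshold gets a rain icon,
# so the user sees six straight hours of "rain."
icons = ["rain" if p >= 0.30 else "dry" for p in hourly_pop]

# A human-style summary: the sum of the probabilities approximates the
# expected hours of rain at any one spot -- scattered showers, not all day.
expected_hours = sum(hourly_pop)

print(icons)  # ['rain', 'rain', 'rain', 'rain', 'rain', 'rain']
print(f"expected rain: ~{expected_hours:.1f} of {len(hourly_pop)} hours")
```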

But even this explanation has caveats; all weather apps are different, and their forecasts have varying levels of sophistication. Some pipe model data right in, whereas others are curated using artificial intelligence. Peter Neilley, the Weather Channel’s director of weather forecasting sciences and technologies, said in an email that the company’s app incorporates “billions of weather data points,” adding that “our expert team of meteorologists does oversee and correct the process as needed.”

Weather apps might be less reliable for another reason too. When it comes to predicting severe weather such as snow, small changes in atmospheric moisture—the type of change an experienced forecaster might notice—can cause huge variances in precipitation outcomes. An app with no human curation might choose to average the model’s range of outcomes, producing a forecast that doesn’t reflect the dynamic situation on the ground. Or consider cities with microclimates: “Today, in Chicago, the lakefront will sit in the lower 40s, and the suburbs will be 50-plus degrees,” Greg Dutra, a meteorologist at ABC 7 Chicago, told me. “Often, the difference is even more stark—20-degree swings over just miles.” These sometimes subtle temperature disparities can mean very different forecasts for people living in the same region—something that one-size-fits-all weather apps don’t always pick up.
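
A toy sketch of that averaging problem, again with invented numbers: when the ensemble splits into two camps, the average is a forecast that no individual model run actually produced.

```python
import statistics

# Hypothetical ensemble: half the members keep the storm offshore,
# half bury the city in snow (totals in inches).
snowfall_members = [0, 0, 0, 0, 0, 10, 11, 12, 12, 13]

mean = statistics.mean(snowfall_members)
print(f'app headline: {mean:.1f}" of snow')  # 5.8" -- no member predicted this
print(f"outcomes actually in the ensemble: {sorted(set(snowfall_members))}")
# A forecaster would instead say "either a near-miss or a foot of snow,"
# a range the averaged, uncurated forecast cannot express.
```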

Naturally, meteorologists think that what they do is superior to forecasting by algorithm alone, but even weather-app creators told me that the challenges are real. “It’s impossible for a weather-data provider to be accurate everywhere in the world,” Brian Mueller, the founder of the app Carrot Weather, told me. His solution to the problem of app-based imprecision is to give users more ability to choose what they see when they open Carrot, letting them customize what specific weather information the app surfaces as well as what data sources the app will draw from. Mueller said that he learned from Dark Sky’s success how important beautiful, detailed radar maps were—both as a source of weather data and for entertainment purposes. In fact, meteorology seems to be only part of the allure when it comes to building a beloved weather app. Carrot has a pleasant design interface, with bright colors and Easter eggs scattered throughout, such as geography challenges based on its weather maps. He’s also hooked Carrot up to ChatGPT to allow people to chat with the app’s fictional personality.


But what if these detailed models and dizzying maps, in the hands of weather rubes like myself, are the real problem? “The general public has access to more weather information than ever, and I’d posit that that’s a bad thing,” Chris Misenis, a weather-forecasting consultant in North Carolina who goes by the name “Weather Moose,” told me. “You can go to PivotalWeather.com right now and pull up just about any model simulation you want.” He argues that these data are fine to look at if you know how to interpret them, but for people who aren’t trained to analyze them, they are at best worthless and at worst dangerous.

In fact, forecasts are better than ever, Andrew Blum, a journalist and the author of the book The Weather Machine: A Journey Inside the Forecast, told me. “But arguably, we are less prepared to understand,” he said, “and act upon that improvement—and a forecast is only as good as our ability to make decisions with it.” Indeed, even academic research around weather apps suggests that apps fail worst when they give users a false sense of certainty around forecasting. A 2016 paper for the Royal Meteorological Society argued that “the current way of conveying forecasts in the most common apps is guilty of ‘immodesty’ (‘not admitting that sometimes predictions may fail’) and ‘impoverishment’ (‘not addressing the broader context in which forecasts … are made’).”

The conflicted relationship that people have with weather apps may simply be a manifestation of the information overload that dominates all facets of modern life. These products grant anyone with a phone access to an overwhelming amount of information that can be wildly complex. Greg Dutra shared one such public high-resolution model from the NOAA with me that was full of indecipherable links to jargony terms such as “0-2 km max vertical vorticity.” Weather apps seem to respond mostly to this fire hose of data in two ways: by boiling them down to a reductive “partly sunny” icon, or by bombarding the user with information they might not need or understand. At its worst, a modern weather app seems to flatter people, entrusting them to do their own research even if they’re not equipped. I’m not too proud to admit that some of the fun of toying around with Dark Sky’s beautiful radar or Windy.App’s endless array of models is the feeling of role-playing as a meteorologist. But the truth is that I don’t really know what I’m looking at.

What people seem to be looking for in a weather app is something they can justify blindly trusting and letting into their lives—after all, it’s often the first thing you check when you roll over in bed in the morning. According to the 56,400 ratings of Carrot in Apple’s App Store, its die-hard fans find the app entertaining and even endearing. “Love my psychotic, yet surprisingly accurate weather app,” one five-star review reads. Although many people need reliable forecasting, true loyalty comes from a weather app that makes people feel good when they open it.

Our weather-app ambivalence is a strange pull between feeling grateful for instant access to information and simultaneously navigating a sense of guilt and confusion about how the experience is also, somehow, dissatisfying—a bit like staring down Netflix’s endless library and feeling as if there’s nothing to watch. Weather apps aren’t getting worse. In fact, they’re only getting more advanced, inputting more and more data and offering them to us to consume. Which, of course, might be why they feel worse.

A real-time revolution will up-end the practice of macroeconomics (The Economist)

economist.com

The Economist Oct 23rd 2021


DOES ANYONE really understand what is going on in the world economy? The pandemic has made plenty of observers look clueless. Few predicted $80 oil, let alone fleets of container ships waiting outside Californian and Chinese ports. As covid-19 let rip in 2020, forecasters overestimated how high unemployment would be by the end of the year. Today prices are rising faster than expected and nobody is sure if inflation and wages will spiral upward. For all their equations and theories, economists are often fumbling in the dark, with too little information to pick the policies that would maximise jobs and growth.

Yet, as we report this week, the age of bewilderment is starting to give way to greater enlightenment. The world is on the brink of a real-time revolution in economics, as the quality and timeliness of information are transformed. Big firms from Amazon to Netflix already use instant data to monitor grocery deliveries and how many people are glued to “Squid Game”. The pandemic has led governments and central banks to experiment, from monitoring restaurant bookings to tracking card payments. The results are still rudimentary, but as digital devices, sensors and fast payments become ubiquitous, the ability to observe the economy accurately and speedily will improve. That holds open the promise of better public-sector decision-making—as well as the temptation for governments to meddle.

The desire for better economic data is hardly new. America’s GNP estimates date to 1934 and initially came with a 13-month time lag. In the 1950s a young Alan Greenspan monitored freight-car traffic to arrive at early estimates of steel production. Ever since Walmart pioneered supply-chain management in the 1980s, private-sector bosses have seen timely data as a source of competitive advantage. But the public sector has been slow to reform how it works. The official figures that economists track—think of GDP or employment—come with lags of weeks or months and are often revised dramatically. Productivity takes years to calculate accurately. It is only a slight exaggeration to say that central banks are flying blind.

Bad and late data can lead to policy errors that cost millions of jobs and trillions of dollars in lost output. The financial crisis would have been a lot less harmful had the Federal Reserve cut interest rates to near zero in December 2007, when America entered recession, rather than in December 2008, when economists at last saw it in the numbers. Patchy data about a vast informal economy and rotten banks have made it harder for India’s policymakers to end their country’s lost decade of low growth. The European Central Bank wrongly raised interest rates in 2011 amid a temporary burst of inflation, sending the euro area back into recession. The Bank of England may be about to make a similar mistake today.

The pandemic has, however, become a catalyst for change. Without the time to wait for official surveys to reveal the effects of the virus or lockdowns, governments and central banks have experimented, tracking mobile phones, contactless payments and the real-time use of aircraft engines. Instead of locking themselves in their studies for years writing the next “General Theory”, today’s star economists, such as Raj Chetty at Harvard University, run well-staffed labs that crunch numbers. Firms such as JPMorgan Chase have opened up treasure chests of data on bank balances and credit-card bills, helping reveal whether people are spending cash or hoarding it.

These trends will intensify as technology permeates the economy. A larger share of spending is shifting online and transactions are being processed faster. Real-time payments grew by 41% in 2020, according to McKinsey, a consultancy (India registered 25.6bn such transactions). More machines and objects are being fitted with sensors, including individual shipping containers that could make sense of supply-chain blockages. Govcoins, or central-bank digital currencies (CBDCs), which China is already piloting and over 50 other countries are considering, might soon provide a goldmine of real-time detail about how the economy works.

Timely data would cut the risk of policy cock-ups—it would be easier to judge, say, if a dip in activity was becoming a slump. And the levers governments can pull will improve, too. Central bankers reckon it takes 18 months or more for a change in interest rates to take full effect. But Hong Kong is trying out cash handouts in digital wallets that expire if they are not spent quickly. CBDCs might allow interest rates to fall deeply negative. Good data during crises could let support be precisely targeted; imagine loans only for firms with robust balance-sheets but a temporary liquidity problem. Instead of wasteful universal welfare payments made through social-security bureaucracies, the poor could enjoy instant income top-ups if they lost their job, paid into digital wallets without any paperwork.
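
The expiring-handout idea is simple enough to sketch in code. What follows is a hypothetical design, not a description of Hong Kong's actual scheme:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class StimulusCredit:
    amount: float
    expires_at: datetime

    def spendable(self, now: datetime) -> float:
        # Unspent credit vanishes after the deadline, nudging
        # recipients to spend quickly rather than hoard.
        return self.amount if now < self.expires_at else 0.0

credit = StimulusCredit(amount=500.0, expires_at=datetime(2021, 12, 31))
print(credit.spendable(datetime(2021, 11, 1)))   # 500.0
print(credit.spendable(datetime(2022, 1, 15)))   # 0.0 -- expired
```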

The real-time revolution promises to make economic decisions more accurate, transparent and rules-based. But it also brings dangers. New indicators may be misinterpreted: is a global recession starting or is Uber just losing market share? They are not as representative or free from bias as the painstaking surveys by statistical agencies. Big firms could hoard data, giving them an undue advantage. Private firms such as Facebook, which launched a digital wallet this week, may one day have more insight into consumer spending than the Fed does.

Know thyself

The biggest danger is hubris. With a panopticon of the economy, it will be tempting for politicians and officials to imagine they can see far into the future, or to mould society according to their preferences and favour particular groups. This is the dream of the Chinese Communist Party, which seeks to engage in a form of digital central planning.

In fact no amount of data can reliably predict the future. Unfathomably complex, dynamic economies rely not on Big Brother but on the spontaneous behaviour of millions of independent firms and consumers. Instant economics isn’t about clairvoyance or omniscience. Instead its promise is prosaic but transformative: better, timelier and more rational decision-making. ■

economist.com

Enter third-wave economics

Oct 23rd 2021


AS PART OF his plan for socialism in the early 1970s, Salvador Allende created Project Cybersyn. The Chilean president’s idea was to offer bureaucrats unprecedented insight into the country’s economy. Managers would feed information from factories and fields into a central database. In an operations room bureaucrats could see if production was rising in the metals sector but falling on farms, or what was happening to wages in mining. They would quickly be able to analyse the impact of a tweak to regulations or production quotas.

Cybersyn never got off the ground. But something curiously similar has emerged in Salina, a small city in Kansas. Salina311, a local paper, has started publishing a “community dashboard” for the area, with rapid-fire data on local retail prices, the number of job vacancies and more—in effect, an electrocardiogram of the economy.

What is true in Salina is true for a growing number of national governments. When the pandemic started last year, bureaucrats began studying dashboards of “high-frequency” data, such as daily airport passengers and hour-by-hour credit-card spending. In recent weeks they have turned to new high-frequency sources, to get a better sense of where labour shortages are worst or to estimate which commodity price is next in line to soar. Economists have seized on these new data sets, producing a research boom (see chart 1). In the process, they are influencing policy as never before.

This fast-paced economics involves three big changes. First, it draws on data that are not only abundant but also directly relevant to real-world problems. When policymakers are trying to understand what lockdowns do to leisure spending they look at live restaurant reservations; when they want to get a handle on supply-chain bottlenecks they look at day-by-day movements of ships. Troves of timely, granular data are to economics what the microscope was to biology, opening a new way of looking at the world.

Second, the economists using the data are keener on influencing public policy. More of them do quick-and-dirty research in response to new policies. Academics have flocked to Twitter to engage in debate.

And, third, this new type of economics involves little theory. Practitioners claim to let the information speak for itself. Raj Chetty, a Harvard professor and one of the pioneers, has suggested that controversies between economists should be little different from disagreements among doctors about whether coffee is bad for you: a matter purely of evidence. All this is causing controversy among dismal scientists, not least because some, such as Mr Chetty, have done better from the shift than others: a few superstars dominate the field.

Their emerging discipline might be called “third wave” economics. The first wave emerged with Adam Smith and the “Wealth of Nations”, published in 1776. Economics mainly involved books or papers written by one person, focusing on some big theoretical question. Smith sought to tear down the monopolistic habits of 18th-century Europe. In the 20th century John Maynard Keynes wanted people to think differently about the government’s role in managing the economic cycle. Milton Friedman aimed to eliminate many of the responsibilities that politicians, following Keynes’s ideas, had arrogated to themselves.

All three men had a big impact on policies—as late as 1850 Smith was quoted 30 times in Parliament—but in a diffuse way. Data were scarce. Even by the 1970s more than half of economics papers focused on theory alone, suggests a study published in 2012 by Daniel Hamermesh, an economist.

That changed with the second wave of economics. By 2011 purely theoretical papers accounted for only 19% of publications. The growth of official statistics gave wonks more data to work with. More powerful computers made it easier to spot patterns and ascribe causality (this year’s Nobel prize was awarded for the practice of identifying cause and effect). The average number of authors per paper rose, as the complexity of the analysis increased (see chart 2). Economists had greater involvement in policy: rich-world governments began using cost-benefit analysis for infrastructure decisions from the 1950s.

Second-wave economics nonetheless remained constrained by data. Most national statistics are published with lags of months or years. “The traditional government statistics weren’t really all that helpful—by the time they came out, the data were stale,” says Michael Faulkender, an assistant treasury secretary in Washington at the start of the pandemic. The quality of official local economic data is mixed, at best; they do a poor job of covering the housing market and consumer spending. National statistics came into being at a time when the average economy looked more industrial, and less service-based, than it does now. The Standard Industrial Classification, introduced in 1937-38 and still in use with updates, divides manufacturing into 24 subsections, but the entire financial industry into just three.

The mists of time

Especially in times of rapid change, policymakers have operated in a fog. “If you look at the data right now…we are not in what would normally be characterised as a recession,” argued Edward Lazear, then chairman of the White House Council of Economic Advisers, in May 2008. Five months later, after Lehman Brothers had collapsed, the IMF noted that America was “not necessarily” heading for a deep recession. In fact America had entered a recession in December 2007. In 2007-09 there was no surge in economics publications. Economists’ recommendations for policy were mostly based on judgment, theory and a cursory reading of national statistics.

The gap between official data and what is happening in the real economy can still be glaring. Walk around a Walmart in Kansas and many items, from pet food to bottled water, are in short supply. Yet some national statistics fail to show such problems. Dean Baker of the Centre for Economic and Policy Research, using official data, points out that American real inventories, excluding cars and farm products, are barely lower than before the pandemic.

There were hints of an economics third wave before the pandemic. Some economists were finding new, extremely detailed streams of data, such as anonymised tax records and location information from mobile phones. The analysis of these giant data sets requires the creation of what are in effect industrial labs, teams of economists who clean and probe the numbers. Susan Athey, a trailblazer in applying modern computational methods in economics, has 20 or so non-faculty researchers at her Stanford lab (Mr Chetty’s team boasts similar numbers). Of the 20 economists with the most cited new work during the pandemic, three run industrial labs.

More data sprouted from firms. Visa and Square record spending patterns, Apple and Google track movements, and security companies know when people go in and out of buildings. “Computers are in the middle of every economic arrangement, so naturally things are recorded,” says Jon Levin of Stanford’s Graduate School of Business. Jamie Dimon, the boss of JPMorgan Chase, a bank, is an unlikely hero of the emergence of third-wave economics. In 2015 he helped set up an institute at his bank which tapped into data from its network to analyse questions about consumer finances and small businesses.

The Brexit referendum of June 2016 was the first big event when real-time data were put to the test. The British government and investors needed to get a sense of this unusual shock long before Britain’s official GDP numbers came out. They scraped web pages for telltale signs such as restaurant reservations and the number of supermarkets offering discounts—and concluded, correctly, that though the economy was slowing, it was far from the catastrophe that many forecasters had predicted.

Real-time data might have remained a niche pursuit for longer were it not for the pandemic. Chinese firms have long produced granular high-frequency data on everything from cinema visits to the number of glasses of beer that people are drinking daily. Beer-and-movie statistics are a useful cross-check against sometimes dodgy official figures. China-watchers turned to them in January 2020, when lockdowns began in Hubei province. The numbers showed that the world’s second-largest economy was heading for a slump. And they made it clear to economists elsewhere how useful such data could be.

Vast and fast

In the early days of the pandemic Google started releasing anonymised data on people’s physical movements; this has helped researchers produce a day-by-day measure of the severity of lockdowns (see chart 3). OpenTable, a booking platform, started publishing daily information on restaurant reservations. America’s Census Bureau quickly introduced a weekly survey of households, asking them questions ranging from their employment status to whether they could afford to pay the rent.

In May 2020 Jose Maria Barrero, Nick Bloom and Steven Davis, three economists, began a monthly survey of American business practices and work habits. Working-age Americans are paid to answer questions on how often they plan to visit the office, say, or how they would prefer to greet a work colleague. “People often complete a survey during their lunch break,” says Mr Bloom, of Stanford University. “They sit there with a sandwich, answer some questions, and that pays for their lunch.”

Demand for research to understand a confusing economic situation jumped. The first analysis of America’s $600 weekly boost to unemployment insurance, implemented in March 2020, was published in weeks. The British government knew by October 2020 that a scheme to subsidise restaurant attendance in August 2020 had probably boosted covid infections. Many apparently self-evident things about the pandemic—that the economy collapsed in March 2020, that the poor have suffered more than the rich, or that the shift to working from home is turning out better than expected—only seem obvious because of rapid-fire economic research.

It is harder to quantify the policy impact. Some economists scoff at the notion that their research has influenced politicians’ pandemic response. Many studies using real-time data suggested that the Paycheck Protection Programme, an effort to channel money to American small firms, was doing less good than hoped. Yet small-business lobbyists ensured that politicians did not get rid of it for months. Tyler Cowen, of George Mason University, points out that the most significant contribution of economists during the pandemic involved recommending early pledges to buy vaccines—based on older research, not real-time data.

Still, Mr Faulkender says that the special support for restaurants that was included in America’s stimulus was influenced by a weak recovery in the industry seen in the OpenTable data. Research by Mr Chetty in early 2021 found that stimulus cheques sent in December boosted spending by lower-income households, but not much for richer households. He claims this informed the decision to place stronger income limits on the stimulus cheques sent in March.

Shaping the economic conversation

As for the Federal Reserve, in May 2020 the Dallas and New York regional Feds and James Stock, a Harvard economist, created an activity index using data from SafeGraph, a data provider that tracks mobility using mobile-phone pings. The St Louis Fed used data from Homebase to track employment numbers daily. Both showed shortfalls of economic activity in advance of official data. This led the Fed to communicate its doveish policy stance faster.

Speedy data also helped frame debate. Everyone realised the world was in a deep recession much sooner than they had in 2007-09. In the IMF’s overviews of the global economy in 2009, 40% of the papers cited had been published in 2008-09. In the overview published in October 2020, by contrast, over half the citations were for papers published that year.

The third wave of economics has been better for some practitioners than others. As lockdowns began, many male economists found themselves at home with no teaching responsibilities and more time to do research. Female ones often picked up the slack of child care. A paper in Covid Economics, a rapid-fire journal, finds that female authors accounted for 12% of economics working-paper submissions during the pandemic, compared with 20% before. Economists lucky enough to have researched topics before the pandemic which became hot, from home-working to welfare policy, were suddenly in demand.

There are also deeper shifts in the value placed on different sorts of research. The Economist has examined rankings of economists from IDEAS RePEC, a database of research, and citation data from Google Scholar. We divided economists into three groups: “lone wolves” (who publish with less than one unique co-author per paper on average); “collaborators” (those who tend to work with more than one unique co-author per paper, usually two to four people); and “lab leaders” (researchers who run a large team of dedicated assistants). We then looked at the top ten economists for each as measured by RePEC author rankings for the past ten years.
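
That classification rule is simple enough to sketch in code. In the snippet below, the ten-person cutoff for a "large team" and the sample publication lists are illustrative assumptions, not The Economist's exact methodology:

```python
def classify(papers: list, lab_team_size: int = 0) -> str:
    """papers: one list of co-author names per paper."""
    if lab_team_size >= 10:            # runs a large dedicated team
        return "lab leader"
    unique_coauthors = {c for paper in papers for c in paper}
    avg = len(unique_coauthors) / len(papers)
    return "lone wolf" if avg < 1 else "collaborator"

solo_author = [[], [], ["A. Smith"]]               # mostly single-authored
networker = [["X", "Y"], ["Y", "Z"], ["W", "X"]]   # rotating co-authors
print(classify(solo_author))  # lone wolf  (1 unique co-author / 3 papers)
print(classify(networker))    # collaborator (4 unique co-authors / 3 papers)
```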

Collaborators performed far ahead of the other two groups during the pandemic (see chart 4). Lone wolves did worst: working with large data sets benefits from a division of labour. Why collaborators did better than lab leaders is less clear. They may have been more nimble in working with those best suited for the problems at hand; lab leaders are stuck with a fixed group of co-authors and assistants.

The most popular types of research highlight another aspect of the third wave: its usefulness for business. Scott Baker, another economist, and Messrs Bloom and Davis—three of the top four authors during the pandemic compared with the year before—are all “collaborators” and use daily newspaper data to study markets. Their uncertainty index has been used by hedge funds to understand the drivers of asset prices. The research by Messrs Bloom and Davis on working from home has also gained attention from businesses seeking insight on the transition to remote work.

But does it work in theory?

Not everyone likes where the discipline is going. When economists say that their fellows are turning into data scientists, it is not meant as a compliment. A kinder interpretation is that the shift to data-heavy work is correcting a historical imbalance. “The most important problem with macro over the past few decades has been that it has been too theoretical,” says Jón Steinsson of the University of California, Berkeley, in an essay published in July. A better balance with data improves theory. Half of the recent Nobel prize went for the application of new empirical methods to labour economics; the other half was for the statistical theory around such methods.

Some critics question the quality of many real-time sources. High-frequency data are less accurate at estimating levels (for example, the total value of GDP) than they are at estimating changes, and in particular turning-points (such as when growth turns into recession). In a recent review of real-time indicators Samuel Tombs of Pantheon Macroeconomics, a consultancy, pointed out that OpenTable data tended to exaggerate the rebound in restaurant attendance last year.

Others have worries about the new incentives facing economists. Researchers now race to post a working paper with America’s National Bureau of Economic Research in order to stake their claim to an area of study or to influence policymakers. The downside is that consumers of fast-food academic research often treat it as if it is as rigorous as the slow-cooked sort—papers which comply with the old-fashioned publication process involving endless seminars and peer review. A number of papers using high-frequency data which generated lots of clicks, including one which claimed that a motorcycle rally in South Dakota had caused a spike in covid cases, have since been called into question.

Whatever the concerns, the pandemic has given economists a new lease of life. During the Chilean coup of 1973 members of the armed forces broke into Cybersyn’s operations room and smashed up the slides of graphs—not only because it was Allende’s creation, but because the idea of an electrocardiogram of the economy just seemed a bit weird. Third-wave economics is still unusual, but ever less odd. ■

Byung-Chul Han: the smartphone and the “hell of the same” (Outras Palavras)

outraspalavras.net

by El País Brasil – Published 14/10/2021 at 17:13 – Updated 14/10/2021 at 18:46


By Byung-Chul Han, in an interview with Sergio C. Fanjul, for El País

With a certain vertigo, the material world, made of atoms and molecules, of things we can touch and smell, is dissolving into a world of information, of non-things, observes the Korean-born German philosopher Byung-Chul Han. Non-things that, even so, we keep desiring, buying, and selling, and that keep influencing us. The digital world hybridizes ever more conspicuously with what we still consider the real world, to the point where the two blur together, making existence ever more intangible and fleeting. The thinker’s latest book, Non-things: Upheaval in the Lifeworld, joins a series of short essays in which the best-selling philosopher (he has been called the rock star of philosophy) minutely dissects the anxieties that neoliberal capitalism produces in us.

Combining frequent quotations from the great philosophers with elements of popular culture, Han’s texts range from what he has called “the burnout society,” in which we live exhausted and depressed by the relentless demands of existence, to an analysis of the new forms of entertainment we are offered. From psychopolitics, which gets people to surrender meekly to the seductions of the system, to the disappearance of eroticism, which Han attributes to today’s narcissism and exhibitionism, proliferating, for example, on social networks: the obsession with oneself makes others disappear and turns the world into a reflection of one’s own person. The thinker calls for recovering intimate contact with everyday life; indeed, he is known to enjoy slowly tending a garden, doing manual work, keeping silence. And he rebels against “the disappearance of rituals,” which makes community vanish and turns us into individuals lost in sick and cruel societies.

Byung-Chul Han agreed to this interview with EL PAÍS, but only by means of an email questionnaire, which the philosopher answered in German and which was later translated and edited.

QUESTION. How is it possible that, in a world obsessed with hyperproduction and hyperconsumption, objects are at the same time dissolving and we are heading toward a world of non-things?

ANSWER. There is, without a doubt, a hyperinflation of objects, which leads to their explosive proliferation. But these are disposable objects with which we form no emotional bonds. Today we are obsessed not with things but with information and data, that is, with non-things. Today we are all infomaniacs. There is even talk of datasexuals [people who obsessively compile and share information about their personal lives].

Q. In this world you describe, of hyperconsumption and the loss of bonds, why is it important to have “cherished things” and to establish rituals?

A. Things are the supports that bring calm to life. Nowadays they are altogether overshadowed by information. The smartphone is not a thing. I characterize it as an infomat, a device that produces and processes information. Information is the very opposite of the supports that bring calm to life. It lives off the stimulus of surprise. It plunges us into a whirlwind of the up-to-the-minute. Rituals, too, as temporal architectures, give stability to life. The pandemic destroyed those temporal structures. Think of remote work. When time loses its structure, depression begins to take hold of us.

Q. Your book argues that, through digitalization, we will become homo ludens, focused more on leisure than on work. But with the casualization and destruction of employment, will everyone really have access to that condition?

A. I have spoken of a digital unemployment that is not determined by the economic cycle. Digitalization will lead to massive unemployment. That unemployment will be a very serious problem in the future. Will the human future consist of basic income and computer games? A dispiriting prospect. With panem et circenses (bread and circuses), Juvenal was describing a Roman society in which political action is not possible. People are kept content with free food and spectacular games. Total domination is the kind in which people devote themselves only to play. The recent, hyperbolic Korean Netflix series Squid Game, in which everyone is devoted entirely to the game, points in that direction.

Q. In what sense?

A. Those people are deep in debt and give themselves over to a deadly game that promises enormous winnings. Squid Game depicts a central aspect of capitalism in an extreme format. Walter Benjamin said long ago that capitalism is the first case of a cult that does not atone but creates debt. At the dawn of digitalization, the dream was that it would replace work with play. In reality, digital capitalism mercilessly exploits the human drive to play. Think of social networks, which build in game-like elements to get their users addicted.

Q. Indeed, the smartphone promised us a certain freedom… Hasn’t it become a long chain that shackles us wherever we are?

A. The smartphone is today a digital workplace and a digital confessional. Every device, every technique of domination, generates its own devotional objects, which are used for subjugation. That is how domination consolidates itself. The smartphone is the cult object of digital domination. As an apparatus of subjugation, it works like a rosary and its beads; it is how we keep the phone constantly in hand. The like is the digital amen. We go on confessing. By our own choice, we lay ourselves bare. But we are not asking for forgiveness; we are asking to be noticed.

Q. Some fear that the internet of things could amount to something like a rebellion of objects against human beings.

A. Not exactly. The smart home, with its interconnected things, represents a digital prison. The smart bed, with its sensors, extends surveillance even into the hours of sleep. Surveillance creeps ever further into everyday life, dressed up as convenience. Informatized things, that is, infomats, turn out to be efficient informers that constantly monitor and steer us.

Q. You have described how work is taking on the character of a game, how social networks, paradoxically, make us feel freer, how capitalism seduces us. Has the system managed to get inside us, dominating us in a way that we ourselves find pleasurable?

A. Only a repressive regime provokes resistance. The neoliberal regime, by contrast, which does not suppress freedom but exploits it, faces no resistance at all. It is not repressive but seductive. Domination becomes complete the moment it presents itself as freedom.

Q. Why, despite growing precariousness and inequality, existential risks, and so on, does everyday life in Western countries look so pretty, so hyper-designed and optimistic? Why doesn’t it look like a dystopian, cyberpunk film?

A. George Orwell’s novel 1984 recently became a worldwide bestseller. People sense that something is not right with our digital comfort zone. But our society more closely resembles Aldous Huxley’s Brave New World. In 1984 people are controlled by the threat of harm. In Brave New World they are controlled by the administration of pleasure. The state distributes a drug called “soma” so that everyone feels happy. That is our future.

Q. You suggest that artificial intelligence and big data are not the astonishing forms of knowledge we are led to believe they are, but rather more “rudimentary” ones. Why?

A. Big data commands only a very primitive form of knowledge, namely correlation: A happens, then B occurs. There is no comprehension. Artificial intelligence does not think. Artificial intelligence does not feel fear.

Q. Blaise Pascal said that the great tragedy of the human being is the inability to sit still and do nothing. We live in a cult of productivity, even in the time we call “free.” You have called this, to great acclaim, the burnout society. Should we make the recovery of our own time a political goal?

A. Human existence today is totally absorbed by activity. It thereby makes itself completely exploitable. Inactivity reappears within the capitalist system of domination only as an incorporation of something external. It is called leisure time. Since it serves as recovery from work, it remains bound to work. As a derivative of work, it is a functional element within production. We need a politics of inactivity. That could free time from the obligations of production and make true leisure possible.

Q. How does a society that tries to homogenize us and erase differences square with people’s growing desire to be different from others, to be in some way unique?

A. Everyone today wants to be authentic, that is, different from everyone else. And so we are constantly comparing ourselves with others. It is precisely this comparison that makes us all the same. In other words: the obligation to be authentic leads to the hell of the same.

Q. Do we need more silence? To be more willing to listen to one another?

A. We need information to fall silent. Otherwise it will exploit our brains. Today we understand the world through information. And so first-hand experience of the world is lost. We are increasingly disconnected from the world. We keep losing the world. The world is more than information. The screen is a poor representation of the world. We circle around ourselves. The smartphone contributes decisively to this impoverished perception of the world. A fundamental symptom of depression is the absence of world.

Q. Depression is one of the most alarming health problems of our time. How does this absence of world operate?

A. In depression we lose our relation to the world, to the other. And we sink into a diffuse ego. I think digitalization, and with it the smartphone, is turning us into depressives. There are stories of dentists who say their patients cling to their phones when a treatment is painful. Why do they do it? Thanks to the phone, I am conscious of myself. The phone helps me feel certain that I am alive, that I exist. So we cling to the phone in critical situations, such as dental treatment. I remember that as a child I would squeeze my mother’s hand at the dentist. Today the mother does not give the child her hand but the phone to hold on to. Stability comes not from others but from oneself. That makes us ill. We have to win back the other.

Q. According to the philosopher Fredric Jameson, it is easier to imagine the end of the world than the end of capitalism. Have you imagined some form of post-capitalism, now that the system seems to be in decline?

A. Capitalism really does correspond to the instinctual structures of man. But man is not only an instinctual being. We have to tame, civilize, and humanize capitalism. That is also possible. The social market economy is one demonstration of it. But our economy is entering a new epoch, the epoch of sustainability.

Q. You earned your doctorate with a thesis on Heidegger, who explored the most abstract forms of thought and whose texts are deeply obscure to the lay reader. You, however, manage to apply that abstract thinking to things anyone can experience. Should philosophy concern itself more with the world most people actually live in?

A. Michel Foucault defined philosophy as a kind of radical journalism, and considered himself a journalist. Philosophers should deal squarely with the today, with the present. In that I follow Foucault. I try to interpret the today in thoughts. And it is precisely those thoughts that make us free.

The Facebook whistleblower says its algorithms are dangerous. Here’s why. (MIT Technology Review)

technologyreview.com

Frances Haugen’s testimony at the Senate hearing today raised serious questions about how Facebook’s algorithms work—and echoes many findings from our previous investigation.

October 5, 2021

Karen Hao


Facebook whistleblower Frances Haugen testifies during a Senate committee hearing on October 5. Drew Angerer/Getty Images

On Sunday night, the primary source for the Wall Street Journal’s Facebook Files, an investigative series based on internal Facebook documents, revealed her identity in an episode of 60 Minutes.

Frances Haugen, a former product manager at the company, says she came forward after she saw Facebook’s leadership repeatedly prioritize profit over safety.

Before quitting in May of this year, she combed through Facebook Workplace, the company’s internal employee social media network, and gathered a wide swath of internal reports and research in an attempt to conclusively demonstrate that Facebook had willfully chosen not to fix the problems on its platform.

Today she testified in front of the Senate on the impact of Facebook on society. She reiterated many of the findings from the internal research and implored Congress to act.

“I’m here today because I believe Facebook’s products harm children, stoke division, and weaken our democracy,” she said in her opening statement to lawmakers. “These problems are solvable. A safer, free-speech respecting, more enjoyable social media is possible. But there is one thing that I hope everyone takes away from these disclosures, it is that Facebook can change, but is clearly not going to do so on its own.”

During her testimony, Haugen particularly blamed Facebook’s algorithm and platform design decisions for many of its issues. This is a notable shift from the existing focus of policymakers on Facebook’s content policy and censorship—what does and doesn’t belong on Facebook. Many experts believe that this narrow view leads to a whack-a-mole strategy that misses the bigger picture.

“I’m a strong advocate for non-content-based solutions, because those solutions will protect the most vulnerable people in the world,” Haugen said, pointing to Facebook’s uneven ability to enforce its content policy in languages other than English.

Haugen’s testimony echoes many of the findings from an MIT Technology Review investigation published earlier this year, which drew upon dozens of interviews with Facebook executives, current and former employees, industry peers, and external experts. We pulled together the most relevant parts of our investigation and other reporting to give more context to Haugen’s testimony.

How does Facebook’s algorithm work?

Colloquially, we use the term “Facebook’s algorithm” as though there’s only one. In fact, Facebook decides how to target ads and rank content based on hundreds, perhaps thousands, of algorithms. Some of those algorithms tease out a user’s preferences and boost that kind of content up the user’s news feed. Others are for detecting specific types of bad content, like nudity, spam, or clickbait headlines, and deleting or pushing them down the feed.

All of these algorithms are known as machine-learning algorithms. As I wrote earlier this year:

Unlike traditional algorithms, which are hard-coded by engineers, machine-learning algorithms “train” on input data to learn the correlations within it. The trained algorithm, known as a machine-learning model, can then automate future decisions. An algorithm trained on ad click data, for example, might learn that women click on ads for yoga leggings more often than men. The resultant model will then serve more of those ads to women.

And because of Facebook’s enormous amounts of user data, it can

develop models that learned to infer the existence not only of broad categories like “women” and “men,” but of very fine-grained categories like “women between 25 and 34 who liked Facebook pages related to yoga,” and [target] ads to them. The finer-grained the targeting, the better the chance of a click, which would give advertisers more bang for their buck.

The same principles apply for ranking content in news feed:

Just as algorithms [can] be trained to predict who would click what ad, they [can] also be trained to predict who would like or share what post, and then give those posts more prominence. If the model determined that a person really liked dogs, for instance, friends’ posts about dogs would appear higher up on that user’s news feed.

Before Facebook began using machine-learning algorithms, teams used design tactics to increase engagement. They’d experiment with things like the color of a button or the frequency of notifications to keep users coming back to the platform. But machine-learning algorithms create a much more powerful feedback loop. Not only can they personalize what each user sees, they will also continue to evolve with a user’s shifting preferences, perpetually showing each person what will keep them most engaged.
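
This pattern can be sketched in a few lines of code. The toy example below uses invented features and scikit-learn's off-the-shelf logistic regression, not anything Facebook-specific: train on a user's past likes, then rank candidate posts by predicted engagement.

```python
from sklearn.linear_model import LogisticRegression

# Toy training data: one row per past post shown to a user.
# Features: [is_about_dogs, is_from_close_friend]; label: did they like it?
X = [[1, 1], [1, 0], [0, 1], [0, 0], [1, 1], [0, 0]]
y = [1, 1, 0, 0, 1, 0]  # this user reliably likes dog posts

model = LogisticRegression().fit(X, y)

# Rank today's candidate posts by predicted probability of a like.
candidates = {
    "dog photo":     [1, 0],
    "friend update": [0, 1],
    "news link":     [0, 0],
}
ranked = sorted(candidates,
                key=lambda post: model.predict_proba([candidates[post]])[0][1],
                reverse=True)
print(ranked)  # dog posts float to the top of this user's feed
```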

Who runs Facebook’s algorithm?

Within Facebook, there’s no one team in charge of this content-ranking system in its entirety. Engineers develop and add their own machine-learning models into the mix, based on their team’s objectives. For example, teams focused on removing or demoting bad content, known as the integrity teams, will only train models for detecting different types of bad content.

This was a decision Facebook made early on as part of its “move fast and break things” culture. It developed an internal tool known as FBLearner Flow that made it easy for engineers without machine-learning experience to develop whatever models they needed. By one data point, it was already in use by more than a quarter of Facebook’s engineering team in 2016.

Many of the current and former Facebook employees I’ve spoken to say that this is part of why Facebook can’t seem to get a handle on what it serves up to users in the news feed. Different teams can have competing objectives, and the system has grown so complex and unwieldy that no one can keep track anymore of all of its different components.

As a result, the company’s main process for quality control is through experimentation and measurement. As I wrote:

Teams train up a new machine-learning model on FBLearner, whether to change the ranking order of posts or to better catch content that violates Facebook’s community standards (its rules on what is and isn’t allowed on the platform). Then they test the new model on a small subset of Facebook’s users to measure how it changes engagement metrics, such as the number of likes, comments, and shares, says Krishna Gade, who served as the engineering manager for news feed from 2016 to 2018.

If a model reduces engagement too much, it’s discarded. Otherwise, it’s deployed and continually monitored. On Twitter, Gade explained that his engineers would get notifications every few days when metrics such as likes or comments were down. Then they’d decipher what had caused the problem and whether any models needed retraining.
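
A simplified sketch of that monitoring loop, with made-up metric names, numbers, and tolerance, might look like this:

```python
def check_metrics(today: dict, baseline: dict, tolerance: float = 0.05) -> list:
    """Flag any engagement metric that fell more than `tolerance` below baseline."""
    alerts = []
    for metric, value in today.items():
        drop = (baseline[metric] - value) / baseline[metric]
        if drop > tolerance:
            alerts.append(f"{metric} down {drop:.0%} vs. baseline")
    return alerts

baseline = {"likes": 1_000_000, "comments": 250_000, "shares": 90_000}
today    = {"likes":   900_000, "comments": 248_000, "shares": 72_000}

# Drops beyond the tolerance would prompt engineers to investigate whether
# a recently deployed model needs retraining or rolling back.
print(check_metrics(today, baseline))
# ['likes down 10% vs. baseline', 'shares down 20% vs. baseline']
```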

How has Facebook’s content ranking led to the spread of misinformation and hate speech?

During her testimony, Haugen repeatedly came back to the idea that Facebook’s algorithm incites misinformation, hate speech, and even ethnic violence. 

“Facebook … knows—they have admitted in public—that engagement-based ranking is dangerous without integrity and security systems but then not rolled out those integrity and security systems in most of the languages in the world,” she told the Senate today. “It is pulling families apart. And in places like Ethiopia it is literally fanning ethnic violence.”

Here’s what I’ve written about this previously:

The machine-learning models that maximize engagement also favor controversy, misinformation, and extremism: put simply, people just like outrageous stuff.

Sometimes this inflames existing political tensions. The most devastating example to date is the case of Myanmar, where viral fake news and hate speech about the Rohingya Muslim minority escalated the country’s religious conflict into a full-blown genocide. Facebook admitted in 2018, after years of downplaying its role, that it had not done enough “to help prevent our platform from being used to foment division and incite offline violence.”

As Haugen mentioned, Facebook has also known this for a while. Previous reporting has found that it’s been studying the phenomenon since at least 2016.

In an internal presentation from that year, reviewed by the Wall Street Journal, a company researcher, Monica Lee, found that Facebook was not only hosting a large number of extremist groups but also promoting them to its users: “64% of all extremist group joins are due to our recommendation tools,” the presentation said, predominantly thanks to the models behind the “Groups You Should Join” and “Discover” features.

In 2017, Chris Cox, Facebook’s longtime chief product officer, formed a new task force to understand whether maximizing user engagement on Facebook was contributing to political polarization. It found that there was indeed a correlation, and that reducing polarization would mean taking a hit on engagement. In a mid-2018 document reviewed by the Journal, the task force proposed several potential fixes, such as tweaking the recommendation algorithms to suggest a more diverse range of groups for people to join. But it acknowledged that some of the ideas were “antigrowth.” Most of the proposals didn’t move forward, and the task force disbanded.

In my own conversations, Facebook employees also corroborated these findings.

A former Facebook AI researcher who joined in 2018 says he and his team conducted “study after study” confirming the same basic idea: models that maximize engagement increase polarization. They could easily track how strongly users agreed or disagreed on different issues, what content they liked to engage with, and how their stances changed as a result. Regardless of the issue, the models learned to feed users increasingly extreme viewpoints. “Over time they measurably become more polarized,” he says.

In her testimony, Haugen also repeatedly emphasized how these phenomena are far worse in regions that don’t speak English because of Facebook’s uneven coverage of different languages.

“In the case of Ethiopia there are 100 million people and six languages. Facebook only supports two of those languages for integrity systems,” she said. “This strategy of focusing on language-specific, content-specific systems for AI to save us is doomed to fail.”

She continued: “So investing in non-content-based ways to slow the platform down not only protects our freedom of speech, it protects people’s lives.”

I explore this more in a different article from earlier this year on the limitations of large language models, or LLMs:

Despite LLMs having these linguistic deficiencies, Facebook relies heavily on them to automate its content moderation globally. When the war in Tigray[, Ethiopia] first broke out in November, [AI ethics researcher Timnit] Gebru saw the platform flounder to get a handle on the flurry of misinformation. This is emblematic of a persistent pattern that researchers have observed in content moderation. Communities that speak languages not prioritized by Silicon Valley suffer the most hostile digital environments.

Gebru noted that this isn’t where the harm ends, either. When fake news, hate speech, and even death threats aren’t moderated out, they are then scraped as training data to build the next generation of LLMs. And those models, parroting back what they’re trained on, end up regurgitating these toxic linguistic patterns on the internet.

How does Facebook’s content ranking relate to teen mental health?

One of the more shocking revelations from the Journal’s Facebook Files was Instagram’s internal research, which found that its platform is worsening mental health among teenage girls. “Thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse,” researchers wrote in a slide presentation from March 2020.

Haugen connects this phenomenon to engagement-based ranking as well, which she told the Senate today “is causing teenagers to be exposed to more anorexia content.”

“If Instagram is such a positive force, have we seen a golden age of teenage mental health in the last 10 years? No, we have seen escalating rates of suicide and depression amongst teenagers,” she continued. “There’s a broad swath of research that supports the idea that the usage of social media amplifies the risk of these mental health harms.”

In my own reporting, I heard from a former AI researcher who also saw this effect extend to Facebook.

The researcher’s team…found that users with a tendency to post or engage with melancholy content—a possible sign of depression—could easily spiral into consuming increasingly negative material that risked further worsening their mental health.

But as with Haugen, the researcher found that leadership wasn’t interested in making fundamental algorithmic changes.

The team proposed tweaking the content-ranking models for these users to stop maximizing engagement alone, so they would be shown less of the depressing stuff. “The question for leadership was: Should we be optimizing for engagement if you find that somebody is in a vulnerable state of mind?” he remembers.

But anything that reduced engagement, even for reasons such as not exacerbating someone’s depression, led to a lot of hemming and hawing among leadership. With their performance reviews and salaries tied to the successful completion of projects, employees quickly learned to drop those that received pushback and continue working on those dictated from the top down….

That former employee, meanwhile, no longer lets his daughter use Facebook.

How do we fix this?

Haugen is against breaking up Facebook or repealing Section 230 of the US Communications Decency Act, which shields tech platforms from responsibility for the content they distribute.

Instead, she recommends carving out a more targeted exemption in Section 230 for algorithmic ranking, which she argues would “get rid of the engagement-based ranking.” She also advocates for a return to Facebook’s chronological news feed.
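The difference between the two designs is easy to see in miniature. A minimal sketch with invented posts and scores (not Facebook's actual data model): engagement-based ranking orders the feed by a predicted-engagement score, while the chronological feed Haugen advocates simply orders by timestamp.

```python
from datetime import datetime

# Each post carries a timestamp and a predicted-engagement score.
posts = [
    {"id": 1, "time": datetime(2021, 10, 5, 9, 0), "engagement": 0.9},
    {"id": 2, "time": datetime(2021, 10, 5, 11, 30), "engagement": 0.2},
    {"id": 3, "time": datetime(2021, 10, 5, 10, 15), "engagement": 0.6},
]

# Engagement-based ranking: the most provocative content surfaces first.
by_engagement = sorted(posts, key=lambda p: p["engagement"], reverse=True)

# Chronological feed: newest first, no model deciding what you see.
by_time = sorted(posts, key=lambda p: p["time"], reverse=True)

print([p["id"] for p in by_engagement])  # [1, 3, 2]
print([p["id"] for p in by_time])        # [2, 3, 1]
```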

Ellery Roberts Biddle, a projects director at Ranking Digital Rights, a nonprofit that studies social media ranking systems and their impact on human rights, says a Section 230 carve-out would need to be vetted carefully: “I think it would have a narrow implication. I don’t think it would quite achieve what we might hope for.”

In order for such a carve-out to be actionable, she says, policymakers and the public would need to have a much greater level of transparency into how Facebook’s ad-targeting and content-ranking systems even work. “I understand Haugen’s intention—it makes sense,” she says. “But it’s tough. We haven’t actually answered the question of transparency around algorithms yet. There’s a lot more to do.”

Nonetheless, Haugen’s revelations and testimony have brought renewed attention to what many experts and Facebook employees have been saying for years: that unless Facebook changes the fundamental design of its algorithms, it will not make a meaningful dent in the platform’s issues. 

Her intervention also raises the prospect that if Facebook cannot put its own house in order, policymakers may force the issue.

“Congress can change the rules that Facebook plays by and stop the many harms it is now causing,” Haugen told the Senate. “I came forward at great personal risk because I believe we still have time to act, but we must act now.”

Pope Francis asks for prayers for robots and AI (Tecmundo)

11/11/2020 at 18:30, 1 min read

Jorge Marin

Pope Francis has asked the faithful around the world to pray, during the month of November, that progress in robotics and artificial intelligence (AI) may always serve humanity.

The message is part of a series of prayer intentions that the pontiff announces annually and shares each month on YouTube to help Catholics "deepen their daily prayer" by focusing on specific topics. In September the pope asked for prayers for "the sharing of the planet's resources"; in August, for "the maritime world"; and now it is the turn of robots and AI.

In his message, Pope Francis called for special attention to AI, which, he says, is "at the heart of the historic change we are experiencing." And the issue is not just the benefits that robotics can bring to the world.

Technological progress and algorithms

Francis notes that technological progress is not always a sign of well-being for humanity: if it contributes to increasing inequality, it cannot be considered true progress. "Future advances should be oriented toward respecting the dignity of the person," the pope warns.

Concern that technology could deepen existing social divisions led the Vatican to sign earlier this year, together with Microsoft and IBM, the "Rome Call for AI Ethics," a document that lays down principles to guide the deployment of AI: transparency, inclusion, impartiality, and reliability.

Even people who are not religious can recognize that, when it comes to deploying algorithms, the pope's concern makes perfect sense.

Scientists plan digital resurrection with bots and humanoids (Canal Tech)

By Natalie Rosa | June 25, 2020, 4:40 pm

In February of this year, the world was taken aback by the story of Jang Ji-sung, a South Korean woman who was "reunited" with her deceased daughter thanks to artificial intelligence. The girl died in 2016 of a blood disease.

In the simulated encounter, the image of little Nayeon is displayed to her mother, who stands in front of a green screen, also known as chroma key, wearing a virtual-reality headset. The interaction was not only visual: it was also possible to talk and play with the child. According to Jang, the experience was like a dream she had always wanted to have.

Jang Ji-sung's encounter with the digitized form of her daughter (Image: Reproduction)

However difficult this trend may seem to carry out at scale in real life, and however old a preoccupation it is of science-fiction productions, there are people interested in this form of immortality. The question that remains, though, is whether we should do it, and how it would happen.

In an interview with CNET, John Troyer, director of the Centre for Death and Society at the University of Bath, in England, and author of the book Technologies of the Human Corpse, says that the modern interest in immortality began back in the 1960s. At the time, many people believed in the idea of cryonic preservation of bodies, in which a corpse, or just a human head, was frozen in the hope of being resuscitated in the future. To date, no attempt has been made to revive them.

"There was a shift in the science of death at that time, and the idea that humans could somehow defeat death," explains Troyer. The specialist also says that there is still no peer-reviewed research showing that investing millions in uploading the brain's data, or in keeping a body alive, is worth it.

In 2016, a study published in the academic journal Plos One reported that exposing a preserved brain to chemical and electrical probes makes it function again. "All of this is a bet on what is possible in the future. But I'm not convinced it's possible in the way they are describing or wishing," he adds.

Overcoming grief

The case in South Korea is not the only one involving grief. In 2015, Eugenia Kuyda, co-founder and CEO of the software company Replika, suffered the loss of her best friend Roman, who was struck by a car in Moscow, Russia. The executive then decided to create a chatbot trained on thousands of text messages the two had exchanged over the years, resulting in a digital version of Roman that can talk with friends and family.

"It was very emotional. I wasn't expecting to feel that way, because I had worked on that chatbot and knew how it was built," says Kuyda. The experience is strongly reminiscent of an episode of the series Black Mirror, which explores a dystopian future of technology. In Be Right Back, from 2013, a young woman loses her boyfriend in a car accident and signs up for a project that lets her communicate with "him" digitally, thanks to artificial intelligence.

Kuyda notes, on the other hand, that the project was not created to be commercialized, but rather as a personal way of coping with the loss of her best friend. She says that anyone who tries to reproduce the feat will run into a series of obstacles and difficulties, such as deciding what kind of information should be considered public or private, or with whom the chatbot may interact. The way one talks with a friend, for example, is not the same as with family members, and Kuyda says there is no way to make that distinction.

A digital version of a person will not come up with new conversations or voice new opinions; it basically replicates phrases and words already said, fitting them to the chat. "We leave an insane amount of data, but most of it is not personal, private, or based on what kind of person we are," says Kuyda. In response to CNET, the executive says it is impossible to obtain 100% accurate data about a person, since there is currently no technology that can capture what is going on in our minds.
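A minimal sketch of the retrieval-style behavior Kuyda describes, with an invented message corpus: the bot never composes new opinions, it only returns whichever stored phrase best matches the incoming message (here via TF-IDF cosine similarity from scikit-learn; the real Replika system is far more sophisticated).

```python
# A toy retrieval chatbot: it can only echo back phrases its "person"
# already wrote, picking the stored line most similar to the prompt.
# Hypothetical example, not Replika's actual implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

saved_messages = [  # stand-in corpus of past text messages
    "I'm stuck in traffic, be there soon.",
    "That movie was amazing, we should go again.",
    "Don't worry, everything will work out.",
]

vectorizer = TfidfVectorizer()
corpus_vectors = vectorizer.fit_transform(saved_messages)

def reply(prompt: str) -> str:
    """Return the stored phrase closest to the prompt; never new text."""
    prompt_vector = vectorizer.transform([prompt])
    scores = cosine_similarity(prompt_vector, corpus_vectors)[0]
    return saved_messages[scores.argmax()]

print(reply("I always worry too much"))
# -> "Don't worry, everything will work out."
```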

Data collection thus ends up being the biggest barrier to creating any kind of software that represents a person after death. Part of the problem is that most content posted online sits with a company and comes to belong to the platform, so if the company one day shuts down, the data disappears with it. For Troyer, memory technology does not tend to survive the passage of time.


Fresh brains

The startup Nectome has been dedicating itself to brain preservation, with an eye to the possible extraction of memories after death. For that to happen, however, the organ needs to be "fresh," which would mean that death would have to come by euthanasia.

The startup's goal is to run tests with volunteers who are terminally ill and permitted physician-assisted suicide. So far Nectome has collected US$ 10,000 in refundable deposits for a waiting list for the procedure, should the opportunity one day become available. For now, the company still has clinical trials ahead of it.

The startup has raised a million dollars in funding and had been collaborating with an MIT neuroscientist. But when the story was published it drew heavy criticism from scientists and ethicists, and MIT ended its contract with the startup. Critics asserted that the company's project cannot be realized.

See the statement MIT issued at the time:

"Neuroscience has not sufficiently advanced to the point where we know whether any brain-preservation method is powerful enough to preserve the different kinds of biomolecules related to memory and the mind. It is also not known whether it is possible to recreate a person's consciousness," the note said back in 2018.

Immortality through augmented reality

While some think about extracting the mind from a brain, other companies opt for a simpler, though no less invasive, "resurrection." The company Augmented Reality, for example, aims to help people live on in digital form, passing the knowledge of today's people to future generations.

Hossein Rahnama, founder and CEO of the computing company FlyBits and a professor at the MIT Media Lab, has been trying to build software agents that can act as digital heirs. "Millennials are creating gigabytes of data on a daily basis, and we are reaching a level of maturity where we can actually create a digital version of ourselves," he says.

To put the project into action, Augmented Reality feeds a machine-learning engine with people's emails, photos, and social-media activity, analyzing how they think and act. That makes it possible to supply a digital copy of a real person, which can interact via chatbot, digitally edited video, or even a humanoid robot.

Speaking of humanoids: the Intelligent Robotics laboratory at Osaka University, in Japan, already houses more than 30 human-like androids, including a robotic version of Hiroshi Ishiguro, the lab's director. The scientist has been breaking new ground in research on human-robot interaction, studying the importance of details such as subtle eye movements and facial expressions.

Image: Hiroshi Ishiguro Laboratory, ATR

When Ishiguro dies, he says, his robot may replace him to teach his students, even though the machine will never really be him and cannot generate new ideas. "We cannot transmit our consciousness to robots. We share, perhaps, the memories. A robot can say 'I am Hiroshi Ishiguro,' but even so the consciousness is independent," he states.

For Ishiguro, none of this will come to resemble science fiction. Memory downloads, for example, will not happen, because they are simply not possible. "We would need different ways of making a copy of our brains, but we don't yet know how to do that," he concludes.


Rise of the internet has reduced voter turnout (Science Daily)

Date:
September 16, 2016
Source:
University of Bristol
Summary:
During the initial phase of the internet, a “crowding-out” of political information occurred, which has affected voter turnout, new research shows.

The internet has transformed the way in which voters access and receive political information. It has allowed politicians to directly communicate their message to voters, circumventing the mainstream media which would traditionally filter information.

Writing in IZA World of Labor, Dr Heblich, from the University of Bristol's Department of Economics, presents research from a number of countries comparing the voting behaviour of municipalities with internet access to those without in the early 2000s. It shows that municipalities which gained broadband internet access saw a decrease in voter turnout: voters suddenly faced an overwhelmingly large pool of information and did not know how to filter relevant knowledge efficiently. The internet also seemed to have crowded out other media at the expense of information quality.
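In miniature, the comparison works like the sketch below, which uses invented turnout figures rather than Dr Heblich's data: average turnout in municipalities that gained broadband is set against the average in those that did not.

```python
# Illustrative only: invented turnout shares, not Dr Heblich's data.
with_broadband = [0.61, 0.58, 0.63, 0.57]
without_broadband = [0.66, 0.64, 0.69, 0.65]

def mean(xs):
    return sum(xs) / len(xs)

gap = mean(with_broadband) - mean(without_broadband)
print(f"turnout gap: {gap:+.3f}")  # about -0.06: lower where broadband arrived
```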

However, the introduction of interactive social media and “user-defined” content appears to have reversed this. It helped voters to collect information more efficiently. Barack Obama’s successful election campaign in 2008 set the path for this development. In the so-called “Facebook election,” Obama successfully employed Chris Hughes, a Facebook co-founder, to lead his highly effective election campaign.

Using a combination of social networks, podcasts, and mobile messages, Obama connected directly with (young) American voters. In doing so, he gained nearly 70 per cent of the votes among Americans under the age of 25.

But there is a downside: voters can now be personally identified and strategically influenced by targeted information. What if politicians use this information in election campaigns to target voters that are easy to mobilize?

Dr Heblich’s research shows there is a thin line between desirable benefits of more efficient information dissemination and undesirable possibilities of voter manipulation. Therefore, policymakers need to consider introducing measures to educate voters to become more discriminating in their use of the internet.

Dr Heblich said: “To the extent that online consumption replaces the consumption of other media (newspapers, radio, or television) with a higher information content, there may be no information gains for the average voter and, in the worst case, even a crowding-out of information.

“One potential risk relates to the increasing possibilities to collect personal information known as ‘big data’. This development could result in situations in which individual rights are violated, since the personal information could be used, for example, to selectively disseminate information in election campaigns and to influence voters strategically.”

See the report at: http://wol.iza.org/articles/effect-of-internet-on-voting-behavior

A drone that 'seeds' clouds to trigger rain is put to the test (El Mundo)

The Savant drone weighs 24 kilos and has a 3-meter wingspan. KEVIN CLIFFORD

The experimental flight took place in drought-stricken Nevada (US)

EUROPA PRESS

25/05/2016 19:46

An unmanned aircraft has for the first time successfully tested what is known as cloud "seeding," with which scientists aim to induce rain in times of drought. The experimental flight, by the Desert Research Institute (DRI), was carried out in Nevada (United States).

The drone, known as Savant, reached an altitude of more than 120 meters and flew for roughly 18 minutes. "This is a great achievement," said the project's lead scientist, Adam Watts, an expert in ecological and natural-resource applications.

The first-of-its-kind project is helping the State of Nevada address the continuing impacts of drought and explore innovative solutions to the scarcity of resources, such as boosting regional water supplies.

The research team draws on more than 30 years of research and experience in weather modification, with proven expertise in aerospace manufacturing and unmanned-aircraft flight operations, according to the DRI's website.

"We have reached another important milestone in our effort to reduce the risks and costs of the cloud-seeding industry and to help mitigate natural disasters caused by drought, hail and extreme fog," said Mike Richards, CEO of the unmanned-aircraft company Drone America.

"With a 3-meter wingspan and weighing some 24 kilos, Savant is the perfect vehicle for carrying out this type of operation, thanks to its superior flight profile, its time aloft, and its resistance to wind and other adverse weather conditions," Richards noted.


Who is dissolving the clouds in Andalusia?

Miguel del Pino, of Asaja Granada, shows a photo of one of the light aircraft. M. RODRÍGUEZ

The agricultural employers' association Asaja denounces the 'seeding' of silver iodide

It asks for the activity to be regulated by law to prevent damage

RAMÓN RAMOS, Granada

07/04/2016 19:31

It is neither urban legend nor science fiction: the "cloud-busting" light aircraft exist, and their activity is harmful to crops in the areas where they operate. The latest episode has a date and a time. It was detected last Monday, the 4th, at 3:50 pm in the Marquesado district of Granada. That day the weather forecast called for up to 30 liters of rain per square meter, and the black clouds hanging over the sky seemed to confirm the omen. At the appointed hour a light aircraft appeared from the north, flew over the district from east to west, and disappeared. The clouds changed color, from white to black, and the promised rain amounted to just six liters per square meter, says Luis Ramírez, a farmer from Huéneja affected by the activity of these "phantom" flights.

For the farmers, the color change in the clouds and the consequent drop in badly needed rainfall have an explanation: the "seeding" of the clouds with silver iodide, a chemical that acts by crystallizing the water condensed in the clouds.

Asaja, the agricultural employers' organization, has erupted against this practice, which is not exclusive to the province of Granada and is tied to the presumed interests of solar-energy companies and large agricultural estates, typically located in the areas where the planes operate: Spain's Levante region and also Soria.

The organization has begun collecting signatures, aiming for the 500,000 needed to promote a legislative initiative that would ban by law these "cloud-busting" interventions, which alter hydrological cycles, aggravating drought and damaging crops.

Animal pastures affected

Along these lines, the Platform for the Defense of the Environment and Nature of the Marquesado and Río Nacimiento Districts was formed last year, in areas where the action of the "cloud-busting" planes is affecting cereal and almond crops and also harming the growth of the pastures that feed livestock.

Asaja warns that possible intervention in the atmospheric phase of the integral water cycle is covered by the Water Law and the Regulation of the Public Hydraulic Domain, for the purpose of preventing precipitation in the form of damaging hail.

In the plains of the Marquesado and other bordering areas such as Guadix, Gor, Los Montes Orientales and Río Nacimiento, Almería, a stretch of arable land covering more than 30,000 hectares, people are used to the sound of low-flying planes hidden in the clouds whenever a storm warning is issued, "and it is a fact that for the past five years hardly any rain has fallen there," says Asaja's provincial president, Manuel del Pino.

In that area, cereal growing has disappeared because harvests fell to zero, and farmers are trying to save their livelihoods by converting the fallow hectares to almond trees, which are hardier and offer better technical prospects for production, while extensive livestock farming also suffers from the lack of pasture. These are arid lands, but with the artificial interference in rainfall patterns being practiced on them, "legal or not," they are turning even more into desert.

The flights of the "cloud-busting" planes were first detected in the north of Granada province in the mid-1990s, at the height of a drought. Their activity has resumed over the past five years. Farmers' complaints to the Guardia Civil have come to nothing, because flights below 3,000 meters need not be reported and the practice is, moreover, permitted and regulated under Spanish law for the purpose of preventing damaging hail.

Asaja maintains that governments know about this practice but "do not clarify certain questions, such as where the planes come from, who is behind them, and what interests are being pursued, be they insurance companies trying to avoid payouts, large corporations protecting their crops, solar-energy companies, the pharmaceutical industry, or even security matters."

Curtailing global warming with bioengineering? Iron fertilization won’t work in much of Pacific (Science Daily)

Earth’s own experiments during ice ages showed little effect

Date:
May 16, 2016
Source:
The Earth Institute at Columbia University
Summary:
Over the past half-million years, the equatorial Pacific Ocean has seen five spikes in the amount of iron-laden dust blown in from the continents. In theory, those bursts should have turbo-charged the growth of the ocean’s carbon-capturing algae — algae need iron to grow — but a new study shows that the excess iron had little to no effect.

With the right mix of nutrients, phytoplankton grow quickly, creating blooms visible from space. This image, created from MODIS data, shows a phytoplankton bloom off New Zealand. Credit: Robert Simmon and Jesse Allen/NASA

Over the past half-million years, the equatorial Pacific Ocean has seen five spikes in the amount of iron-laden dust blown in from the continents. In theory, those bursts should have turbo-charged the growth of the ocean’s carbon-capturing algae — algae need iron to grow — but a new study shows that the excess iron had little to no effect.

The results are important today, because as groups search for ways to combat climate change, some are exploring fertilizing the oceans with iron as a solution.

Algae absorb carbon dioxide (CO2), a greenhouse gas that contributes to global warming. Proponents of iron fertilization argue that adding iron to the oceans would fuel the growth of algae, which would absorb more CO2 and sink it to the ocean floor. The most promising ocean regions are those high in nutrients but low in chlorophyll, a sign that algae aren’t as productive as they could be. The Southern Ocean, the North Pacific, and the equatorial Pacific all fit that description. What’s missing, proponents say, is enough iron.

However, the new study, published this week in the Proceedings of the National Academy of Sciences, adds to growing evidence that iron fertilization might not work in the equatorial Pacific as suggested.

Essentially, earth has already run its own large-scale iron fertilization experiments. During the ice ages, nearly three times more airborne iron blew into the equatorial Pacific than during non-glacial periods, but the new study shows that that increase didn’t affect biological productivity. At some points, as levels of iron-bearing dust increased, productivity actually decreased.

What matters instead in the equatorial Pacific is how iron and other nutrients are stirred up from below by upwelling fueled by ocean circulation, said lead author Gisela Winckler, a geochemist at Columbia University’s Lamont-Doherty Earth Observatory. The study found seven to 100 times more iron was supplied from the equatorial undercurrent than from airborne dust at sites spread across the equatorial Pacific. The authors write that although all of the nutrients might not be used immediately, they are used up over time, so the biological pump is already operating at full efficiency.

“Capturing carbon dioxide is what it’s all about: does iron raining in with airborne dust drive the capture of atmospheric CO2? We found that it doesn’t, at least not in the equatorial Pacific,” Winckler said.

The new findings don’t rule out iron fertilization elsewhere. Winckler and coauthor Robert Anderson of Lamont-Doherty Earth Observatory are involved in ongoing research that is exploring the effects of iron from dust on the Southern Ocean, where airborne dust supplies a larger share of the iron reaching the surface.

The PNAS paper follows another paper Winckler and Anderson coauthored earlier this year in Nature with Lamont graduate student Kassandra Costa looking at the biological response to iron in the equatorial Pacific during just the last glacial maximum, some 20,000 years ago. The new paper expands that study from a snapshot in time to a time series across the past 500,000 years. It confirms that Costa’s finding, that iron fertilization had no effect then, fit a pattern that extends across the past five glacial periods.

To gauge how productive the algae were, the scientists in the PNAS paper used deep-sea sediment cores from three locations in the equatorial Pacific that captured 500,000 years of ocean history. They tested along those cores for barium, a measure of how much organic matter is exported to the sea floor at each point in time, and for opal, a silicate mineral that comes from diatoms. Measures of thorium-232 reflected the amount of dust that blew in from land at each point in time.

“Neither natural variability of iron sources in the past nor purposeful addition of iron to equatorial Pacific surface water today, proposed as a mechanism for mitigating the anthropogenic increase in atmospheric CO2 inventory, would have a significant impact,” the authors concluded.

Past experiments with iron fertilization have had mixed results. The European Iron Fertilization Experiment (EIFEX) in 2004, for example, added iron in the Southern Ocean and was able to produce a burst of diatoms, which captured CO2 in their organic tissue and sank to the ocean floor. However, the German-Indian LOHAFEX project in 2009 experimented in a nearby location in the South Atlantic and found few diatoms. Instead, most of its algae were eaten up by tiny marine creatures, passing CO2 into the food chain rather than sinking it. In the LOHAFEX case, the scientists determined that another nutrient that diatoms need — silicic acid — was lacking.

The Intergovernmental Panel on Climate Change (IPCC) cautiously discusses iron fertilization in its latest report on climate change mitigation. It warns of potential risks, including the impact that higher productivity in one area may have on nutrients needed by marine life downstream, and the potential for expanding low-oxygen zones, increasing acidification of the deep ocean, and increasing nitrous oxide, a greenhouse gas more potent than CO2.

“While it is well recognized that atmospheric dust plays a significant role in the climate system by changing planetary albedo, the study by Winckler et al. convincingly shows that dust and its associated iron content is not a key player in regulating the oceanic sequestration of CO2 in the equatorial Pacific on large spatial and temporal scales,” said Stephanie Kienast, a marine geologist and paleoceanographer at Dalhousie University who was not involved in the study. “The classic paradigm of ocean fertilization by iron during dustier glacials can thus be rejected for the equatorial Pacific, similar to the Northwest Pacific.”


Journal Reference:

  1. Gisela Winckler, Robert F. Anderson, Samuel L. Jaccard, and Franco Marcantonio. Ocean dynamics, not dust, have controlled equatorial Pacific productivity over the past 500,000 years. PNAS, May 16, 2016. DOI: 10.1073/pnas.1600616113

Artificial intelligence replaces physicists (Science Daily)

Date:
May 16, 2016
Source:
Australian National University
Summary:
Physicists are putting themselves out of a job, using artificial intelligence to run a complex experiment. The experiment created an extremely cold gas trapped in a laser beam, known as a Bose-Einstein condensate, replicating the experiment that won the 2001 Nobel Prize.

The experiment, featuring the small red glow of a BEC trapped in infrared laser beams. Credit: Stuart Hay, ANU

Physicists are putting themselves out of a job, using artificial intelligence to run a complex experiment.

The experiment, developed by physicists from The Australian National University (ANU) and UNSW ADFA, created an extremely cold gas trapped in a laser beam, known as a Bose-Einstein condensate, replicating the experiment that won the 2001 Nobel Prize.

“I didn’t expect the machine could learn to do the experiment itself, from scratch, in under an hour,” said co-lead researcher Paul Wigley from the ANU Research School of Physics and Engineering.

“A simple computer program would have taken longer than the age of the Universe to run through all the combinations and work this out.”

Bose-Einstein condensates are some of the coldest places in the Universe, far colder than outer space, typically less than a billionth of a degree above absolute zero.

They could be used in mineral exploration or navigation systems because they are extremely sensitive to external disturbances, which allows them to make very precise measurements of tiny changes in the Earth’s magnetic field or gravity.

The artificial intelligence system’s ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA.

“You could make a working device to measure gravity that you could take in the back of a car, and the artificial intelligence would recalibrate and fix itself no matter what,” he said.

“It’s cheaper than taking a physicist everywhere with you.”

The team cooled the gas to around 1 microkelvin, and then handed control of the three laser beams over to the artificial intelligence to cool the trapped gas down to nanokelvin.

Researchers were surprised by the methods the system came up with to ramp down the power of the lasers.

“It did things a person wouldn’t guess, such as changing one laser’s power up and down, and compensating with another,” said Mr Wigley.

“It may be able to come up with complicated ways humans haven’t thought of to get experiments colder and make measurements more precise.”
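Stripped to its skeleton, the loop the researchers describe looks something like the sketch below. This is a simplification under invented assumptions: the actual experiment used a machine-learning optimizer rather than the naive hill climbing shown here, and `condensate_quality` merely stands in for running the apparatus once and measuring the resulting atom cloud.

```python
import random

def condensate_quality(ramp: list) -> float:
    """Stand-in for one experimental run with a given set of laser powers.
    Invented objective: quality peaks when the powers ramp down smoothly."""
    ideal = [0.8, 0.4, 0.1]
    return -sum((r - i) ** 2 for r, i in zip(ramp, ideal))

# Online optimization loop: propose settings, run, keep the best so far.
best_ramp = [random.random() for _ in range(3)]
best_score = condensate_quality(best_ramp)
for _ in range(200):
    candidate = [min(1.0, max(0.0, r + random.gauss(0, 0.1))) for r in best_ramp]
    score = condensate_quality(candidate)
    if score > best_score:  # greedy acceptance, i.e. naive hill climbing
        best_ramp, best_score = candidate, score

print([round(r, 2) for r in best_ramp])  # converges near the invented ideal
```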

The new technique will lead to bigger and better experiments, said Dr Hush.

“Next we plan to employ the artificial intelligence to build an even larger Bose-Einstein condensate faster than we’ve seen ever before,” he said.

The research is published in the Nature group journal Scientific Reports.


Journal Reference:

  1. P. B. Wigley, P. J. Everitt, A. van den Hengel, J. W. Bastian, M. A. Sooriyabandara, G. D. McDonald, K. S. Hardman, C. D. Quinlivan, P. Manju, C. C. N. Kuhn, I. R. Petersen, A. N. Luiten, J. J. Hope, N. P. Robins, M. R. Hush. Fast machine-learning online optimization of ultra-cold-atom experiments. Scientific Reports, 2016; 6: 25890. DOI: 10.1038/srep25890

Is there a limit to technological advances? (OESP)

May 16, 2016 | 3:00 am

It is becoming popular among politicians and governments to claim that the stagnation of the world economy is due to the end of the "golden century" of scientific and technological innovation. This "golden century" is usually defined as the period from 1870 to 1970, in which the foundations of the technological era we live in were laid.

Indeed, that period saw great advances in our knowledge, ranging from Darwin's Theory of Evolution to the discovery of the laws of electromagnetism, which led to the production of electricity on a large scale and to telecommunications, including radio and television, with the resulting benefits for the well-being of populations. Other advances, in medicine, such as vaccines and antibiotics, extended the average human lifespan. The discovery and use of oil and natural gas also fall within this period.

Many argue that in no other one-century span of humanity's 10,000-year history was so much progress achieved. That view of history, however, can be and has been questioned. The preceding century, from 1770 to 1870, also saw great progress, driven by the development of coal-burning engines, which made locomotives possible and set off the Industrial Revolution.

Even so, the nostalgic believe that the "golden period" of innovation has exhausted itself, and governments have accordingly adopted purely economic measures to revive "progress": subsidies for specific sectors, tax cuts and social policies to reduce inequality, among others, while neglecting support for science and technology.

Some of these policies could help, but they do not touch the heart of the problem, which is keeping alive the advance of science and technology that solved problems in the past and can help solve problems in the future.

To analyze the question properly, remember that it is not the number of new discoveries that guarantees their relevance. The advance of technology somewhat resembles what sometimes happens in the natural selection of living beings: some species are so well adapted to their environment that they stop "evolving". That is the case of the beetles that existed at the height of ancient Egypt, 5,000 years ago, and are still here today, or of "fossil" fish species that have evolved little in millions of years.

Other examples are products of modern technology, such as the magnificent DC-3 aircraft, built more than 50 years ago and still responsible for a significant share of world air traffic.

Even in more sophisticated areas, such as computing, this seems to be happening. The basis of progress in this field was the "miniaturization" of electronic chips, which carry the transistors. In 1971 the chips produced by Intel (the leading company in the field) had 2,300 transistors on a die of 12 square millimeters. Today's chips are only slightly larger but carry 5 billion transistors. That is what made personal computers, cell phones and countless other products possible. And it is why fixed-line telephony is being abandoned and communication via Skype is practically free and has revolutionized communications.
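The quoted figures are roughly consistent with Moore's-law doubling, as a quick back-of-the-envelope check shows (illustrative arithmetic only):

```python
# Rough Moore's-law check on the figures quoted above (illustration only):
# transistor counts doubling roughly every two years from 1971 onward.
transistors_1971 = 2_300
years = 2016 - 1971
estimate = transistors_1971 * 2 ** (years / 2)
print(f"{estimate:,.0f}")  # ~1.4e10, within a factor of a few of the 5 billion quoted
```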

There are now indications that this miniaturization has reached its limits, which is causing a certain gloom among the "high priests" of the sector. That view is mistaken. The level of success has been such that further progress in that direction is simply unnecessary, which is what happened with countless living beings in the past.

What does look like the solution to the problem of long-term economic growth is the advance of technology in other areas that have not received the attention they need: new materials, artificial intelligence, industrial robots, genetic engineering, disease prevention and, above all, understanding the human brain, the most sophisticated product of the evolution of life on Earth.

Understanding how a combination of atoms and molecules can give rise to an organ as creative as the brain, capable of consciousness and of the creativity to compose symphonies like Beethoven's, and at the same time of promoting the extermination of millions of human beings, will probably be the most extraordinary advance Homo sapiens can achieve.

Advances in these areas could create a wave of innovation and material progress greater in quantity and quality than what the "golden century" produced. What is more, we now face a new, global problem: environmental degradation, resulting in part from the very success of 20th-century technology. Reducing the emissions of the gases that cause global warming (the result of burning fossil fuels) will by itself be a herculean task.

Before that, and on a much more pedestrian plane, the advances being made in the efficiency with which natural resources are used are extraordinary, and they have not received the credit and recognition they deserve.

To give just one example: in 1950 Americans spent, on average, 30% of their income on food. By 2013 that share had fallen to 10%. Spending on energy has also fallen, thanks to more efficient automobiles and more efficient lighting and heating, which, incidentally, explains why the price of a barrel of oil fell from US$ 150 to less than US$ 30: there is simply too much oil in the world, just as there is idle capacity in steel and cement.

One example of a country following this path is Japan, whose economy is not growing much but whose population has a high standard of living and continues to benefit, gradually, from the advances of modern technology.

*José Goldemberg is professor emeritus at the University of São Paulo (USP) and president of the São Paulo Research Foundation (Fapesp)

Theoretical tiger chases statistical sheep to probe immune system behavior (Science Daily)

Physicists update predator-prey model for more clues on how bacteria evade attack from killer cells

Date:
April 29, 2016
Source:
IOP Publishing
Summary:
Studying the way that solitary hunters such as tigers, bears or sea turtles chase down their prey turns out to be very useful in understanding the interaction between individual white blood cells and colonies of bacteria. Researchers have created a numerical model that explores this behavior in more detail.

Studying the way that solitary hunters such as tigers, bears or sea turtles chase down their prey turns out to be very useful in understanding the interaction between individual white blood cells and colonies of bacteria. Reporting their results in the Journal of Physics A: Mathematical and Theoretical, researchers in Europe have created a numerical model that explores this behaviour in more detail.

Using mathematical expressions, the group can examine the dynamics of a single predator hunting a herd of prey. The routine splits the hunter’s motion into a diffusive part and a ballistic part, which represent the search for prey and then the direct chase that follows.

“We would expect this to be a fairly good approximation for many animals,” explained Ralf Metzler, who led the work and is based at the University of Potsdam in Germany.
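A minimal simulation of that two-phase motion, under invented parameters and with a single stationary prey (the published model treats a moving herd, volume effects, and a sighting range that belongs to the prey rather than the hunter):

```python
import math
import random

SIGHT = 2.0  # detection radius that triggers the chase (invented value)
STEP = 0.5   # distance moved per time step

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def hunt(prey=(10.0, 10.0), max_steps=10_000):
    """Diffusive search, then ballistic pursuit once the prey is sighted."""
    hunter = [0.0, 0.0]
    for step in range(max_steps):
        if dist(hunter, prey) < STEP:
            return step                      # prey caught
        if dist(hunter, prey) <= SIGHT:      # ballistic phase: run straight
            angle = math.atan2(prey[1] - hunter[1], prey[0] - hunter[0])
        else:                                # diffusive phase: random direction
            angle = random.uniform(0, 2 * math.pi)
        hunter[0] += STEP * math.cos(angle)
        hunter[1] += STEP * math.sin(angle)
    return None

print(hunt())  # steps until capture, or None if the search timed out
```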

Obstructions included

To further improve its analysis, the group, which includes scientists from the National Institute of Chemistry in Slovenia, and Sorbonne University in France, has incorporated volume effects into the latest version of its model. The addition means that prey can now inadvertently get in each other’s way and endanger their survival by blocking potential escape routes.

Thanks to this update, the team can study not just animal behaviour, but also gain greater insight into the way that killer cells such as macrophages (large white blood cells patrolling the body) attack colonies of bacteria.

One of the key parameters determining the life expectancy of the prey is the so-called ‘sighting range’ — the distance at which the prey is able to spot the predator. Examining this in more detail, the researchers found that the hunter profits more from the poor eyesight of the prey than from the strength of its own vision.

Long tradition with a new dimension

The analysis of predator-prey systems has a long tradition in statistical physics and today offers many opportunities for cooperative research, particularly in fields such as biology, biochemistry and movement ecology.

“With the ever more detailed experimental study of systems ranging from molecular processes in living biological cells to the motion patterns of animal herds and humans, the need for cross-fertilisation between the life sciences and the quantitative mathematical approaches of the physical sciences has reached a new dimension,” Metzler comments.

To help support this cross-fertilisation, he heads up a new section of the Journal of Physics A: Mathematical and Theoretical that is dedicated to biological modelling and examines the use of numerical techniques to study problems in the interdisciplinary field connecting biology, biochemistry and physics.


Journal Reference:

  1. Maria Schwarzl, Aljaz Godec, Gleb Oshanin, Ralf Metzler. A single predator charging a herd of prey: effects of self volume and predator–prey decision-making. Journal of Physics A: Mathematical and Theoretical, 2016; 49 (22): 225601. DOI: 10.1088/1751-8113/49/22/225601

Weasel Apparently Shuts Down World’s Most Powerful Particle Collider (NPR)

April 29, 2016, 11:04 AM ET

GEOFF BRUMFIEL

The Large Hadron Collider uses superconducting magnets to smash sub-atomic particles together at enormous energies. CERN

A small mammal has sabotaged the world’s most powerful scientific instrument.

The Large Hadron Collider, a 17-mile superconducting machine designed to smash protons together at close to the speed of light, went offline overnight. Engineers investigating the mishap found the charred remains of a furry creature near a gnawed-through power cable.

A small mammal, possibly a weasel, gnawed through a power cable at the Large Hadron Collider. Ashley Buttle/Flickr

“We had electrical problems, and we are pretty sure this was caused by a small animal,” says Arnaud Marsollier, head of press for CERN, the organization that runs the $7 billion particle collider in Switzerland. Although they had not conducted a thorough analysis of the remains, Marsollier says they believe the creature was “a weasel, probably.” (Update: An official briefing document from CERN indicates the creature may have been a marten.)

The shutdown comes as the LHC was preparing to collect new data on the Higgs Boson, a fundamental particle it discovered in 2012. The Higgs is believed to endow other particles with mass, and it is considered to be a cornerstone of the modern theory of particle physics.

Researchers have seen some hints in recent data that other, yet-undiscovered particles might also be generated inside the LHC. If those other particles exist, they could revolutionize researchers’ understanding of everything from the laws of gravity to quantum mechanics.

Unfortunately, Marsollier says, scientists will have to wait while workers bring the machine back online. Repairs will take a few days, but getting the machine fully ready to smash might take another week or two. “It may be mid-May,” he says.

These sorts of mishaps are not unheard of, says Marsollier. The LHC is located outside of Geneva. “We are in the countryside, and of course we have wild animals everywhere.” There have been previous incidents, including one in 2009, when a bird is believed to have dropped a baguette onto critical electrical systems.

Nor are the problems exclusive to the LHC: In 2006, raccoons conducted a “coordinated” attack on a particle accelerator in Illinois.

It is unclear whether the animals are trying to stop humanity from unlocking the secrets of the universe.

Of course, small mammals cause problems in all sorts of organizations. Yesterday, a group of children took National Public Radio off the air for over a minute before engineers could restore the broadcast.

Reptiles show brain activity typical of human dreams, study reveals (Folha de S.Paulo)

Sleeping dragon (Pogona vitticeps). [Credit: Dr. Stephan Junek, Max Planck Institute for Brain Research]
The study shows that lizards reach a sleep pattern that, in humans, allows dreams to emerge

REINALDO JOSÉ LOPES
CONTRIBUTING REPORTER FOR FOLHA

28/04/2016 2:56 pm

Do lizards dream of scaly sheep? No one has yet been able to see in detail what goes on in these animals' brains to answer that question, but a new study reveals that the pattern of brain activity typical of human dreams also arises in these reptiles when they sleep.

This is so-called REM sleep (for "rapid eye movement"), which once seemed to be exclusive to mammals like us and to birds. However, analysis of the brain activity of an Australian lizard, the bearded dragon (Pogona vitticeps), indicates that over the course of the night the animal's brain alternates between REM sleep and slow-wave sleep (roughly speaking, deep, dreamless sleep), in a pattern similar, though not identical, to that observed in humans.

Led by Gilles Laurent of the Max Planck Institute for Brain Research, in Germany, the study is being published in the journal Science. "Laurent doesn't fool around," says Sidarta Ribeiro, a researcher at UFRN (Federal University of Rio Grande do Norte) and one of the world's leading specialists in the neurobiology of sleep and dreams. "It is a very clear demonstration of the phenomenon."

The methodology used to check what was happening in the reptilian brain was not exactly a seven-headed dragon, as the Brazilian idiom goes; that is, nothing overly complicated. Five specimens of the species received electrode implants in the brain, and at bedtime their behavior was monitored with infrared cameras, ideal for "seeing in the dark." The animals typically slept between six and ten hours a night, in a cycle the Max Planck scientists could more or less control, since they were the ones who turned the lights on and off and regulated the temperature of the enclosure.

What the researchers were measuring was the variation in electrical activity in the bearded dragons' brains during the night. These oscillations produce the wave patterns already known from sleep in humans and other mammals, for example.

The findings reported in the new study were only possible because of its level of detail, says Suzana Herculano-Houzel, a neuroscientist at UFRJ (Federal University of Rio de Janeiro) and a Folha columnist. "Earlier, less fine-grained studies could not detect REM sleep because, in these animals, the alternation between the two types of sleep is extremely fast, every 80 seconds," explains Herculano-Houzel, who had already seen Laurent present the data at a scientific conference. In humans, the cycles are much slower, lasting 90 minutes on average.
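The kind of state-by-state classification involved can be sketched in a few lines. This toy example is not the study's pipeline: it simply labels a recorded epoch as slow-wave-like or REM-like by comparing low- and high-frequency spectral power in a synthetic signal.

```python
import numpy as np

FS = 250  # sampling rate in Hz (invented)

def band_power(signal, lo, hi):
    """Spectral power of the signal within a frequency band."""
    freqs = np.fft.rfftfreq(signal.size, d=1 / FS)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return power[(freqs >= lo) & (freqs < hi)].sum()

def label_epoch(signal):
    """Crude call: slow-wave if low frequencies dominate, else REM-like."""
    delta = band_power(signal, 0.5, 4)  # slow-wave band
    beta = band_power(signal, 12, 30)   # faster, wake-like band
    return "slow-wave" if delta > beta else "REM-like"

t = np.arange(0, 10, 1 / FS)
slow_epoch = np.sin(2 * np.pi * 1.5 * t)      # synthetic 1.5 Hz oscillation
rem_epoch = 0.5 * np.sin(2 * np.pi * 20 * t)  # synthetic 20 Hz activity
print(label_epoch(slow_epoch), label_epoch(rem_epoch))  # slow-wave REM-like
```

The real analysis relied on electrode recordings and far finer statistics; the point here is only that the two regimes are separable by their spectra.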

Besides the similarity in brain-activity patterns, reptilian REM sleep also correlates clearly with the eye movements that give it its name (which vaguely resemble the way an awake person moves their eyes), as the infrared footage showed.

TO SLEEP, PERCHANCE TO DREAM

The first implication of the findings is evolutionary. Although sleeping appears to be universal behavior in the animal kingdom, REM sleep (and perhaps dreams) seemed to be exclusive to species with supposedly more complex brains. "For those who study the mechanisms of sleep, it is a fundamental study," says Herculano-Houzel.

Both mammals and birds descend from primitive groups associated with reptiles, but at very different moments in the planet's history: mammals had already been walking the Earth for tens of millions of years when a group of small carnivorous dinosaurs gave rise to birds. In theory, then, mammals and birds would have had to "learn to dream" entirely independently. The finding "resolves this paradox," says Ribeiro: REM sleep would already have been present in the common ancestor of all these vertebrates.

Work by the Brazilian researcher and by other specialists around the world has shown that both types of sleep are fundamental for "sculpting" memories in the brain, simultaneously strengthening what is relevant and discarding what is not. Without the alternating cycles of brain activity, the learning capacity of animals and humans would be seriously impaired.

Both Ribeiro and Herculano-Houzel, however, say it is still not possible to state that lizards or other animals dream as we do. "Maybe one day someone will do magnetic resonance imaging on sleeping lizards and see whether they show the same reactivation of sensory areas seen in humans in REM sleep," she says. "Of course dog owners are sure their pets dream, but the ideal would be to decode the neural signal," a technique that makes it possible to know what a person imagines seeing while dreaming and has already been applied successfully by Japanese scientists.

Brazil and Japan sign agreement to improve natural-disaster warning system (MCTI)

JC 5374, March 15, 2016

The goal is to produce more accurate alerts and shorten response times in risk situations. A pilot project will be implemented in the cities of Blumenau (SC), Nova Friburgo (RJ) and Petrópolis (RJ)

Brazil and Japan signed on Monday (14) a cooperation agreement on natural-disaster prevention, aimed at improving the accuracy of alerts and reducing response times. The document validates the procedures defined by technical staff from the two countries for pilot projects in the cities of Blumenau (SC), Nova Friburgo (RJ) and Petrópolis (RJ), all of which have suffered landslides in recent years. The National Center for Monitoring and Early Warning of Natural Disasters (Cemaden/MCTI) is taking part in the initiative, which falls under the Project for Strengthening the National Strategy for Integrated Natural-Disaster Risk Management (Gides).

"This will be a new experiment in how information is collected and how it is made available quickly and in an integrated way to the various government bodies," explained Jailson de Andrade, the MCTI's secretary of Research and Development Policies and Programs.

According to Angelo Consoni, a researcher in Cemaden's geodynamics area, improving the alert protocol is essential for warnings to reach the population more efficiently. The more accurate and the faster the alert, the lower the risk of calamity.

"The aim of the pilot is, above all, the accuracy of the alerts and the time spent on that activity. By optimizing the workflows for preparing and issuing alerts, together with the municipalities and the states, we can significantly improve the quality of the alerts we make available to the population in risk situations," he said.

The cooperation agreement was also signed by the Ministry of Cities, the Ministry of National Integration, the Brazilian Cooperation Agency (ABC) and the Japan International Cooperation Agency (JICA).

Partnership

The partnership between Brazil and Japan is based on exchanges between the two nations' personnel. Since 2014, two cohorts of Brazilians have been trained by Japanese specialists, and the Japanese also come to Brazil to exchange information on natural-disaster prevention.

"Japan is a reference. And this cooperation has been very good for us in terms of training people," Consoni noted.

MCTI

Study suggests different written languages are equally efficient at conveying meaning (Eureka/University of Southampton)

PUBLIC RELEASE: 1-FEB-2016

UNIVERSITY OF SOUTHAMPTON


A study led by the University of Southampton has found there is no difference in the time it takes people from different countries to read and process different languages.

The research, published in the journal Cognition, finds that a person from, for example, China needs the same amount of time to read and understand a text in Mandarin as a person from Britain takes to read and understand a text in English, assuming both are reading their native language.

Professor of Experimental Psychology at Southampton, Simon Liversedge, says: “It has long been argued by some linguists that all languages have common or universal underlying principles, but it has been hard to find robust experimental evidence to support this claim. Our study goes at least part way to addressing this – by showing there is universality in the way we process language during the act of reading. It suggests no one form of written language is more efficient in conveying meaning than another.”

The study, carried out by the University of Southampton (UK), Tianjin Normal University (China) and the University of Turku (Finland), compared the way three groups of people in the UK, China and Finland read their own languages.

The 25 participants in each group – one group for each country – were given eight short texts to read which had been carefully translated into the three different languages. A rigorous translation process was used to make the texts as closely comparable across languages as possible. English, Finnish and Mandarin were chosen because of the stark differences they display in their written form – with great variation in visual presentation of words, for example alphabetic vs. logographic(1), spaced vs. unspaced, agglutinative(2) vs. non-agglutinative.

The researchers used sophisticated eye-tracking equipment to assess the cognitive processes of the participants in each group as they read. The equipment was set up identically in each country to measure eye movement patterns of the individual readers – recording how long they spent looking at each word, sentence or paragraph.

The results of the study showed significant and substantial differences between the three language groups in relation to the nature of eye movements of the readers and how long participants spent reading each individual word or phrase. For example, the Finnish participants spent longer concentrating on some words compared to the English readers. However, most importantly and despite these differences, the time it took for the readers of each language to read each complete sentence or paragraph was the same.
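A toy illustration of that result, with invented fixation times: per-word reading times differ across the three languages, yet the sentence totals come out the same.

```python
# Invented fixation times (ms) per word for one matched sentence;
# per-word patterns differ, but the sentence totals are equal.
fixations = {
    "English": [180, 210, 190, 220, 200],
    "Finnish": [310, 340, 350],  # fewer, longer words
    "Mandarin": [240, 260, 250, 250],
}

for language, times in fixations.items():
    print(f"{language:8s} total = {sum(times)} ms")
# English  total = 1000 ms
# Finnish  total = 1000 ms
# Mandarin total = 1000 ms
```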

Professor Liversedge says: “This finding suggests that despite very substantial differences in the written form of different languages, at a basic propositional level, it takes humans the same amount of time to process the same information regardless of the language it is written in.

“We have shown it doesn’t matter whether a native Chinese reader is processing Chinese, or a Finnish native reader is reading Finnish, or an English native reader is processing English, in terms of comprehending the basic propositional content of the language, one language is as good as another.”

The study authors believe more research would be needed to fully understand if true universality of language exists, but that their study represents a good first step towards demonstrating that there is universality in the process of reading.

###

Notes for editors:

1) Logographic language systems use signs or characters to represent words or phrases.

2) Agglutinative languages tend to express concepts in complex words consisting of many sub-units strung together.

3) The paper Universality in eye movements and reading: A trilingual investigation (Simon P. Liversedge, Denis Drieghe, Xin Li, Guoli Yan, Xuejun Bai, Jukka Hyönä) is published in the journal Cognition and can also be found at: http://eprints.soton.ac.uk/382899/1/Liversedge,%20Drieghe,%20Li,%20Yan,%20Bai,%20%26%20Hyona%20(in%20press)%20copy.pdf

 

Semantically speaking: Does meaning structure unite languages? (Eureka/Santa Fe Institute)

1-FEB-2016

Humans’ common cognitive abilities and dependence on language may provide an underlying semantic order to the world’s languages

SANTA FE INSTITUTE

We create words to label people, places, actions, thoughts, and more so we can express ourselves meaningfully to others. Do humans’ shared cognitive abilities and dependence on languages naturally provide a universal means of organizing certain concepts? Or do environment and culture influence each language uniquely?

Using a new methodology that measures how closely words’ meanings are related within and between languages, an international team of researchers has revealed that for many universal concepts, the world’s languages feature a common structure of semantic relatedness.

“Before this work, little was known about how to measure [a culture’s sense of] the semantic nearness between concepts,” says co-author and Santa Fe Institute Professor Tanmoy Bhattacharya. “For example, are the concepts of sun and moon close to each other, as they are both bright blobs in the sky? How about sand and sea, as they occur close by? Which of these pairs is the closer? How do we know?”

Translation, the mapping of relative word meanings across languages, would provide clues. But examining the problem with scientific rigor called for an empirical means to denote the degree of semantic relatedness between concepts.

To get reliable answers, Bhattacharya needed to fully quantify a comparative method that is commonly used to infer linguistic history qualitatively. (He and collaborators had previously developed this quantitative method to study changes in sounds of words as languages evolve.)

“Translation uncovers a disagreement between two languages on how concepts are grouped under a single word,” says co-author and Santa Fe Institute and Oxford researcher Hyejin Youn. “Spanish, for example, groups ‘fire’ and ‘passion’ under ‘incendio,’ whereas Swahili groups ‘fire’ with ‘anger’ (but not ‘passion’).”

To quantify the problem, the researchers chose a few basic concepts that we see in nature (sun, moon, mountain, fire, and so on). Each concept was translated from English into 81 diverse languages, then back into English. Based on these translations, a weighted network was created. The structure of the network was used to compare languages’ ways of partitioning concepts.
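As a sketch of that procedure, the following Python snippet builds such a weighted network from a handful of invented polysemies (the concept pairs and language labels are illustrative, not the study's 81-language data):

```python
# Nodes are concepts; an edge gains weight each time some language's word for
# one concept back-translates into the other (a toy dataset, not real data).
import networkx as nx

polysemies = [
    ("sea", "salt", "lang1"),
    ("sea", "lake", "lang2"),
    ("lake", "water", "lang3"),
    ("water", "rain", "lang4"),
    ("sun", "day", "lang5"),
    ("sun", "sky", "lang6"),
    ("sky", "day", "lang7"),
    ("stone", "mountain", "lang8"),
]

G = nx.Graph()
for a, b, _lang in polysemies:
    if G.has_edge(a, b):
        G[a][b]["weight"] += 1   # more languages agreeing -> a stronger link
    else:
        G.add_edge(a, b, weight=1)

# Densely connected groups play the role of the paper's semantic clusters,
# e.g. a water cluster vs. an earth-and-sky cluster.
for cluster in nx.connected_components(G):
    print(sorted(cluster))
```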

The team found that the translated concepts consistently formed three theme clusters in a network, densely connected within themselves and weakly to one another: water, solid natural materials, and earth and sky.

“For the first time, we now have a method to quantify how universal these relations are,” says Bhattacharya. “What is universal – and what is not – about how we group clusters of meanings teaches us a lot about psycholinguistics, the conceptual structures that underlie language use.”

The researchers hope to expand this study’s domain, adding more concepts, then investigating how the universal structure they reveal underlies meaning shift.

Their research was published today in PNAS.

Impact of human activity on local climate mapped (Science Daily)

Date: January 20, 2016

Source: Concordia University

Summary: A new study pinpoints the temperature increases caused by carbon dioxide emissions in different regions around the world.


This is a map of climate change. Credit: Nature Climate Change

Earth’s temperature has increased by 1°C over the past century, and most of this warming has been caused by carbon dioxide emissions. But what does that mean locally?

A new study published in Nature Climate Change pinpoints the temperature increases caused by CO2 emissions in different regions around the world.

Using simulation results from 12 global climate models, Damon Matthews, a professor in Concordia’s Department of Geography, Planning and Environment, along with post-doctoral researcher Martin Leduc, produced a map that shows how the climate changes in response to cumulative carbon emissions around the world.

They found that temperature increases in most parts of the world respond linearly to cumulative emissions.

“This provides a simple and powerful link between total global emissions of carbon dioxide and local climate warming,” says Matthews. “This approach can be used to show how much human emissions are to blame for local changes.”

Leduc and Matthews, along with co-author Ramon de Elia from Ouranos, a Montreal-based consortium on regional climatology, analyzed the results of simulations in which CO2 emissions caused the concentration of CO2 in the atmosphere to increase by 1 per cent each year until it reached four times the levels recorded prior to the Industrial Revolution.

Globally, the researchers saw an average temperature increase of 1.7 ±0.4°C per trillion tonnes of carbon in CO2 emissions (TtC), which is consistent with reports from the Intergovernmental Panel on Climate Change.

But the scientists went beyond these globally averaged temperature rises, to calculate climate change at a local scale.

At a glance, here are the average increases per trillion tonnes of carbon that we emit, separated geographically:

  • Western North America 2.4 ± 0.6°C
  • Central North America 2.3 ± 0.4°C
  • Eastern North America 2.4 ± 0.5°C
  • Alaska 3.6 ± 1.4°C
  • Greenland and Northern Canada 3.1 ± 0.9°C
  • North Asia 3.1 ± 0.9°C
  • Southeast Asia 1.5 ± 0.3°C
  • Central America 1.8 ± 0.4°C
  • Eastern Africa 1.9 ± 0.4°C

“As these numbers show, equatorial regions warm the slowest, while the Arctic warms the fastest. Of course, this is what we’ve already seen happen — rapid changes in the Arctic are outpacing the rest of the planet,” says Matthews.

There are also marked differences between land and ocean, with the temperature increase for the oceans averaging 1.4 ± 0.3°C per TtC, compared to 2.2 ± 0.5°C for land areas.

“To date, humans have emitted almost 600 billion tonnes of carbon,” says Matthews. “This means that land areas on average have already warmed by 1.3°C because of these emissions. At current emission rates, we will have emitted enough CO2 to warm land areas by 2°C within 3 decades.”
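The arithmetic behind that quote follows directly from the linear scaling. A quick back-of-envelope check in Python (the ~10-billion-tonnes-per-year emission rate is an assumption for illustration, not a figure from the article):

```python
# Linear scaling: warming = sensitivity x cumulative emissions.
land_sensitivity = 2.2   # °C per TtC (trillion tonnes of carbon), land average
emitted_so_far = 0.6     # TtC emitted to date (~600 billion tonnes)

print(f"Land warming so far: {land_sensitivity * emitted_so_far:.1f} °C")  # ~1.3 °C

# Carbon budget until land areas average 2 °C of warming:
remaining_ttc = 2.0 / land_sensitivity - emitted_so_far   # ~0.31 TtC
years = remaining_ttc * 1000 / 10                         # assumes ~10 GtC per year
print(f"Roughly {years:.0f} more years at current rates")  # ~3 decades
```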


Journal Reference:

  1. Martin Leduc, H. Damon Matthews, Ramón de Elía. Regional estimates of the transient climate response to cumulative CO2 emissions. Nature Climate Change, 2016; DOI: 10.1038/nclimate2913

Social media technology, rather than anonymity, is the problem (Science Daily)

Date: January 20, 2016

Source: University of Kent

Summary: Problems of anti-social behavior, privacy, and free speech on social media are not caused by anonymity but instead result from the way technology changes our presence. That’s the startling conclusion of a new book by an expert on the information society and developing media.


Problems of anti-social behaviour, privacy, and free speech on social media are not caused by anonymity but instead result from the way technology changes our presence.

That’s the startling conclusion of a new book by Dr Vincent Miller, a sociologist at the University of Kent and an expert on the information society and developing media.

In contending that the cause of issues such as online anti-social behaviour is the design/software of social media itself, Dr Miller suggests that social media architecture needs to be managed and planned in the same way as physical architecture. In the book, entitled The Crisis of Presence in Contemporary Culture: Ethics, Privacy and Speech in Mediated Social Life, Dr Miller examines the relationship between the freedom provided by the contemporary online world and the control, surveillance and censorship that operate in this environment.

The book questions the origins and sincerity of moral panics about use — and abuse — in the contemporary online environment and offers an analysis of ethics, privacy and free speech in this setting.

Investigating the ethical challenges that confront our increasingly digital culture, Dr Miller suggests a number of revisions to our ethical, legal and technological regimes to meet these challenges.

These include changing what he describes as ‘dehumanizing’ social media software, expanding the notion of our ‘selves’ or ‘bodies’ to include our digital traces, and re-introducing ‘time’ into social media through the creation of ‘expiry dates’ on social media communications.

Dr Miller is a Senior Lecturer in Sociology and Cultural Studies within the University’s School of Social Research, Sociology and Social Policy. The Crisis of Presence in Contemporary Culture: Ethics, Privacy and Speech in Mediated Social Life is published by Sage.

More information can be found at: https://uk.sagepub.com/en-gb/eur/the-crisis-of-presence-in-contemporary-culture/book244328

‘In Africa, I asked the king of my ethnic group why they sold us as slaves’ (BBC Brasil)

January 14, 2016

Zulu Araújo | Photo: Handout

At a production company's invitation, the architect took a DNA test and traveled to Cameroon to meet his ancestors

“We are the only population group in Brazil that does not know where it comes from,” laments Zulu Araújo, a 63-year-old architect from Bahia, referring to the Black population descended from the 4.8 million enslaved Africans brought into the country between the 16th and 19th centuries.

Araújo was one of 150 Brazilians invited by the production company Cine Group to take a DNA test and identify their African origins.

He discovered that he descends from the Tikar people of Cameroon and, as part of the television series Brasil: DNA África, visited the area to see the land of his ancestors.

“The trip made me whole as a citizen,” says Araújo. Below is his account, as told to BBC Brasil:

“I was always aware that one of the greatest crimes against the Black population was neither torture nor violence: it was taking away the possibility of our knowing our origins. We are the only population group in Brazil that does not know where it comes from.

My surname, Mendes de Araújo, is Portuguese. I carry the name of the family that enslaved my ancestors, since the ‘de’ indicates ownership. I also carry the name of an African people, Zulu.

 

The moment Zulu confronts the Tikar king about the sale of his ancestors

I got the nickname because my friends thought I looked like a Zulu king portrayed in a documentary. It became my name.

I was born in Solar do Unhão, a fishing settlement in central Salvador where slaves were landed and auctioned until the end of the 19th century. I began working clandestinely at age nine in a Catholic Church print shop. I did profane work to produce sacred books.

A good student, I passed the university entrance exam for architecture. We were two Black students in a class of 600 – in a city where 85% of the population is of African origin. Salvador is one of the most racist cities I know anywhere in the world.

When I joined the Brasil: DNA África project and discovered I belonged to the Tikar ethnic group, I was surprised. In Bahia, we all assume our origins are either Angolan or Yoruba. I imagined I was Yoruba. But the DNA tests show that many more ethnic groups were brought to Brazil than we realize.

Zulu Araújo | Photo: Handout

“It was as if I were in my own neighborhood in Bahia and at the same time had gone 500 years back in time,” Zulu says of his arrival in Cameroon

Zulu Araújo | Photo: Handout

The question about slavery, put to the Cameroonian king, was treated as a “delicate subject” and was answered only the following day

When I arrived at the center of the Tikar kingdom, the electricity had gone out, and people were lighting the area with oil lamps and car headlights. More than 2,000 people were waiting for me. What I felt at that moment is impossible to describe, it was so shocking and singular.

People were shouting. I did not understand a word they said, yet I understood everything. It was as if I were in my neighborhood in Bahia and at the same time had gone 500 years back in time.

The crowd treated me as a novelty: I was the first Brazilian of Tikar origin to set foot there. But I was also shocked by the poverty. People made endless requests of me in the streets, from soccer jerseys to help recording an album. Not by chance, the fundamentalist group Boko Haram (which originated in neighboring Nigeria) keeps one of its bases nearby and enjoys broad popular support.

In the morning I went to meet the king, a tall, strong man of 56, married to 20 women and father of more than 40 children. He dressed like a desert Muslim, in a tunic of beautiful prints and fabrics.

After breakfast I had an audience with him in one of the palace rooms. He was moved and curious, for he knew that many Tikar people had been taken to the Americas, but not that any had gone to Brazil.

I asked the question that had long tormented me: why had they allowed or taken part in the sale of my ancestors to Brazil? The translator checked twice that I really wanted to ask it and warned me the subject was very sensitive. I insisted.

There was total silence in the room. Then the king whispered into the ear of an adviser, who told me that the king apologized: the matter was too delicate, and he could only answer me the next day. Slavery is a taboo across the African continent, because it is clear that African elites colluded with European ones for the trade to last as long and reach as many people as it did.

The next day the king finally answered me. He apologized and said it had been better to sell us; otherwise we would all have been killed. And he said that, because we survived, we of the diaspora could now help them. He added that he would adopt me as his first son, entitling me to privileges and access to material goods.

It was a political answer, but I believe it was sincere. I know they never imagined slavery would reach the scale it did, nor that Europe would turn it into the biggest business of all time. At some point the Africans lost control.

Zulu Araújo | Photo: Handout

“If anyone asks me where I am from, I now know how to answer. Only someone who is Black can understand the dimension of that.”

A Senegalese intellectual told me that until we come to terms with slavery there will be no peace – neither for the enslaved nor for the enslavers. It is the plain truth. A 500-year-old question cannot be treated with hatred or a desire for revenge.

The trip made me whole as a citizen. If anyone asks me where I am from, I now know how to answer. Only someone who is Black can understand the dimension of that.

I believe DNA testing should be recognized by the government and by Brazilian academic institutions as a way for us to reconstruct and retell the history of the 52% of Brazilians who have African roots. Only by knowing our origins can we understand who we really are.”

God of Thunder (NPR)

October 17, 2014, 11:09 AM ET

In 1904, Charles Hatfield claimed he could turn around the Southern California drought. Little did he know, he was going to get much, much more water than he bargained for.

GLYNN WASHINGTON, HOST:

From PRX and NPR, welcome back to SNAP JUDGMENT the Presto episode. Today we’re calling on mysterious forces and we’re going to strap on the SNAP JUDGMENT time machine. Our own Eliza Smith takes the controls and spins the dial back 100 years into the past.

ELIZA SMITH, BYLINE: California, 1904. In the fields, oranges dry in their rinds. In the ‘burbs, lawns yellow. Poppies wilt on the hillsides. Meanwhile, Charles Hatfield sits at a desk in his father’s Los Angeles sewing machine business. His dad wants him to take over someday, but Charlie doesn’t want to spend the rest of his life knocking on doors and convincing housewives to buy his bobbins and thread. Charlie doesn’t look like the kind of guy who changes the world. He’s impossibly thin with a vanishing patch of mousy hair. He always wears the same drab tweed suit. But he thinks to himself just maybe he can quench the Southland’s thirst. So when he punches out his timecard, he doesn’t go home for dinner. Instead, he sneaks off to the Los Angeles Public Library and pores over stacks of books. He reads about shamans who believed that fumes from a pyre of herbs and alcohols could force rain from the sky. He reads modern texts too, about the pseudoscience of pluviculture – rainmaking, the theory that explosives and pyrotechnics could crack the clouds. Charlie conducts his first weather experiment on his family ranch, just northeast of Los Angeles in the city of Pasadena. One night he pulls his youngest brother, Paul, out of bed to keep watch with a shotgun as he climbs atop a windmill, pours a cocktail of chemicals into a shallow pan and then waits.

He doesn’t have a burner or a fan or some hybrid, no – he just waits for the chemicals to evaporate into the clouds. Paul slumped into a slumber long ago and is now leaning against the foundation of the windmill, when the first droplet hits Charlie’s cheek. Then another. And another.

Charlie pulls out his rain gauge and measures .65 inches. It’s enough to convince him he can make rain.

That’s right, Charlie has the power. Word spreads in local papers and one by one, small towns – Hemet, Volta, Gustine, Newman, Crows Landing, Patterson – come to him begging for rain. And wherever Charlie goes, rain seems to follow. After he delivers Hemet seven more inches of water than his contract stipulated, the Hemet News raves, Mr. Hatfield is proving beyond doubt that rain can be produced.

Within weeks he’s signing contracts with towns from the Pacific Coast to the Mississippi. Of course, there are doubters who claim that he tracks the weather, who claim he’s a fool chasing his luck.

But then Charlie gets an invitation to prove himself. San Diego, a major city, is starting to talk water rations, and its officials call on him. Of course, most of the city councilmen are dubious of Charlie’s charlatan claims. But still, cows are keeling over in their pastures and farmers are worrying over dying crops. It won’t hurt to hire him, they reason: if Charlie Hatfield can fill San Diego’s biggest reservoir, Morena Dam, with 10 billion gallons of water, he’ll earn himself $10,000. If he can’t, well, then he’ll just walk away and the city will laugh the whole thing off.

One councilman jokes…

UNIDENTIFIED MAN #1: It’s heads – the city wins. Tails – Hatfield loses.

SMITH: Charlie and Paul set up camp in the remote hills surrounding the Morena Reservoir. This time they work for weeks building several towers. This is to be Charlie’s biggest rain yet. When visitors come to observe his experiments, Charlie turns his back to them, hiding his notebooks and chemicals, and Paul fingers the trigger on his trusty rifle. And soon enough it’s pouring. Winds reach record speeds of over 60 miles per hour. But that isn’t good enough – Charlie needs the legitimacy a satisfied San Diego can grant him. And so he works non-stop, dodging lightning bolts, relishing thunderclaps. He doesn’t care that he’s soaked to the bone – he can wield weather. The water downs power lines, floods streets, rips up rail tracks.

A Mission Valley man who had to be rescued by a row boat as he clung to a scrap of lumber wraps himself in a towel and shivers as he suggests…

UNIDENTIFIED MAN #2: Let’s pay Hatfield $100,000 to quit.

SMITH: But Charlie isn’t quitting. The rain comes down harder and harder. Dams and reservoirs across the county explode and the flood devastates every farm, every house in its wake. One winemaker is surfacing from the protection of his cellar when he spies a wave twice the height of a telephone pole tearing down his street. He grabs his wife and they run as fast as they can, only to turn and watch their house washed downstream.

And yet, Charlie smiles as he surveys his success. The Morena Reservoir is full. He grabs Paul and the two leave their camp to march the 50-odd miles to City Hall. He expects the indebted populace to kiss his mud-covered shoes. Instead, he’s met with glares and threats. By the time Charlie and Paul reach San Diego’s city center, they’ve stopped answering to the name Hatfield. They call themselves Benson to avoid bodily harm.

Still, when he stands before the city councilmen, Charlie declares his operations successful and demands his payment. The men glower at him.

San Diego is in ruins and worst of all – they’ve got blood on their hands. The flood drowned more than 50 people. It also destroyed homes, farms, telephone lines, railroads, streets, highways and bridges. San Diegans file millions of dollars in claims but Charlie doesn’t budge. He folds his arms across his chest, holds his head high and proclaims, the time is coming when drought will overtake this portion of the state. It will be then that you call for my services again.

So the councilmen tell Charlie that if he’s sure he made it rain, they’ll give him his $10,000 – he’ll just have to take full responsibility for the flood. Charlie grits his teeth and tells them, it was coincidence. It rained because Mother Nature made it so. I am no rainmaker.

And then Charlie disappears. He goes on selling sewing machines and keeping quiet.

WASHINGTON: I’ll tell you what, California these days could use a little Charlie Hatfield. Big thanks to Eliza Smith for sharing that story and thanks as well to Leon Morimoto for sound design. Mischief managed – you’ve just gotten to the other side by means of other ways.

If you missed any part of this show, no need for a rampage – head on over to snapjudgment.org. There you’ll find the award-winning podcast – Mark, what award did we win? Movies, pictures, stuff. Amazing stories await. Get in on the conversation. SNAP JUDGMENT’s on Facebook, Twitter @snapjudgment.

Did you ever wind up in the Slytherin sitting room when you’re supposed to be in Gryffindor’s parlor? Well, me neither, but I’m sure it’s nothing like wandering the halls of the Corporation for Public Broadcasting. Completely different, but many thanks to them. PRX, Public Radio Exchange, hosts a similar annual Quidditch championship, but instead of brooms they ride radios. Not quite the same visual effect, but it’s good clean fun all the same – prx.org.

WBEZ in Chicago has tricks up their sleeve and you may have reckoned that this is not the news. No way is this the news. In fact, if you’d just thrown that book with Voldemort trapped in it, thrown it in the fire, been done with the nonsense – and you would still not be as far away from the news as this is. But this is NPR.

Hito Steyerl | Politics of Post-Representation (Dis Blog)

[Accessed Nov 23, 2015]

In conversation with Marvin Jordan

From the militarization of social media to the corporatization of the art world, Hito Steyerl’s writings represent some of the most influential bodies of work in contemporary cultural criticism today. As a documentary filmmaker, she has created multiple works addressing the widespread proliferation of images in contemporary media, deepening her engagement with the technological conditions of globalization. Steyerl’s work has been exhibited in numerous solo and group exhibitions including documenta 12, Taipei Biennial 2010, and 7th Shanghai Biennial. She currently teaches New Media Art at Berlin University of the Arts.

Hito Steyerl, How Not To Be Seen: A Fucking Didactic Educational .MOV File (2013)


Marvin Jordan I’d like to open our dialogue by acknowledging the central theme for which your work is well known — broadly speaking, the socio-technological conditions of visual culture — and move toward specific concepts that underlie your research (representation, identification, the relationship between art and capital, etc). In your essay titled “Is a Museum a Factory?” you describe a kind of ‘political economy’ of seeing that is structured in contemporary art spaces, and you emphasize that a social imbalance — an exploitation of affective labor — takes place between the projection of cinematic art and its audience. This analysis leads you to coin the term “post-representational” in service of experimenting with new modes of politics and aesthetics. What are the shortcomings of thinking in “representational” terms today, and what can we hope to gain from transitioning to a “post-representational” paradigm of art practices, if we haven’t arrived there already?

Hito Steyerl Let me give you one example. A while ago I met an extremely interesting developer in Holland. He was working on smartphone camera technology. A representational mode of thinking about photography is: there is something out there and it will be represented by means of optical technology, ideally via an indexical link. But the technology for the phone camera is quite different. As the lenses are tiny and basically crap, about half of the data captured by the sensor is noise. The trick is to create the algorithm to clean the picture of the noise, or rather to define the picture from within the noise. But how does the camera know how to do this? Very simple. It scans all the other pictures stored on the phone or on your social media networks and sifts through your contacts. It looks through the pictures you have already taken, or those that are networked to you, and tries to match faces and shapes. In short: it creates the picture based on earlier pictures, on your/its memory. It does not only know what you saw but also what you might like to see, based on your previous choices. In other words, it speculates on your preferences and offers an interpretation of data based on affinities to other data. The link to the thing in front of the lens is still there, but there are also links to past pictures that help create the picture. You don’t really photograph the present, as the past is woven into it.
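Read as signal processing, the logic Steyerl describes can be caricatured in a few lines of Python (a toy sketch, not any vendor's actual pipeline): the output image is a weighted bet between the noisy sensor reading and a prior assembled from pictures already on the phone.

```python
# Toy model: a noisy capture is blended with a prior built from past images.
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random((4, 4))                        # what is in front of the lens
sensor = scene + rng.normal(0, 0.5, scene.shape)  # tiny lens: half signal, half noise

# "Memory": an average over earlier, similar pictures (crudely simulated here).
past_images = [scene * 0.9, scene * 1.1, rng.random((4, 4))]
prior = np.mean(past_images, axis=0)

alpha = 0.6                                       # how strongly the past outweighs the present
picture = alpha * prior + (1 - alpha) * sensor

# The output tracks the prior more than the raw capture: you photograph what
# the phone already "knows", with the present only woven in.
print("error vs. scene, raw sensor:     ", np.abs(sensor - scene).mean().round(3))
print("error vs. scene, computed picture:", np.abs(picture - scene).mean().round(3))
```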

The result might be a picture that never existed in reality, but that the phone thinks you might like to see. It is a bet, a gamble, some combination between repeating those things you have already seen and coming up with new versions of these, a mixture of conservatism and fabulation. The paradigm of representation stands to the present condition as traditional lens-based photography does to an algorithmic, networked photography that works with probabilities and bets on inertia. Consequently, it makes seeing unforeseen things more difficult. The noise will increase, and random interpretation too. We might think that the phone sees what we want, but actually we will see what the phone thinks it knows about us. A complicated relationship – like a very neurotic marriage. I haven’t even mentioned external interference into what your phone is recording. All sorts of applications are able to remotely switch your camera on or off: companies, governments, the military. It could be disabled for whole regions. One could, for example, disable recording functions close to military installations, or conversely, live broadcast whatever you are up to. Similarly, the phone might be programmed to auto-pixellate secret or sexual content. It might be fitted with a so-called dick algorithm to screen out NSFW content or auto-modify pubic hair, stretch or omit bodies, exchange or collage context, or insert AR advertisements, pop-up windows or live feeds. Now let’s apply this shift to the question of representative politics or democracy. The representational paradigm assumes that you vote for someone who will represent you. Thus the interests of the population will be proportionally represented. But current democracies work rather like smartphone photography, algorithmically clearing the noise and boosting some data over others. It is a system in which the unforeseen has a hard time happening because it is not yet in the database. It is about what to define as noise – something Jacques Ranciere has defined as the crucial act in separating political subjects from domestic slaves, women and workers. Now this act is hardwired into technology, but instead of the traditional division of people and rabble, the results are post-representative militias, brands, customer loyalty schemes, open source insurgents and tumblrs.

Additionally, Ranciere’s democratic solution – there is no noise, it is all speech; everyone has to be seen and heard – has been realized online as some sort of meta-noise in which everyone is monologuing incessantly and no one is listening. Aesthetically, one might describe this condition as opacity in broad daylight: you could see anything, but what exactly, and why, is quite unclear. There are a lot of brightly lit glossy surfaces, yet they don’t reveal anything but themselves as surface. Whatever there is – it’s all there to see, but in the form of an incomprehensible, Kafkaesque glossiness, written in extraterrestrial code, perhaps subject to secret legislation. It certainly expresses something: a format, a protocol or executive order, but it effectively obfuscates its meaning. This is a far cry from a situation in which something – an image, a person, a notion – stood in for another and presumably acted in its interest. Today it stands in, but its relation to whatever it stands in for is cryptic, shiny, unstable; the link flickers on and off. Art could relish this shiny instability – it does already. It could also be less baffled and mesmerised and see it for what the gloss mostly is about: the not-so-discreet consumer-friendly veneer of new and old oligarchies, and plutotechnocracies.

MJ In your insightful essay, “The Spam of the Earth: Withdrawal from Representation”, you extend your critique of representation by focusing on an irreducible excess at the core of image spam, a residue of unattainability, or the “dark matter” of which it’s composed. It seems as though an unintelligible horizon circumscribes image spam by image spam itself, a force of un-identifiability, which you detect by saying that it is “an accurate portrayal of what humanity is actually not… a negative image.” Do you think this vacuous core of image spam — a distinctly negative property — serves as an adequate ground for a general theory of representation today? How do you see today’s visual culture affecting people’s behavior toward identification with images?

HS Think of Twitter bots, for example. Bots are entities supposed to be mistaken for humans on social media websites. But they have become formidable political armies too – brilliant examples of how representative politics have mutated nowadays. Bot armies distort discussion on Twitter hashtags by spamming them with advertisement, tourist pictures or whatever. Bot armies have been active in Mexico, Syria, Russia and Turkey, where most political parties, above all the ruling AKP, are said to control 18,000 fake Twitter accounts using photos of Robbie Williams, Megan Fox and gay porn stars. A recent article revealed that, “in order to appear authentic, the accounts don’t just tweet out AKP hashtags; they also quote philosophers such as Thomas Hobbes and movies like PS: I Love You.” It is ever more difficult to identify bots – partly because humans are being paid to enter CAPTCHAs on their behalf (1,000 CAPTCHAs earn 50 US cents). So what is a bot army? And how and whom does it represent, if anyone? Who is an AKP bot that wears the face of a gay porn star and quotes Hobbes’ Leviathan – extolling the need to transform the rule of militias into statehood in order to escape the war of everyone against everyone else? Bot armies are a contemporary vox pop, the voice of the people, the voice of what the people are today. It can be a Facebook militia, your low-cost personalized mob, your digital mercenaries. Imagine your photo is being used for one of these bots. It is the moment when your picture becomes quite autonomous, active, even militant. Bot armies are celebrity militias, wildly jump-cutting between glamour, sectarianism, porn, corruption and post-Baath Party ideology. Think of the meaning of the term “affirmative action” after Twitter bots and like farms! What does it represent?

MJ You have provided a compelling account of the depersonalization of the status of the image: a new process of de-identification that favors materialist participation in the circulation of images today.  Within the contemporary technological landscape, you write that “if identification is to go anywhere, it has to be with this material aspect of the image, with the image as thing, not as representation. And then it perhaps ceases to be identification, and instead becomes participation.” How does this shift from personal identification to material circulation — that is, to cybernetic participation — affect your notion of representation? If an image is merely “a thing like you and me,” does this amount to saying that identity is no more, no less than a .jpeg file?

HS Social media makes the shift from representation to participation very clear: people participate in the launch and life span of images, and indeed their life span, spread and potential is defined by participation. Think of the image not as a surface but as all the tiny light impulses running through fiber at any one point in time. Some images will look like deep-sea swarms, some like cities from space, some are utter darkness. We could see the energy imparted to images by capital or quantified participation very literally; we could probably measure their popular energy in lumens. By partaking in circulation, people participate in this energy and create it.
What this means is a different question though — by now this type of circulation seems a little like the petting zoo of plutotechnocracies. It’s where kids are allowed to make a mess — but just a little one — and if anyone organizes serious dissent, the seemingly anarchic sphere of circulation quickly reveals itself as a pedantic police apparatus aggregating relational metadata. It turns out to be an almost Althusserian ISA (Internet State Apparatus), hardwired behind a surface of ‘kawaii’ apps and online malls. As to identity, Heartbleed and more deliberate governmental hacking exploits certainly showed that identity goes far beyond a relationship with images: it entails a set of private keys, passwords, etc., that can be expropriated and detourned. More generally, identity is the name of the battlefield over your code — be it genetic, informational, pictorial. It is also an option that might provide protection if you fall beyond any sort of modernist infrastructure. It might offer sustenance, food banks, medical service, where common services either fail or don’t exist. If the Hezbollah paradigm is so successful it is because it provides an infrastructure to go with the Twitter handle, and as long as there is no alternative many people need this kind of container for material survival. Huge religious and quasi-religious structures have sprung up in recent decades to take up the tasks abandoned by states, providing protection and survival in a reversal of the move described in Leviathan. Identity happens when the Leviathan falls apart and nothing is left of the commons but a set of policed relational metadata, Emoji and hijacked hashtags. This is the reason why the gay AKP pornstar bots are desperately quoting Hobbes’ book: they are already sick of the war of Robbie Williams (Israel Defense Forces) against Robbie Williams (Electronic Syrian Army) against Robbie Williams (PRI/AAP) and are hoping for just any entity to organize day care and affordable dentistry.


But beyond all the portentous vocabulary relating to identity, I believe that a widespread standard of the contemporary condition is exhaustion. The interesting thing about Heartbleed — to come back to one of the current threats to identity (as privacy) — is that it is produced by exhaustion and not effort. It is a bug introduced by open source developers not being paid for something that is used by software giants worldwide. Nor were there apparently enough resources to audit the code in the big corporations that just copy-pasted it into their applications and passed on the bug, fully relying on free volunteer labour to produce their proprietary products. Heartbleed records exhaustion by trying to stay true to an ethics of commonality and exchange that has long since been exploited and privatized. So, that exhaustion found its way back into systems. For many people and for many reasons — and on many levels — identity is just that: shared exhaustion.

MJ This is an opportune moment to address the labor conditions of social media practice in the context of the art space. You write that “an art space is a factory, which is simultaneously a supermarket — a casino and a place of worship whose reproductive work is performed by cleaning ladies and cellphone-video bloggers alike.” Incidentally, DIS launched a website called ArtSelfie just over a year ago, which encourages social media users to participate quite literally in “cellphone-video blogging” by aggregating their Instagram #artselfies in a separately integrated web archive. Given our uncanny coincidence, how can we grasp the relationship between social media blogging and the possibility of participatory co-curating on equal terms? Is there an irreconcilable antagonism between exploited affective labor and a genuinely networked art practice? Or can we move beyond — to use a phrase of yours — a museum crowd “struggling between passivity and overstimulation?”

HS I wrote this in relation to something my friend Carles Guerra noticed already around early 2009: big museums like the Tate were actively expanding their online marketing tools, encouraging people to basically build the museum experience for them by sharing, etc. It was clear to us that audience participation on this level was a tool of extraction and outsourcing, following a logic that has turned online consumers into involuntary data providers overall. As in the previous example – Heartbleed – the paradigm of participation and generous contribution towards a commons tilts quickly into an asymmetrical relation, where only a minority of participants benefits from everyone’s input, the digital 1 percent reaping the attention value generated by the 99 percent rest.

Brian Kuan Wood put it very beautifully recently: love is debt; an economy of love and sharing is what you end up with when left to your own devices. However, an economy based on love ends up being an economy of exhaustion – after all, love is utterly exhausting – of deregulation, extraction and lawlessness. And I don’t even want to mention likes, notes and shares, which are the child-friendly, sanitized versions of affect as currency.
All is fair in love and war. It doesn’t mean that love isn’t true or passionate, but just that love is usually uneven, utterly unfair and asymmetric, just as capital tends to be distributed nowadays. It would be great to have a little bit less love, a little more infrastructure.

MJ Long before Edward Snowden’s NSA revelations reshaped our discussions of mass surveillance, you wrote that “social media and cell-phone cameras have created a zone of mutual mass-surveillance, which adds to the ubiquitous urban networks of control,” underscoring the voluntary, localized, and bottom-up mutuality intrinsic to contemporary systems of control. You go on to say that “hegemony is increasingly internalized, along with the pressure to conform and perform, as is the pressure to represent and be represented.” But now mass government surveillance is common knowledge on a global scale — ‘externalized’, if you will — while social media representation practices remain as revealing as they were before. Do these recent developments, as well as the lack of change in social media behavior, contradict or reinforce your previous statements? In other words, how do you react to the irony that, in the same year as the unprecedented NSA revelations, “selfie” was deemed word of the year by Oxford Dictionaries?

HS Haha — good question!

Essentially I think it makes sense to compare our moment with the end of the twenties in the Soviet Union, when euphoria about electrification, NEP (New Economic Policy), and montage gives way to bureaucracy, secret directives and paranoia. Today this corresponds to the sheer exhilaration of having a World Wide Web being replaced by the drudgery of corporate apps, waterboarding, and “normcore”. I am not trying to say that Stalinism might happen again – this would be plain silly – but trying to acknowledge emerging authoritarian paradigms, some forms of algorithmic consensual governance techniques developed within neoliberal authoritarianism, heavily relying on conformism, “family” values and positive feedback, and backed up by all-out torture and secret legislation if necessary. On the other hand things are also falling apart into uncontrollable love. One also has to remember that people did really love Stalin. People love algorithmic governance too, if it comes with watching unlimited amounts of Game of Thrones. But anyone slightly interested in digital politics and technology is by now acquiring at least basic skills in disappearance and subterfuge.

Hito Steyerl, How Not To Be Seen: A Fucking Didactic Educational .MOV File (2013)


MJ In “Politics of Art: Contemporary Art and the Transition to Post-Democracy,” you point out that the contemporary art industry “sustains itself on the time and energy of unpaid interns and self-exploiting actors on pretty much every level and in almost every function,” while maintaining that “we have to face up to the fact that there is no automatically available road to resistance and organization for artistic labor.” Bourdieu theorized qualitatively different dynamics in the composition of cultural capital vs. that of economic capital, arguing that the former is constituted by the struggle for distinction, whose value is irreducible to financial compensation. This basically translates to: everyone wants a piece of the art-historical pie, and is willing to go through economic self-humiliation in the process. If striving for distinction is antithetical to solidarity, do you see a possibility of reconciling it with collective political empowerment on behalf of those economically exploited by the contemporary art industry?

HS In Art and Money, William Goetzmann, Luc Renneboog, and Christophe Spaenjers conclude that income inequality correlates with art prices. The bigger the difference between top income and no income, the higher the prices paid for some artworks. This means that the art market will benefit not only if fewer people have more money but also if more people have no money. It also means that increasing the number of zero incomes is likely, especially under current circumstances, to raise the price of some artworks. The poorer many people are (and the richer a few), the better the art market does; the more unpaid interns, the more expensive the art. But the art market itself may be following a similar pattern of inequality, basically creating a divide between the 0.01 percent (if not less) of artworks that are able to concentrate the bulk of sales and the 99.99 percent rest. There is no short-term solution for this feedback loop, except of course not to accept this situation, individually or preferably collectively, on all levels of the industry. This also means from the point of view of employers. There is a long-term benefit to this, not only for interns and artists but for everyone. Cultural industries that are too exclusively profit-oriented lose their appeal. If you want exciting things to happen you need a bunch of young and inspiring people creating a dynamic by doing risky, messy and confusing things. If they cannot afford to do this, they will eventually do it somewhere else. There needs to be space and resources for experimentation, even failure, otherwise things go stale. If these people move on to more accommodating sectors the art sector will mentally shut down even more and become somewhat North Korean in its outlook – just like contemporary blockbuster CGI industries. Let me explain: there is a managerial sleekness and awe-inspiring military perfection to every pixel in these productions, as in North Korean pixel parades, where thousands of soldiers wave color posters to form ever new pixel patterns. The result is quite something, but this something is definitely neither inspiring nor exciting. If the art world keeps going down the road of raising art prices via the starvation of its workers – and there is no reason to believe it will not – it will become the Disney version of Kim Jong Un’s pixel parades. 12K starving interns waving pixels for giant CGI renderings of Marina Abramovic! Imagine the price it will fetch!


Preventing famine with mobile phones (Science Daily)

Date: November 19, 2015

Source: Vienna University of Technology, TU Vienna

Summary: With a mobile data collection app and satellite data, scientists will be able to predict whether a certain region is vulnerable to food shortages and malnutrition, say experts. By scanning Earth’s surface with microwave beams, researchers can measure the water content in soil. Comparing these measurements with extensive data sets obtained over the last few decades, it is possible to calculate whether the soil is sufficiently moist or whether there is danger of droughts. The method has now been tested in the Central African Republic.


Does drought lead to famine? A mobile app helps to collect information. Credit: Image courtesy of Vienna University of Technology, TU Vienna

With a mobile data collection app and satellite data, scientists will be able to predict whether a certain region is vulnerable to food shortages and malnutrition. The method has now been tested in the Central African Republic.

There are different possible causes for famine and malnutrition — not all of which are easy to foresee. Drought and crop failure can often be predicted by monitoring the weather and measuring soil moisture. But other risk factors, such as socio-economic problems or violent conflicts, can endanger food security too. For organizations such as Doctors without Borders / Médecins Sans Frontières (MSF), it is crucial to obtain information about vulnerable regions as soon as possible, so that they have a chance to provide help before it is too late.

Scientists from TU Wien in Vienna, Austria and the International Institute for Applied Systems Analysis (IIASA) in Laxenburg, Austria have now developed a way to monitor food security using a smartphone app, which combines weather and soil moisture data from satellites with crowd-sourced data on the vulnerability of the population, e.g. malnutrition and other relevant socioeconomic data. Tests in the Central African Republic have yielded promising results, which have now been published in the journal PLOS ONE.

Step One: Satellite Data

“For years, we have been working on methods of measuring soil moisture using satellite data,” says Markus Enenkel (TU Wien). By scanning Earth’s surface with microwave beams, researchers can measure the water content in soil. Comparing these measurements with extensive data sets obtained over the last few decades, it is possible to calculate whether the soil is sufficiently moist or whether there is danger of droughts. “This method works well and it provides us with very important information, but information about soil moisture deficits is not enough to estimate the danger of malnutrition,” says IIASA researcher Linda See. “We also need information about other factors that can affect the local food supply.” For example, political unrest may prevent people from farming, even if weather conditions are fine. Such problems can of course not be monitored from satellites, so the researchers had to find a way of collecting data directly in the most vulnerable regions.
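On the satellite side, the drought signal boils down to comparing the current soil-moisture retrieval against the long-term record for the same place and season. A minimal sketch (invented numbers, and the warning threshold is illustrative, not the project's calibration):

```python
# Long-term soil-moisture record for one region and month (invented values),
# plus this month's satellite retrieval.
historical = [0.31, 0.28, 0.35, 0.30, 0.27, 0.33, 0.29, 0.32]
current = 0.21

mean = sum(historical) / len(historical)
std = (sum((x - mean) ** 2 for x in historical) / len(historical)) ** 0.5
z_score = (current - mean) / std   # how unusual is this month?

print(f"soil-moisture anomaly: {z_score:.1f} standard deviations")
if z_score < -1.5:                 # illustrative cut-off
    print("unusually dry: flag this region for drought risk")
```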

“Today, smartphones are available even in developing countries, and so we decided to develop an app, which we called SATIDA COLLECT, to help us collect the necessary data,” says IIASA-based app developer Mathias Karner. For a first test, the researchers chose the Central African Republic – one of the world’s most vulnerable countries, suffering from chronic poverty, violent conflicts, and weak disaster resilience. Local MSF staff were trained for a day and collected data, conducting hundreds of interviews.

“How often do people eat? What are the current rates of malnutrition? Have any family members left the region recently, has anybody died? — We use the answers to these questions to statistically determine whether the region is in danger,” says Candela Lanusse, nutrition advisor from Doctors without Borders. “Sometimes all that people have left to eat is unripe fruit or the seeds they had stored for next year. Sometimes they have to sell their cattle, which may increase the chance of nutritional problems. This kind of behavior may indicate future problems, months before a large-scale crisis breaks out.”

A Map of Malnutrition Danger

The digital questionnaire of SATIDA COLLECT can be adapted to local eating habits, as the answers and the GPS coordinates of every assessment are stored locally on the phone. When an internet connection is available, the collected data are uploaded to a server and can be analyzed along with satellite-derived information about drought risk. In the end a map could be created, highlighting areas where the danger of malnutrition is high. For Doctors without Borders, such maps are extremely valuable. They help to plan future activities and provide help as soon as it is needed.
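A correspondingly minimal sketch of the fusion step (all values and the 50/50 weighting are invented for illustration): each uploaded assessment carries GPS coordinates and a vulnerability score, which can be joined with the satellite drought-risk layer and ranked into a danger map.

```python
# Survey assessments uploaded from the app (invented data): GPS position plus
# a vulnerability score distilled from the questionnaire answers.
assessments = [
    {"lat": 4.4, "lon": 18.6, "vulnerability": 0.7},
    {"lat": 6.5, "lon": 20.7, "vulnerability": 0.3},
    {"lat": 5.0, "lon": 19.2, "vulnerability": 0.8},
]

# Satellite-derived drought risk looked up at the same coordinates.
drought_risk = {(4.4, 18.6): 0.6, (6.5, 20.7): 0.2, (5.0, 19.2): 0.9}

for a in assessments:
    risk = drought_risk[(a["lat"], a["lon"])]
    a["danger"] = 0.5 * a["vulnerability"] + 0.5 * risk  # illustrative weighting

# Highest-danger locations first: the raw material for a warning map.
for a in sorted(assessments, key=lambda x: -x["danger"]):
    print(a)
```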

“Testing this tool in the Central African Republic was not easy,” says Markus Enenkel. “The political situation there is complicated. However, even under these circumstances we could show that our technology works. We were able to gather valuable information.” SATIDA COLLECT has the potential to become a powerful early warning tool. It may not be able to prevent crises, but it will at least help NGOs to mitigate their impacts via early intervention.


Story Source:

The above post is reprinted from materials provided by Vienna University of Technology, TU Vienna. Note: Materials may be edited for content and length.


Journal Reference:

  1. Markus Enenkel, Linda See, Mathias Karner, Mònica Álvarez, Edith Rogenhofer, Carme Baraldès-Vallverdú, Candela Lanusse, Núria Salse. Food Security Monitoring via Mobile Data Collection and Remote Sensing: Results from the Central African Republic. PLOS ONE, 2015; 10 (11): e0142030. DOI: 10.1371/journal.pone.0142030

81-year-old Indigenous woman learns to use a computer and creates a dictionary to save her language from extinction (QGA)

Marie Wilcox is the last person in the world fluent in the Wukchumni language

Meet Marie Wilcox, an 81-year-old great-grandmother and the last person in the world fluent in the Wukchumni language. The Wukchumni people numbered some 50,000 before contact with colonizers; today only about 200 remain, living in the San Joaquin Valley in California. The language has been dying out a little more with each new generation, but Marie committed herself to the task of reviving it, learning to use a computer so she could begin writing the first Wukchumni dictionary. The process took seven years, and now that it is finished she has no intention of stopping her work of immortalizing her native language.

The documentary “Marie’s Dictionary”, available on YouTube, shows Marie’s motivation and her hard work to bring back and record a language that was almost entirely erased by colonization, institutionalized racism and oppression.

In the video, Marie admits to having doubts about the gigantic task she has taken on: “I have doubts about my language, and about who wants to keep it alive. No one seems to want to learn. It’s strange that I’m the last one... It’s all going to be lost one of these days, I don’t know.”

With luck, though, that day is still far off. Marie and her daughter Jennifer now teach classes to members of the tribe and are working on an audio dictionary to accompany the written one she has already created.

Watch the video (in English).

(QGA)