Tag archive: Artificial intelligence

Bill regulating the use of artificial intelligence is positive, but the topic still needs more debate, says expert (Rota Jurídica)

rotajuridica.com.br

October 5, 2021

After being approved by the Chamber of Deputies on September 29, the bill regulating the use of artificial intelligence (AI) in Brazil (PL 21/20) will now be examined by the Senate. In the meantime, the bill, which establishes the Civil Framework for AI, remains the subject of debate.

The proposal, authored by federal deputy Eduardo Bismarck (PDT-CE), was approved in the Chamber in the form of a substitute text presented by federal deputy Luisa Canziani (PTB-PR). The text defines artificial intelligence systems as technological representations arising from the fields of informatics and computer science. Legislating and issuing rules on the matter will fall exclusively to the federal government.

In an interview with Portal Rota Jurídica, neuroscientist Álvaro Machado Dias noted, for example, that the intentions behind the bill point in a positive direction. However, its generic definitions give the sense that, while the bill moves through the Senate, it will be important to deepen engagement with the field.

The neuroscientist, a tenured professor (livre-docente) at UNIFESP and a partner at WeMind Escritório de Inovação, Instituto Locomotiva de Pesquisas, and Rhizom Blockchain, notes, on the other hand, that in social terms the Civil Framework for Artificial Intelligence promises to raise awareness of the risks posed by biased algorithms and to encourage self-regulation.

This, he says, should increase the "net fairness" of these systems, which so strongly influence life in society. He stresses that, in economic terms, interoperability (the equivalent of every outlet having the same number of pins) will somewhat strengthen the market.

"Truth be told, though, these impacts will not be that large, since the bill does not speak of making AI a strategic priority for the country, nor does it point to greater support for scientific progress in the area," he adds.

Risks

For the neuroscientist, the risks are the usual ones: stifling innovation; assigning responsibility to the wrong targets; and externalities opened up by strategies that will question, with some justification, the epistemological foundations of the concept (the famous: given definition X, this here is not artificial intelligence).

Still, the expert says it is important to keep in mind that regulating this industry, whose high point is the singularity, is absolutely essential. "That is, the creation of devices capable of doing everything we do, from an interactive and productive standpoint, only faster and more precisely. It is a very complex debate. And, as always, in practice the theory is something else," he concluded.

Objectives

Álvaro Machado Dias explains that the bill's main objective is to define obligations for the federal government, states, and municipalities, especially governance rules, civil liability, and social-impact parameters related to the deployment and commercialization of artificial intelligence platforms. There is also a more technical part, focused on interoperability, that is, the ability of systems to exchange information.

He also observes that the bill's main premise is that the implementation of these technologies should be guided by principles such as the absence of intent to do harm, underpinned by transparency and accountability for the so-called artificial intelligence agents.

Lawyers need to become co-creators of artificial intelligence, says author of book on the subject (Folha de S.Paulo)

www1.folha.uol.com.br

Géssica Brandino, October 5, 2021

American Joshua Walker argues that judicial decisions should never be automated

Identifying best practices and determining which factors influenced judicial decisions are some examples of how artificial intelligence can benefit the justice system and, consequently, the public, says American lawyer Joshua Walker.

A co-founder of CodeX, Stanford University's legal informatics center (where he also taught), and founder of Lex Machina, a pioneering legal-technology company, Walker began his career in the world of data more than 20 years ago, working with cases from the 1994 genocide in Rwanda, which killed at least 800,000 people in a hundred days.

Author of the book "On Legal AI: Um Rápido Tratado sobre a Inteligência Artificial no Direito" (Revista dos Tribunais, 2021), in which he discusses how analytics software can be used to find solutions in law, Walker spoke on the topic at Fenalaw, an event on the use of technology by lawyers.

In an email interview with Folha, he argues that lawyers should not only learn to use artificial intelligence tools but also take a leading role in developing technologies aimed at the law.

"We [lawyers] need to start becoming co-creators because, while software engineers remember the data, we remember the story and the histories," he says.

Over the course of your career, which taboos have been overcome and which remain when it comes to artificial intelligence? How should these ideas be confronted? Taboos exist in abundance. There are more, and new ones, every day. You have to ask yourself two things: what do my clients need? And how can I be (a, or the) best at what I do to help my clients? That is all you need to worry about in order to "innovate."

The legal tradition requires that we adapt, and adapt quickly, because we have: a) a duty of loyalty to help our clients with the best means available; b) a duty to improve the practice and administration of the law and of the system itself.

Legal AI and other basic techniques from other fields can massively boost both areas. To that end, the duty of professional competence demands operational knowledge and knowledge of the platforms, which are too useful to be ignored. That does not mean you should adopt everything. Be skeptical.

We are learning to sort complex human challenges into procedural structures that optimize outcomes for all citizens, of any background. We are learning how different local rules correlate with different classes of case outcomes. We are only getting started.

You began working with data analysis because of the Rwandan genocide. What did that experience teach you about the possibilities and limits of working with databases? It taught me that information architecture matters more than the number of PhDs, consultants, or millions of dollars of IT (information technology) budget you have at your disposal.

You have to match the IT infrastructure, the data design, to the goal of the team and the enterprise. The human enterprise, your client (and for us it was the dead), comes first. Everything else is a dependent variable.

Talent, budget, and so on are very important. But you do not necessarily need money to get serious results.

How do you view the term artificial intelligence? How do we get past the strangeness it generates? It is basically a marketing meme that was used to inspire funders to invest in computer science projects, starting many decades ago. A good commercial description of artificial intelligence, more practical and less jargon-laden, is: software that does analysis. Technically speaking, artificial intelligence is: data plus math.

If your data is terrible, the resulting AI will be too. If it is biased, or contains abusive communication, the output will be as well.

That is one of the reasons so many engineering-dominated legal tech and legal operations companies fail so spectacularly. You need highly skilled lawyers, technologists, mathematicians, and skeptical lawyers to develop the best technology/math.

Defining AI more simply also implies, precisely, that every artificial intelligence is unique, like a child. It is always developing, changing, and so on. That is the way to think about it. And, as with children, you can teach, but no parent can operationally control a child beyond a certain limit.

How can the use of data broaden access to justice and make it faster? I have never quite understood what the term "access to justice" means. Perhaps that is because most people, of every socioeconomic and ethnic background, share the common experience of not having that access.

I can draw analogies with other areas, though. A piece of software has a marginal cost of roughly zero. Each time one of us uses a search tool, it does not cost us the investment that was needed to build and refine that software. There are large fixed costs, but a low cost per user.

That is why software is a great business. If well governed, we can make it an even better modus operandi for a modern country. That is assuming we can avoid all the nightmares that could happen!

We can create legal AI software that helps everyone in an entire country. That software can be perfectly personalized and remain faithful to each individual. It can cost nearly zero for each incremental operation.

I created a package of methodologies called the Citizen's AI Lab that will be taken to many countries around the world, including Brazil, if people want to put it to work. It will do exactly that. Again, these systems can be used not only for each operation (use) by each individual, but also for each country.

In which situations is it not advisable for the justice system to use AI? Never for the decision-making itself. At this moment, in any case, and/or in my opinion, it is neither possible nor desirable to automate judicial decision-making.

On the other hand, judges can always benefit from artificial intelligence. What are the best practices? How many cases does a given judge have on the docket? Or across the whole court? How does that compare with other courts, and how might outcomes differ because of the cases themselves or because of economic, political, or other factors?

Are there protocols that help the parties reach early resolution of disputes? Are those outcomes fair? (a human question made possible by an empirical, AI-assisted foundation or platform). Or are outcomes simply driven by the litigants' relative access to litigation funding?

How do we structure things so that we have fewer pointless disputes in the courts? Which lawyers exhibit the most malicious and abusive filing behavior across all courts? How should the law be regulated?

These are questions we cannot even begin to ask without AI (read: math) to help us analyze large amounts of data.

What are the ethical limits on the use of databases? How do we prevent abuses? Good legal review is essential for every AI and data project that has a material impact on humanity. But to do that at scale, we lawyers also need legal mechanisms for reviewing AI.

I strongly support the current work on ethical artificial intelligence. Unfortunately, in the United States, and perhaps elsewhere, "ethical AI" is a kind of "red herring" used to keep lawyers from meddling in lucrative and fun engineering projects. That has been a political, operational, and commercial disaster in many cases.

We [lawyers] need to start becoming co-creators because, while software engineers remember the data, we remember the story and the histories. We are the readers. Our AIs are imbued with a different kind of meaning; they evolved from a different kind of education. Computer scientists and lawyers/legal scholars are closely aligned, but our job needs to be that of guardians of social memory.

A Datafolha survey of Brazilian lawyers showed that only 29% of the 303 respondents used AI tools in their day-to-day work. What is it like in the US? What is needed to advance further? What I observed in São Paulo's legal-tech "microclimate" was that the "taboo" against using legal technology has been practically eliminated. Of course, that is a microclimate and may not be representative, or may even be counter-representative. But people may be using AI every day in practice without being aware of it. Search engines are a very simple example. We have to know what something is before we know how much we actually use it.

In the US: I suspect adoption is still in the first "quarter" of the game for AI applications in law. Litigation and contracts are reasonably well-established use cases. In fact, I do not think you can be a national-level intellectual property lawyer without being powered by some form of empirical data.

Data analysis courses for law students are still rare in Brazil. Given that gap, what should professionals do to adapt to this new reality? What is the risk for those who do nothing? I would start by teaching civil procedure with data. Here is the rule, here is how people apply the rule (what they file), and here is what happens when they do (the consequences). That would be revolutionary. Students, professors, and PhDs could develop all kinds of studies and social utilities.

There are countless other examples. Academics need to lead this in partnership with judges, regulators, the press, and the Bar Association.

In fact, my best advice for new students is: assume all data is false until proven otherwise. And the more sophisticated the form, the more voluminous the definition, the deeper you should dig.

PROFILE

Joshua Walker

Author of the book "On Legal AI – Um Rápido Tratado sobre a Inteligência Artificial no Direito" (Revista dos Tribunais, 2021) and a director at Aon IPS. A Harvard graduate with a doctorate from the University of Chicago Law School, he was a co-founder of CodeX, Stanford University's legal informatics center, and founder of Lex Machina, a pioneering legal-technology company. He has also taught at Stanford and Berkeley.

Artificial intelligence and the ethical dilemmas we are not ready to solve (Estadão)

politica.estadao.com.br

André Aléxis de Almeida, October 4, 2021

During the novel coronavirus pandemic, we saw a series of innovations linked to artificial intelligence (AI) emerge. One example was the project "IACOV-BR: Inteligência Artificial para Covid-19 no Brasil," from the Laboratory of Big Data and Predictive Analytics in Health at the University of São Paulo (USP) School of Public Health, which develops machine learning algorithms to anticipate the diagnosis and prognosis of the disease and is run with partner hospitals in several regions of Brazil to support physicians and administrators.

Meanwhile, a study by the Federal University of São Paulo (Unifesp), in partnership with Rede D'Or and the Instituto Tecnológico de Aeronáutica (ITA), found, in a pilot phase, that it is possible to quickly identify the severity of SARS-CoV-2 infections treated in emergency rooms by using AI to analyze a range of clinical markers and patients' blood tests.

These are just two examples (both Brazilian) out of countless cases showing how the development and refinement of AI can benefit society. We must stress, however, that technology is the proverbial double-edged sword. On one side, it moves humanity forward, optimizes processes, and drives disruption. On the other, it creates divergences and paradoxes and raises problems and dilemmas that once seemed unimaginable.

In 2020, for example, the Detroit police department, in the Midwestern United States, was sued for arresting a Black man wrongly identified by facial recognition software as the perpetrator of a theft.

In addition, a study published in the journal Science in October 2019 found that software used in hospital care in the US favored white patients over Black patients in the queue for special programs aimed at treating chronic diseases such as kidney problems and diabetes. The technology, according to the researchers, had been developed by a subsidiary of an insurance company and was used in the care of approximately 70 million patients.

More recently, in 2021, the Russian startup Xsolla fired around 150 employees based on big data analysis. Employee data from tools such as Jira (task-tracking and project-management software), Gmail, and the corporate wiki Confluence, along with conversations and documents, was evaluated to classify workers as "engaged" and "productive" in the remote work environment. Those who fell short of expectations were dismissed. Controversial, to say the least, since an evaluation of results was replaced by simple monitoring of employees.

Again, these are just a few examples in a sea of many others involving similar controversies, a reality that managers are not prepared to handle. The study "The State of Responsible AI: 2021," produced by FICO in partnership with the market intelligence firm Corinium, found that 65% of organizations cannot explain how their AI models' decisions or predictions are made. The survey was based on conversations with 100 leaders of large global companies, including Brazilian ones. In addition, 73% of respondents said they were struggling to obtain executive support for prioritizing ethics and responsible AI practices.

Artificial intelligence software and applications, which involve techniques such as big data and machine learning, are not perfect precisely because they were programmed by human beings. There is a difference, perhaps subtle at first glance, between being intelligent and being wise, and machines, at least for now, are not wise. In an algorithmic world, responsible AI, guided by ethics, should be the governance model. By all indications, however, as the FICO study showed, neither executives nor programmers know how to find their way in that direction.

This is where regulatory frameworks come in: they shed light on an issue, seek to prevent conflicts, and, should conflicts occur, show how the problems are to be resolved.

Just as it did with personal data protection, the European Union is seeking to take the lead and become a global model for AI regulation. There, the debate is still incipient, but it already involves points such as creating an authority to promote AI rules in each country of the European Union (EU). The regulation also targets AI that could endanger citizens' safety and fundamental rights, as well as the need for greater transparency in the use of automation, such as chatbots.

In Brazil, the Legal Framework for Artificial Intelligence (Bill 21/2020) is already moving through the National Congress, and the Chamber of Deputies approved an urgency regime for it, which waives some procedural formalities. Beyond all the problems stemming from the lack of in-depth debate on the subject in the legislature, the bill's substitute text proved to be a real bombshell with respect to liability, providing that:

"(…) rules on the liability of agents operating in the development and operation chain of artificial intelligence systems must, unless otherwise provided, be based on fault-based liability, take into account the effective participation of those agents, the specific harms to be avoided or remedied, and how those agents can demonstrate compliance with the applicable rules through reasonable efforts consistent with international standards and market best practices."

While strict liability requires only proof of a causal link, fault-based liability presupposes intent or fault in the conduct. This means that agents operating in the development and operation chain of AI systems will only answer for harm caused by those systems if it is proven that they intended the harmful result or acted negligently, recklessly, or without due skill. Moreover, who are these agents? There is no definition whatsoever of who these operators would be.

In the rush to regulate, we run the risk of ending up, as with so many other laws in our country, with legislation that exists only for show, that hinders more than it helps and that, instead of doing justice, is in fact unjust. For now, there are no recorded cases in Brazil like the ones described at the beginning of this text, but inevitably there will be. It is only a matter of time. And when that happens, the risk is that we will be holding legislation that is incompatible with constitutional principles and that, rather than protecting citizens, leaves them even more vulnerable.

*André Aléxis de Almeida is a lawyer specializing in constitutional law, holds a master's degree in business law, and serves as a legal mentor to companies

Is everything in the world a little bit conscious? (MIT Technology Review)

technologyreview.com

Christof Koch – August 25, 2021

The idea that consciousness is widespread is attractive to many for intellectual and, perhaps, also emotional reasons. But can it be tested? Surprisingly, perhaps it can.

Panpsychism is the belief that consciousness is found throughout the universe—not only in people and animals, but also in trees, plants, and bacteria. Panpsychists hold that some aspect of mind is present even in elementary particles. The idea that consciousness is widespread is attractive to many for intellectual and, perhaps, also emotional reasons. But can it be empirically tested? Surprisingly, perhaps it can. That’s because one of the most popular scientific theories of consciousness, integrated information theory (IIT), shares many—though not all—features of panpsychism.

As the American philosopher Thomas Nagel has argued, something is conscious if there is “something that it is like to be” that thing in the state that it is in. A human brain in a state of wakefulness feels like something specific. 

IIT specifies a unique number, a system’s integrated information, labeled by the Greek letter φ (pronounced phi). If φ is zero, the system does not feel like anything; indeed, the system does not exist as a whole, as it is fully reducible to its constituent components. The larger φ, the more conscious a system is, and the more irreducible. Given an accurate and complete description of a system, IIT predicts both the quantity and the quality of its experience (if any). IIT predicts that because of the structure of the human brain, people have high values of φ, while animals have smaller (but positive) values and classical digital computers have almost none.
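
The full φ calculus defined by IIT is considerably more involved than this description (it searches over every way of partitioning a system and compares cause-effect structures), but a toy computation conveys what "irreducible to its constituent components" means. The sketch below is an illustrative simplification under assumptions of my own, not the published algorithm: it scores a hypothetical two-node network by how much harder its future becomes to predict once the network is cut into independent parts.

```python
# Toy "integration" score loosely inspired by IIT. This is NOT the
# published phi algorithm, only an illustration of "the whole predicts
# more than its parts" for a hypothetical 2-node binary network.
from itertools import product
from math import log2
from collections import defaultdict

def step(state):
    """Update rule of the toy network: each node copies the other node."""
    a, b = state
    return (b, a)

STATES = list(product([0, 1], repeat=2))  # uniform prior over current states

def cond_entropy(view_current, view_next):
    """H(next nodes in view_next | current nodes in view_current),
    with the full current state drawn uniformly."""
    joint, marg = defaultdict(float), defaultdict(float)
    for s in STATES:
        cur = tuple(s[i] for i in view_current)
        nxt = tuple(step(s)[i] for i in view_next)
        joint[(cur, nxt)] += 1 / len(STATES)
        marg[cur] += 1 / len(STATES)
    return -sum(p * log2(p / marg[cur]) for (cur, _), p in joint.items())

# Uncertainty about the future for the intact system vs. the cut system.
h_whole = cond_entropy((0, 1), (0, 1))                        # whole -> whole
h_cut = cond_entropy((0,), (0,)) + cond_entropy((1,), (1,))   # each part alone

print(f"toy integration: {h_cut - h_whole:.1f} bits")  # 2.0 for this network
```

For the copy network, each node on its own knows nothing about its next state, so the cut costs two bits of predictability; two self-looping nodes that ignore each other would score zero.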

A person’s value of φ is not constant. It increases during early childhood with the development of the self and may decrease with onset of dementia and other cognitive impairments. φ will fluctuate during sleep, growing larger during dreams and smaller in deep, dreamless states. 

IIT starts by identifying five true and essential properties of any and every conceivable conscious experience. For example, experiences are definite (exclusion). This means that an experience is not less than it is (experiencing only the sensation of the color blue but not the moving ocean that brought the color to mind), nor is it more than it is (say, experiencing the ocean while also being aware of the canopy of trees behind one’s back). In a second step, IIT derives five associated physical properties that any system—brain, computer, pine tree, sand dune—has to exhibit in order to feel like something. A “mechanism” in IIT is anything that has a causal role in a system; this could be a logical gate in a computer or a neuron in the brain. IIT says that consciousness arises only in systems of mechanisms that have a particular structure. To simplify somewhat, that structure must be maximally integrated—not accurately describable by breaking it into its constituent parts. It must also have cause-and-effect power upon itself, which is to say the current state of a given mechanism must constrain the future states of not only that particular mechanism, but the system as a whole. 

Given a precise physical description of a system, the theory provides a way to calculate the φ of that system. The technical details of how this is done are complicated, but the upshot is that one can, in principle, objectively measure the φ of a system so long as one has such a precise description of it. (We can compute the φ of computers because, having built them, we understand them precisely. Computing the φ of a human brain is still an estimate.)

Systems can be evaluated at different levels—one could measure the φ of a sugar-cube-size piece of my brain, or of my brain as a whole, or of me and you together. Similarly, one could measure the φ of a silicon atom, of a particular circuit on a microchip, or of an assemblage of microchips that make up a supercomputer. Consciousness, according to the theory, exists for systems for which φ is at a maximum. It exists for all such systems, and only for such systems. 

The φ of my brain is bigger than the φ values of any of its parts, however one sets out to subdivide it. So I am conscious. But the φ of me and you together is less than my φ or your φ, so we are not “jointly” conscious. If, however, a future technology could create a dense communication hub between my brain and your brain, then such brain-bridging would create a single mind, distributed across four cortical hemispheres. 

Conversely, the φ of a supercomputer is less than the φs of any of the circuits composing it, so a supercomputer—however large and powerful—is not conscious. The theory predicts that even if some deep-learning system could pass the Turing test, it would be a so-called “zombie”—simulating consciousness, but not actually conscious. 

Like panpsychism, then, IIT considers consciousness an intrinsic, fundamental property of reality that is graded and most likely widespread in the tree of life, since any system with a non-zero amount of integrated information will feel like something. This does not imply that a bee feels obese or makes weekend plans. But a bee can feel a measure of happiness when returning pollen-laden in the sun to its hive. When a bee dies, it ceases to experience anything. Likewise, given the vast complexity of even a single cell, with millions of proteins interacting, it may feel a teeny-tiny bit like something. 

Debating the nature of consciousness might at first sound like an academic exercise, but it has real and important consequences. Most obviously, it matters to how we think about people in vegetative states. Such patients may groan or otherwise move unprovoked but fail to respond to commands to signal in a purposeful manner by moving their eyes or nodding. Are they conscious minds, trapped in their damaged body, able to perceive but unable to respond? Or are they without consciousness?

Evaluating such patients for the presence of consciousness is tricky. IIT proponents have developed a procedure that can test for consciousness in an unresponsive person. First they set up a network of EEG electrodes that can measure electrical activity in the brain. Then they stimulate the brain with a gentle magnetic pulse, and record the echoes of that pulse. They can then calculate a mathematical measure of the complexity of those echoes, called a perturbational complexity index (PCI).

In healthy, conscious individuals—or in people who have brain damage but are clearly conscious—the PCI is always above a particular threshold. On the other hand, 100% of the time, if healthy people are asleep, their PCI is below that threshold (0.31). So it is reasonable to take PCI as a proxy for the presence of a conscious mind. If the PCI of someone in a persistent vegetative state is always measured to be below this threshold, we can with confidence say that this person is not covertly conscious. 
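
The clinical PCI pipeline involves source modeling, statistical thresholding of the evoked responses, and a specific Lempel-Ziv normalization, none of which is reproduced here. The sketch below only illustrates the underlying intuition with invented data: binarize a channels-by-time response matrix and score how compressible its spatiotemporal pattern is, using an LZ78-style phrase count as a stand-in for the real complexity measure.

```python
# Illustrative PCI-like score on made-up data; a stand-in for the real
# perturbational complexity index, not the clinical procedure.
import numpy as np

def lz_phrase_count(bits: str) -> int:
    """LZ78-style parsing: count phrases, each phrase being the shortest
    prefix not produced as a phrase before."""
    phrases, current, count = set(), "", 0
    for b in bits:
        current += b
        if current not in phrases:
            phrases.add(current)
            count += 1
            current = ""
    return count + (1 if current else 0)

def pci_like(evoked: np.ndarray, threshold: float) -> float:
    """Binarize |response| against a threshold, flatten channels x time,
    and normalize the phrase count by that of a shuffled surrogate."""
    binary = (np.abs(evoked) > threshold).astype(int)
    bits = "".join(map(str, binary.ravel()))
    surrogate = "".join(map(str, np.random.permutation(binary.ravel())))
    return lz_phrase_count(bits) / max(lz_phrase_count(surrogate), 1)

rng = np.random.default_rng(0)
complex_resp = rng.normal(size=(32, 200))                  # differentiated pattern
stereotyped = np.tile(rng.normal(size=(1, 200)), (32, 1))  # every channel alike
print(pci_like(complex_resp, 1.0), pci_like(stereotyped, 1.0))
```

A stereotyped response, with every channel doing the same thing, compresses well and scores low; a spatially differentiated response scores close to its shuffled surrogate.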

This method is being investigated in a number of clinical centers across the US and Europe. Other tests seek to validate the predictions that IIT makes about the location and timing of the footprints of sensory consciousness in the brains of humans, nonhuman primates, and mice. 

Unlike panpsychism, the startling claims of IIT can be empirically tested. If they hold up, science may have found a way to cut through a knot that has puzzled philosophers for as long as philosophy has existed.

Christof Koch is the chief scientist of the MindScope program at the Allen Institute for Brain Science in Seattle.


Review: Why Facebook can never fix itself (MIT Technology Review)

technologyreview.com

Karen Hao – July 21, 2021


The Facebook engineer was itching to know why his date hadn’t responded to his messages. Perhaps there was a simple explanation—maybe she was sick or on vacation.

So at 10 p.m. one night in the company’s Menlo Park headquarters, he brought up her Facebook profile on the company’s internal systems and began looking at her personal data. Her politics, her lifestyle, her interests—even her real-time location.

The engineer would be fired for his behavior, along with 51 other employees who had abused their access to company data, a privilege that was then available to everyone who worked at Facebook, regardless of their job function or seniority. The vast majority of the 51 were just like him: men looking up information about the women they were interested in.

In September 2015, after Alex Stamos, the new chief security officer, brought the issue to Mark Zuckerberg’s attention, the CEO ordered a system overhaul to restrict employee access to user data. It was a rare victory for Stamos, one in which he convinced Zuckerberg that Facebook’s design was to blame, rather than individual behavior.

So begins An Ugly Truth, a new book about Facebook written by veteran New York Times reporters Sheera Frenkel and Cecilia Kang. With Frenkel’s expertise in cybersecurity, Kang’s expertise in technology and regulatory policy, and their deep well of sources, the duo provide a compelling account of Facebook’s years spanning the 2016 and 2020 elections.

Stamos would not be so lucky again. The issues stemming from Facebook's business model only escalated in the years that followed, but as Stamos unearthed more egregious problems, including Russian interference in US elections, he was pushed out for making Zuckerberg and Sheryl Sandberg face inconvenient truths. Once he left, the leadership continued to refuse to address a whole host of profoundly disturbing problems, including the Cambridge Analytica scandal, the genocide in Myanmar, and rampant covid misinformation.

The authors, Cecilia Kang and Sheera Frenkel

Frenkel and Kang argue that Facebook’s problems today are not the product of a company that lost its way. Instead they are part of its very design, built atop Zuckerberg’s narrow worldview, the careless privacy culture he cultivated, and the staggering ambitions he chased with Sandberg.

When the company was still small, perhaps such a lack of foresight and imagination could be excused. But since then, Zuckerberg’s and Sandberg’s decisions have shown that growth and revenue trump everything else.

In a chapter titled “Company Over Country,” for example, the authors chronicle how the leadership tried to bury the extent of Russian election interference on the platform from the US intelligence community, Congress, and the American public. They censored the Facebook security team’s multiple attempts to publish details of what they had found, and cherry-picked the data to downplay the severity and partisan nature of the problem. When Stamos proposed a redesign of the company’s organization to prevent a repeat of the issue, other leaders dismissed the idea as “alarmist” and focused their resources on getting control of the public narrative and keeping regulators at bay.

In 2014, a similar pattern began to play out in Facebook’s response to the escalating violence in Myanmar, detailed in the chapter “Think Before You Share.” A year prior, Myanmar-based activists had already begun to warn the company about the concerning levels of hate speech and misinformation on the platform being directed at the country’s Rohingya Muslim minority. But driven by Zuckerberg’s desire to expand globally, Facebook didn’t take the warnings seriously.

When riots erupted in the country, the company further underscored their priorities. It remained silent in the face of two deaths and fourteen injured but jumped in the moment the Burmese government cut off Facebook access for the country. Leadership then continued to delay investments and platform changes that could have prevented the violence from getting worse because it risked reducing user engagement. By 2017, ethnic tensions had devolved into a full-blown genocide, which the UN later found had been “substantively contributed to” by Facebook, resulting in the killing of more than 24,000 Rohingya Muslims.

This is what Frenkel and Kang call Facebook's "ugly truth": its "irreconcilable dichotomy" of wanting to connect people to advance society while also enriching its bottom line. Chapter after chapter makes abundantly clear that it isn't possible to satisfy both—and Facebook has time and again chosen the latter at the expense of the former.

The book is as much a feat of storytelling as it is reporting. Whether you have followed Facebook’s scandals closely as I have, or only heard bits and pieces at a distance, Frenkel and Kang weave it together in a way that leaves something for everyone. The detailed anecdotes take readers behind the scenes into Zuckerberg’s conference room known as “Aquarium,” where key decisions shaped the course of the company. The pacing of each chapter guarantees fresh revelations with every turn of the page.

While I recognized each of the events that the authors referenced, the degree to which the company sought to protect itself at the cost of others was still worse than I had previously known. Meanwhile, my partner, who read it side by side with me and falls squarely into the second category of reader, repeatedly looked up, stunned by what he had learned.

The authors keep their own analysis light, preferring to let the facts speak for themselves. In this spirit, they demur at the end of their account from making any hard conclusions about what to do with Facebook, or where this leaves us. “Even if the company undergoes a radical transformation in the coming year,” they write, “that change is unlikely to come from within.” But between the lines, the message is loud and clear: Facebook will never fix itself.

What AI still can’t do (MIT Technology Review)

technologyreview.com

Brian Bergstein

February 19, 2020


Machine-learning systems can be duped or confounded by situations they haven’t seen before. A self-driving car gets flummoxed by a scenario that a human driver could handle easily. An AI system laboriously trained to carry out one task (identifying cats, say) has to be taught all over again to do something else (identifying dogs). In the process, it’s liable to lose some of the expertise it had in the original task. Computer scientists call this problem “catastrophic forgetting.”

These shortcomings have something in common: they exist because AI systems don’t understand causation. They see that some events are associated with other events, but they don’t ascertain which things directly make other things happen. It’s as if you knew that the presence of clouds made rain likelier, but you didn’t know clouds caused rain.

Elias Bareinboim: AI systems are clueless when it comes to causation.

Understanding cause and effect is a big aspect of what we call common sense, and it’s an area in which AI systems today “are clueless,” says Elias Bareinboim. He should know: as the director of the new Causal Artificial Intelligence Lab at Columbia University, he’s at the forefront of efforts to fix this problem.

His idea is to infuse artificial-intelligence research with insights from the relatively new science of causality, a field shaped to a huge extent by Judea Pearl, a Turing Award–winning scholar who considers Bareinboim his protégé.

As Bareinboim and Pearl describe it, AI’s ability to spot correlations—e.g., that clouds make rain more likely—is merely the simplest level of causal reasoning. It’s good enough to have driven the boom in the AI technique known as deep learning over the past decade. Given a great deal of data about familiar situations, this method can lead to very good predictions. A computer can calculate the probability that a patient with certain symptoms has a certain disease, because it has learned just how often thousands or even millions of other people with the same symptoms had that disease.
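
That "seen it before" kind of prediction is purely associational: it amounts to conditioning on observed frequencies. A minimal sketch with made-up patient records, with no claim about how any real diagnostic system is built:

```python
# Rung one of the causal ladder: estimate P(disease | symptoms) by counting
# how often past patients with the same symptoms had the disease.
records = [
    # (has_fever, has_cough, has_disease) -- invented data
    (1, 1, 1), (1, 1, 1), (1, 0, 0), (0, 1, 0),
    (1, 1, 0), (0, 0, 0), (1, 1, 1), (0, 1, 1),
]

def p_disease_given(symptoms):
    matching = [r for r in records if (r[0], r[1]) == symptoms]
    return sum(r[2] for r in matching) / len(matching) if matching else None

print(p_disease_given((1, 1)))  # 0.75: frequency among fever + cough patients
```

Nothing in this estimate says whether treating the cough would change the disease; it only reports how often the two have gone together.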

But there’s a growing consensus that progress in AI will stall if computers don’t get better at wrestling with causation. If machines could grasp that certain things lead to other things, they wouldn’t have to learn everything anew all the time—they could take what they had learned in one domain and apply it to another. And if machines could use common sense we’d be able to put more trust in them to take actions on their own, knowing that they aren’t likely to make dumb errors.

Today’s AI has only a limited ability to infer what will result from a given action. In reinforcement learning, a technique that has allowed machines to master games like chess and Go, a system uses extensive trial and error to discern which moves will essentially cause them to win. But this approach doesn’t work in messier settings in the real world. It doesn’t even leave a machine with a general understanding of how it might play other games.

An even higher level of causal thinking would be the ability to reason about why things happened and ask “what if” questions. A patient dies while in a clinical trial; was it the fault of the experimental medicine or something else? School test scores are falling; what policy changes would most improve them? This kind of reasoning is far beyond the current capability of artificial intelligence.

Performing miracles

The dream of endowing computers with causal reasoning drew Bareinboim from Brazil to the United States in 2008, after he completed a master’s in computer science at the Federal University of Rio de Janeiro. He jumped at an opportunity to study under Judea Pearl, a computer scientist and statistician at UCLA. Pearl, 83, is a giant—the giant—of causal inference, and his career helps illustrate why it’s hard to create AI that understands causality.

Even well-trained scientists are apt to misinterpret correlations as signs of causation—or to err in the opposite direction, hesitating to call out causation even when it’s justified. In the 1950s, for example, a few prominent statisticians muddied the waters around whether tobacco caused cancer. They argued that without an experiment randomly assigning people to be smokers or nonsmokers, no one could rule out the possibility that some unknown—stress, perhaps, or some gene—caused people both to smoke and to get lung cancer.

Eventually, the fact that smoking causes cancer was definitively established, but it needn’t have taken so long. Since then, Pearl and other statisticians have devised a mathematical approach to identifying what facts would be required to support a causal claim. Pearl’s method shows that, given the prevalence of smoking and lung cancer, an independent factor causing both would be extremely unlikely.

Conversely, Pearl’s formulas also help identify when correlations can’t be used to determine causation. Bernhard Schölkopf, who researches causal AI techniques as a director at Germany’s Max Planck Institute for Intelligent Systems, points out that you can predict a country’s birth rate if you know its population of storks. That isn’t because storks deliver babies or because babies attract storks, but probably because economic development leads to more babies and more storks. Pearl has helped give statisticians and computer scientists ways of attacking such problems, Schölkopf says.

Judea Pearl: His theory of causal reasoning has transformed science.

Pearl’s work has also led to the development of causal Bayesian networks—software that sifts through large amounts of data to detect which variables appear to have the most influence on other variables. For example, GNS Healthcare, a company in Cambridge, Massachusetts, uses these techniques to advise researchers about experiments that look promising.

In one project, GNS worked with researchers who study multiple myeloma, a kind of blood cancer. The researchers wanted to know why some patients with the disease live longer than others after getting stem-cell transplants, a common form of treatment. The software churned through data with 30,000 variables and pointed to a few that seemed especially likely to be causal. Biostatisticians and experts in the disease zeroed in on one in particular: the level of a certain protein in patients’ bodies. Researchers could then run a targeted clinical trial to see whether patients with the protein did indeed benefit more from the treatment. “It’s way faster than poking here and there in the lab,” says GNS cofounder Iya Khalil.
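
The GNS analysis itself is proprietary and runs at a far larger scale, but the flavor of "pointing to a few variables that seem especially likely to be causal" can be imitated with a crude screen: keep the candidates whose association with the outcome survives adjustment for all the others. The sketch below uses simulated data and invented variable names; it is a rough proxy, not the company's method or a full causal-network algorithm.

```python
# Crude causal screening on simulated patient data: rank candidate
# variables by their partial correlation with the outcome, controlling
# for the remaining candidates.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
protein = rng.normal(size=n)                    # genuinely influences survival
age = rng.normal(size=n)                        # also influences survival
marker = protein + rng.normal(size=n)           # merely downstream of protein
survival = 2.0 * protein - 1.0 * age + rng.normal(size=n)

candidates = {"protein": protein, "age": age, "marker": marker}

def partial_corr(x, y, others):
    """Correlate the parts of x and y not explained (linearly) by `others`."""
    Z = np.column_stack(others + [np.ones(len(x))])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

for name, var in candidates.items():
    others = [v for k, v in candidates.items() if k != name]
    print(name, round(partial_corr(var, survival, others), 2))
```

In this toy setup, "marker" correlates with survival on its own only because it sits downstream of "protein"; the adjusted score exposes that, which is the kind of pruning that lets experts zero in on a handful of variables worth testing in a trial.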

Nonetheless, the improvements that Pearl and other scholars have achieved in causal theory haven’t yet made many inroads in deep learning, which identifies correlations without too much worry about causation. Bareinboim is working to take the next step: making computers more useful tools for human causal explorations.

Pearl says AI can’t be truly intelligent until it has a rich understanding of cause and effect, which would enable the introspection that is at the core of cognition.

One of his systems, which is still in beta, can help scientists determine whether they have sufficient data to answer a causal question. Richard McElreath, an anthropologist at the Max Planck Institute for Evolutionary Anthropology, is using the software to guide research into why humans go through menopause (we are the only apes that do).

The hypothesis is that the decline of fertility in older women benefited early human societies because women who put more effort into caring for grandchildren ultimately had more descendants. But what evidence might exist today to support the claim that children do better with grandparents around? Anthropologists can’t just compare the educational or medical outcomes of children who have lived with grandparents and those who haven’t. There are what statisticians call confounding factors: grandmothers might be likelier to live with grandchildren who need the most help. Bareinboim’s software can help McElreath discern which studies about kids who grew up with their grandparents are least riddled with confounding factors and could be valuable in answering his causal query. “It’s a huge step forward,” McElreath says.

The last mile

Bareinboim talks fast and often gestures with two hands in the air, as if he’s trying to balance two sides of a mental equation. It was halfway through the semester when I visited him at Columbia in October, but it seemed as if he had barely moved into his office—hardly anything on the walls, no books on the shelves, only a sleek Mac computer and a whiteboard so dense with equations and diagrams that it looked like a detail from a cartoon about a mad professor.

He shrugged off the provisional state of the room, saying he had been very busy giving talks about both sides of the causal revolution. Bareinboim believes work like his offers the opportunity not just to incorporate causal thinking into machines, but also to improve it in humans.

Getting people to think more carefully about causation isn’t necessarily much easier than teaching it to machines, he says. Researchers in a wide range of disciplines, from molecular biology to public policy, are sometimes content to unearth correlations that are not actually rooted in causal relationships. For instance, some studies suggest drinking alcohol will kill you early, while others indicate that moderate consumption is fine and even beneficial, and still other research has found that heavy drinkers outlive nondrinkers. This phenomenon, known as the “reproducibility crisis,” crops up not only in medicine and nutrition but also in psychology and economics. “You can see the fragility of all these inferences,” says Bareinboim. “We’re flipping results every couple of years.”

He argues that anyone asking “what if”—medical researchers setting up clinical trials, social scientists developing pilot programs, even web publishers preparing A/B tests—should start not merely by gathering data but by using Pearl’s causal logic and software like Bareinboim’s to determine whether the available data could possibly answer a causal hypothesis. Eventually, he envisions this leading to “automated scientist” software: a human could dream up a causal question to go after, and the software would combine causal inference theory with machine-learning techniques to rule out experiments that wouldn’t answer the question. That might save scientists from a huge number of costly dead ends.

Bareinboim described this vision while we were sitting in the lobby of MIT’s Sloan School of Management, after a talk he gave last fall. “We have a building here at MIT with, I don’t know, 200 people,” he said. How do those social scientists, or any scientists anywhere, decide which experiments to pursue and which data points to gather? By following their intuition: “They are trying to see where things will lead, based on their current understanding.”

That’s an inherently limited approach, he said, because human scientists designing an experiment can consider only a handful of variables in their minds at once. A computer, on the other hand, can see the interplay of hundreds or thousands of variables. Encoded with “the basic principles” of Pearl’s causal calculus and able to calculate what might happen with new sets of variables, an automated scientist could suggest exactly which experiments the human researchers should spend their time on. Maybe some public policy that has been shown to work only in Texas could be made to work in California if a few causally relevant factors were better appreciated. Scientists would no longer be “doing experiments in the darkness,” Bareinboim said.

He also doesn’t think it’s that far off: “This is the last mile before the victory.”

What if?

Finishing that mile will probably require techniques that are just beginning to be developed. For example, Yoshua Bengio, a computer scientist at the University of Montreal who shared the 2018 Turing Award for his work on deep learning, is trying to get neural networks—the software at the heart of deep learning—to do “meta-learning” and notice the causes of things.

As things stand now, if you wanted a neural network to detect when people are dancing, you’d show it many, many images of dancers. If you wanted it to identify when people are running, you’d show it many, many images of runners. The system would learn to distinguish runners from dancers by identifying features that tend to be different in the images, such as the positions of a person’s hands and arms. But Bengio points out that fundamental knowledge about the world can be gleaned by analyzing the things that are similar or “invariant” across data sets. Maybe a neural network could learn that movements of the legs physically cause both running and dancing. Maybe after seeing these examples and many others that show people only a few feet off the ground, a machine would eventually understand something about gravity and how it limits human movement. Over time, with enough meta-learning about variables that are consistent across data sets, a computer could gain causal knowledge that would be reusable in many domains.

For his part, Pearl says AI can’t be truly intelligent until it has a rich understanding of cause and effect. Although causal reasoning wouldn’t be sufficient for an artificial general intelligence, it’s necessary, he says, because it would enable the introspection that is at the core of cognition. “What if” questions “are the building blocks of science, of moral attitudes, of free will, of consciousness,” Pearl told me.

You can’t draw Pearl into predicting how long it will take for computers to get powerful causal reasoning abilities. “I am not a futurist,” he says. But in any case, he thinks the first move should be to develop machine-learning tools that combine data with available scientific knowledge: “We have a lot of knowledge that resides in the human skull which is not utilized.”

Brian Bergstein, a former editor at MIT Technology Review, is deputy opinion editor at the Boston Globe.


We’re not prepared for the end of Moore’s Law (MIT Technology Review)

technologyreview.com

David Rotman


February 24, 2020

Moore’s argument was an economic one. Integrated circuits, with multiple transistors and other electronic devices interconnected with aluminum metal lines on a tiny square of silicon wafer, had been invented a few years earlier by Robert Noyce at Fairchild Semiconductor. Moore, the company’s R&D director, realized, as he wrote in 1965, that with these new integrated circuits, “the cost per component is nearly inversely proportional to the number of components.” It was a beautiful bargain—in theory, the more transistors you added, the cheaper each one got. Moore also saw that there was plenty of room for engineering advances to increase the number of transistors you could affordably and reliably put on a chip.

Soon these cheaper, more powerful chips would become what economists like to call a general purpose technology—one so fundamental that it spawns all sorts of other innovations and advances in multiple industries. A few years ago, leading economists credited the information technology made possible by integrated circuits with a third of US productivity growth since 1974. Almost every technology we care about, from smartphones to cheap laptops to GPS, is a direct reflection of Moore’s prediction. It has also fueled today’s breakthroughs in artificial intelligence and genetic medicine, by giving machine-learning techniques the ability to chew through massive amounts of data to find answers.

But how did a simple prediction, based on extrapolating from a graph of the number of transistors by year—a graph that at the time had only a few data points—come to define a half-century of progress? In part, at least, because the semiconductor industry decided it would.

The April 1965 issue of Electronics magazine, in which Moore's article appeared. (Wikimedia)

Moore wrote that “cramming more components onto integrated circuits,” the title of his 1965 article, would “lead to such wonders as home computers—or at least terminals connected to a central computer—automatic controls for automobiles, and personal portable communications equipment.” In other words, stick to his road map of squeezing ever more transistors onto chips and it would lead you to the promised land. And for the following decades, a booming industry, the government, and armies of academic and industrial researchers poured money and time into upholding Moore’s Law, creating a self-fulfilling prophecy that kept progress on track with uncanny accuracy. Though the pace of progress has slipped in recent years, the most advanced chips today have nearly 50 billion transistors.

Every year since 2001, MIT Technology Review has chosen the 10 most important breakthrough technologies of the year. It’s a list of technologies that, almost without exception, are possible only because of the computation advances described by Moore’s Law.

For some of the items on this year’s list the connection is obvious: consumer devices, including watches and phones, infused with AI; climate-change attribution made possible by improved computer modeling and data gathered from worldwide atmospheric monitoring systems; and cheap, pint-size satellites. Others on the list, including quantum supremacy, molecules discovered using AI, and even anti-aging treatments and hyper-personalized drugs, are due largely to the computational power available to researchers.

But what happens when Moore’s Law inevitably ends? Or what if, as some suspect, it has already died, and we are already running on the fumes of the greatest technology engine of our time?

RIP

“It’s over. This year that became really clear,” says Charles Leiserson, a computer scientist at MIT and a pioneer of parallel computing, in which multiple calculations are performed simultaneously. The newest Intel fabrication plant, meant to build chips with minimum feature sizes of 10 nanometers, was much delayed, delivering chips in 2019, five years after the previous generation of chips with 14-nanometer features. Moore’s Law, Leiserson says, was always about the rate of progress, and “we’re no longer on that rate.” Numerous other prominent computer scientists have also declared Moore’s Law dead in recent years. In early 2019, the CEO of the large chipmaker Nvidia agreed.

In truth, it’s been more a gradual decline than a sudden death. Over the decades, some, including Moore himself at times, fretted that they could see the end in sight, as it got harder to make smaller and smaller transistors. In 1999, an Intel researcher worried that the industry’s goal of making transistors smaller than 100 nanometers by 2005 faced fundamental physical problems with “no known solutions,” like the quantum effects of electrons wandering where they shouldn’t be.

For years the chip industry managed to evade these physical roadblocks. New transistor designs were introduced to better corral the electrons. New lithography methods using extreme ultraviolet radiation were invented when the wavelengths of visible light were too thick to precisely carve out silicon features of only a few tens of nanometers. But progress grew ever more expensive. Economists at Stanford and MIT have calculated that the research effort going into upholding Moore’s Law has risen by a factor of 18 since 1971.

Likewise, the fabs that make the most advanced chips are becoming prohibitively pricey. The cost of a fab is rising at around 13% a year, and is expected to reach $16 billion or more by 2022. Not coincidentally, the number of companies with plans to make the next generation of chips has now shrunk to only three, down from eight in 2010 and 25 in 2002.

Nonetheless, Intel—one of those three chipmakers—isn’t expecting a funeral for Moore’s Law anytime soon. Jim Keller, who took over as Intel’s head of silicon engineering in 2018, is the man with the job of keeping it alive. He leads a team of some 8,000 hardware engineers and chip designers at Intel. When he joined the company, he says, many were anticipating the end of Moore’s Law. If they were right, he recalls thinking, “that’s a drag” and maybe he had made “a really bad career move.”

But Keller found ample technical opportunities for advances. He points out that there are probably more than a hundred variables involved in keeping Moore’s Law going, each of which provides different benefits and faces its own limits. It means there are many ways to keep doubling the number of devices on a chip—innovations such as 3D architectures and new transistor designs.

These days Keller sounds optimistic. He says he has been hearing about the end of Moore’s Law for his entire career. After a while, he “decided not to worry about it.” He says Intel is on pace for the next 10 years, and he will happily do the math for you: 65 billion (number of transistors) times 32 (if chip density doubles every two years) is 2 trillion transistors. “That’s a 30 times improvement in performance,” he says, adding that if software developers are clever, we could get chips that are a hundred times faster in 10 years.

Still, even if Intel and the other remaining chipmakers can squeeze out a few more generations of even more advanced microchips, the days when you could reliably count on faster, cheaper chips every couple of years are clearly over. That doesn’t, however, mean the end of computational progress.

Time to panic

Neil Thompson is an economist, but his office is at CSAIL, MIT’s sprawling AI and computer center, surrounded by roboticists and computer scientists, including his collaborator Leiserson. In a new paper, the two document ample room for improving computational performance through better software, algorithms, and specialized chip architecture.

One opportunity is in slimming down so-called software bloat to wring the most out of existing chips. When chips could always be counted on to get faster and more powerful, programmers didn’t need to worry much about writing more efficient code. And they often failed to take full advantage of changes in hardware architecture, such as the multiple cores, or processors, seen in chips used today.

Thompson and his colleagues showed that they could get a computationally intensive calculation to run some 47 times faster just by switching from Python, a popular general-purpose programming language, to the more efficient C. That’s because C, while it requires more work from the programmer, greatly reduces the required number of operations, making a program run much faster. Further tailoring the code to take full advantage of a chip with 18 processing cores sped things up even more. In just 0.41 seconds, the researchers got a result that took seven hours with Python code.
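
The Thompson–Leiserson benchmark itself used hand-tuned C running on all 18 cores; the small sketch below is written for this archive, not taken from the study, and only shows where the interpreter overhead comes from by timing a pure-Python triple loop against NumPy's compiled routine on a matrix multiplication (chosen here merely as a representative computationally intensive calculation).

```python
# Illustrative only: compare interpreted Python loops with a compiled
# routine on the same matrix multiplication. The matrix size is kept
# small so the pure-Python version finishes in a few seconds.
import time
import numpy as np

n = 200
a = np.random.rand(n, n)
b = np.random.rand(n, n)

def matmul_pure_python(x, y):
    size = len(x)
    result = [[0.0] * size for _ in range(size)]
    for i in range(size):
        for j in range(size):
            s = 0.0
            for k in range(size):
                s += x[i][k] * y[k][j]
            result[i][j] = s
    return result

start = time.perf_counter()
matmul_pure_python(a.tolist(), b.tolist())
python_seconds = time.perf_counter() - start

start = time.perf_counter()
a @ b  # dispatches to optimized, compiled BLAS code
compiled_seconds = time.perf_counter() - start

print(f"pure Python: {python_seconds:.2f}s, compiled: {compiled_seconds:.5f}s")
```

The exact ratio depends on the machine, but the gap is routinely several orders of magnitude, which is the effect the researchers exploited at much larger scale by moving to C and then to all 18 cores.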

That sounds like good news for continuing progress, but Thompson worries it also signals the decline of computers as a general purpose technology. Rather than “lifting all boats,” as Moore’s Law has, by offering ever faster and cheaper chips that were universally available, advances in software and specialized architecture will now start to selectively target specific problems and business opportunities, favoring those with sufficient money and resources.

Indeed, the move to chips designed for specific applications, particularly in AI, is well under way. Deep learning and other AI applications increasingly rely on graphics processing units (GPUs) adapted from gaming, which can handle parallel operations, while companies like Google, Microsoft, and Baidu are designing AI chips for their own particular needs. AI, particularly deep learning, has a huge appetite for computer power, and specialized chips can greatly speed up its performance, says Thompson.

But the trade-off is that specialized chips are less versatile than traditional CPUs. Thompson is concerned that chips for more general computing are becoming a backwater, slowing “the overall pace of computer improvement,” as he writes in an upcoming paper, “The Decline of Computers as a General Purpose Technology.”

At some point, says Erica Fuchs, a professor of engineering and public policy at Carnegie Mellon, those developing AI and other applications will miss the decreases in cost and increases in performance delivered by Moore’s Law. “Maybe in 10 years or 30 years—no one really knows when—you’re going to need a device with that additional computation power,” she says.

The problem, says Fuchs, is that the successors to today’s general purpose chips are unknown and will take years of basic research and development to create. If you’re worried about what will replace Moore’s Law, she suggests, “the moment to panic is now.” There are, she says, “really smart people in AI who aren’t aware of the hardware constraints facing long-term advances in computing.” What’s more, she says, because application-specific chips are proving hugely profitable, there are few incentives to invest in new logic devices and ways of doing computing.

Wanted: A Marshall Plan for chips

In 2018, Fuchs and her CMU colleagues Hassan Khan and David Hounshell wrote a paper tracing the history of Moore’s Law and identifying what has changed to produce today’s lack of the industry and government collaboration that fostered so much progress in earlier decades. They argued that “the splintering of the technology trajectories and the short-term private profitability of many of these new splinters” means we need to greatly boost public investment in finding the next great computer technologies.

If economists are right, and much of the growth in the 1990s and early 2000s was a result of microchips—and if, as some suggest, the sluggish productivity growth that began in the mid-2000s reflects the slowdown in computational progress—then, says Thompson, “it follows you should invest enormous amounts of money to find the successor technology. We’re not doing it. And it’s a public policy failure.”

There’s no guarantee that such investments will pay off. Quantum computing, carbon nanotube transistors, even spintronics, are enticing possibilities—but none are obvious replacements for the promise that Gordon Moore first saw in a simple integrated circuit. We need the research investments now to find out, though. Because one prediction is pretty much certain to come true: we’re always going to want more computing power.

This story was part of our March 2020 issue.

Understanding fruit fly behavior may be next step toward autonomous vehicles (Science Daily)

Could the way Drosophila use antennae to sense heat help us teach self-driving cars to make decisions?

Date: April 6, 2021

Source: Northwestern University

Summary: With over 70% of respondents to a AAA annual survey on autonomous driving reporting they would fear being in a fully self-driving car, makers like Tesla may be back to the drawing board before rolling out fully autonomous self-driving systems. But new research shows us we may be better off putting fruit flies behind the wheel instead of robots.


With over 70% of respondents to a AAA annual survey on autonomous driving reporting they would fear being in a fully self-driving car, makers like Tesla may be back to the drawing board before rolling out fully autonomous self-driving systems. But new research from Northwestern University shows us we may be better off putting fruit flies behind the wheel instead of robots.

Drosophila have been subjects of science as long as humans have been running experiments in labs. But given their size, it’s easy to wonder what can be learned by observing them. Research published today in the journal Nature Communications demonstrates that fruit flies use decision-making, learning and memory to perform simple functions like escaping heat. And researchers are using this understanding to challenge the way we think about self-driving cars.

“The discovery that flexible decision-making, learning and memory are used by flies during such a simple navigational task is both novel and surprising,” said Marco Gallio, the corresponding author on the study. “It may make us rethink what we need to do to program safe and flexible self-driving vehicles.”

According to Gallio, an associate professor of neurobiology in the Weinberg College of Arts and Sciences, the questions behind this study are similar to those vexing engineers building cars that move on their own. How does a fruit fly (or a car) cope with novelty? How can we build a car that is flexibly able to adapt to new conditions?

This discovery reveals brain functions in the household pest that are typically associated with more complex brains like those of mice and humans.

“Animal behavior, especially that of insects, is often considered largely fixed and hard-wired — like machines,” Gallio said. “Most people have a hard time imagining that animals as different from us as a fruit fly may possess complex brain functions, such as the ability to learn, remember or make decisions.”

To study how fruit flies tend to escape heat, the Gallio lab built a tiny plastic chamber with four floor tiles whose temperatures could be independently controlled and confined flies inside. They then used high-resolution video recordings to map how a fly reacted when it encountered a boundary between a warm tile and a cool tile. They found flies were remarkably good at treating heat boundaries as invisible barriers to avoid pain or harm.

Using real measurements, the team created a 3D model to estimate the exact temperature of each part of the fly’s tiny body throughout the experiment. During other trials, they opened a window in the fly’s head and recorded brain activity in neurons that process external temperature signals.

Miguel Simões, a postdoctoral fellow in the Gallio lab and co-first author of the study, said flies are able to determine with remarkable accuracy if the best path to thermal safety is to the left or right. Mapping the direction of escape, Simões said flies “nearly always” escape left when they approach from the right, “like a tennis ball bouncing off a wall.”

“When flies encounter heat, they have to make a rapid decision,” Simões said. “Is it safe to continue, or should it turn back? This decision is highly dependent on how dangerous the temperature is on the other side.”

Observing the simple response reminded the scientists of one of the classic concepts in early robotics.

“In his famous book, the cyberneticist Valentino Braitenberg imagined simple models made of sensors and motors that could come close to reproducing animal behavior,” said Josh Levy, an applied math graduate student and a member of the labs of Gallio and applied math professor William Kath. “The vehicles are a combination of simple wires, but the resulting behavior appears complex and even intelligent.”

Braitenberg argued that much of animal behavior could be explained by the same principles. But does that mean fly behavior is as predictable as that of one of Braitenberg’s imagined robots?
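
To make the idea concrete, here is a toy Braitenberg-style vehicle in a few lines of Python. It is not the Northwestern team's simulation; the heat field, sensor placement, and update rule are all invented for illustration. Two heat sensors are wired directly to the steering, so the vehicle veers toward its cooler side with no memory or learning involved.

```python
import math

def heat_at(x, y, source=(0.0, 0.0)):
    # Hypothetical heat field that falls off with distance from a single source.
    return 1.0 / (1.0 + math.dist((x, y), source))

def step(x, y, heading, sensor_offset=0.2, speed=0.1):
    # Sample the field slightly to the left and right of the current heading.
    left = heat_at(x + sensor_offset * math.cos(heading + 0.5),
                   y + sensor_offset * math.sin(heading + 0.5))
    right = heat_at(x + sensor_offset * math.cos(heading - 0.5),
                    y + sensor_offset * math.sin(heading - 0.5))
    # More heat on the left pushes the heading to the right, and vice versa,
    # so the vehicle always turns toward its cooler side.
    heading += 0.5 * (right - left)
    return x + speed * math.cos(heading), y + speed * math.sin(heading), heading

x, y, heading = 0.5, 0.0, math.pi / 2
for _ in range(50):
    x, y, heading = step(x, y, heading)
print(round(x, 2), round(y, 2))  # final position, well away from the heat source
```

As in Braitenberg's thought experiment, the behavior looks purposeful even though nothing in the loop remembers, plans, or learns.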

The Northwestern team built a vehicle using a computer simulation of fly behavior with the same wiring and algorithm as a Braitenberg vehicle to see how closely they could replicate animal behavior. After running model race simulations, the team ran a natural selection process of sorts, choosing the cars that did best and mutating them slightly before recombining them with other high-performing vehicles. Levy ran 500 generations of evolution in the powerful NU computing cluster, building cars they ultimately hoped would do as well as flies at escaping the virtual heat.

This simulation demonstrated that “hard-wired” vehicles eventually evolved to perform nearly as well as flies. But while real flies continued to improve their performance over time, learning to adopt better strategies and become more efficient, the vehicles remained “dumb” and inflexible. The researchers also discovered that even as flies performed the simple task of escaping the heat, their behavior remained somewhat unpredictable, leaving space for individual decisions. Finally, the scientists observed that while flies missing an antenna adapted and figured out new strategies to escape heat, vehicles “damaged” in the same way were unable to cope with the new situation and turned in the direction of the missing part, eventually getting trapped in a spin like a dog chasing its tail.

Gallio said the idea that simple navigation contains such complexity provides fodder for future work in this area.

Work in the Gallio lab is supported by the NIH (Award Nos. R01NS086859 and R21EY031849), a Pew Scholars Program in the Biomedical Sciences award, and a McKnight Technological Innovation in Neuroscience Award.


Story Source:

Materials provided by Northwestern University. Original written by Lila Reynolds. Note: Content may be edited for style and length.


Journal Reference:

  1. José Miguel Simões, Joshua I. Levy, Emanuela E. Zaharieva, Leah T. Vinson, Peixiong Zhao, Michael H. Alpert, William L. Kath, Alessia Para, Marco Gallio. Robustness and plasticity in Drosophila heat avoidance. Nature Communications, 2021; 12 (1) DOI: 10.1038/s41467-021-22322-w

How Facebook got addicted to spreading misinformation (MIT Tech Review)

technologyreview.com

Karen Hao, March 11, 2021


Joaquin Quiñonero Candela, a director of AI at Facebook, was apologizing to his audience.

It was March 23, 2018, just days after the revelation that Cambridge Analytica, a consultancy that worked on Donald Trump’s 2016 presidential election campaign, had surreptitiously siphoned the personal data of tens of millions of Americans from their Facebook accounts in an attempt to influence how they voted. It was the biggest privacy breach in Facebook’s history, and Quiñonero had been previously scheduled to speak at a conference on, among other things, “the intersection of AI, ethics, and privacy” at the company. He considered canceling, but after debating it with his communications director, he’d kept his allotted time.

As he stepped up to face the room, he began with an admission. “I’ve just had the hardest five days in my tenure at Facebook,” he remembers saying. “If there’s criticism, I’ll accept it.”

The Cambridge Analytica scandal would kick off Facebook’s largest publicity crisis ever. It compounded fears that the algorithms that determine what people see on the platform were amplifying fake news and hate speech, and that Russian hackers had weaponized them to try to sway the election in Trump’s favor. Millions began deleting the app; employees left in protest; the company’s market capitalization plunged by more than $100 billion after its July earnings call.

In the ensuing months, Mark Zuckerberg began his own apologizing. He apologized for not taking “a broad enough view” of Facebook’s responsibilities, and for his mistakes as a CEO. Internally, Sheryl Sandberg, the chief operating officer, kicked off a two-year civil rights audit to recommend ways the company could prevent the use of its platform to undermine democracy.

Finally, Mike Schroepfer, Facebook’s chief technology officer, asked Quiñonero to start a team with a directive that was a little vague: to examine the societal impact of the company’s algorithms. The group named itself the Society and AI Lab (SAIL); last year it combined with another team working on issues of data privacy to form Responsible AI.

Quiñonero was a natural pick for the job. He, as much as anybody, was the one responsible for Facebook’s position as an AI powerhouse. In his six years at Facebook, he’d created some of the first algorithms for targeting users with content precisely tailored to their interests, and then he’d diffused those algorithms across the company. Now his mandate would be to make them less harmful.

Facebook has consistently pointed to the efforts by Quiñonero and others as it seeks to repair its reputation. It regularly trots out various leaders to speak to the media about the ongoing reforms. In May of 2019, it granted a series of interviews with Schroepfer to the New York Times, which rewarded the company with a humanizing profile of a sensitive, well-intentioned executive striving to overcome the technical challenges of filtering out misinformation and hate speech from a stream of content that amounted to billions of pieces a day. These challenges are so hard that they make Schroepfer emotional, wrote the Times: “Sometimes that brings him to tears.”

In the spring of 2020, it was apparently my turn. Ari Entin, Facebook’s AI communications director, asked in an email if I wanted to take a deeper look at the company’s AI work. After talking to several of its AI leaders, I decided to focus on Quiñonero. Entin happily obliged. As not only the leader of the Responsible AI team but also the man who had made Facebook into an AI-driven company, Quiñonero was a solid choice to use as a poster boy.

He seemed a natural choice of subject to me, too. In the years since he’d formed his team following the Cambridge Analytica scandal, concerns about the spread of lies and hate speech on Facebook had only grown. In late 2018 the company admitted that this activity had helped fuel a genocidal anti-Muslim campaign in Myanmar for several years. In 2020 Facebook started belatedly taking action against Holocaust deniers, anti-vaxxers, and the conspiracy movement QAnon. All these dangerous falsehoods were metastasizing thanks to the AI capabilities Quiñonero had helped build. The algorithms that underpin Facebook’s business weren’t created to filter out what was false or inflammatory; they were designed to make people share and engage with as much content as possible by showing them things they were most likely to be outraged or titillated by. Fixing this problem, to me, seemed like core Responsible AI territory.

I began video-calling Quiñonero regularly. I also spoke to Facebook executives, current and former employees, industry peers, and external experts. Many spoke on condition of anonymity because they’d signed nondisclosure agreements or feared retaliation. I wanted to know: What was Quiñonero’s team doing to rein in the hate and lies on its platform?

Joaquin Quiñonero Candela outside his home in the Bay Area, where he lives with his wife and three kids.

But Entin and Quiñonero had a different agenda. Each time I tried to bring up these topics, my requests to speak about them were dropped or redirected. They only wanted to discuss the Responsible AI team’s plan to tackle one specific kind of problem: AI bias, in which algorithms discriminate against particular user groups. An example would be an ad-targeting algorithm that shows certain job or housing opportunities to white people but not to minorities.

By the time thousands of rioters stormed the US Capitol in January, organized in part on Facebook and fueled by the lies about a stolen election that had fanned out across the platform, it was clear from my conversations that the Responsible AI team had failed to make headway against misinformation and hate speech because it had never made those problems its main focus. More important, I realized, if it tried to, it would be set up for failure.

The reason is simple. Everything the company does and chooses not to do flows from a single motivation: Zuckerberg’s relentless desire for growth. Quiñonero’s AI expertise supercharged that growth. His team got pigeonholed into targeting AI bias, as I learned in my reporting, because preventing such bias helps the company avoid proposed regulation that might, if passed, hamper that growth. Facebook leadership has also repeatedly weakened or halted many initiatives meant to clean up misinformation on the platform because doing so would undermine that growth.

In other words, the Responsible AI team’s work—whatever its merits on the specific problem of tackling AI bias—is essentially irrelevant to fixing the bigger problems of misinformation, extremism, and political polarization. And it’s all of us who pay the price.

“When you’re in the business of maximizing engagement, you’re not interested in truth. You’re not interested in harm, divisiveness, conspiracy. In fact, those are your friends,” says Hany Farid, a professor at the University of California, Berkeley who collaborates with Facebook to understand image- and video-based misinformation on the platform.

“They always do just enough to be able to put the press release out. But with a few exceptions, I don’t think it’s actually translated into better policies. They’re never really dealing with the fundamental problems.”

In March of 2012, Quiñonero visited a friend in the Bay Area. At the time, he was a manager in Microsoft Research’s UK office, leading a team using machine learning to get more visitors to click on ads displayed by the company’s search engine, Bing. His expertise was rare, and the team was less than a year old. Machine learning, a subset of AI, had yet to prove itself as a solution to large-scale industry problems. Few tech giants had invested in the technology.

Quiñonero’s friend wanted to show off his new employer, one of the hottest startups in Silicon Valley: Facebook, then eight years old and already with close to a billion monthly active users (i.e., those who have logged in at least once in the past 30 days). As Quiñonero walked around its Menlo Park headquarters, he watched a lone engineer make a major update to the website, something that would have involved significant red tape at Microsoft. It was a memorable introduction to Zuckerberg’s “Move fast and break things” ethos. Quiñonero was awestruck by the possibilities. Within a week, he had been through interviews and signed an offer to join the company.

His arrival couldn’t have been better timed. Facebook’s ads service was in the middle of a rapid expansion as the company was preparing for its May IPO. The goal was to increase revenue and take on Google, which had the lion’s share of the online advertising market. Machine learning, which could predict which ads would resonate best with which users and thus make them more effective, could be the perfect tool. Shortly after starting, Quiñonero was promoted to managing a team similar to the one he’d led at Microsoft.

Quiñonero started raising chickens in late 2019 as a way to unwind from the intensity of his job.

Unlike traditional algorithms, which are hard-coded by engineers, machine-learning algorithms “train” on input data to learn the correlations within it. The trained algorithm, known as a machine-learning model, can then automate future decisions. An algorithm trained on ad click data, for example, might learn that women click on ads for yoga leggings more often than men. The resultant model will then serve more of those ads to women. Today at an AI-based company like Facebook, engineers generate countless models with slight variations to see which one performs best on a given problem.
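
As a minimal sketch of that pattern (not Facebook's system), the snippet below trains a tiny logistic-regression model on invented ad-click data and then scores a new user; the traits, labels, and numbers are all made up, and scikit-learn is used purely as a stand-in for whatever tooling an ad platform actually runs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: each row is a user with illustrative traits
# [is_woman, age, liked_yoga_pages]; the label says whether that user
# clicked a leggings ad. Everything here is fabricated for illustration.
X = np.array([
    [1, 29, 1],
    [1, 34, 0],
    [0, 41, 0],
    [0, 25, 1],
    [1, 27, 1],
    [0, 52, 0],
])
y = np.array([1, 0, 0, 0, 1, 0])  # 1 = clicked the ad

model = LogisticRegression().fit(X, y)

# The trained model can now score a new user: the higher the predicted
# click probability, the more likely the system is to serve them the ad.
new_user = np.array([[1, 30, 1]])
print(model.predict_proba(new_user)[0, 1])
```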

Facebook’s massive amounts of user data gave Quiñonero a big advantage. His team could develop models that learned to infer the existence not only of broad categories like “women” and “men,” but of very fine-grained categories like “women between 25 and 34 who liked Facebook pages related to yoga,” and targeted ads to them. The finer-grained the targeting, the better the chance of a click, which would give advertisers more bang for their buck.

Within a year his team had developed these models, as well as the tools for designing and deploying new ones faster. Before, it had taken Quiñonero’s engineers six to eight weeks to build, train, and test a new model. Now it took only one.

News of the success spread quickly. The team that worked on determining which posts individual Facebook users would see on their personal news feeds wanted to apply the same techniques. Just as algorithms could be trained to predict who would click what ad, they could also be trained to predict who would like or share what post, and then give those posts more prominence. If the model determined that a person really liked dogs, for instance, friends’ posts about dogs would appear higher up on that user’s news feed.
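
Applied to the feed, the same idea boils down to scoring and sorting. The `rank_feed` helper and the scores below are hypothetical, standing in for a trained engagement model; a production ranking system weighs far more signals.

```python
def rank_feed(candidate_posts, predicted_engagement):
    # Posts the user is judged most likely to like or share go to the top.
    return sorted(candidate_posts, key=predicted_engagement, reverse=True)

# Made-up scores standing in for a model's predicted engagement probabilities:
scores = {"dog photo": 0.61, "news article": 0.07, "outrage bait": 0.83}
print(rank_feed(list(scores), scores.get))  # ['outrage bait', 'dog photo', 'news article']
```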

Quiñonero’s success with the news feed—coupled with impressive new AI research being conducted outside the company—caught the attention of Zuckerberg and Schroepfer. Facebook now had just over 1 billion users, making it more than eight times larger than any other social network, but they wanted to know how to continue that growth. The executives decided to invest heavily in AI, internet connectivity, and virtual reality.

They created two AI teams. One was FAIR, a fundamental research lab that would advance the technology’s state-of-the-art capabilities. The other, Applied Machine Learning (AML), would integrate those capabilities into Facebook’s products and services. In December 2013, after months of courting and persuasion, the executives recruited Yann LeCun, one of the biggest names in the field, to lead FAIR. Three months later, Quiñonero was promoted again, this time to lead AML. (It was later renamed FAIAR, pronounced “fire.”)

In his new role, Quiñonero built a new model-development platform for anyone at Facebook to access. Called FBLearner Flow, it allowed engineers with little AI experience to train and deploy machine-learning models within days. By mid-2016, it was in use by more than a quarter of Facebook’s engineering team and had already been used to train over a million models, including models for image recognition, ad targeting, and content moderation.

Zuckerberg’s obsession with getting the whole world to use Facebook had found a powerful new weapon. Teams had previously used design tactics, like experimenting with the content and frequency of notifications, to try to hook users more effectively. Their goal, among other things, was to increase a metric called L6/7, the fraction of people who logged in to Facebook six of the previous seven days. L6/7 is just one of myriad ways in which Facebook has measured “engagement”—the propensity of people to use its platform in any way, whether it’s by posting things, commenting on them, liking or sharing them, or just looking at them. Now every user interaction once analyzed by engineers was being analyzed by algorithms. Those algorithms were creating much faster, more personalized feedback loops for tweaking and tailoring each user’s news feed to keep nudging up engagement numbers.
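
For concreteness, a metric like L6/7 is simple to state in code. The sketch below assumes a hypothetical mapping from user IDs to the days of the past week on which they logged in; the function name and data layout are inventions for illustration, not Facebook's internal definitions.

```python
def l6_of_7(login_days_by_user):
    """Fraction of users who logged in on at least 6 of the previous 7 days.

    `login_days_by_user` maps a user id to the set of days (0-6) on which
    that user logged in during the week.
    """
    qualifying = sum(1 for days in login_days_by_user.values() if len(days) >= 6)
    return qualifying / len(login_days_by_user)

print(l6_of_7({"user_a": {0, 1, 2, 3, 4, 5}, "user_b": {0, 3}}))  # 0.5
```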

Zuckerberg, who sat in the center of Building 20, the main office at the Menlo Park headquarters, placed the new FAIR and AML teams beside him. Many of the original AI hires were so close that his desk and theirs were practically touching. It was “the inner sanctum,” says a former leader in the AI org (the branch of Facebook that contains all its AI teams), who recalls the CEO shuffling people in and out of his vicinity as they gained or lost his favor. “That’s how you know what’s on his mind,” says Quiñonero. “I was always, for a couple of years, a few steps from Mark’s desk.”

With new machine-learning models coming online daily, the company created a new system to track their impact and maximize user engagement. The process is still the same today. Teams train up a new machine-learning model on FBLearner, whether to change the ranking order of posts or to better catch content that violates Facebook’s community standards (its rules on what is and isn’t allowed on the platform). Then they test the new model on a small subset of Facebook’s users to measure how it changes engagement metrics, such as the number of likes, comments, and shares, says Krishna Gade, who served as the engineering manager for news feed from 2016 to 2018.

If a model reduces engagement too much, it’s discarded. Otherwise, it’s deployed and continually monitored. On Twitter, Gade explained that his engineers would get notifications every few days when metrics such as likes or comments were down. Then they’d decipher what had caused the problem and whether any models needed retraining.

But this approach soon caused issues. The models that maximize engagement also favor controversy, misinformation, and extremism: put simply, people just like outrageous stuff. Sometimes this inflames existing political tensions. The most devastating example to date is the case of Myanmar, where viral fake news and hate speech about the Rohingya Muslim minority escalated the country’s religious conflict into a full-blown genocide. Facebook admitted in 2018, after years of downplaying its role, that it had not done enough “to help prevent our platform from being used to foment division and incite offline violence.”

While Facebook may have been oblivious to these consequences in the beginning, it was studying them by 2016. In an internal presentation from that year, reviewed by the Wall Street Journal, a company researcher, Monica Lee, found that Facebook was not only hosting a large number of extremist groups but also promoting them to its users: “64% of all extremist group joins are due to our recommendation tools,” the presentation said, predominantly thanks to the models behind the “Groups You Should Join” and “Discover” features.

In 2017, Chris Cox, Facebook’s longtime chief product officer, formed a new task force to understand whether maximizing user engagement on Facebook was contributing to political polarization. It found that there was indeed a correlation, and that reducing polarization would mean taking a hit on engagement. In a mid-2018 document reviewed by the Journal, the task force proposed several potential fixes, such as tweaking the recommendation algorithms to suggest a more diverse range of groups for people to join. But it acknowledged that some of the ideas were “antigrowth.” Most of the proposals didn’t move forward, and the task force disbanded.

Since then, other employees have corroborated these findings. A former Facebook AI researcher who joined in 2018 says he and his team conducted “study after study” confirming the same basic idea: models that maximize engagement increase polarization. They could easily track how strongly users agreed or disagreed on different issues, what content they liked to engage with, and how their stances changed as a result. Regardless of the issue, the models learned to feed users increasingly extreme viewpoints. “Over time they measurably become more polarized,” he says.

The researcher’s team also found that users with a tendency to post or engage with melancholy content—a possible sign of depression—could easily spiral into consuming increasingly negative material that risked further worsening their mental health. The team proposed tweaking the content-ranking models for these users to stop maximizing engagement alone, so they would be shown less of the depressing stuff. “The question for leadership was: Should we be optimizing for engagement if you find that somebody is in a vulnerable state of mind?” he remembers. (A Facebook spokesperson said she could not find documentation for this proposal.)

But anything that reduced engagement, even for reasons such as not exacerbating someone’s depression, led to a lot of hemming and hawing among leadership. With their performance reviews and salaries tied to the successful completion of projects, employees quickly learned to drop those that received pushback and continue working on those dictated from the top down.

One such project heavily pushed by company leaders involved predicting whether a user might be at risk for something several people had already done: livestreaming their own suicide on Facebook Live. The task involved building a model to analyze the comments that other users were posting on a video after it had gone live, and bringing at-risk users to the attention of trained Facebook community reviewers who could call local emergency responders to perform a wellness check. It didn’t require any changes to content-ranking models, had negligible impact on engagement, and effectively fended off negative press. It was also nearly impossible, says the researcher: “It’s more of a PR stunt. The efficacy of trying to determine if somebody is going to kill themselves in the next 30 seconds, based on the first 10 seconds of video analysis—you’re not going to be very effective.”

Facebook disputes this characterization, saying the team that worked on this effort has since successfully predicted which users were at risk and increased the number of wellness checks performed. But the company does not release data on the accuracy of its predictions or how many wellness checks turned out to be real emergencies.

That former employee, meanwhile, no longer lets his daughter use Facebook.

Quiñonero should have been perfectly placed to tackle these problems when he created the SAIL (later Responsible AI) team in April 2018. His time as the director of Applied Machine Learning had made him intimately familiar with the company’s algorithms, especially the ones used for recommending posts, ads, and other content to users.

It also seemed that Facebook was ready to take these problems seriously. Whereas previous efforts to work on them had been scattered across the company, Quiñonero was now being granted a centralized team with leeway in his mandate to work on whatever he saw fit at the intersection of AI and society.

At the time, Quiñonero was engaging in his own reeducation about how to be a responsible technologist. The field of AI research was paying growing attention to problems of AI bias and accountability in the wake of high-profile studies showing that, for example, an algorithm was scoring Black defendants as more likely to be rearrested than white defendants who’d been arrested for the same or a more serious offense. Quiñonero began studying the scientific literature on algorithmic fairness, reading books on ethical engineering and the history of technology, and speaking with civil rights experts and moral philosophers.

Over the many hours I spent with him, I could tell he took this seriously. He had joined Facebook amid the Arab Spring, a series of revolutions against oppressive Middle Eastern regimes. Experts had lauded social media for spreading the information that fueled the uprisings and giving people tools to organize. Born in Spain but raised in Morocco, where he’d seen the suppression of free speech firsthand, Quiñonero felt an intense connection to Facebook’s potential as a force for good.

Six years later, Cambridge Analytica had threatened to overturn this promise. The controversy forced him to confront his faith in the company and examine what staying would mean for his integrity. “I think what happens to most people who work at Facebook—and definitely has been my story—is that there’s no boundary between Facebook and me,” he says. “It’s extremely personal.” But he chose to stay, and to head SAIL, because he believed he could do more for the world by helping turn the company around than by leaving it behind.

“I think if you’re at a company like Facebook, especially over the last few years, you really realize the impact that your products have on people’s lives—on what they think, how they communicate, how they interact with each other,” says Quiñonero’s longtime friend Zoubin Ghahramani, who helps lead the Google Brain team. “I know Joaquin cares deeply about all aspects of this. As somebody who strives to achieve better and improve things, he sees the important role that he can have in shaping both the thinking and the policies around responsible AI.”

At first, SAIL had only five people, who came from different parts of the company but were all interested in the societal impact of algorithms. One founding member, Isabel Kloumann, a research scientist who’d come from the company’s core data science team, brought with her an initial version of a tool to measure the bias in AI models.

The team also brainstormed many other ideas for projects. The former leader in the AI org, who was present for some of the early meetings of SAIL, recalls one proposal for combating polarization. It involved using sentiment analysis, a form of machine learning that interprets opinion in bits of text, to better identify comments that expressed extreme points of view. These comments wouldn’t be deleted, but they would be hidden by default with an option to reveal them, thus limiting the number of people who saw them.

And there were discussions about what role SAIL could play within Facebook and how it should evolve over time. The sentiment was that the team would first produce responsible-AI guidelines to tell the product teams what they should or should not do. But the hope was that it would ultimately serve as the company’s central hub for evaluating AI projects and stopping those that didn’t follow the guidelines.

Former employees described, however, how hard it could be to get buy-in or financial support when the work didn’t directly improve Facebook’s growth. By its nature, the team was not thinking about growth, and in some cases it was proposing ideas antithetical to growth. As a result, it received few resources and languished. Many of its ideas stayed largely academic.

On August 29, 2018, that suddenly changed. In the ramp-up to the US midterm elections, President Donald Trump and other Republican leaders ratcheted up accusations that Facebook, Twitter, and Google had anti-conservative bias. They claimed that Facebook’s moderators in particular, in applying the community standards, were suppressing conservative voices more than liberal ones. This charge would later be debunked, but the hashtag #StopTheBias, fueled by a Trump tweet, was rapidly spreading on social media.

For Trump, it was the latest effort to sow distrust in the country’s mainstream information distribution channels. For Zuckerberg, it threatened to alienate Facebook’s conservative US users and make the company more vulnerable to regulation from a Republican-led government. In other words, it threatened the company’s growth.

Facebook did not grant me an interview with Zuckerberg, but previous reporting has shown how he increasingly pandered to Trump and the Republican leadership. After Trump was elected, Joel Kaplan, Facebook’s VP of global public policy and its highest-ranking Republican, advised Zuckerberg to tread carefully in the new political environment.

On September 20, 2018, three weeks after Trump’s #StopTheBias tweet, Zuckerberg held a meeting with Quiñonero for the first time since SAIL’s creation. He wanted to know everything Quiñonero had learned about AI bias and how to quash it in Facebook’s content-moderation models. By the end of the meeting, one thing was clear: AI bias was now Quiñonero’s top priority. “The leadership has been very, very pushy about making sure we scale this aggressively,” says Rachad Alao, the engineering director of Responsible AI who joined in April 2019.

It was a win for everybody in the room. Zuckerberg got a way to ward off charges of anti-conservative bias. And Quiñonero now had more money and a bigger team to make the overall Facebook experience better for users. They could build upon Kloumann’s existing tool in order to measure and correct the alleged anti-conservative bias in content-moderation models, as well as to correct other types of bias in the vast majority of models across the platform.

This could help prevent the platform from unintentionally discriminating against certain users. By then, Facebook already had thousands of models running concurrently, and almost none had been measured for bias. That would get it into legal trouble a few months later with the US Department of Housing and Urban Development (HUD), which alleged that the company’s algorithms were inferring “protected” attributes like race from users’ data and showing them ads for housing based on those attributes—an illegal form of discrimination. (The lawsuit is still pending.) Schroepfer also predicted that Congress would soon pass laws to regulate algorithmic discrimination, so Facebook needed to make headway on these efforts anyway.

(Facebook disputes the idea that it pursued its work on AI bias to protect growth or in anticipation of regulation. “We built the Responsible AI team because it was the right thing to do,” a spokesperson said.)

But narrowing SAIL’s focus to algorithmic fairness would sideline all Facebook’s other long-standing algorithmic problems. Its content-recommendation models would continue pushing posts, news, and groups to users in an effort to maximize engagement, rewarding extremist content and contributing to increasingly fractured political discourse.

Zuckerberg even admitted this. Two months after the meeting with Quiñonero, in a public note outlining Facebook’s plans for content moderation, he illustrated the harmful effects of the company’s engagement strategy with a simplified chart. It showed that the more likely a post is to violate Facebook’s community standards, the more user engagement it receives, because the algorithms that maximize engagement reward inflammatory content.

A chart titled "natural engagement pattern" that shows allowed content on the X axis, engagement on the Y axis, and an exponential increase in engagement as content nears the policy line for prohibited content.

But then he showed another chart with the inverse relationship. Rather than rewarding content that came close to violating the community standards, Zuckerberg wrote, Facebook could choose to start “penalizing” it, giving it “less distribution and engagement” rather than more. How would this be done? With more AI. Facebook would develop better content-moderation models to detect this “borderline content” so it could be retroactively pushed lower in the news feed to snuff out its virality, he said.

A chart titled "adjusted to discourage borderline content" that shows the same chart but the curve inverted to reach no engagement when it reaches the policy line.

The problem is that for all Zuckerberg’s promises, this strategy is tenuous at best.

Misinformation and hate speech constantly evolve. New falsehoods spring up; new people and groups become targets. To catch things before they go viral, content-moderation models must be able to identify new unwanted content with high accuracy. But machine-learning models do not work that way. An algorithm that has learned to recognize Holocaust denial can’t immediately spot, say, Rohingya genocide denial. It must be trained on thousands, often even millions, of examples of a new type of content before learning to filter it out. Even then, users can quickly learn to outwit the model by doing things like changing the wording of a post or replacing incendiary phrases with euphemisms, making their message illegible to the AI while still obvious to a human. This is why new conspiracy theories can rapidly spiral out of control, and partly why, even after such content is banned, forms of it can persist on the platform.

In his New York Times profile, Schroepfer named these limitations of the company’s content-moderation strategy. “Every time Mr. Schroepfer and his more than 150 engineering specialists create A.I. solutions that flag and squelch noxious material, new and dubious posts that the A.I. systems have never seen before pop up—and are thus not caught,” wrote the Times. “It’s never going to go to zero,” Schroepfer told the publication.

Meanwhile, the algorithms that recommend this content still work to maximize engagement. This means every toxic post that escapes the content-moderation filters will continue to be pushed higher up the news feed and promoted to reach a larger audience. Indeed, a study from New York University recently found that among partisan publishers’ Facebook pages, those that regularly posted political misinformation received the most engagement in the lead-up to the 2020 US presidential election and the Capitol riots. “That just kind of got me,” says a former employee who worked on integrity issues from 2018 to 2019. “We fully acknowledged [this], and yet we’re still increasing engagement.”

But Quiñonero’s SAIL team wasn’t working on this problem. Because of Kaplan’s and Zuckerberg’s worries about alienating conservatives, the team stayed focused on bias. And even after it merged into the bigger Responsible AI team, it was never mandated to work on content-recommendation systems that might limit the spread of misinformation. Nor has any other team, as I confirmed after Entin and another spokesperson gave me a full list of all Facebook’s other initiatives on integrity issues—the company’s umbrella term for problems including misinformation, hate speech, and polarization.

A Facebook spokesperson said, “The work isn’t done by one specific team because that’s not how the company operates.” It is instead distributed among the teams that have the specific expertise to tackle how content ranking affects misinformation for their part of the platform, she said. But Schroepfer told me precisely the opposite in an earlier interview. I had asked him why he had created a centralized Responsible AI team instead of directing existing teams to make progress on the issue. He said it was “best practice” at the company.

“[If] it’s an important area, we need to move fast on it, it’s not well-defined, [we create] a dedicated team and get the right leadership,” he said. “As an area grows and matures, you’ll see the product teams take on more work, but the central team is still needed because you need to stay up with state-of-the-art work.”

When I described the Responsible AI team’s work to other experts on AI ethics and human rights, they noted the incongruity between the problems it was tackling and those, like misinformation, for which Facebook is most notorious. “This seems to be so oddly removed from Facebook as a product—the things Facebook builds and the questions about impact on the world that Facebook faces,” said Rumman Chowdhury, whose startup, Parity, advises firms on the responsible use of AI, and was acquired by Twitter after our interview. I had shown Chowdhury the Quiñonero team’s documentation detailing its work. “I find it surprising that we’re going to talk about inclusivity, fairness, equity, and not talk about the very real issues happening today,” she said.

“It seems like the ‘responsible AI’ framing is completely subjective to what a company decides it wants to care about. It’s like, ‘We’ll make up the terms and then we’ll follow them,’” says Ellery Roberts Biddle, the editorial director of Ranking Digital Rights, a nonprofit that studies the impact of tech companies on human rights. “I don’t even understand what they mean when they talk about fairness. Do they think it’s fair to recommend that people join extremist groups, like the ones that stormed the Capitol? If everyone gets the recommendation, does that mean it was fair?”

“We’re at a place where there’s one genocide [Myanmar] that the UN has, with a lot of evidence, been able to specifically point to Facebook and to the way that the platform promotes content,” Biddle adds. “How much higher can the stakes get?”

Over the last two years, Quiñonero’s team has built out Kloumann’s original tool, called Fairness Flow. It allows engineers to measure the accuracy of machine-learning models for different user groups. They can compare a face-detection model’s accuracy across different ages, genders, and skin tones, or a speech-recognition algorithm’s accuracy across different languages, dialects, and accents.

Fairness Flow also comes with a set of guidelines to help engineers understand what it means to train a “fair” model. One of the thornier problems with making algorithms fair is that there are different definitions of fairness, which can be mutually incompatible. Fairness Flow lists four definitions that engineers can use according to which suits their purpose best, such as whether a speech-recognition model recognizes all accents with equal accuracy or with a minimum threshold of accuracy.
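
Fairness Flow itself is internal to Facebook and its interface is not public; the sketch below only illustrates, in generic terms, the two definitions named above, applied to a made-up set of per-accent speech-recognition results.

```python
def per_group_accuracy(results):
    """`results` maps a group name to a list of (prediction, label) pairs."""
    return {group: sum(p == y for p, y in pairs) / len(pairs)
            for group, pairs in results.items()}

def equal_accuracy(accuracies, tolerance=0.02):
    # Definition 1: every group's accuracy within `tolerance` of the best group.
    return max(accuracies.values()) - min(accuracies.values()) <= tolerance

def minimum_threshold(accuracies, threshold=0.9):
    # Definition 2: every group's accuracy above a fixed floor.
    return all(acc >= threshold for acc in accuracies.values())

# Invented evaluation results for two accents:
results = {
    "accent_a": [(1, 1), (0, 0), (1, 1), (1, 0)],
    "accent_b": [(1, 1), (1, 1), (0, 0), (0, 0)],
}
acc = per_group_accuracy(results)
print(acc, equal_accuracy(acc), minimum_threshold(acc))
```

Which definition is appropriate depends on the product; the point of the guidelines is that engineers pick the one that suits their purpose rather than assume the definitions coincide.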

But testing algorithms for fairness is still largely optional at Facebook. None of the teams that work directly on Facebook’s news feed, ad service, or other products are required to do it. Pay incentives are still tied to engagement and growth metrics. And while there are guidelines about which fairness definition to use in any given situation, they aren’t enforced.

This last problem came to the fore when the company had to deal with allegations of anti-conservative bias.

In 2014, Kaplan was promoted from US policy head to global vice president for policy, and he began playing a more heavy-handed role in content moderation and decisions about how to rank posts in users’ news feeds. After Republicans started voicing claims of anti-conservative bias in 2016, his team began manually reviewing the impact of misinformation-detection models on users to ensure—among other things—that they didn’t disproportionately penalize conservatives.

All Facebook users have some 200 “traits” attached to their profile. These include various dimensions submitted by users or estimated by machine-learning models, such as race, political and religious leanings, socioeconomic class, and level of education. Kaplan’s team began using the traits to assemble custom user segments that reflected largely conservative interests: users who engaged with conservative content, groups, and pages, for example. Then they’d run special analyses to see how content-moderation decisions would affect posts from those segments, according to a former researcher whose work was subject to those reviews.

The Fairness Flow documentation, which the Responsible AI team wrote later, includes a case study on how to use the tool in such a situation. When deciding whether a misinformation model is fair with respect to political ideology, the team wrote, “fairness” does not mean the model should affect conservative and liberal users equally. If conservatives are posting a greater fraction of misinformation, as judged by public consensus, then the model should flag a greater fraction of conservative content. If liberals are posting more misinformation, it should flag their content more often too.

But members of Kaplan’s team followed exactly the opposite approach: they took “fairness” to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless. “There’s no point, then,” the researcher says. A model modified in that way “would have literally no impact on the actual problem” of misinformation.

This happened countless other times—and not just for content moderation. In 2020, the Washington Post reported that Kaplan’s team had undermined efforts to mitigate election interference and polarization within Facebook, saying they could contribute to anti-conservative bias. In 2018, it used the same argument to shelve a project to edit Facebook’s recommendation models even though researchers believed it would reduce divisiveness on the platform, according to the Wall Street Journal. His claims about political bias also weakened a proposal to edit the ranking models for the news feed that Facebook’s data scientists believed would strengthen the platform against the manipulation tactics Russia had used during the 2016 US election.

And ahead of the 2020 election, Facebook policy executives used this excuse, according to the New York Times, to veto or weaken several proposals that would have reduced the spread of hateful and damaging content.

Facebook disputed the Wall Street Journal’s reporting in a follow-up blog post, and challenged the New York Times’s characterization in an interview with the publication. A spokesperson for Kaplan’s team also denied to me that this was a pattern of behavior, saying the cases reported by the Post, the Journal, and the Times were “all individual instances that we believe are then mischaracterized.” He declined to comment about the retraining of misinformation models on the record.

Many of these incidents happened before Fairness Flow was adopted. But they show how Facebook’s pursuit of fairness in the service of growth had already come at a steep cost to progress on the platform’s other challenges. And if engineers used the definition of fairness that Kaplan’s team had adopted, Fairness Flow could simply systematize behavior that rewarded misinformation instead of helping to combat it.

Often “the whole fairness thing” came into play only as a convenient way to maintain the status quo, the former researcher says: “It seems to fly in the face of the things that Mark was saying publicly in terms of being fair and equitable.”

The last time I spoke with Quiñonero was a month after the US Capitol riots. I wanted to know how the storming of Congress had affected his thinking and the direction of his work.

In the video call, it was as it always was: Quiñonero dialing in from his home office in one window and Entin, his PR handler, in another. I asked Quiñonero what role he felt Facebook had played in the riots and whether it changed the task he saw for Responsible AI. After a long pause, he sidestepped the question, launching into a description of recent work he’d done to promote greater diversity and inclusion among the AI teams.

I asked him the question again. His Facebook Portal camera, which uses computer-vision algorithms to track the speaker, began to slowly zoom in on his face as he grew still. “I don’t know that I have an easy answer to that question, Karen,” he said. “It’s an extremely difficult question to ask me.”

Entin, who’d been rapidly pacing with a stoic poker face, grabbed a red stress ball.

I asked Quiñonero why his team hadn’t previously looked at ways to edit Facebook’s content-ranking models to tamp down misinformation and extremism. He told me it was the job of other teams (though none, as I confirmed, have been mandated to work on that task). “It’s not feasible for the Responsible AI team to study all those things ourselves,” he said. When I asked whether he would consider having his team tackle those issues in the future, he vaguely admitted, “I would agree with you that that is going to be the scope of these types of conversations.”

Near the end of our hour-long interview, he began to emphasize that AI was often unfairly painted as “the culprit.” Regardless of whether Facebook used AI or not, he said, people would still spew lies and hate speech, and that content would still spread across the platform.

I pressed him one more time. Certainly he couldn’t believe that algorithms had done absolutely nothing to change the nature of these issues, I said.

“I don’t know,” he said with a halting stutter. Then he repeated, with more conviction: “That’s my honest answer. Honest to God. I don’t know.”

Corrections: We amended a line that suggested that Joel Kaplan, Facebook’s vice president of global policy, had used Fairness Flow. He has not. But members of his team have used the notion of fairness to request the retraining of misinformation models in ways that directly contradict Responsible AI’s guidelines. We also clarified when Rachad Alao, the engineering director of Responsible AI, joined the company.

Calculations show it will be impossible to control a superintelligent Artificial Intelligence (Engenharia é:)

engenhariae.com.br

Ademilson Ramos, January 23, 2021


Photo by Alex Knight on Unsplash

A ideia da inteligência artificial derrubar a humanidade tem sido discutida por muitas décadas, e os cientistas acabaram de dar seu veredicto sobre se seríamos capazes de controlar uma superinteligência de computador de alto nível. A resposta? Quase definitivamente não.

O problema é que controlar uma superinteligência muito além da compreensão humana exigiria uma simulação dessa superinteligência que podemos analisar. Mas se não formos capazes de compreendê-lo, é impossível criar tal simulação.

Regras como ‘não causar danos aos humanos’ não podem ser definidas se não entendermos o tipo de cenário que uma IA irá criar, sugerem os pesquisadores. Uma vez que um sistema de computador está trabalhando em um nível acima do escopo de nossos programadores, não podemos mais estabelecer limites.

“Uma superinteligência apresenta um problema fundamentalmente diferente daqueles normalmente estudados sob a bandeira da ‘ética do robô’”, escrevem os pesquisadores.

“Isso ocorre porque uma superinteligência é multifacetada e, portanto, potencialmente capaz de mobilizar uma diversidade de recursos para atingir objetivos que são potencialmente incompreensíveis para os humanos, quanto mais controláveis.”

Parte do raciocínio da equipe vem do problema da parada apresentado por Alan Turing em 1936. O problema centra-se em saber se um programa de computador chegará ou não a uma conclusão e responderá (para que seja interrompido), ou simplesmente ficar em um loop eterno tentando encontrar uma.

Como Turing provou por meio de uma matemática inteligente, embora possamos saber isso para alguns programas específicos, é logicamente impossível encontrar uma maneira que nos permita saber isso para cada programa potencial que poderia ser escrito. Isso nos leva de volta à IA, que, em um estado superinteligente, poderia armazenar todos os programas de computador possíveis em sua memória de uma vez.

Qualquer programa escrito para impedir que a IA prejudique humanos e destrua o mundo, por exemplo, pode chegar a uma conclusão (e parar) ou não – é matematicamente impossível para nós estarmos absolutamente seguros de qualquer maneira, o que significa que não pode ser contido.
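To make the halting-problem step concrete, here is a minimal Python sketch of the classic diagonalization argument the researchers lean on. The function names (contains_safely, paradox) are hypothetical illustrations, not anything from the paper.

```python
# Minimal sketch: a perfect "containment check" would have to decide the halting
# problem, which Turing proved is undecidable. Both functions are hypothetical.

def contains_safely(program_source: str, inp: str) -> bool:
    """Assumed oracle: True iff running `program_source` on `inp` halts without
    ever harming humans. The argument shows no total, always-correct
    implementation of this function can exist."""
    raise NotImplementedError  # assumed to exist only for the sake of contradiction

def paradox(program_source: str) -> None:
    # Feed a program its own source, then do the opposite of the oracle's verdict.
    if contains_safely(program_source, program_source):
        while True:   # oracle says "halts safely" -> loop forever instead
            pass
    else:
        return        # oracle says "does not halt safely" -> halt immediately

# Running paradox on its own source code contradicts whatever contains_safely
# predicts about it, so no such containment decision procedure can exist.
```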

"In effect, this makes the containment algorithm unusable," says computer scientist Iyad Rahwan of the Max Planck Institute for Human Development in Germany.

The alternative to teaching the AI some ethics and telling it not to destroy the world (something no algorithm can be absolutely certain of doing, the researchers say) is to limit the superintelligence's capabilities. It could be cut off from parts of the internet or from certain networks, for example.

The new study rejects this idea too, suggesting it would limit the reach of the artificial intelligence: if we are not going to use it to solve problems beyond the scope of humans, then why create it at all?

If we are going to push ahead with artificial intelligence, we might not even know when a superintelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking serious questions about the directions we are taking.

"A super-intelligent machine that controls the world sounds like science fiction," says computer scientist Manuel Cebrian of the Max Planck Institute for Human Development. "But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it."

"The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity."

The research was published in the Journal of Artificial Intelligence Research.

Developing Algorithms That Might One Day Be Used Against You (Gizmodo)

gizmodo.com

Ryan F. Mandelbaum, Jan 24, 2021


Brian Nord is an astrophysicist and machine learning researcher. Photo: Mark Lopez/Argonne National Laboratory

Machine learning algorithms serve us the news we read, the ads we see, and in some cases even drive our cars. But there’s an insidious layer to these algorithms: They rely on data collected by and about humans, and they spit our worst biases right back out at us. For example, job candidate screening algorithms may automatically reject names that sound like they belong to nonwhite people, while facial recognition software is often much worse at recognizing women or nonwhite faces than it is at recognizing white male faces. An increasing number of scientists and institutions are waking up to these issues, and speaking out about the potential for AI to cause harm.

Brian Nord is one such researcher weighing his own work against the potential to cause harm with AI algorithms. Nord is a cosmologist at Fermilab and the University of Chicago, where he uses artificial intelligence to study the cosmos, and he’s been researching a concept for a “self-driving telescope” that can write and test hypotheses with the help of a machine learning algorithm. At the same time, he’s struggling with the idea that the algorithms he’s writing may one day be biased against him—and even used against him—and is working to build a coalition of physicists and computer scientists to fight for more oversight in AI algorithm development.

This interview has been edited and condensed for clarity.

Gizmodo: How did you become a physicist interested in AI and its pitfalls?

Brian Nord: My Ph.D. is in cosmology, and when I moved to Fermilab in 2012, I moved into the subfield of strong gravitational lensing. [Editor’s note: Gravitational lenses are places in the night sky where light from distant objects has been bent by the gravitational field of heavy objects in the foreground, making the background objects appear warped and larger.] I spent a few years doing strong lensing science in the traditional way, where we would visually search through terabytes of images, through thousands of candidates of these strong gravitational lenses, because they’re so weird, and no one had figured out a more conventional algorithm to identify them. Around 2015, I got kind of sad at the prospect of only finding these things with my eyes, so I started looking around and found deep learning.

Here we are a few years later—myself and a few other people popularized this idea of using deep learning—and now it’s the standard way to find these objects. People are unlikely to go back to using methods that aren’t deep learning to do galaxy recognition. We got to this point where we saw that deep learning is the thing, and really quickly saw the potential impact of it across astronomy and the sciences. It’s hitting every science now. That is a testament to the promise and peril of this technology, with such a relatively simple tool. Once you have the pieces put together right, you can do a lot of different things easily, without necessarily thinking through the implications.

Gizmodo: So what is deep learning? Why is it good and why is it bad?

BN: Traditional mathematical models (like the F=ma of Newton’s laws) are built by humans to describe patterns in data: We use our current understanding of nature, also known as intuition, to choose the pieces, the shape of these models. This means that they are often limited by what we know or can imagine about a dataset. These models are also typically smaller and are less generally applicable for many problems.

On the other hand, artificial intelligence models can be very large, with many, many degrees of freedom, so they can be made very general and able to describe lots of different data sets. Also, very importantly, they are primarily sculpted by the data that they are exposed to—AI models are shaped by the data with which they are trained. Humans decide what goes into the training set, which is then limited again by what we know or can imagine about that data. It’s not a big jump to see that if you don’t have the right training data, you can fall off the cliff really quickly.

The promise and peril are highly related. In the case of AI, the promise is in the ability to describe data that humans don’t yet know how to describe with our ‘intuitive’ models. But, perilously, the data sets used to train them incorporate our own biases. When it comes to AI recognizing galaxies, we’re risking biased measurements of the universe. When it comes to AI recognizing human faces, when our data sets are biased against Black and Brown faces for example, we risk discrimination that prevents people from using services, that intensifies surveillance apparatus, that jeopardizes human freedoms. It’s critical that we weigh and address these consequences before we imperil people’s lives with our research.
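As a toy illustration of Nord's point that these models are "primarily sculpted by the data that they are exposed to", the sketch below (all numbers hypothetical) trains a crude nearest-centroid classifier on data in which one group is badly underrepresented, then measures its accuracy on each group separately:

```python
import numpy as np
rng = np.random.default_rng(0)

# Two groups with slightly different feature distributions; the training set
# contains 950 examples of group A and only 50 of group B (hypothetical numbers).
def make_group(n, shift):
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)  # same labeling rule for both groups
    return X, y

Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50,  shift=1.5)
X_train = np.vstack([Xa, Xb]); y_train = np.concatenate([ya, yb])

# Nearest-centroid "model": entirely shaped by the data it happens to see.
centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])
def predict(X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Evaluate on balanced, held-out samples of each group; accuracy is typically
# far lower for the underrepresented group, even though the task is the same.
Xa_test, ya_test = make_group(1000, 0.0)
Xb_test, yb_test = make_group(1000, 1.5)
print("accuracy on group A:", (predict(Xa_test) == ya_test).mean())
print("accuracy on group B:", (predict(Xb_test) == yb_test).mean())
```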

Gizmodo: When did the light bulb go off in your head that AI could be harmful?

BN: I gotta say that it was with the Machine Bias article from ProPublica in 2016, where they discuss recidivism and sentencing procedure in courts. At the time of that article, there was a closed-source algorithm used to make recommendations for sentencing, and judges were allowed to use it. There was no public oversight of this algorithm, which ProPublica found was biased against Black people; people could use algorithms like this willy nilly without accountability. I realized that as a Black man, I had spent the last few years getting excited about neural networks, then saw quite clearly that these applications that could harm me were already out there, already being used, and they were already starting to become embedded in our social structure through the criminal justice system. Then I started paying attention more and more. I realized countries across the world were using surveillance technology, incorporating machine learning algorithms, for widespread oppressive uses.

Gizmodo: How did you react? What did you do?

BN: I didn’t want to reinvent the wheel; I wanted to build a coalition. I started looking into groups like Fairness, Accountability and Transparency in Machine Learning, plus Black in AI, which is focused on building communities of Black researchers in the AI field, but which also has a unique awareness of the problem because we are the people who are affected. I started paying attention to the news and saw that Meredith Whittaker had started a think tank to combat these things, and Joy Buolamwini had helped found the Algorithmic Justice League. I brushed up on what computer scientists were doing and started to look at what physicists were doing, because that’s my principal community.

It became clear to folks like me and Savannah Thais that physicists needed to realize that they have a stake in this game. We get government funding, and we tend to take a fundamental approach to research. If we bring that approach to AI, then we have the potential to affect the foundations of how these algorithms work and impact a broader set of applications. I asked myself and my colleagues what our responsibility in developing these algorithms was and in having some say in how they’re being used down the line.

Gizmodo: How is it going so far?

BN: Currently, we’re going to write a white paper for SNOWMASS, this high-energy physics event. The SNOWMASS process determines the vision that guides the community for about a decade. I started to identify individuals to work with, fellow physicists, and experts who care about the issues, and develop a set of arguments for why physicists from institutions, individuals, and funding agencies should care deeply about these algorithms they’re building and implementing so quickly. It’s a piece that’s asking people to think about how much they are considering the ethical implications of what they’re doing.

We’ve already held a workshop at the University of Chicago where we’ve begun discussing these issues, and at Fermilab we’ve had some initial discussions. But we don’t yet have the critical mass across the field to develop policy. We can’t do it ourselves as physicists; we don’t have backgrounds in social science or technology studies. The right way to do this is to bring physicists together from Fermilab and other institutions with social scientists and ethicists and science and technology studies folks and professionals, and build something from there. The key is going to be through partnership with these other disciplines.

Gizmodo: Why haven’t we reached that critical mass yet?

BN: I think we need to show people, as Angela Davis has said, that our struggle is also their struggle. That’s why I’m talking about coalition building. The thing that affects us also affects them. One way to do this is to clearly lay out the potential harm beyond just race and ethnicity. Recently, there was this discussion of a paper that used neural networks to try and speed up the selection of candidates for Ph.D programs. They trained the algorithm on historical data. So let me be clear, they said here’s a neural network, here’s data on applicants who were denied and accepted to universities. Those applicants were chosen by faculty and people with biases. It should be obvious to anyone developing that algorithm that you’re going to bake in the biases in that context. I hope people will see these things as problems and help build our coalition.

Gizmodo: What is your vision for a future of ethical AI?

BN: What if there were an agency or agencies for algorithmic accountability? I could see these existing at the local level, the national level, and the institutional level. We can’t predict all of the future uses of technology, but we need to be asking questions at the beginning of the processes, not as an afterthought. An agency would help ask these questions and still allow the science to get done, but without endangering people’s lives. Alongside agencies, we need policies at various levels that make a clear decision about how safe the algorithms have to be before they are used on humans or other living things. If I had my druthers, these agencies and policies would be built by an incredibly diverse group of people. We’ve seen instances where a homogeneous group develops an app or technology and didn’t see the things that another group who’s not there would have seen. We need people across the spectrum of experience to participate in designing policies for ethical AI.

Gizmodo: What are your biggest fears about all of this?

BN: My biggest fear is that people who already have access to technology resources will continue to use them to subjugate people who are already oppressed; Pratyusha Kalluri has also advanced this idea of power dynamics. That’s what we’re seeing across the globe. Sure, there are cities that are trying to ban facial recognition, but unless we have a broader coalition, unless we have more cities and institutions willing to take on this thing directly, we’re not going to be able to keep this tool from exacerbating the white supremacy, racism, and misogyny that already exist inside structures today. If we don’t push policy that puts the lives of marginalized people first, then they’re going to continue being oppressed, and it’s going to accelerate.

Gizmodo: How has thinking about AI ethics affected your own research?

BN: I have to question whether I want to do AI work and how I’m going to do it; whether or not it’s the right thing to do to build a certain algorithm. That’s something I have to keep asking myself… Before, it was like, how fast can I discover new things and build technology that can help the world learn something? Now there’s a significant piece of nuance to that. Even the best things for humanity could be used in some of the worst ways. It’s a fundamental rethinking of the order of operations when it comes to my research.

I don’t think it’s weird to think about safety first. We have OSHA and safety groups at institutions who write down lists of things you have to check off before you’re allowed to take out a ladder, for example. Why are we not doing the same thing in AI? A part of the answer is obvious: Not all of us are people who experience the negative effects of these algorithms. But as one of the few Black people at the institutions I work in, I’m aware of it, I’m worried about it, and the scientific community needs to appreciate that my safety matters too, and that my safety concerns don’t end when I walk out of work.

Gizmodo: Anything else?

BN: I’d like to re-emphasize that when you look at some of the research that has come out, like vetting candidates for graduate school, or when you look at the biases of the algorithms used in criminal justice, these are problems being repeated over and over again, with the same biases. It doesn’t take a lot of investigation to see that bias enters these algorithms very quickly. The people developing them should really know better. Maybe there needs to be more educational requirements for algorithm developers to think about these issues before they have the opportunity to unleash them on the world.

This conversation needs to be raised to the level where individuals and institutions consider these issues a priority. Once you’re there, you need people to see that this is an opportunity for leadership. If we can get a grassroots community to help an institution to take the lead on this, it incentivizes a lot of people to start to take action.

And finally, people who have expertise in these areas need to be allowed to speak their minds. We can’t allow our institutions to quiet us so we can’t talk about the issues we’re bringing up. The fact that I have experience as a Black man doing science in America, and the fact that I do AI—that should be appreciated by institutions. It gives them an opportunity to have a unique perspective and take a unique leadership position. I would be worried if individuals felt like they couldn’t speak their mind. If we can’t get these issues out into the sunlight, how will we be able to build out of the darkness?

Ryan F. Mandelbaum – Former Gizmodo physics writer and founder of Birdmodo, now a science communicator specializing in quantum computing and birds

The Petabyte Age: Because More Isn’t Just More — More Is Different (Wired)

WIRED Staff, Science, 06.23.2008 12:00 PM

Image: Marian Bantjes

Introduction:

Sensors everywhere. Infinite storage. Clouds of processors. Our ability to capture, warehouse, and understand massive amounts of data is changing science, medicine, business, and technology. As our collection of facts and figures grows, so will the opportunity to find answers to fundamental questions. Because in the era of big data, more isn’t just more. More is different.

The End of Theory:

The Data Deluge Makes the Scientific Method Obsolete

Feeding the Masses:
Data In, Crop Predictions Out

Chasing the Quark:
Sometimes You Need to Throw Information Away

Winning the Lawsuit:
Data Miners Dig for Dirt

Tracking the News:
A Smarter Way to Predict Riots and Wars

Spotting the Hot Zones:
Now We Can Monitor Epidemics Hour by Hour

Sorting the World:
Google Invents New Way to Manage Data

Watching the Skies:
Space Is Big — But Not Too Big to Map

Scanning Our Skeletons:
Bone Images Show Wear and Tear

Tracking Air Fares:
Elaborate Algorithms Predict Ticket Prices

Predicting the Vote:
Pollsters Identify Tiny Voting Blocs

Pricing Terrorism:
Insurers Gauge Risks, Costs

Visualizing Big Data:
Bar Charts for Words

Big data and the end of theory? (The Guardian)

theguardian.com

Mark Graham, Fri 9 Mar 2012 14.39 GMT

Does big data have the answers? Maybe some, but not all, says Mark Graham

In 2008, Chris Anderson, then editor of Wired, wrote a provocative piece titled The End of Theory. Anderson was referring to the ways that computers, algorithms, and big data can potentially generate more insightful, useful, accurate, or true results than specialists or domain experts who traditionally craft carefully targeted hypotheses and research strategies.

This revolutionary notion has now entered not just the popular imagination, but also the research practices of corporations, states, journalists and academics. The idea being that the data shadows and information trails of people, machines, commodities and even nature can reveal secrets to us that we now have the power and prowess to uncover.

In other words, we no longer need to speculate and hypothesise; we simply need to let machines lead us to the patterns, trends, and relationships in social, economic, political, and environmental relationships.

It is quite likely that you yourself have been the unwitting subject of a big data experiment carried out by Google, Facebook and many other large Web platforms. Google, for instance, has been able to collect extraordinary insights into what specific colours, layouts, rankings, and designs make people more efficient searchers. They do this by slightly tweaking their results and website for a few million searches at a time and then examining the often subtle ways in which people react.

Most large retailers similarly analyse enormous quantities of data from their databases of sales (which are linked to you by credit card numbers and loyalty cards) in order to make uncanny predictions about your future behaviours. In a now famous case, the American retailer, Target, upset a Minneapolis man by knowing more about his teenage daughter’s sex life than he did. Target was able to predict his daughter’s pregnancy by monitoring her shopping patterns and comparing that information to an enormous database detailing billions of dollars of sales. This ultimately allows the company to make uncanny predictions about its shoppers.

More significantly, national intelligence agencies are mining vast quantities of non-public Internet data to look for weak signals that might indicate planned threats or attacks.

There can be no denying the significant power and potential of big data. And the huge resources being invested in both the public and private sectors to study it are a testament to this.

However, crucially important caveats are needed when using such datasets: caveats that, worryingly, seem to be frequently overlooked.

The raw informational material for big data projects is often derived from large user-generated or social media platforms (e.g. Twitter or Wikipedia). Yet, in all such cases we are necessarily only relying on information generated by an incredibly biased or skewed user-base.

Gender, geography, race, income, and a range of other social and economic factors all play a role in how information is produced and reproduced. People from different places and different backgrounds tend to produce different sorts of information. And so we risk ignoring a lot of important nuance if relying on big data as a social/economic/political mirror.

We can of course account for such bias by segmenting our data. Take the case of using Twitter to gain insights into last summer’s London riots. About a third of all UK Internet users have a Twitter profile; a subset of that group are the active tweeters who produce the bulk of content; and then a tiny subset of that group (about 1%) geocode their tweets (essential information if you want to know about where your information is coming from).

Despite the fact that we have a database of tens of millions of data points, we are necessarily working with subsets of subsets of subsets. Big data no longer seems so big. Such data thus serves to amplify the information produced by a small minority (a point repeatedly made by UCL’s Muki Haklay), and skew, or even render invisible, ideas, trends, people, and patterns that aren’t mirrored or represented in the datasets that we work with.
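A back-of-the-envelope sketch of that "subsets of subsets" arithmetic; only the "about a third" and "about 1%" figures come from the article, the other numbers are assumptions for illustration:

```python
# Back-of-the-envelope sketch of the "subsets of subsets of subsets" point.
uk_internet_users = 45_000_000   # assumed order-of-magnitude figure for 2011/12
twitter_share     = 1 / 3        # "about a third of all UK Internet users"
active_share      = 0.25         # assumed: the minority producing the bulk of content
geocoding_share   = 0.01         # "about 1% geocode their tweets"

visible_users = uk_internet_users * twitter_share * active_share * geocoding_share
print(f"users actually visible in a geocoded-tweet dataset: ~{visible_users:,.0f}")
print(f"as a share of UK internet users: {visible_users / uk_internet_users:.3%}")
```

However the assumed shares are tuned, the visible population ends up being a small fraction of one percent of internet users, which is the article's point.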

Big data is undoubtedly useful for addressing and overcoming many important issues faced by society. But we need to ensure that we aren’t seduced by the promises of big data to render theory unnecessary.

We may one day get to the point where sufficient quantities of big data can be harvested to answer all of the social questions that most concern us. I doubt it though. There will always be digital divides; always be uneven data shadows; and always be biases in how information and technology are used and produced.

And so we shouldn’t forget the important role of specialists to contextualise and offer insights into what our data do, and maybe more importantly, don’t tell us.

Mark Graham is a research fellow at the Oxford Internet Institute and is one of the creators of the Floating Sheep blog

The End of Theory: The Data Deluge Makes the Scientific Method Obsolete (Wired)

wired.com

Chris Anderson, Science, 06.23.2008 12:00 PM


Illustration: Marian Bantjes

“All models are wrong, but some are useful.”

So proclaimed statistician George Box 30 years ago, and he was right. But what choice did we have? Only models, from cosmological equations to theories of human behavior, seemed to be able to consistently, if imperfectly, explain the world around us. Until now. Today companies like Google, which have grown up in an era of massively abundant data, don’t have to settle for wrong models. Indeed, they don’t have to settle for models at all.

Sixty years ago, digital computers made information readable. Twenty years ago, the Internet made it reachable. Ten years ago, the first search engine crawlers made it a single database. Now Google and like-minded companies are sifting through the most measured age in history, treating this massive corpus as a laboratory of the human condition. They are the children of the Petabyte Age.

The Petabyte Age is different because more is different. Kilobytes were stored on floppy disks. Megabytes were stored on hard disks. Terabytes were stored in disk arrays. Petabytes are stored in the cloud. As we moved along that progression, we went from the folder analogy to the file cabinet analogy to the library analogy to — well, at petabytes we ran out of organizational analogies.

At the petabyte scale, information is not a matter of simple three- and four-dimensional taxonomy and order but of dimensionally agnostic statistics. It calls for an entirely different approach, one that requires us to lose the tether of data as something that can be visualized in its totality. It forces us to view data mathematically first and establish a context for it later. For instance, Google conquered the advertising world with nothing more than applied mathematics. It didn’t pretend to know anything about the culture and conventions of advertising — it just assumed that better data, with better analytical tools, would win the day. And Google was right.

Google’s founding philosophy is that we don’t know why this page is better than that one: If the statistics of incoming links say it is, that’s good enough. No semantic or causal analysis is required. That’s why Google can translate languages without actually “knowing” them (given equal corpus data, Google can translate Klingon into Farsi as easily as it can translate French into German). And why it can match ads to content without any knowledge or assumptions about the ads or the content.
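The "statistics of incoming links" idea can be sketched as a PageRank-style power iteration over a toy link graph; this is a minimal illustration of the principle, not Google's actual system:

```python
import numpy as np

# Tiny link graph: page index -> list of pages it links to (hypothetical example).
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n = 4
damping = 0.85

# Column-stochastic transition matrix built purely from link structure.
M = np.zeros((n, n))
for src, outs in links.items():
    for dst in outs:
        M[dst, src] = 1.0 / len(outs)

rank = np.full(n, 1.0 / n)
for _ in range(100):  # power iteration: no semantics, just link statistics
    rank = (1 - damping) / n + damping * M @ rank

print(np.round(rank, 3))  # page 2, with the most incoming links, ranks highest
```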

Speaking at the O’Reilly Emerging Technology Conference this past March, Peter Norvig, Google’s research director, offered an update to George Box’s maxim: “All models are wrong, and increasingly you can succeed without them.”

This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves.

The big target here isn’t advertising, though. It’s science. The scientific method is built around testable hypotheses. These models, for the most part, are systems visualized in the minds of scientists. The models are then tested, and experiments confirm or falsify theoretical models of how the world works. This is the way science has worked for hundreds of years.

Scientists are trained to recognize that correlation is not causation, that no conclusions should be drawn simply on the basis of correlation between X and Y (it could just be a coincidence). Instead, you must understand the underlying mechanisms that connect the two. Once you have a model, you can connect the data sets with confidence. Data without a model is just noise.

But faced with massive data, this approach to science — hypothesize, model, test — is becoming obsolete. Consider physics: Newtonian models were crude approximations of the truth (wrong at the atomic level, but still useful). A hundred years ago, statistically based quantum mechanics offered a better picture — but quantum mechanics is yet another model, and as such it, too, is flawed, no doubt a caricature of a more complex underlying reality. The reason physics has drifted into theoretical speculation about n-dimensional grand unified models over the past few decades (the “beautiful story” phase of a discipline starved of data) is that we don’t know how to run the experiments that would falsify the hypotheses — the energies are too high, the accelerators too expensive, and so on.

Now biology is heading in the same direction. The models we were taught in school about “dominant” and “recessive” genes steering a strictly Mendelian process have turned out to be an even greater simplification of reality than Newton’s laws. The discovery of gene-protein interactions and other aspects of epigenetics has challenged the view of DNA as destiny and even introduced evidence that environment can influence inheritable traits, something once considered a genetic impossibility.

In short, the more we learn about biology, the further we find ourselves from a model that can explain it.

There is now a better way. Petabytes allow us to say: “Correlation is enough.” We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.

The best practical example of this is the shotgun gene sequencing by J. Craig Venter. Enabled by high-speed sequencers and supercomputers that statistically analyze the data they produce, Venter went from sequencing individual organisms to sequencing entire ecosystems. In 2003, he started sequencing much of the ocean, retracing the voyage of Captain Cook. And in 2005 he started sequencing the air. In the process, he discovered thousands of previously unknown species of bacteria and other life-forms.

If the words “discover a new species” call to mind Darwin and drawings of finches, you may be stuck in the old way of doing science. Venter can tell you almost nothing about the species he found. He doesn’t know what they look like, how they live, or much of anything else about their morphology. He doesn’t even have their entire genome. All he has is a statistical blip — a unique sequence that, being unlike any other sequence in the database, must represent a new species.

This sequence may correlate with other sequences that resemble those of species we do know more about. In that case, Venter can make some guesses about the animals — that they convert sunlight into energy in a particular way, or that they descended from a common ancestor. But besides that, he has no better model of this species than Google has of your MySpace page. It’s just data. By analyzing it with Google-quality computing resources, though, Venter has advanced biology more than anyone else of his generation.

This kind of thinking is poised to go mainstream. In February, the National Science Foundation announced the Cluster Exploratory, a program that funds research designed to run on a large-scale distributed computing platform developed by Google and IBM in conjunction with six pilot universities. The cluster will consist of 1,600 processors, several terabytes of memory, and hundreds of terabytes of storage, along with the software, including IBM’s Tivoli and open source versions of Google File System and MapReduce.¹ Early CluE projects will include simulations of the brain and the nervous system and other biological research that lies somewhere between wetware and software.

Learning to use a “computer” of this scale may be challenging. But the opportunity is great: The new availability of huge amounts of data, along with the statistical tools to crunch these numbers, offers a whole new way of understanding the world. Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all.

There’s no reason to cling to our old ways. It’s time to ask: What can science learn from Google?

Chris Anderson (canderson@wired.com) is the editor in chief of Wired.

Related The Petabyte Age: Sensors everywhere. Infinite storage. Clouds of processors. Our ability to capture, warehouse, and understand massive amounts of data is changing science, medicine, business, and technology. As our collection of facts and figures grows, so will the opportunity to find answers to fundamental questions. Because in the era of big data, more isn’t just more. More is different.

Correction:
1 This story originally stated that the cluster software would include the actual Google File System.
06.27.08

Pope Francis asks for prayers for robots and AI (Tecmundo)

November 11, 2020, 6:30 pm

Jorge Marin

Pope Francis has asked the faithful around the world to pray, during the month of November, that progress in robotics and artificial intelligence (AI) may always serve humankind.

The message is part of a series of prayer intentions that the pontiff announces each year and shares month by month on YouTube to help Catholics "deepen their daily prayer" by focusing on specific topics. In September, the Pope asked for prayers for the "sharing of the planet's resources"; in August, for the "maritime world"; and now it is the turn of robots and AI.

In his message, Pope Francis asked for special attention to AI, which, according to him, is "at the center of the historic change we are experiencing". And it is not only a matter of the benefits that robotics can bring to the world.

Technological progress and algorithms

Francis says that technological progress is not always a sign of well-being for humanity: if that progress contributes to increasing inequality, it cannot be considered true progress. "Future advances must be oriented towards respect for the dignity of the person," the Pope warns.

The concern that technology could deepen existing social divides led the Vatican, earlier this year, to sign, together with Microsoft and IBM, the "Rome Call for AI Ethics", a document setting out a few principles to guide the deployment of AI: transparency, inclusion, impartiality and reliability.

Even people who are not religious can recognize that, when it comes to deploying algorithms, the Pope's concern makes complete sense.

How will AI shape our lives post-Covid? (BBC)

BBC, 09 Nov 2020

Audrey Azoulay: Director-General, Unesco

Covid-19 is a test like no other. Never before have the lives of so many people around the world been affected at this scale or speed.

Over the past six months, thousands of AI innovations have sprung up in response to the challenges of life under lockdown. Governments are mobilising machine-learning in many ways, from contact-tracing apps to telemedicine and remote learning.

However, as the digital transformation accelerates exponentially, it is highlighting the challenges of AI. Ethical dilemmas are already a reality – including privacy risks and discriminatory bias.

It is up to us to decide what we want AI to look like: there is a legislative vacuum that needs to be filled now. Principles such as proportionality, inclusivity, human oversight and transparency can create a framework allowing us to anticipate these issues.

This is why Unesco is working to build consensus among 193 countries to lay the ethical foundations of AI. Building on these principles, countries will be able to develop national policies that ensure AI is designed, developed and deployed in compliance with fundamental human values.

As we face new, previously unimaginable challenges – like the pandemic – we must ensure that the tools we are developing work for us, and not against us.

Inner Workings: Crop researchers harness artificial intelligence to breed crops for the changing climate (PNAS)

Carolyn Beans, PNAS, November 3, 2020, 117 (44): 27066-27069; first published October 14, 2020; https://doi.org/10.1073/pnas.2018732117

Until recently, the field of plant breeding looked a lot like it did in centuries past. A breeder might examine, for example, which tomato plants were most resistant to drought and then cross the most promising plants to produce the most drought-resistant offspring. This process would be repeated, plant generation after generation, until, over the course of roughly seven years, the breeder arrived at what seemed the optimal variety.

Figure 1: Researchers at ETH Zürich use standard color images and thermal images collected by drone to determine how plots of wheat with different genotypes vary in grain ripeness. Image credit: Norbert Kirchgessner (ETH Zürich, Zürich, Switzerland).

Now, with the global population expected to swell to nearly 10 billion by 2050 (1) and climate change shifting growing conditions (2), crop breeder and geneticist Steven Tanksley doesn’t think plant breeders have that kind of time. “We have to double the productivity per acre of our major crops if we’re going to stay on par with the world’s needs,” says Tanksley, a professor emeritus at Cornell University in Ithaca, NY.

To speed up the process, Tanksley and others are turning to artificial intelligence (AI). Using computer science techniques, breeders can rapidly assess which plants grow the fastest in a particular climate, which genes help plants thrive there, and which plants, when crossed, produce an optimum combination of genes for a given location, opting for traits that boost yield and stave off the effects of a changing climate. Large seed companies in particular have been using components of AI for more than a decade. With computing power rapidly advancing, the techniques are now poised to accelerate breeding on a broader scale.

AI is not, however, a panacea. Crop breeders still grapple with tradeoffs such as higher yield versus marketable appearance. And even the most sophisticated AI cannot guarantee the success of a new variety. But as AI becomes integrated into agriculture, some crop researchers envisage an agricultural revolution with computer science at the helm.

An Art and a Science

During the “green revolution” of the 1960s, researchers developed new chemical pesticides and fertilizers along with high-yielding crop varieties that dramatically increased agricultural output (3). But the reliance on chemicals came with the heavy cost of environmental degradation (4). “If we’re going to do this sustainably,” says Tanksley, “genetics is going to carry the bulk of the load.”

Plant breeders lean not only on genetics but also on mathematics. As the genomics revolution unfolded in the early 2000s, plant breeders found themselves inundated with genomic data that traditional statistical techniques couldn’t wrangle (5). Plant breeding “wasn’t geared toward dealing with large amounts of data and making precise decisions,” says Tanksley.

In 1997, Tanksley began chairing a committee at Cornell that aimed to incorporate data-driven research into the life sciences. There, he encountered an engineering approach called operations research that translates data into decisions. In 2006, Tanksley cofounded the Ithaca, NY-based company Nature Source Improved Plants on the principle that this engineering tool could make breeding decisions more efficient. “What we’ve been doing almost 15 years now,” says Tanksley, “is redoing how breeding is approached.”

A Manufacturing Process

Such approaches try to tackle complex scenarios. Suppose, for example, a wheat breeder has 200 genetically distinct lines. The breeder must decide which lines to breed together to optimize yield, disease resistance, protein content, and other traits. The breeder may know which genes confer which traits, but it’s difficult to decipher which lines to cross in what order to achieve the optimum gene combination. The number of possible combinations, says Tanksley, “is more than the stars in the universe.”

An operations research approach enables a researcher to solve this puzzle by defining the primary objective and then using optimization algorithms to predict the quickest path to that objective given the relevant constraints. Auto manufacturers, for example, optimize production given the expense of employees, the cost of auto parts, and fluctuating global currencies. Tanksley’s team optimizes yield while selecting for traits such as resistance to a changing climate. “We’ve seen more erratic climate from year to year, which means you have to have crops that are more robust to different kinds of changes,” he says.

For each plant line included in a pool of possible crosses, Tanksley inputs DNA sequence data, phenotypic data on traits like drought tolerance, disease resistance, and yield, as well as environmental data for the region where the plant line was originally developed. The algorithm projects which genes are associated with which traits under which environmental conditions and then determines the optimal combination of genes for a specific breeding goal, such as drought tolerance in a particular growing region, while accounting for genes that help boost yield. The algorithm also determines which plant lines to cross together in which order to achieve the optimal combination of genes in the fewest generations.
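As a deliberately naive sketch of that kind of optimization (not Nature Source Improved Plants' actual algorithm), the toy code below scores every possible cross among a handful of hypothetical parent lines with an additive marker-effect model and keeps the best few under a fixed budget of crosses; the real systems also optimize crossing order across generations and handle many more constraints:

```python
import numpy as np
rng = np.random.default_rng(1)

# Hypothetical toy inputs: 8 parent lines scored on 5 trait-associated markers.
# marker_effects stands in for the model linking genes, traits, and environment;
# all numbers here are made up for illustration.
n_lines, n_markers = 8, 5
genotypes      = rng.integers(0, 2, size=(n_lines, n_markers))  # 1 = favorable allele present
marker_effects = np.array([0.9, 0.4, 0.7, 0.2, 0.5])            # assumed contribution to the goal

def predicted_cross_value(i: int, j: int) -> float:
    # Naive additive model: a cross is credited with every favorable allele
    # available from either parent (offspring can inherit it from one of them).
    combined = np.maximum(genotypes[i], genotypes[j])
    return float(combined @ marker_effects)

# Enumerate all pairwise crosses and keep the best three under a fixed budget,
# a stand-in for the constrained optimization the breeders actually run.
crosses = [(i, j) for i in range(n_lines) for j in range(i + 1, n_lines)]
best = sorted(crosses, key=lambda pair: predicted_cross_value(*pair), reverse=True)[:3]
for i, j in best:
    print(f"cross line {i} x line {j}: predicted value {predicted_cross_value(i, j):.2f}")
```

The point is only the shape of the computation: define an objective, predict the value of each candidate decision, and search the combinatorial space under constraints.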

Nature Source Improved Plants conducts, for example, a papaya program in southeastern Mexico where the once predictable monsoon season has become erratic. “We are selecting for varieties that can produce under those unknown circumstances,” says Tanksley. But the new papaya must also stand up to ringspot, a virus that nearly wiped papaya from Hawaii altogether before another Cornell breeder developed a resistant transgenic variety (6). Tanksley’s papaya isn’t as disease resistant. But by plugging “rapid growth rate” into their operations research approach, the team bred papaya trees that produce copious fruit within a year, before the virus accumulates in the plant.

“Plant breeders need operations research to help them make better decisions,” says William Beavis, a plant geneticist and computational biologist at Iowa State in Ames, who also develops operations research strategies for plant breeding. To feed the world in rapidly changing environments, researchers need to shorten the process of developing a new cultivar to three years, Beavis adds.

The big seed companies have investigated use of operations research since around 2010, with Syngenta, headquartered in Basel, Switzerland, leading the pack, says Beavis, who spent over a decade as a statistical geneticist at Pioneer Hi-Bred in Johnston, IA, a large seed company now owned by Corteva, which is headquartered in Wilmington, DE. “All of the soybean varieties that have come on the market within the last couple of years from Syngenta came out of a system that had been redesigned using operations research approaches,” he says. But large seed companies primarily focus on grains key to animal feed such as corn, wheat, and soy. To meet growing food demands, Beavis believes that the smaller seed companies that develop vegetable crops that people actually eat must also embrace operations research. “That’s where operations research is going to have the biggest impact,” he says, “local breeding companies that are producing for regional environments, not for broad adaptation.”

In collaboration with Iowa State colleague and engineer Lizhi Wang and others, Beavis is developing operations research-based algorithms to, for example, help seed companies choose whether to breed one variety that can survive in a range of different future growing conditions or a number of varieties, each tailored to specific environments. Two large seed companies, Corteva and Syngenta, and Kromite, a Lambertville, NJ-based consulting company, are partners on the project. The results will be made publicly available so that all seed companies can learn from their approach.

Figure 2: Nature Source Improved Plants (NSIP) speeds up its papaya breeding program in southeastern Mexico by using decision-making approaches more common in engineering. Image credit: Nature Source Improved Plants/Jesús Morales.

Drones and Adaptations

Useful farming AI requires good data, and plenty of it. To collect sufficient inputs, some researchers take to the skies. Crop researcher Achim Walter of the Institute of Agricultural Sciences at ETH Zürich in Switzerland and his team are developing techniques to capture aerial crop images. Every other day for several years, they have deployed image-capturing sensors over a wheat field containing hundreds of genetic lines. They fly their sensors on drones or on cables suspended above the crops or incorporate them into handheld devices that a researcher can use from an elevated platform (7).

Meanwhile, they’re developing imaging software that quantifies growth rate captured by these images (8). Using these data, they build models that predict how quickly different genetic lines grow under different weather conditions. If they find, for example, that a subset of wheat lines grew well despite a dry spell, then they can zero in on the genes those lines have in common and incorporate them into new drought-resistant varieties.

Research geneticist Edward Buckler at the US Department of Agriculture and his team are using machine learning to identify climate adaptations in 1,000 species in a large grouping of grasses spread across the globe. The grasses include food and bioenergy crops such as maize, sorghum, and sugar cane. Buckler says that when people rank what are the most photosynthetically efficient and water-efficient species, this is the group that comes out at the top. Still, he and collaborators, including plant scientist Elizabeth Kellogg of the Donald Danforth Plant Science Center in St. Louis, MO, and computational biologist Adam Siepel of Cold Spring Harbor Laboratory in NY, want to uncover genes that could make crops in this group even more efficient for food production in current and future environments. The team is first studying a select number of model species to determine which genes are expressed under a range of different environmental conditions. They’re still probing just how far this predictive power can go.

Such approaches could be scaled up—massively. To probe the genetic underpinnings of climate adaptation for crop species worldwide, Daniel Jacobson, the chief researcher for computational systems biology at Oak Ridge National Laboratory in TN, has amassed “climatype” data for every square kilometer of land on Earth. Using the Summit supercomputer, his team then compared each square kilometer to every other square kilometer to identify similar environments (9). The result can be viewed as a network of GPS points connected by lines that show the degree of environmental similarity between points.
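A minimal sketch of the pairwise-comparison idea behind that network, using a few made-up grid cells and cosine similarity; the real analysis compared every square kilometer on Earth on the Summit supercomputer:

```python
import numpy as np
rng = np.random.default_rng(2)

# Hypothetical stand-in for "climatype" vectors: a handful of grid cells, each
# described by a few environmental variables (temperature, rainfall, soil, ...).
n_cells, n_vars = 6, 3
climatypes = rng.normal(size=(n_cells, n_vars))

# Cosine similarity between every pair of cells.
unit = climatypes / np.linalg.norm(climatypes, axis=1, keepdims=True)
similarity = unit @ unit.T

# Keep only strongly similar pairs as the edges of an environment network.
threshold = 0.6
edges = [(i, j, round(float(similarity[i, j]), 2))
         for i in range(n_cells) for j in range(i + 1, n_cells)
         if similarity[i, j] >= threshold]

print(np.round(similarity, 2))
print("edges above threshold:", edges)
```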

“For me, breeding is much more like art. I need to see the variation and I don’t prejudge it. I know what I’m after, but nature throws me curveballs all the time, and I probably can’t count the varieties that came from curveballs.”

—Molly Jahn

In collaboration with the US Department of Energy’s Center for Bioenergy Innovation, the team combines this climatype data with GPS coordinates associated with individual crop genotypes to project which genes and genetic interactions are associated with specific climate conditions. Right now, they’re focused on bioenergy and feedstocks, but they’re poised to explore a wide range of food crops as well. The results will be published so that other researchers can conduct similar analyses.

The Next Agricultural Revolution

Despite these advances, the transition to AI can be unnerving. Operations research can project an ideal combination of genes, but those genes may interact in unpredictable ways. Tanksley’s company hedges its bets by engineering 10 varieties for a given project in hopes that at least one will succeed.

On the other hand, such a directed approach could miss happy accidents, says Molly Jahn, a geneticist and plant breeder at the University of Wisconsin–Madison. “For me, breeding is much more like art. I need to see the variation and I don’t prejudge it,” she says. “I know what I’m after, but nature throws me curveballs all the time, and I probably can’t count the varieties that came from curveballs.”

There are also inherent tradeoffs that no algorithm can overcome. Consumers may prefer tomatoes with a leafy crown that stays green longer. But the price a breeder pays for that green calyx is one percent of the yield, says Tanksley.

Image recognition technology comes with its own host of challenges, says Walter. “To optimize algorithms to an extent that makes it possible to detect a certain trait, you have to train the algorithm thousands of times.” In practice, that means snapping thousands of crop images in a range of light conditions. Then there’s the ground-truthing. To know whether the models work, Walter and others must measure the trait they’re after by hand. Keen to know whether the model accurately captures the number of kernels on an ear of corn? You’d have to count the kernels yourself.

Despite these hurdles, Walter believes that computer science has brought us to the brink of a new agricultural revolution. In a 2017 PNAS Opinion piece, Walter and colleagues described emerging “smart farming” technologies—from autonomous weeding vehicles to moisture sensors in the soil (10). The authors worried, though, that only big industrial farms can afford these solutions. To make agriculture more sustainable, smaller farms in developing countries must have access as well.

Fortunately, “smart breeding” advances may have wider reach. Once image recognition technology becomes more developed for crops, which Walter expects will happen within the next 10 years, deploying it may be relatively inexpensive. Breeders could operate their own drones and obtain more precise ratings of traits like time to flowering or number of fruits in shorter time, says Walter. “The computing power that you need once you have established the algorithms is not very high.”

The genomic data so vital to AI-led breeding programs is also becoming more accessible. “We’re really at this point where genomics is cheap enough that you can apply these technologies to hundreds of species, maybe thousands,” says Buckler.

Plant breeding has “entered the engineered phase,” adds Tanksley. And with little time to spare. “The environment is changing,” he says. “You have to have a faster breeding process to respond to that.”

Published under the PNAS license.

References

1. United Nations, Department of Economic and Social Affairs, Population Division, World Population Prospects 2019: Highlights, (United Nations, New York, 2019).

2. N. Jones, “Redrawing the map: How the world’s climate zones are shifting” Yale Environment 360 (2018). https://e360.yale.edu/features/redrawing-the-map-how-the-worlds-climate-zones-are-shifting. Accessed 14 May 2020.

3. P. L. Pingali, Green revolution: Impacts, limits, and the path ahead. Proc. Natl. Acad. Sci. U.S.A. 109, 12302–12308 (2012).

4. D. Tilman, The greening of the green revolution. Nature 396, 211–212 (1998).

5. G. P. Ramstein, S. E. Jensen, E. S. Buckler, Breaking the curse of dimensionality to identify causal variants in Breeding 4. Theor. Appl. Genet. 132, 559–567 (2019).

6. D. Gonsalves, Control of papaya ringspot virus in papaya: A case study. Annu. Rev. Phytopathol. 36, 415–437 (1998).

7. N. Kirchgessner et al., The ETH field phenotyping platform FIP: A cable-suspended multi-sensor system. Funct. Plant Biol. 44, 154–168 (2016).

8. K. Yu, N. Kirchgessner, C. Grieder, A. Walter, A. Hund, An image analysis pipeline for automated classification of imaging light conditions and for quantification of wheat canopy cover time series in field phenotyping. Plant Methods 13, 15 (2017).

9. J. Streich et al., Can exascale computing and explainable artificial intelligence applied to plant biology deliver on the United Nations sustainable development goals? Curr. Opin. Biotechnol. 61, 217–225 (2020).

10. A. Walter, R. Finger, R. Huber, N. Buchmann, Opinion: Smart farming is key to developing sustainable agriculture. Proc. Natl. Acad. Sci. U.S.A. 114, 6148–6150 (2017).

Artificial intelligence already imitates Guimarães Rosa and may change the way we think (Folha de S.Paulo)

www1.folha.uol.com.br

Hermano Vianna, anthropologist, writes at the blog hermanovianna.wordpress.com

August 22, 2020


[summary] Astonished by the feats of technologies capable of producing text, even building on a sentence by Guimarães Rosa, the anthropologist examines the impact of artificial intelligence, points to ethical dilemmas in its use, fears a growing dependence on the countries that produce the software, and hopes the new practices will help more diverse and collaborative ways of thinking flourish in Brazil.

GPT-3 is the name of the new star in the quest for AI (artificial intelligence). It was launched in May of this year by OpenAI, a company about to complete five years since its billion-dollar founding, financed by, among others, Elon Musk.

So far, access to its already legendary giga-capacity for generating surprising text on any subject is the privilege of a few rich and powerful people. There are, however, amusing shortcuts for us poor mortals: one of them is the game "AI Dungeon", created by a Mormon student, which has been running on GPT-3 fuel since July.

The players' goal is to create works of literary fiction with the help of this AI model. The starting language is English, but I used Portuguese, and the little creature showed admirable footwork in dodging my trick.

I was even more demanding. I did not just use Portuguese, I used Guimarães Rosa. I copied and pasted, from the first page of "Grande Sertão: Veredas": "Alvejei mira em árvore, no quintal, no baixo do córrego" [I took aim at a tree, in the backyard, down by the creek]. "AI Dungeon", which up to that point had been speaking English, took the cue and carried on like this: "Uma fogueira crepitante brinca e lambiça em torno de um lindo carvalho" [a crackling bonfire plays and laps around a beautiful oak tree].

Fair enough, Rosa would never have written that sentence. I ran a search: "crepitar" [to crackle] does not appear anywhere in "Grande Sertão: Veredas", and oak trees are not usually neighbors of buriti palms. Still, GPT-3 understood that it needed to switch languages to play with me and decided to take a risk: a bonfire is not out of place in my backyard, least of all a playful one. And it did me the favor of confusing Rosa with James Joyce by inventing the verb "lambiçar", which my spell checker does not recognize, perhaps to suggest an elaborate or subtly greedy licking.

I was amazed. It is not every day that I get such a disconcerting reply. I ran another search, using Google's services: there is no record of the full sentence that "AI Dungeon" proposed. It really was an original creation. A "very creative" creation.

(I tested Joyce too: when I inserted "Introibo ad altare Dei", from "Ulysses", likewise sampled from its first page, the game was only slightly less surprising and sent back the Latin phrase translated into English.)

Originality. Creativity. The combination of it all really does seem like the attribute of an intelligent being, one aware of what it is doing or thinking.

From what I understand, since my own limited intelligence is not well trained in this subject, GPT-3, certainly the beefiest model yet for artificially generating text meant to hang together, has a very particular way of thinking, one I am unable to distinguish from what happens among our neurons: its method is statistical, probabilistic.

It is grounded in the analysis of an overwhelming quantity of text, nearly everything that exists on the internet, in several languages, including computer languages. Its simplest strategy, and I am certainly simplifying a great deal, is to identify which words tend to appear most often after which others. In its replies, it then guesses what, in its "thinking", seem to be the most "probable" continuations.
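A toy sketch of that "which word tends to follow which" idea, reduced to bigram counts over a dozen words; nothing like GPT-3's scale or architecture, but the same statistical spirit:

```python
from collections import Counter, defaultdict

# Toy corpus; the real model was trained on a huge slice of the internet.
corpus = "the fire crackles in the yard and the fire dances in the yard".split()

# Count which word follows which: a bigram table, the crudest version of
# "which words tend to appear after which others".
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def next_word_probabilities(word: str) -> dict:
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.most_common()}

print(next_word_probabilities("the"))  # here: {'fire': 0.5, 'yard': 0.5}
```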

Of course it does not "know" what it is talking about. Perhaps, in my Rosa test, if I had written fish, a "beautiful shark" might have appeared in place of the oak; and that would not mean this AI has any deep grasp of the fish-tree distinction.

But how deep does understanding have to go before we recognize it as genuinely intelligent? And isn't guessing, after all, an everyday feature of the workings of our own intelligence? Am I not guessing shamelessly right here, talking about things I neither master nor understand?

I am not writing this to try to define what intelligence or consciousness is; better to return to more concrete territory: probability. There is something unusual about a bonfire that plays. That association of ideas or words cannot be very common, but "tree" calling up "oak" points to machine learning training that did not happen in Brazil.

Around here, other trees are statistically more likely to sprout in our "national" memories when we enter the plant kingdom. I am thinking, of course, of a well-worn theme in the AI debate: the "bias" that is inevitable in these models, a consequence of the data that fed their learning, no matter how deep the "deep learning" was.

The most prejudiced examples are well known, such as the photo-recognition AI that classified Black people as gorillas because almost all the human beings it "saw" during training were white. A problem with the databases? We need to go deeper still.

That brings me back to the first article signed by Kai Fu Lee, an entrepreneur based in China, that I read in The New York Times. In short: in the AI race, the United States and China hold the top positions, far ahead of all other countries. A few big companies will be the winners.

Each advance demands enormous resources, including traditional energy sources; consider the unsustainable electricity consumption needed for GPT-3 to learn how to "lambiçar". Many jobs will disappear. Everyone will need something like a "universal income". Where will the money come from?

Kai Fu Lee's frightening answer (which I read in Google Translator's rendering, without my corrections) runs, in substance: "So, if most countries will not be able to tax the ultra-profitable AI companies to subsidize their workers, what options will they have? I foresee only one: unless they wish to plunge their people into poverty, they will be forced to negotiate with whichever country supplies most of their AI software, China or the United States, and essentially become that country's economic dependent, taking in welfare subsidies in exchange for letting the 'parent' AI nation's companies keep profiting from the dependent country's users. Such economic arrangements would reshape today's geopolitical alliances."

Despite the many errors of that machine translation, the conclusion is easy to grasp: a new dependency theory. Is this post-colonialism, or cyber-colonialism, as humanity's inevitable destiny?

And that is without touching on something central to the package being negotiated: the colony would also have to submit to the set of "biases" of the "mother AI nation". Brace yourself: oak forests, with no buritis.

Recently, though before the GPT-3 "hype", the same Kai Fu Lee made the news by giving AI's performance during the pandemic a B-. He spent his quarantine in Beijing. He says the couriers who delivered his shopping were always robots, and, from what I saw in the 2019 season of Expresso Futuro, filmed by Ronaldo Lemos and company in China, I believe him.

He was disappointed, however, by machine learning's lack of a leading role in the development of vaccines and treatments. With my poorly prepared audacity, I would venture a similar grade, perhaps a C+, to follow the American-university bias.

I applauded, for example, when IBM made Watson's services available to organizations fighting the coronavirus. Or when giant companies such as Google and Amazon banned the use of their facial recognition technologies after the anti-racism protests around the world.

Smaller companies, however, with surveillance AIs no less powerful, took advantage of the reduced competition to expand their client base. And we have seen how contact-tracing and contagion-tracking apps herald the totalitarian transparency of all our movements, through algorithms that have already made old notions of privacy obsolete.

All of it quite frightening for anyone who defends democratic principles. And yet not even the most authoritarian state will be guaranteed control over its own secrets.

These problems are acknowledged across the community of AI developers. There are many groups, such as The Partnership on AI, which ranges from OpenAI to the Electronic Frontier Foundation, that have for years been devoted to debating the ethical questions raised by the use of artificial intelligence.

It is an extremely complex debate, full of dangerous dead ends, as the trajectory of Mustafa Suleyman, one of the most fascinating personalities of the 21st century, makes clear. He was one of the three founders of DeepMind, the British company, later bought by Google, that created the famous AI that beat the world champion at Go, the board game created in China more than 2,500 years ago.

As biografias do trio poderiam inspirar filmes ou séries. Demis Hassabis tem pai grego-cipriota e mãe de Singapura; Shane Legg nasceu no norte da Nova Zelândia; e Mustafa Suleyman é filho de um taxista sírio imigrante em Londres.

A história de Suleyman pré-DeepMind é curiosa: enquanto estudava na Universidade de Oxford, montou um serviço telefônico para cuidar da saúde mental de jovens muçulmanos. Depois foi consultor para resolução de conflitos. No mundo da IA —hoje cuida de “policy” no Google— nunca teve papas na língua. Procure por suas palestras e entrevistas no YouTube: sempre tocou em todas as feridas, como se fosse crítico de fora, mas com lugar de fala do centro mais poderoso.

Gosto especialmente de sua palestra na Royal Society, com seu estilo pós-punk e apresentado pela princesa Ana. Mesmo assim, com toda sua consciência política muito clara e preocupações éticas que me parecem muito sinceras, Mustafa Suleyman se viu metido em um escândalo que envolve a acusação de uso sem autorização de dados de pacientes do NHS (serviço britânico de saúde pública) para desenvolvimento de aplicativos que pretendiam ajudar a monitorar doentes hospitalares em estado crítico.

Foram muitas as explicações da DeepMind, do Google e do NHS. Exemplo de problemas com os quais vamos viver cada vez mais e que precisam de novos marcos regulatórios para determinar que algoritmos podem se meter com nossas vidas —e, sobretudo, quem vai entender o que pode um algoritmo e o que pode a empresa dona desse algoritmo.

Uma coisa já aprendi, pensando nesse tipo de problema: diversidade não é importante apenas nos bancos de dados usados em processos de “machine learning”, mas também nas maneiras de cada IA “pensar” e nos sistemas de segurança para auditar os algoritmos que moldam esses pensamentos.

Essa necessidade tem sido mais bem explorada nas experiências que reúnem desenvolvedores de IA e artistas. Acompanho com enorme interesse o trabalho de Kenric McDowell, que cuida da aproximação de artistas com os laboratórios de “machine learning” do Google.

Seus trabalhos mais recentes investem na possibilidade de existência de inteligências não humanas e na busca de colaboração entre tipos diferentes de inteligências e modos de pensar, incluindo a inspiração nas cosmotécnicas do filósofo chinês Yuk Hui, que andou pela Paraíba e pelo Rio de Janeiro no ano passado.

Na mesma trilha, sigo a evolução da prática em artes e robótica de Ken Goldberg, professor da Universidade da Califórnia em Berkeley. Ele publicou um artigo no Wall Street Journal em 2017 defendendo a ideia que se tornou meu lema atual: esqueçam a singularidade, viva a multiplicidade.

Através de Ken Goldberg também aprendi o que é floresta randômica (“random forest”), método de “machine learning” que usa não apenas um algoritmo, mas uma mata atlântica de algoritmos, de preferência cada um pensando de um jeito, com decisões tomadas em conjunto, procurando, assim, entre outras vantagens, evitar vieses “individuais”.
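
Um esboço do método, assumindo a biblioteca scikit-learn e dados sintéticos (não é o código de nenhum sistema citado no texto): muitas árvores de decisão, cada uma treinada com amostras e atributos diferentes, decidem por voto.

```python
# Esboço com a biblioteca scikit-learn: uma floresta de 100 árvores de decisão,
# cada uma treinada com uma amostra diferente dos dados (bagging) e vendo só
# parte dos atributos em cada divisão, o que aumenta a diversidade de "opiniões".
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Dados sintéticos, apenas para a demonstração.
X, y = make_classification(n_samples=500, n_features=8, random_state=42)

floresta = RandomForestClassifier(
    n_estimators=100,      # 100 árvores, cada uma "pensando de um jeito"
    max_features="sqrt",   # cada divisão considera só uma parte dos atributos
    random_state=42,
)
floresta.fit(X, y)

# A decisão final é o voto da maioria das árvores, o que tende a diluir
# os vieses "individuais" de qualquer árvore isolada.
print(floresta.predict(X[:3]))
```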

Minha utopia desesperada de Brasil: que a “random forest” seja aqui cada vez mais verdejante. Com desenvolvimento de outras IAs, ou IAs realmente outras. Inteligências artificiais antropófagas. GPTs-n ao infinito, capazes de pensar nas 200 línguas indígenas que existem/resistem por aqui. Chatbots que façam rap com sotaque tecnobrega paraense, anunciando as fórmulas para resolução de todos os problemas alimentares da humanidade.

Inteligência não nos falta. Inteligência como a da jovem engenheira Marianne Linhares, que saiu da graduação da Universidade Federal de Campina Grande e foi direto para a DeepMind de Londres.

Em outro mundo possível, poderia continuar por aqui, colaborando com o pessoal de “machine learning” da UFPB (e via Github com o mundo todo), talvez inventando uma IA que realmente entenda a literatura de Guimarães Rosa. Ou que possa responder à pergunta de “Meu Tio o Iauaretê”, “você sabe o que onça pensa?”, pensando igual a uma onça. Bom. Bonito.

Da personalização do discurso em Aristóteles à personalização com algoritmos de IA (Época Negócios)

epocanegocios.globo.com

Dora Kaufman* – 11 Set 2020 – 10h30

Os algoritmos de inteligência artificial (IA) atuam como curadores da informação, personalizando, por exemplo, as respostas nas plataformas de busca como Google e a seleção do que será publicado no feed de notícias de cada usuário do Facebook. O ativista Eli Pariser (The Filter Bubble, 2011) reconhece a utilidade de sistemas de relevância ao fornecer conteúdo personalizado, mas alerta para os efeitos negativos da formação de “bolhas” ao reduzir a exposição a opiniões divergentes. Para Cass Sunstein (#republic, 2017), esses sistemas são responsáveis pelo aumento da polarização cultural e política, pondo em risco a democracia. Existem muitas críticas a esses sistemas, algumas justas, outras nem tanto; o fato é que personalização, curadoria, clusterização, mecanismos de persuasão, nada disso é novo; cabe é investigar o que mudou com a IA.

A personalização do discurso, por exemplo, remete a Aristóteles. A arte de conhecer o ouvinte e adaptar o discurso ao seu perfil, não para convencê-lo racionalmente, mas para conquistá-lo pelo “coração”, é o tema da obra “Retórica”. Composta de três volumes, a obra dedica o Livro II ao plano emocional, listando as emoções que um discurso persuasivo deve conter: ira, calma, amizade, inimizade, temor, confiança, vergonha, desvergonha, amabilidade, piedade, indignação, inveja e emulação. Para o filósofo, todos, de algum modo, praticam a retórica na sustentação de seus argumentos. Essa obra funda as bases da retórica ocidental que, com seus mecanismos de persuasão, busca influenciar o interlocutor, seja ele usuário, consumidor, cliente ou eleitor.

Cada modelo econômico tem seus próprios mecanismos de persuasão, que extrapolam motivações comerciais com impactos culturais e comportamentais. Na Economia Industrial, caracterizada pela produção e pelo consumo massivo de bens e serviços, a propaganda predominou como meio de convencimento nas decisões dos consumidores, inicialmente tratados como uma “massa” de indivíduos indistinguíveis. O advento das tecnologias digitais viabilizou a comunicação segmentada em função de características, perfis e preferências similares, mas ainda distante da hipersegmentação proporcionada pelas tecnologias de IA.

A hipersegmentação com algoritmos de IA é baseada na mineração de grandes conjuntos de dados (Big Data) e sofisticadas técnicas de análise e previsão, particularmente os modelos estatísticos de redes neurais/deep learning. Esses modelos permitem extrair dos dados informações sobre seus usuários e/ou consumidores e fazer previsões com alto grau de acurácia – desejos, comportamentos, interesses, padrões de pesquisa, por onde circulam, bem como a capacidade de pagamento e até o estado de saúde. Os algoritmos de IA transformam em informação útil a imensidão de dados gerados nas movimentações online.
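
Um esboço hipotético, só para dar concretude à ideia de previsão a partir de traços de comportamento (dados, colunas e números são inventados; assume a biblioteca scikit-learn, não qualquer sistema real de publicidade):

```python
# Esboço hipotético: uma pequena rede neural estima a propensão de compra a
# partir de traços de comportamento online. Dados e colunas são inventados.
import numpy as np
from sklearn.neural_network import MLPClassifier

# colunas: páginas visitadas, minutos no site, cliques em anúncios
comportamento = np.array([
    [40, 35.0, 6],
    [ 3,  1.5, 0],
    [25, 20.0, 4],
    [ 5,  2.0, 1],
])
comprou = np.array([1, 0, 1, 0])   # 1 = comprou, 0 = não comprou

modelo = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
modelo.fit(comportamento, comprou)

novo_usuario = np.array([[30, 28.0, 5]])
print(modelo.predict_proba(novo_usuario))   # probabilidade estimada de compra
```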

Na visão de Shoshana Zuboff (The Age of Surveillance Capitalism, 2019), a maior ameaça não está nos dados produzidos voluntariamente em nossas interações nos meios digitais (“dados consentidos”), mas nos “dados residuais”, sobre os quais os usuários de plataformas online não exercem controle. Até 2006, os dados residuais eram desprezados; com a sofisticação dos modelos preditivos de IA, esses dados tornaram-se valiosos: a velocidade de digitação, os erros gramaticais cometidos, o formato dos textos, as cores preferidas e mais uma infinidade de detalhes do comportamento dos usuários são registrados e inseridos na extensa base de dados, gerando projeções assertivas sobre o comportamento humano atual e futuro. Outro aspecto ressaltado por Zuboff é que as plataformas tecnológicas, em geral, captam mais dados do que o necessário para a dinâmica de seus modelos de negócio, ou seja, para melhorar produtos e serviços, e os utilizam para prever o comportamento de grupos específicos (“excedente comportamental”).

Esses processos de persuasão ocorrem em níveis invisíveis, sem conhecimento e/ou consentimento dos usuários, que desconhecem o potencial e a abrangência das previsões dos algoritmos de IA; num nível mais avançado, essas previsões envolvem personalidade, emoções, orientação sexual e política, ou seja, um conjunto de informações que em tese não era a intenção do usuário revelar. As fotos postadas nas redes sociais, por exemplo, geram os chamados “sinais de previsão” tais como os músculos e a simetria da face, informações utilizadas no treinamento de algoritmos de IA de reconhecimento de imagem.

A escala atual de geração, armazenamento e mineração de dados, associada aos modelos assertivos de personalização, é um dos elementos-chave da mudança de natureza dos atuais mecanismos de persuasão. Comparando os modelos tradicionais com os de algoritmos de IA, é possível detectar a extensão dessa mudança:
1) de mensagens elaboradas com base em conhecimento superficial e limitado do público-alvo, a partir do entendimento das características generalistas das categorias, para mensagens elaboradas com base em conhecimento profundo e detalhado/minucioso do público-alvo, com hipersegmentação e personalização;
2) de correlações entre variáveis determinadas pelo desenvolvedor do sistema para correlações entre variáveis determinadas automaticamente com base nos dados;
3) de limitados recursos para associar comportamentos offline e online para a capacidade de capturar e armazenar dados de comportamento offline e agregá-los aos dados capturados online, formando uma base de dados única, mais completa, mais diversificada e mais precisa;
4) de mecanismos de persuasão visíveis (propaganda na mídia) e relativamente visíveis (propaganda na internet) para mecanismos de persuasão invisíveis;
5) de baixo grau de assertividade para alto grau de assertividade;
6) de instrumentos de medição/verificação dos resultados limitados para instrumentos de medição/verificação dos resultados precisos;
7) de capacidade preditiva limitada a tendências futuras para capacidade preditiva de cenários futuros e de quando vão acontecer, com grau de acurácia média em torno de 80-90%; e
8) de reduzida capacidade de distorcer imagem e voz para enorme capacidade de distorcer imagem e voz, as deep fakes.

Como sempre, cabe à sociedade encontrar um ponto de equilíbrio entre os benefícios e as ameaças da IA. No caso, entre a proteção aos direitos humanos civilizatórios e a inovação e o avanço tecnológico, e entre a curadoria da informação e a manipulação do consumo, do acesso à informação e dos processos democráticos.

*Dora Kaufman é professora do TIDD PUC-SP, pós-doutora pela COPPE-UFRJ e pelo TIDD PUC-SP, e doutora pela ECA-USP com período na Université Paris-Sorbonne (Paris IV). Autora dos livros “O Despertar de Gulliver: os desafios das empresas nas redes digitais” e “A inteligência artificial irá suplantar a inteligência humana?”. Professora convidada da Fundação Dom Cabral.

Cientistas planejam a ressurreição digital com bots e humanoides (Canal Tech)

Por Natalie Rosa | 25 de Junho de 2020 às 16h40

Em fevereiro deste ano, o mundo todo se surpreendeu com a história de Jang Ji-sung, uma sul-coreana que “reencontrou” a sua filha, já falecida, graças à inteligência artificial. A garota morreu em 2016 devido a uma doença sanguínea.

No encontro simulado, a imagem da pequena Nayeon é exibida para a mãe que está em um fundo verde, também conhecido como chroma key, usando um headset de realidade virtual. A interação não foi só visual, como também foi possível conversar e brincar com a criança. Segundo Jang, a experiência foi como um sonho que ela sempre quis ter.

Encontro de Jang Ji-sung com a forma digitalizada da filha (Imagem: Reprodução)

Por mais que pareça uma tendência difícil de ser executada em massa na vida real, além de ser uma preocupação bastante antiga das produções de ficção científica, existem pessoas interessadas nesta forma de imortalidade. A questão que fica, no entanto, é se devemos fazer isso e como irá acontecer.

Em entrevista ao CNET, John Troyer, diretor do Centre for Death and Society (Centro para Morte e Sociedade) da Universidade de Bath, na Inglaterra, e autor do livro Technologies of the Human Corpse, conta que o interesse mais moderno pela imortalidade começou ainda na década de 1960. Na época, muitas pessoas acreditavam na ideia do processo criônico de preservação de corpos, quando um cadáver ou apenas uma cabeça humana eram congelados com a esperança de serem ressuscitados no futuro. Até o momento, ainda não houve tentativa de revivê-los.

“Aconteceu uma mudança na ciência da morte naquele tempo, e a ideia de que, de alguma forma, os humanos poderiam derrotar a morte”, explica Troyer. O especialista conta também que ainda não há uma pesquisa revisada que prove que o investimento de milhões no upload de dados do cérebro, ou ainda manter um corpo vivo, valha a pena.

Em 2016, um estudo publicado na revista acadêmica Plos One descobriu que expor um cérebro preservado a sondas químicas e elétricas o faz voltar a funcionar. “Tudo isso é uma aposta do que é possível no futuro. Mas eu não estou convencido de que é possível da maneira que estão descrevendo ou desejando”, completa.

Superando o luto

O caso que aconteceu na Coreia do Sul não foi o único que envolve o luto. Em 2015, Eugenia Kuyda, co-fundadora e CEO da empresa de softwares Replika, sofreu com a perda do seu melhor amigo Roman após um atropelamento em Moscou, na Rússia. A executiva decidiu, então, criar um chatbot treinado com milhares de mensagens de texto trocadas pelos dois ao longo dos anos, resultando em uma versão digital de Roman, que pode conversar com amigos e família.

“Foi muito emocionante. Eu não estava esperando me sentir assim porque eu trabalhei naquele chatbot e sabia como ele foi construído”, relata Kuyda. A experiência lembra bastante um dos episódios da série Black Mirror, que aborda um futuro distópico da tecnologia. Em Be Right Back, de 2013, uma jovem mulher perde o namorado em um acidente de carro e se inscreve em um projeto para que ela possa se comunicar com “ele” de forma digital, graças à inteligência artificial.

Por outro lado, Kuyda conta que o projeto não foi criado para ser comercializado, mas sim como uma forma pessoal de lidar com a perda do melhor amigo. Ela conta que qualquer pessoa que tentar reproduzir o feito vai encontrar uma série de empecilhos e dificuldades, como decidir qual tipo de informação será considerada pública ou privada, ou ainda com quem o chatbot poderá interagir. Isso porque a forma de se conversar com um amigo, por exemplo, não é a mesma com integrantes da família, e Kuyda diz que não há como fazer essa diferenciação.

A criação de uma versão digital de uma pessoa não vai desenvolver novas conversas e nem emitir novas opiniões, mas sim replicar frases e palavras já ditas, basicamente, se encaixando com o bate-papo. “Nós deixamos uma quantidade insana de dados, mas a maioria deles não é pessoal, privada ou baseada em termos de que tipo de pessoa nós somos”, diz Kuyda. Em resposta ao CNET, a executiva diz que é impossível obter dados 100% precisos de uma pessoa, pois atualmente não há alguma tecnologia que possa capturar o que está acontecendo em nossas mentes.
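
A mecânica descrita, de devolver frases já ditas que "se encaixam" na conversa, pode ser esboçada assim (exemplo hipotético, com mensagens fictícias e assumindo a biblioteca scikit-learn; não é o código do Replika):

```python
# Esboço hipotético de um chatbot "de memória": não inventa frases novas,
# apenas devolve a mensagem já dita mais parecida com a que acabou de chegar.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Mensagens fictícias, no papel do acervo de conversas da pessoa.
frases_ja_ditas = [
    "hoje o dia foi corrido, mas estou bem",
    "vamos tomar um café amanhã?",
    "adorei aquele filme que você indicou",
]

vetorizador = TfidfVectorizer()
matriz = vetorizador.fit_transform(frases_ja_ditas)

def responder(mensagem):
    consulta = vetorizador.transform([mensagem])
    similaridades = cosine_similarity(consulta, matriz)[0]
    return frases_ja_ditas[similaridades.argmax()]

print(responder("e aí, topa um café amanhã?"))   # tende a devolver a frase do café
```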

Sendo assim, a coleta de dados acaba sendo a maior barreira para criar algum tipo de software que represente uma pessoa após o falecimento. Parte disso acontece porque a maioria dos conteúdos postados online fica hospedada em plataformas de empresas, passando a pertencer a elas. Com isso, se um dia a companhia fechar, os dados vão embora junto com ela. Para Troyer, a tecnologia de memória não tende a sobreviver ao tempo.

Cérebro fresco

A startup Nectome vem se dedicando à preservação do cérebro, pensando na possível extração da memória após a morte. Para que isso aconteça, no entanto, o órgão precisa estar “fresco”, o que significaria que a morte teria que acontecer por uma eutanásia.

O objetivo da startup é conduzir os testes com voluntários que estejam em estado terminal de alguma doença e que permitam o suicídio assistido por médicos. Até o momento a Nectome coletou US$ 10 mil reembolsáveis para uma lista de espera para o procedimento, caso um dia a oportunidade esteja disponível. Por enquanto, a companhia ainda precisa se esforçar em ensaios clínicos.

A startup já arrecadou um milhão de dólares em financiamento e vinha colaborando com um neurocientista do MIT. Porém, a publicação da história gerou muita polêmica negativa entre cientistas e especialistas em ética, e o MIT encerrou o seu contrato com a startup. Na repercussão, especialistas afirmaram que o projeto da empresa não é realizável.

Veja a declaração feita pelo MIT na época:

“A neurociência não é suficientemente avançada ao ponto de sabermos se um método de preservação do cérebro é o suficiente para preservar diferentes tipos de biomoléculas relacionadas à memória e à mente. Também não se sabe se é possível recriar a consciência de uma pessoa”, disse a nota ainda em 2018.

Eternização com a realidade aumentada

Enquanto alguns pensam em extrair a mente de um cérebro, outras empresas optam por uma “ressurreição” mais simples, mas não menos invasiva. O projeto Augmented Eternity, por exemplo, tem como objetivo ajudar pessoas a viverem em um formato digital, transmitindo conhecimento das pessoas de hoje para as futuras gerações.

O fundador e CEO da empresa de computação FlyBits e professor do MIT Media Lab, Hossein Rahnama, vem tentando construir agentes de software que possam agir como herdeiros digitais. “Os Millennials estão criando gigabytes de dados diariamente e nós estamos alcançando um nível de maturidade em que podemos, realmente, criar uma versão digital de nós mesmos”, conta.

Para colocar o projeto em ação, o Augmented Eternity alimenta um mecanismo de aprendizado de máquina com emails, fotos e atividades de redes sociais da pessoa, analisando como ela pensa e age. Assim, é possível fornecer uma cópia digital de uma pessoa real, e ela pode interagir via chatbot, vídeo digitalmente editado ou ainda como um robô humanoide.

Falando em humanoides, no laboratório de robótica Intelligent Robotics, da Universidade de Osaka, no Japão, já existem mais de 30 androides parecidos com humanos, inclusive uma versão robótica de Hiroshi Ishiguro, diretor do setor. O cientista vem inovando no campo de pesquisa de interações entre humanos e robôs, estudando a importância de detalhes, como movimentos sutis dos olhos e expressões faciais.

Reprodução: Hiroshi Ishiguro Laboratory, ATR

Quando Ishiguro morrer, segundo o próprio, ele poderá ser substituído pelo seu robô para dar aulas aos seus alunos, mesmo que esta máquina nunca seja realmente ele e nem possa gerar novas ideias. “Nós não podemos transmitir as nossas consciências aos robôs. Compartilhamos, talvez, as memórias. Um robô pode dizer ‘Eu sou Hiroshi Ishiguro’, mas mesmo assim a consciência é independente”, afirma.

Para Ishiguro, no futuro nada disso será parecido com o que vemos na ficção científica. O download de memória, por exemplo, é algo que não vai acontecer, pois simplesmente não é possível. “Precisamos ter diferentes formas de fazer uma cópia de nossos cérebros, mas nós não sabemos ainda como fazer isso”, completa. 

IBM will no longer offer, develop, or research facial recognition technology (The Verge)

IBM’s CEO says we should reevaluate selling the technology to law enforcement

By Jay Peters Jun 8, 2020, 8:49pm EDT

Original article

IBM will no longer offer general purpose facial recognition or analysis software, IBM CEO Arvind Krishna said in a letter to Congress today. The company will also no longer develop or research the technology, IBM tells The Verge. Krishna addressed the letter to Sens. Cory Booker (D-NJ) and Kamala Harris (D-CA) and Reps. Karen Bass (D-CA), Hakeem Jeffries (D-NY), and Jerrold Nadler (D-NY).

“IBM firmly opposes and will not condone uses of any [facial recognition] technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” Krishna said in the letter. “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”

Facial recognition software has improved greatly over the last decade thanks to advances in artificial intelligence. At the same time, the technology — because it is often provided by private companies with little regulation or federal oversight — has been shown to suffer from bias along lines of age, race, and ethnicity, which can make the tools unreliable for law enforcement and security and ripe for potential civil rights abuses.

In 2018, research by Joy Buolamwini and Timnit Gebru revealed for the first time the extent to which many commercial facial recognition systems (including IBM’s) were biased. This work and the pair’s subsequent studies led to mainstream criticism of these algorithms and ongoing attempts to rectify bias.

A December 2019 National Institute of Standards and Technology study found “empirical evidence for the existence of a wide range of accuracy across demographic differences in the majority of the current face recognition algorithms that were evaluated,” for example. The technology has also come under fire for its role in privacy violations.

Notably, NIST’s study did not include technology from Amazon, which is one of the few major tech companies to sell facial recognition software to law enforcement. Yet Rekognition, the name of the program, has also been criticized for its accuracy. In 2018, the American Civil Liberties Union found that Rekognition incorrectly matched 28 members of Congress to faces picked from 25,000 public mugshots, for example.

Another company, Clearview AI, has come under heavy scrutiny starting earlier this year when it was discovered that its facial recognition tool, built with more than 3 billion images compiled in part from scraping social media sites, was being widely used by private sector companies and law enforcement agencies. Clearview has since been issued numerous cease and desist orders and is at the center of a number of privacy lawsuits. Facebook was also ordered in January to pay $550 million to settle a class-action lawsuit over its unlawful use of facial recognition technology.

IBM has tried to help with the issue of bias in facial recognition, releasing a public data set in 2018 designed to help reduce bias as part of the training data for a facial recognition model. But IBM was also found to be sharing a separate training data set of nearly one million photos in January 2019 taken from Flickr without the consent of the subjects — though the photos were shared under a Creative Commons license. IBM told The Verge in a statement at the time that the data set would only be accessed by verified researchers and only included images that were publicly available. The company also said that individuals can opt-out of the data set.

In his letter, Krishna also advocated for police reform, arguing that more police misconduct cases should be put under the purview of federal court and that Congress should make changes to qualified immunity doctrine, among other measures. In addition, Krishna said that “we need to create more open and equitable pathways for all Americans to acquire marketable skills and training,” and he suggested Congress consider scaling the P-TECH school model nationally and expanding eligibility for Pell Grants.

Update, June 9th, 2:45AM ET: This story has been updated to reference the work of AI researchers Joy Buolamwini and Timnit Gebru, whose 2018 Gender Shades project provided the first comprehensive empirical data on bias in facial recognition systems.

Zizek: Podemos vencer as cidades pós-humanas (Outras Palavras)

Em Nova York, constrói-se, agora, uma distopia: não haverá contato social; as maiorias sobreviverão de trabalhos braçais e subalternos; corporações e Estado controlarão os inseridos. Alternativa: incorporar as novas tecnologias ao Comum

Outras Palavras Tecnologia em Disputa

Por Slavoj Žižek | Tradução de Simone Paz – publicado 21/05/2020 às 21:49, atualizado 21/05/2020 às 22:06

As funções básicas do Estado de Nova York, muito em breve, poderão ser “reimaginadas” graças à aliança do governador Andrew Cuomo com a Big Tech personificada. Seria este o campo de testes para um futuro distópico sem contato físico?

Parece que a escolha básica que nos resta para lidar com a pandemia se reduz a duas opções: uma é ao estilo de Trump (com uma volta à atividade econômica sob as condições de liberdade de mercado e lucratividade, mesmo que isso traga milhares de mortes a mais); a outra é a que nossa mídia chama de o “jeitinho chinês” (um controle estatal, total e digitalizado, dos indivíduos).

Entretanto, nos EUA, ainda existe uma terceira opção, que vem sendo divulgada pelo governador de Nova York, Andrew Cuomo, e pelo ex-CEO do Google, Eric Schmidt — em conjunto com Michael Bloomberg e Bill Gates e sua esposa Melinda, nos bastidores. Naomi Klein e o The Intercept chamam essa alternativa de Screen New Deal [alusão jocosa ao Green New Deal, que pressupõe uma Virada Socioambiental; Screen New Deal seria algo como Virada para dentro das Telas]. Ele vem com a promessa de manter o indivíduo a salvo das infecções, mantendo todas as liberdades pessoais que interessam aos liberais — mas será que tem chances de funcionar?

Em uma de suas reflexões sobre a morte, o comediante de stand-up Anthony Jeselnik fala sobre sua avó: “Nós achávamos que ela tinha morrido feliz, enquanto dormia. Mas a autópsia revelou uma verdade horrível: ela morreu durante a autópsia”. Esse é o problema da autópsia de Eric Schmidt sobre nossa situação: a autópsia e suas implicações tornam nossa situação muito mais catastrófica do que é para ser.

Cuomo e Schmidt anunciaram um projeto para “reimaginar a realidade pós-Covid do estado de Nova York, com ênfase na integração permanente da tecnologia em todos os aspectos da vida cívica”. Na visão de Klein, isso levará a um “futuro-sem-contato permanente, altamente lucrativo”, no qual não existirá o dinheiro vivo, nem a necessidade de sair de casa para gastá-lo. Todos os serviços e mercadorias possíveis poderão ser encomendados pela internet, entregues por drone, e “compartilhados numa tela, por meio de uma plataforma”. E, para fazer esse futuro funcionar, seria necessário explorar massivamente “trabalhadores anônimos aglomerados em armazéns, data centers, fábricas de moderação de conteúdo, galpões de manufatura de eletrônicos, minas de lítio, fazendas industriais, plantas de processamento de carne, e prisões”. Existem dois aspectos cruciais que chamam a atenção nesta descrição logo de cara.

O primeiro é o paradoxo de que os privilegiados que poderão usufruir de uma vida nos ambientes sem contato serão, também, os mais controlados: toda a vida deles estará nua à verdadeira sede do poder, à combinação do governo com a Big Tech. Está certo que as redes que são a alma de nossa existência estejam nas mãos de empresas privadas como Google, Amazon e Apple? Empresas que, fundidas com agências de segurança estatais, terão a capacidade de censurar e manipular os dados disponíveis para nós ou mesmo nos desconectar do espaço público? Lembre-se de que Schmidt e Cuomo recebem imensos investimentos públicos nessas empresas — então, não deveria o público ter também acesso a elas e poder controlá-las? Em resumo, como propõe Klein, elas não deveriam ser transformadas em serviços públicos sem fins lucrativos? Sem um movimento semelhante, a democracia, em qualquer sentido significativo, será de fato abolida, já que o componente básico de nossos bens comuns — o espaço compartilhado de nossa comunicação e interação — estará sob controle privado.

O segundo aspecto é que o Screen New Deal intervém na luta de classes num ponto bem específico e preciso. A crise do vírus nos conscientizou completamente do papel crucial daqueles que David Harvey chamou de “nova classe trabalhadora”: cuidadores de todos os tipos, desde enfermeiros até aqueles que entregam comida e outros pacotes, ou os que esvaziam nossas lixeiras, etc. Para nós, que conseguimos nos auto-isolar, esses trabalhadores se tornaram nosso principal contato com o outro, em sua forma corpórea, uma fonte de ajuda, mas também de possível contágio. O Screen New Deal não passa de um plano para minimizar o papel visível dessa classe de cuidadores, que deve permanecer não-isolada, praticamente desprotegida, expondo-se ao perigo viral, para que nós, os privilegiados, possamos sobreviver em segurança — alguns até sonham com a possibilidade de que robôs passem a tomar conta dos idosos e lhes façam companhia… Mas esses cuidadores invisíveis podem se rebelar, exigindo maior proteção: na indústria de frigoríficos nos EUA, milhares de trabalhadores tiveram a covid, e dezenas morreram; e coisas semelhantes estão acontecendo na Alemanha. Agora, novas formas de luta de classes vão surgir.

Se levarmos esse projeto à sua conclusão hiperbólica, ao final do Screen New Deal existe a ideia de um cérebro conectado, de nossos cérebros compartilhando diretamente experiências em uma Singularidade, uma espécie de autoconsciência coletiva divina. Elon Musk, outro gênio da tecnologia de nossos tempos, recentemente declarou que ele acredita que em questão de 10 anos a linguagem humana estará obsoleta e que, se alguém ainda a utilizar, será “por motivos sentimentais”. Como diretor da Neuralink, ele diz que planeja conectar um dispositivo ao cérebro humano dentro de 12 meses.

Esse cenário, quando combinado com a extrapolação do futuro em casa de Naomi Klein, a partir das ambições dos simbiontes de Big Tech de Cuomo, não lembra a situação dos humanos no filme Matrix? Protegidos, fisicamente isolados e sem palavras em nossas bolhas de isolamento, estaremos mais unidos do que nunca, espiritualmente, enquanto os senhores da alta tecnologia lucram e uma multidão de milhões de humanos invisíveis faz o trabalho pesado — uma visão de pesadelo, se é que alguma vez existiu alguma.

No Chile, durante os protestos que eclodiram em outubro de 2019, uma pichação num muro dizia: “Outro fim de mundo é possível”. Essa deveria ser nossa resposta para o Screen New Deal: sim, nosso mundo chegou ao fim, mas um futuro-sem-contato não é a única alternativa, outro fim de mundo é possível.

Ética cresce em importância no mundo com menos religião, diz Luciano Floridi (Folha de S.Paulo)

Folha de S.Paulo

Raphael Hernandes – 19 de fevereiro de 2020

Ser um pioneiro em um dos ramos de uma área do conhecimento que possui milênios de existência, a filosofia, é para poucos. E esse é o caso do italiano Luciano Floridi, 55, professor da Universidade de Oxford.

Ele é um dos primeiros, e mais proeminentes, nomes nos campos de filosofia e ética da informação. Esses ramos estudam tópicos ligados à computação e tecnologia. É conselheiro da área para o governo britânico e trabalhou para empresas gigantes da área, como Google e a chinesa Tencent.

Ele também se destaca quando o assunto é especificamente IA (inteligência artificial). Floridi foi um dos 52 autores das “Orientações éticas para uma IA de confiança”, da União Europeia.

À Folha, falou sobre temas que foram desde o elementar na sua área, como a definição de inteligência artificial, a discussões mais complexas acerca de como pensar nossa relação com a tecnologia.

Para ele, a discussão moral cresce em importância na era digital. “Temos menos religião. As pessoas tendem a associar ética à religião um pouco menos do que no passado”, diz. “Ela precisa se sustentar sozinha.”

A conversa por videochamada durou aproximadamente uma hora e foi interrompida apenas uma vez: quando a mulher de Floridi, brasileira, foi embarcar num avião para visitar o país natal e ele quis desejar uma boa viagem.

A fala paciente e educada deu lugar à irritação quando o assunto se tornou o pensamento de Nick Bostrom, também filósofo da Universidade de Oxford, que versa sobre os riscos de a IA destruir a humanidade.

“A IA descrita na singularidade e na superinteligência de Nick Bostrom não é impossível”, diz. “Da mesma forma que é possível que uma civilização extraterrestre chegue aqui, domine e escravize a humanidade. Impossível? Não. Vamos nos planejar para o caso de isso acontecer? Só pode ser piada.”

*

Como definir IA? São artefatos construídos pelo homem capazes de fazer coisas no nosso lugar, para nós e, às vezes, melhor do que nós, com uma habilidade especial que não encontramos em outros artefatos mecânicos: aprender a partir de sua performance e melhorar.
Uma forma de descrever IA é como uma espécie de reservatório de operações para fazer coisas que podemos aplicar em contextos diferentes. Podemos aplicar para economizar eletricidade em casa, para encontrar informações interessantes sobre pessoas que visitam minha loja, para melhorar a câmera do meu celular, para recomendar em um site outros produtos dos quais o consumidor gostaria.

Na academia há muitas opiniões contrastantes sobre o que é IA. A definição de IA é importante? Uma definição diz “isso é aquilo” e “aquilo é isso”, como “água é H2O” e “H2O é água” e não tem erro. Não temos uma definição sobre IA dessa forma, mas também não temos definição de muitas coisas importantes na vida como amor, inteligência e por aí vai. Muitas vezes temos um bom entendimento, conseguimos reconhecer essas coisas ao vê-las. É crucial ter um bom entendimento da tecnologia porque aí temos as regras e a governança de algo que compreendemos.

Qual a importância da ética hoje, uma era digital? Ela se tornou mais e mais importante porque nós temos algo mais e algo menos. Temos menos religião, então ela precisa se sustentar sozinha. Não se pode justificar algo dizendo “porque a Igreja diz isso” ou porque “Deus mandou”. Um pouco menos de religião tornou o debate ético mais difícil, mas mais urgente.
E temos algo mais: falamos muito mais uns com os outros do que em qualquer momento no passado. Estou falando de globalização. De repente, diferentes visões sobre o que está certo e errado estão colidindo de uma forma que nunca aconteceu. Quanto mais tecnologia, ciência e poder tivermos sobre qualquer coisa –sociedade, o ambiente, nossas próprias vidas–, mais urgentes ficam as questões éticas.

E por que discutir ética em IA? Até recentemente, entendíamos em termos de “intervenções divinas” (para as pessoas do passado que acreditavam em Deus), “intervenções humanas” ou “intervenções animais”. Essas eram as forças possíveis. É como se tivéssemos um tabuleiro de xadrez em que, de repente, surge uma peça nova. Claramente, essa peça muda o jogo todo. É IA.
Se você tem algo que pode fazer coisas no mundo de forma autônoma e aprendendo, de modo que podem mudar seus próprios programas, sua atividade requer entendimento de certo e errado: ética.

Como respondemos essas perguntas e definimos os limites? No último ano tivemos um florescer de códigos éticos para IA. Dois em particular são bem importantes pelo alcance. Um é o da União Europeia. Fizemos um bom trabalho, penso, e temos uma boa estrutura na Europa para entender IA boa e não tão boa. O outro é da OCDE, uma estrutura semelhante.

Críticos dizem que esses documentos não são específicos o suficiente. O sr. os vê como um primeiro passo? Mostra que, pelo menos, algumas pessoas em algum lugar se importam o suficiente para produzir um documento sobre essa história toda. Isso é melhor do que nada, mas é só isso: melhor que nada. Alguns deles são completamente inúteis.
O que acontece agora é que toda empresa, toda instituição, todo governo sente que não pode ser deixado para trás. Se 100 empresas têm um documento com suas estruturas e regras para IA, se sou a empresa 102 também preciso ter. Não posso ser o único sem.
Precisamos fazer muito mais. Por isso, as diretrizes verdadeiras são feitas por governos, organizações ou instituições internacionais. Se você tem instituições internacionais, como a OCDE, a União Europeia e a Unesco, intervindo, já estamos em um novo passo na direção certa.
Olhe, por exemplo, a IA aplicada a reconhecimento facial. Já tivemos esse debate. Uso reconhecimento facial na minha loja? No aeroporto? Esse buraco tem que ser tapado e as pessoas o estão tapando. Eu tendo a ser um pouco otimista.

E como estamos nessa tradução de diretrizes em políticas práticas? Num contexto geral, vejo grandes empresas desenvolvendo serviços de consultoria para seus clientes e ajudando a verificar se estão de acordo com as regras e regulações, bem como se levaram em consideração questões éticas.
Há lacunas e mais precisa ser feito, mas algo já está disponível. As pessoas estão se mexendo em termos de legislação, autorregulação, políticas ou ferramentas digitais para traduzir princípios em práticas. O que se pode fazer é se certificar que os erros aconteçam o mais raramente possível e que, quando acontecerem, haja uma forma de retificar.

Com diferentes entidades, governos, instituições e empresas criando suas regras para uso de IA, não corremos o risco de ficar perdidos em termos de qual documento seguir? Um pouco, sim. No começo pode ter discordância, ou visões diferentes, mas isso é algo que já vivemos no passado.
Toda a história de padrões industriais e de negócios é cheia desses desacordos e, depois, reconciliação e encontrar uma plataforma comum para todos estarem em concordância.

As grandes empresas de tecnologia estão pedindo por regulação, o que é estranho, visto que elas normalmente tentam autorregulação. O sr. esteve com uma delas, a Google. Por que esse interesse das empresas de tecnologia agora? Há alguns motivos para isso. O primeiro é certeza: elas querem ter certeza do que é certo e errado. Empresas gostam de certeza, mais até do que de regras boas. Melhor ter regras ruins do que regra nenhuma. A segunda coisa é que entendem que a opinião pública pede por uma boa aplicação de IA. Dado que é opinião pública, tem que vir da sociedade o que é aceitável e o que não é. Empresas gostam de regulações desde que elas ajudem.

Há diferença ao pensar em regulações para sistemas com finalidades diferentes? Por exemplo, é diferente pensar em regulação em IA para carros automatizados e IA para sugestão de músicas? Sim e não. Há regulações que são comuns para muitas áreas. Pense nas regulações de segurança envolvendo eletricidade. Não importa se é uma furadeira elétrica, um forno elétrico ou um carro elétrico. É eletricidade e, portanto, tem regulações de segurança. Isso se aplicaria igualmente à IA. Mas aí você tem algo específico: você tem segurança ligada aos freios para o carro, não para o micro-ondas. Isso é bem específico. Penso, então, numa combinação dos dois: princípios que cubram várias áreas diferentes, diretrizes que se espalhem horizontalmente, mas também verticalmente pensando em setor por setor.

Quão longe estamos de ter essas diretrizes estabelecidas? Falamos de meses, anos, uma geração? Alguns anos. Eu não me surpreenderia se tivéssemos essa conversa em cinco anos e o que dissemos hoje fosse história.

E como funciona esse processo de pensamento na prática? Por exemplo, no caso de carros autônomos, como se chega a uma conclusão em relação à responsabilidade em caso de acidente: é do motorista, da fabricante, de quem? Tínhamos isso em muitos outros contextos antes da IA. A recomendação é distribuir a responsabilidade entre todos os agentes envolvidos, a menos que eles consigam provar que não tiveram nada a ver com o acidente.
Um exemplo bem concreto: na Holanda, se você andar de bicicleta ao lado de alguém, sem problemas. Você pode andar na rua, lado a lado com alguém e tudo bem. Se uma terceira pessoa se junta a vocês, é ilegal. Não se pode ir com três pessoas lado a lado numa rua pública. Quem recebe a multa? Todos os três, porque quando A e B estavam lado a lado e o C chega até eles, A e B poderiam reduzir a velocidade ou parar totalmente para que o C passasse. Agora, na mesma Holanda, outro exemplo, se dois barcos estão parados na margem do rio lado a lado, é legal. Se um terceiro barco chegar e parar ao lado deles, é ilegal. Nesse caso, somente o terceiro barco tomaria uma multa. Por quê? Porque os outros dois barcos não podem ir para lugar algum. Não é culpa deles. Pode ver que são dois exemplos bastante elementares, bem claros, com três agentes. Em um caso igualmente responsáveis e a responsabilidade é distribuída, no outro caso apenas um responsável.
Com IA é o mesmo. Em contextos nos quais tivermos uma pessoa, doida, usando IA para algo mal, a culpa é dessa pessoa. Não tem muito debate. Em muitos outros contextos, com muitos agentes, quem será culpado? Todos, a menos que provem que não fizeram nada de errado. Então o fabricante do carro, do software, o motorista, até mesmo a pessoa que atravessou a rua no lugar errado. Talvez haja corresponsabilidade que precise ser distribuída entre eles.

Seria sempre uma análise caso a caso? Acho que é mais tipos de casos. Não só um caso isolado, mas uma família de casos.
Façamos um exemplo realista. Se uma pessoa dirigindo em um carro autônomo não tem como dirigir, usar um volante, nada. É como eu em um trem, tenho zero controle. Aí o carro se envolve num acidente. De quem é a culpa? Você culparia um passageiro pelo acidente que o trem teve? Claro que não. Num caso em que haja um volante, em que haja um grande botão vermelho dizendo “se algo der errado, aperte o botão”… Quem é responsável? O fabricante do carro e o motorista que não apertou o botão.
Precisamos ser bastante concretos e nos certificar de que existem tipologias e, não exatamente caso a caso, mas compreendendo que caso tal pertence a tal tipologia. Aí teremos um senso claro do que está acontecendo.

Em suas palestras, o sr. menciona um uso em excesso e a subutilização de IA. Quais os problemas nessas situações? O excesso de uso, com um exemplo concreto, é como o debate que temos hoje sobre reconhecimento facial. Não precisamos disso em todos os cantos. É como matar mosquitos com uma granada.
A subutilização é típica, por exemplo, no setor de saúde. Não usamos porque a regulação não é muito clara, as pessoas têm medo das consequências.

A IA vai criar o futuro, estar em tudo? Temos uma grande oportunidade de fazer muito trabalho bom, tanto para nossos problemas sociais, desigualdade em particular, e para o ambiente, particularmente aquecimento global. É uma tecnologia muito poderosa que, nas mãos certas e com a governança correta, poderia fazer coisas fantásticas. Me preocupa um pouco o fato de que não estamos fazendo isso, estamos perdendo a oportunidade.
O motivo de a ética ser tão importante é exatamente porque a aplicação correta dessa tecnologia precisará de um projeto geral sobre a nossa sociedade. Gosto de chamá-lo de “projeto humano”. O que a sociedade irá querer. Qual futuro queremos deixar para as próximas gerações? Estamos preocupados com outras coisas, como usar IA para gerar mais dinheiro, basicamente.

E os direitos dos robôs? Deveríamos estar pensando nisso? [Risos]. Não, isso é uma piada. Você daria direitos à sua lavadora de louças? É uma peça de engenharia. É um bom entretenimento [falar de direito dos robôs], podemos brincar sobre isso, mas não falemos de Star Wars.

O sr. é crítico em relação à ficção científica que trata do fim do mundo por meio de IA ou superinteligência. Não vê a ideia de Nick Bostrom como uma possibilidade? Acho que as pessoas têm jogado com alguns truques. Esses são truques que ensinamos a alunos de filosofia no primeiro ano. O truque é falar sobre “possibilidade” e é exatamente essa a palavra que usam.
Deixe-me dar um exemplo: imagine que eu compre um bilhete de loteria. É possível que eu ganhe? Claro. Compro outro bilhete de outra loteria. É possível que eu ganhe da segunda vez? Sim, mas não vai acontecer. É improvável, é insignificantemente possível. Esse é o tipo de racionalização feita por Nick Bostrom. “Ah! Mas você não pode excluir a possibilidade…” Não, não posso. A IA descrita na singularidade e na superinteligência de Nick Bostrom não é impossível. Concordo. Significa que é possível? Não.
Da mesma forma que é possível que uma civilização extraterrestre chegue aqui, domine e escravize a humanidade. Impossível? Hmmm. Não. Vamos nos planejar para o caso de isso acontecer? Só pode ser piada.

A IA é uma força para o bem? Acho que sim. Como a maioria das tecnologias que já desenvolvemos são. Quando falamos da roda, alfabeto, computadores, eletricidade… São todas coisas boas. A internet. É tudo coisa boa. Podemos usar para algo ruim? Absolutamente.
Sou mais otimista em relação à tecnologia e menos em relação à humanidade. Acho que faremos uma bagunça com ela. Por isso, discussões como a do Nick Bostrom, singularidade, etc. não são simplesmente engraçadas. Elas distraem, e isso é sério.
Conforme falamos, temos 700 milhões de pessoas sem acesso a água limpa que poderiam usar IA para ter uma chance. E você realmente quer se preocupar com algum Exterminador do Futuro? Skynet? Eticamente falando, é irresponsável. Pare de ver Netflix e caia na real.

Reportagem: Raphael Hernandes/ Edição: Camila Marques, Eduardo Sodré, Roberto Dias / Ilustrações e infografia: Carolina Daffara

Marcelo Leite: Desinteligência artificial agrava Covid-19 (Folha de S.Paulo)

www1.folha.uol.com.br

04 de maio de 2020

Terça-feira (28) participei de uma teleconversa curiosa, sobre inteligência artificial (IA) e humanização da medicina. Parecia contradição nos termos, em especial nesta pandemia de Covid-19, ou debate sobre sexo dos anjos, quando estamos fracassando já na antessala da alta tecnologia –realizar testes diagnósticos, contar mortes corretamente e produzir dados estatísticos confiáveis.

O encontro virtual, que vai ao ar amanhã, faz parte de uma série (youtube.com/rio2c) que vem substituir a conferência sobre economia criativa Rio2C, cuja realização neste mês foi cancelada. Coube-me moderar o diálogo entre Sonoo Thadaney, do Presence –centro da Universidade Stanford dedicado à humanização do atendimento de saúde–, e Jorge Moll Neto, do Instituto D’Or de Pesquisa e Ensino (Idor), conhecido como Gito.

O coronavírus Sars-CoV-2 já legou cerca de 3,5 milhões de infectados e 250 mil mortos (números subestimados). A pandemia é agravada por líderes de nações populosas que chegaram ao poder e nele se mantêm espalhando desinformação com ajuda de algoritmos de redes sociais que privilegiam a estridência e os vieses de confirmação de seus seguidores.

Você entendeu: Donald Trump (EUA, 1/3 dos casos no mundo) e Jair Bolsonaro (Brasil, um distante 0,2% do total, mas marchando para números dantescos). Trump corrigiu em alguns graus o curso da nau de desvairados em que se tornou a Casa Branca; o Messias que não faz milagres ainda não deu sinais de imitá-lo, porque neste caso seria fazer a coisa certa.

Na teleconversa da Rio2C, Sonoo e Gito fizeram as perorações de praxe contra a substituição da ciência por ideologia na condução da pandemia. O diretor do Idor deu a entender que nunca viu tanta besteira saindo da boca de leigos e autointitulados especialistas.

A diretora do centro de Stanford, originária da Índia, disse que, se precisar preparar um frango tandoori, vai ligar e perguntar para quem sabe fazer. E não para qualquer médico que se aventura nos mares da epidemiologia dizendo que a Terra é plana, deduzo eu, para encompridar a metáfora, na esperança de que leitores brasileiros entendam de que deputado se trata.

Há razão para ver o vídeo da conversa (com legendas em português) e sair um pouco otimista. Gito afirmou que se dá mais importância e visibilidade às consequências negativas não pretendidas da tecnologia.

No caso, a IA e seus algoritmos dinâmicos, que tomam resultados em conta para indicar soluções, como apresentar em cada linha do tempo na rede social as notas com maior probabilidade de atraírem novos seguidores e de serem reproduzidas, curtidas ou comentadas (o chamado engajamento, que muitos confundem com sucesso).
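
Em termos bem esquemáticos, e com pesos, campos e números inteiramente inventados por mim (não é o algoritmo de nenhuma rede social real), a lógica de ordenar um feed pelo engajamento previsto poderia ser esboçada assim:

```python
# Esboço hipotético de ordenação de um feed por "engajamento previsto":
# cada publicação recebe uma pontuação e as mais "prováveis" de render cliques
# sobem na linha do tempo. Pesos, campos e números são inventados.
publicacoes = [
    {"texto": "foto do almoço",               "cliques_previstos": 0.02, "indignacao": 0.1},
    {"texto": "ataque raivoso a adversários", "cliques_previstos": 0.08, "indignacao": 0.9},
    {"texto": "nota ponderada, com fontes",   "cliques_previstos": 0.03, "indignacao": 0.2},
]

def pontuacao(pub, peso_indignacao=0.5):
    # Se o sistema aprende que estridência rende mais engajamento, ela ganha peso.
    return pub["cliques_previstos"] + peso_indignacao * pub["indignacao"]

feed = sorted(publicacoes, key=pontuacao, reverse=True)
for pub in feed:
    print(round(pontuacao(pub), 3), pub["texto"])
```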

Um bom nome para isso seria desinteligência artificial. A cizânia se espalha porque os usuários aprendem que receberão mais cliques quanto mais agressivos forem, substituindo por raiva os argumentos de que não dispõem para confirmar as próprias convicções e as daqueles que pensam como eles (viés de confirmação).

Já se pregou no passado que se deve acreditar mesmo que seja absurdo, ou porque absurdo (ouçam os “améns” com que fanáticos brindam Bolsonaro). Também já se disse que o sono da razão produz monstros.

O neurocientista do Idor prefere desviar a atenção para efeitos não pretendidos positivos das tecnologias. Cita as possibilidades abertas para enfrentar a Covid-19 com telefones celulares de última geração disseminados pelo mundo, mesmo em países pobres, como difusão de informação e bases de dados para monitorar mobilidade em tempos de isolamento social.

Há também os aplicativos multiusuário de conversa com vídeo, que facilitam o contato para coordenação entre colegas trabalhando em casa, a deliberação parlamentar a distância e, claro, as teleconsultas entre médicos e pacientes.

Sonoo diz que a IA libera profissionais de saúde para exercerem mais o que está na base da medicina, cuidar de pessoas de carne e osso. Mesmo que seja em ambiente virtual, o grande médico se diferencia do médico apenas bom por tratar o doente, não a doença.

Fica tudo mais complicado quando o espectro do contágio pelo corona paira sobre todos e uma interface de vídeo ou a parafernália na UTI afasta o doutor do enfermo. Mas há dicas simples para humanizar esses encontros, de portar uma foto da pessoa por trás da máscara a perguntar a origem de objetos que se vê pela tela na casa do paciente (mais sugestões em inglês aqui: youtu.be/DbLjEsD1XOI).

Conversamos ainda sobre diversidade, equidade, acesso e outras coisas importantes. Para terminar, contudo, cabe destacar o chamado de Gito para embutir valores nos algoritmos e chamar filósofos e outros especialistas de humanidades para as equipes que inventam aplicações de IA.

Os dois governos mencionados, porém, são inimigos da humanidade, no singular (empatia, mas também conjunto de mulheres, homens, velhos, crianças, enfermos, sãos, deficientes, atletas, patriotas ou não, ateus e crentes) e no plural (disciplinas que se ocupam das fontes e razões do que dá certo ou dá errado nas sociedades humanas e na cabeça das pessoas que as compõem).

São os reis eleitos da desinteligência artificial.