Tag archive: Discrimination

Black Brazilians say they have to dress up for medical appointments to get better care (Folha de S.Paulo)

www1.folha.uol.com.br

Structural racism harms the Black population from primary care to mental health; professionals suggest changes in training

Havolene Valinhos

November 19, 2022


Even when you are feeling unwell, would you worry about what to wear before going to the hospital or to a doctor’s appointment? Or would you fear how you would be treated while giving birth because of the color of your skin? This is just a sample of what goes through the minds of Brazil’s Black population.

As a child, hair braider Sara Viana, 22, was advised by her mother that they would be treated better if they arrived at the hospital well dressed. “She used to say that public hospitals don’t treat people from the community well. So I could be dying, but I would think: I have to tidy myself up at least a little.”

In 2020, Sara felt firsthand what her mother feared. She says she did not know she had a gallbladder infection and was rushed to the hospital in her pajamas.

“Even my hair was wet, because it was raining that day. I noticed how uncomfortable the triage nurse was; she didn’t even examine me, said it was cramps and classified me as low risk. Since I was in a lot of pain, my father and my brother took me to another hospital, where I was seen by a Black doctor who paid attention to me. The situation was so serious that I had surgery that same day,” she recalls.

Psychologist Alessandra Marques, who coordinates a group on motherhood at Casa de Marias, a listening and support space for Black women in situations of social vulnerability, says it is common to hear accounts of obstetric violence.

“There is an idea that Black women can bear more pain. It is common for patients to report fear before giving birth. It is already a moment of greater fragility, and these women may feel even more pain because they cannot relax due to the tension.”

In a frequently cited study published in 2016, researchers at the University of Virginia (USA) surveyed 222 white medical students and residents and found that more than a third of them wrongly believed that Black people have thicker skin than white people and, because of that, made less appropriate recommendations for pain treatment. The study was published in the Proceedings of the National Academy of Sciences.

These situations have consequences for mental health and can worsen when left untreated. “They do not always find qualified listening. Patients say that what they said about racism or about the loneliness of Black women was dismissed when they were seen by white psychologists. That kind of care can make the condition even worse.”

Ana Carolina Barros Silva, also a psychologist and the general coordinator of Casa de Marias, stresses the importance of Black professionals in caring for this population and cites her own experience as an example. She says she went through several white dermatologists and gynecologists until she finally found Black professionals who met her expectations.

“Dermatology is made up mostly of white people, who rarely have the training to deal with Black skin and, because of that, use protocols that are not appropriate.”

According to physiotherapist Merllin de Souza, a doctoral student at the USP (University of São Paulo) School of Medicine, the core curriculum of health programs has no mandatory courses on care for the Black population. There is, however, an elective course, called Formação do Profissional de Saúde e Combate ao Racismo (Training Health Professionals and Fighting Racism).

“Are these professionals being trained to care for the users of the SUS [Brazil’s public health system], of the UBSs [basic health units], of the small neighborhood clinics — Black people from the peripheries who live with structural racism?” she asks. “Sometimes a person arrives at a serious stage because they could not get the care they needed.”

For Souza, doctors’ basic training falls short in this respect. “The Code of Ethics provides for ethnic and racial respect, but most councils do not discuss these issues. We need to respect human rights and work on racial literacy with these professionals,” she says.

Ythalo Pau-Ferro, 22, is in the fourth semester of medical school at USP and says racism is evident in healthcare settings. “A Black patient who arrives vomiting may be taken for a drug user,” he criticizes. As a Black man, he says he wants to help change this picture.

Researcher Ana Claudia Sanches Baptista, 34, says she has never been seen by Black doctors, with the exception of her psychologist. “She is the first Black health professional who has ever treated me,” she says. She also recounts being humiliated by a gynecologist during a routine exam. “Black women are seen as breeders. That doctor was outraged that I didn’t want to have children.”

Dentist Guilherme Blum, 31, says he began researching the oral health of Black men because he missed that perspective in dental school. “The Black population is the one that loses the most teeth and has the most oral disease, but nobody talks about it,” he says, referring to cases of pain, cavities, tooth loss and the need for dentures.

Blum works in the Family Health Program and recalls an episode in which a patient hugged him because he felt identification and comfort. “There is a subjective notion that Black people can take more pain, which is in fact structural racism.”

Another appointment that marked him was with a 7-year-old child and her mother. “She said she was happy to have found me and cried as she remembered having suffered dental violence: many of her teeth had been pulled out.”

Discrimination in the workplace also poses risks to the mental health of the Black population. Nurse Carla Mantovan, 38, says that over five years she went through situations of racism that contributed to the worsening of her depression and also of her vitiligo. She took time off work for a while to care for her emotional health, but ended up resigning at the beginning of this year because the harassment continued after she returned to her duties.

“I had to make that decision for the sake of my mental health, but today I pay a high price. Our family income fell sharply, my husband also lost his job, and we have overdue loan payments and a one-and-a-half-year-old baby.” Both now work for themselves as confectioners.

“I didn’t file a police report or record what was going on. When I reported to my superiors what was happening, I had no proof and they didn’t validate what I said. And when I brought in a lawyer, the witnesses stayed silent,” says Carla.

The psychologist from Casa de Marias notes that, on top of racism, Black women face greater difficulty in accessing healthcare, education, housing and employment, which contributes to a life that is “ever more precarious.” “Added together, all these elements produce psychological suffering. Psychosomatic symptoms, anxiety disorders, depression and insomnia begin to appear,” says Silva.

As for men, she points out that Black men are pressured to be strong and rarely seek professional help when it comes to mental health. “Many only seek support when they are already in a state of crisis, of severe depression.”

Opinion | On Affirmative Action, What Once Seemed Unthinkable Might Become Real (New York Times)

nytimes.com


Guest Essay

Oct. 28, 2022

By Linda Greenhouse

As affirmative action prepares to meet its fate before a transformed Supreme Court, after having been deemed constitutional in higher education for more than four decades, the cases to be argued on Monday bring into sharp focus a stunning reality.

After all this time, after the civil rights movement and the many anti-discrimination laws it gave birth to, after the election of the first Black president and the profound racial reckoning of the past few years — perhaps because of all those things — the country is still debating the meaning of Brown v. Board of Education.

A dispute over what the court meant when it declared in 1954 that racial segregation in the public schools violates constitutional equality is not what I expected to find when I picked up the daunting pile of briefs filed in two cases challenging racially conscious admissions practices at Harvard and the University of North Carolina. There are more than 100 briefs, representing the views of hundreds of individual and organizational “friends of the court,” in addition to those filed by the parties themselves.

Both cases were developed by a made-to-order organization called Students for Fair Admissions Inc. The group asks the court in both cases to overturn Grutter v. Bollinger, its 2003 decision upholding affirmative action in student admissions to the University of Michigan’s law school.

Justice Sandra Day O’Connor, writing for the majority in Grutter, said then that society’s interest in maintaining a diverse educational environment was “compelling” and justified keeping affirmative action going, as needed, for the next 25 years. Since that was 19 years ago, I expected to read an argument for why the timetable should be foreshortened or, more broadly, why diversity should no longer be considered the compelling interest the court said it was in 1978 in Regents of the University of California v. Bakke. The court concluded in that case that race could be used as one criterion by universities in their admissions decisions.

Instead, I found this bold assertion on page 47 of the plaintiff’s main brief: “Because Brown is our law, Grutter cannot be.”

Relying on a kind of double bank shot, the argument by Students for Fair Admissions goes like this: The Brown decision interpreted the 14th Amendment’s equal protection guarantee to prohibit racial segregation in public schools. In doing so, it overturned the “separate but equal” doctrine established 58 years earlier in Plessy v. Ferguson. Therefore, the court in Brown necessarily bound itself to Justice John Marshall Harlan’s reference in his dissenting opinion in Plessy to a “colorblind” Constitution.

“Just as Brown overruled Plessy’s deviation from our ‘colorblind’ Constitution, this court should overrule Grutter’s,” the group asserts in its brief. “That decision has no more support in constitutional text or precedent than Plessy.”

Briefs on the universities’ side take vigorous issue with what the University of North Carolina’s brief calls “equal protection revisionism.” Noting that Justice Harlan’s objection to enforced separation of the races was that it imposed a “badge of servitude” on Black citizens, the brief observes that “policies that bring students together bear no such badge.”

Moreover, a brief by the NAACP Legal Defense and Educational Fund Inc., under the auspices of which Thurgood Marshall argued Brown before the Supreme Court, warns that the plaintiff’s position “would transform Brown from an indictment against racial apartheid into a tool that supports racial exclusion.” The “egregious error” in the court’s majority opinion in Plessy, the legal defense fund’s brief explains, was not its failure to embrace a “colorblind” ideal but its “failure to acknowledge the realities and consequences of persistent anti-Black racism in our society.” For that reason, the brief argues, the Grutter decision honored Brown, not Plessy.

“Some level of race-consciousness to ensure equal access to higher education remains critical to realizing the promise of Brown,” the defense fund argues.

Grutter was a 5-to-4 decision. While the court was plainly not at rest on the question of affirmative action, it evidently did not occur to the justices in 2003 to conduct their debate on the ground of which side was most loyal to Brown. Each of the four dissenters — Chief Justice William Rehnquist and Justices Anthony Kennedy, Antonin Scalia and Clarence Thomas — wrote an opinion. None cited Brown; Justice Thomas quoted Justice Harlan’s “our Constitution is colorblind” language from his Plessy dissent in the last paragraph of his 31-page opinion, which was mainly a passionate expression of his view that affirmative action has hurt rather than helped African Americans.

While the contest at the court over Brown’s meaning is new in the context of higher education, it was at the core of the 2007 decision known as Parents Involved, which concerned a limited use of race in K-12 school assignments to prevent integrated schools from becoming segregated again. In his opinion declaring the practice unconstitutional, Chief Justice John Roberts had this to say: “Before Brown, schoolchildren were told where they could and could not go to school based on the color of their skin. The school districts in these cases have not carried the heavy burden of demonstrating that we should allow this once again — even for very different reasons.” In his dissenting opinion, Justice Stephen Breyer called the chief justice’s appropriation of Brown “a cruel distortion of history.”

The invocation of a supposedly race-neutral 14th Amendment — as the former Reagan administration attorney general Edwin Meese III phrased it in his brief against the universities — goes to the very meaning of equal protection. That was clear earlier this month in the argument in the court’s important Voting Rights Act case in the new term.

Alabama is appealing a decision requiring it to draw a second congressional district with a Black majority. Alabama’s solicitor general, Edmund LaCour, denounced the decision as imposing a racial gerrymander that he said placed the Voting Rights Act “at war with itself and with the Constitution.” “The Fourteenth Amendment is a prohibition on discriminatory state action,” he told the justices. “It is not an obligation to engage in affirmative discrimination in favor of some groups vis-à-vis others.”

The newest member of the court, Justice Ketanji Brown Jackson, pushed back strongly with an opposite account of the 14th Amendment’s origins. “I don’t think that the historical record establishes that the founders believed that race neutrality or race blindness was required,” she said. “The entire point of the amendment was to secure the rights of the freed former slaves.”

It is no coincidence that challenges to the constitutionality of both affirmative action and the Voting Rights Act appear on the court’s calendar in a single term. The conjunction reflects the accurate perception that the current court is open to fundamental re-examination of both. Indeed, decisions going back to the 1980s have held that in setting government policy, race cannot be a “predominant” consideration. But whether because the votes haven’t been there or from some institutional humility no longer in evidence, the court always stopped short of proceeding to the next question: whether the Constitution permits the consideration of race at all.

That question, always lurking in the background, is now front and center. Not too long ago, it would have been scarcely thinkable that if and when the court took that step, it would do so in the name of Brown v. Board of Education. But if the last term taught us anything, it’s that the gap between the unthinkable and the real is very short, and shrinking fast.

Ms. Greenhouse, the recipient of a 1998 Pulitzer Prize, reported on the Supreme Court for The Times from 1978 to 2008 and was a contributing Opinion writer from 2009 to 2021.

Bolsonaro’s remarks on cannibalism among Indigenous people cause outrage, says Yanomami leader (Folha de S.Paulo)

www1.folha.uol.com.br

President of Indigenous health council says the practice does not exist, and anthropologist sees delusion in remark resurfaced during the campaign

Vinicius Sassine

October 7, 2022


The claim by President Jair Bolsonaro (PL) about cannibalism among Indigenous people in the Surucucu region, made in 2016 and brought back during the runoff campaign, is false, repulsive, offensive and a source of outrage among Indigenous people. That is what Júnior Yanomami, president of the Condisi (Indigenous Health District Council) for the Yanomami and Ye’kuana, says.

“I am outraged, angry. How can a president who is a candidate say such a thing? He is a person who does not know Brazil. My people are not cannibals, they do not eat humans. That does not exist and never existed, not even among our ancestors,” Júnior tells Folha.

The Condisi president is from the Surucucu region, one of the largest areas of the Yanomami Indigenous Territory, in the municipality of Alto Alegre (RR). Some 3,500 Yanomami live there, in 34 communities. The Army maintains a PEF (Special Border Platoon) in the region.

Anthropologist Rogério Pateo, a professor in the Department of Anthropology and Archaeology at UFMG (Federal University of Minas Gerais), lived in Surucucu for nine months for a doctorate on the Indigenous people and has been in contact with them since 1998. For Pateo, Bolsonaro’s reference is to the Yanomami of the Surucucu region in Roraima.

“What he says is delusional. It is absurd on a whole other level, typical of someone who lives in that bubble of prejudice against Indigenous people. The Yanomami have strict food codes. They don’t even eat undercooked game,” says the anthropologist, who adds that he knows of no practice of cannibalism among any other Brazilian Indigenous peoples.

Bolsonaro’s statements, made when he was a federal congressman, resurfaced on social media and were seized on by the campaign of former president Lula (PT), which put the remarks into its TV campaign ads. Bolsonaro’s campaign said it would take the video to the TSE (Superior Electoral Court).

The video is on Bolsonaro’s own YouTube channel. He labels the footage, which runs for more than an hour, as an interview given to The New York Times. It was posted on March 24, 2016.

“I almost ate an Indian in Surucucu once,” the then congressman says in the video. Bolsonaro says he was in Surucucu once. “I started seeing the Indian women going by with loads of bananas on their backs. And the Indian walks past cleaning his teeth with grass. ‘What’s going on?’ I saw a lot of people walking around. ‘An Indian died and they are cooking him.’ They cook the Indian.”

Bolsonaro goes on, speaking to the journalist: “It’s their culture. They put the body in. It’s to be eaten. It cooks for two, three days, and they eat it with banana. And so I wanted to see the Indian being cooked. Then the guy said: ‘If you go, you have to eat it.’ ‘I’ll eat it.’ Then nobody from the delegation wanted to go.”

The then congressman goes further: “I would eat the Indian, no problem at all. It’s their culture.”

No such culture, habit, practice or history of anything of the kind exists among the Yanomami of Surucucu, according to Júnior Yanomami, who was born and raised in the community, where he still lives with his family.

“I was not aware of this statement by Bolsonaro,” Júnior says.

He describes how funeral rites work among the Yanomami. First, there are two days of gathering among the Indigenous people. Then two people are chosen to carry the body deep into the forest, where it remains for 30 to 45 days, watched over and suspended on thin wooden structures.

Cremation follows, and the ashes are kept in vessels. If the person who died was considered important to the community, such as a shaman, a leader or a hunter, the ashes may be kept for years, and they may be shared out among the community.

“What Bolsonaro said is deeply offensive and hurtful. There is no record of him ever having gone to Surucucu,” Júnior says. “Society will think we are cannibals. This person is not right in the head. He has nothing to offer Brazil.”

For anthropologist Rogério Pateo, what Bolsonaro is doing is reproducing a cartoon image.

“The accounts that do exist are about Tupinambá warriors, on the coast and in the 16th century, capturing and roasting their enemies,” he says. “The Yanomami don’t even eat jaguar meat, because they say the jaguar eats people.”

According to Pateo, Bolsonaro’s statements are the expression of “prejudice at the basest level.” “He has in his head that image that frightened Europe 500 years ago. It is prejudice and racism. Today there is no trace of that image of cannibalism among Brazilian Indigenous peoples.”

Precepts of the Pombagira: women of the terreiros and their struggles (Outras Palavras)

outraspalavras.net

by AzMina

Published 02/17/2022 at 3:11 pm – Updated 02/17/2022 at 3:23 pm

A visit to places of worship of African-derived religions – which have historically had women as their leaders. The Iaôs and Ialorixás have become references in the struggles for the right to religious, racial and gender equality

By Aymê Brito, for AzMina

“Exu (…) exerts a strong hold over women and girls,” said an opinion column in the newspaper O Estado de São Paulo in 1973. Written during Brazil’s military dictatorship, the article demonized African-derived religions and voiced concern that women would abandon the “home” in exchange for life in the terreiros. Almost five decades later, sexism and racism are still present in the lives of women who choose to join Afro-Brazilian religions, but they resist and lead terreiros.

It is not common to see women in leadership positions in other religions – in the Catholic Church, for instance, priests and popes are men. In African-derived religions, by contrast, women are almost always the majority and occupy the highest posts. Anyone who frequents the barracões (as the terreiros are also called) notices this.

Whether as mulheres de santo, ladies of the ilê, priestesses or heirs of axé, they have won a prominence that has not stayed confined to the terreiros. Axé Muntu! This is an expression coined by the intellectual Lélia Gonzalez – a blend of Yoruba (axé: power, energy) and Kimbundu (muntu: people). The sociologist and activist drew heavily on her experience as a woman of Candomblé in her intellectual work on the lives and position of Black women in Brazilian society.

In this report we bring the voices of Mãe Du, Nailah, Kenya and Renata, who, like Lélia, show that the influence of the terreiro peoples can be found today in academia, in activism, in politics, in cooking and in many other areas of society.

In a country marked by deep social and racial inequalities, as Brazil is, the terreiros and the women at their head – the macumbeiras, as they call themselves – play a social role that goes far beyond religion. They work a true piece of “sorcery” by reconciling the traditions of different peoples, resisting oppression and helping to provide a welcoming space for those who have always been excluded.

Persecution of the culture and of women

The persecution of terreiros and barracões, which has gone on for more than 500 years, and the defamation campaigns in the press have produced widespread ignorance. “Umbanda, with its offshoots and kindred religions, is among us a by-product of ignorance allied with petty politics. Its terrain of choice used to be the quilombo and the mocambo. Nowadays it is the favela and the campaign office,” read another passage of the São Paulo newspaper’s column, published shortly after a celebration of the Day of Oxóssi.

Racist coverage like this was not (and is not) rare. It is the residue of a society that, until 1832, forced everyone to convert to the official state religion – at the time, Christianity. This meant that other religious expressions were criminalized, suffering police repression and the seizure of sacred objects – which to this day have never been returned.

Political scientist and Candomblé practitioner Nailah Neves, Ìyàwó ty Ọ̀ṣun (her saint name), says this persecution was also a result of women being the majority and leading the houses of axé. “Terreiros, quilombos and samba schools, which were spaces of resistance and of valuing matriarchal Black culture, posed a great risk to the eugenicist and patriarchal project of the Brazilian state.”

Thirty-four years after the Federal Constitution came to guarantee, in its Article 5, freedom of belief and protection for the places of worship of different religions, discrimination has not ended. In 2021, a study by the Commission to Combat Religious Intolerance found that 91% of the attacks recorded in the state of Rio de Janeiro were against the same religions – those of African tradition.

Teachings of the pombagira

Kenya Odara, 23, is one of the co-founders of the Black women’s collective Siriricas Co and currently attends the Candomblé terreiro Àse Efon Omibainà, made up only of women. “When we are in the terreiros we are not concerned only with the religious question; we are Black women, our whole existence is political.”

Although the assaults on Afro-religious people have been many, the terreiros and their women keep passing down, from generation to generation, the precepts and foundations of the people of axé. Renata Pallottine, 36, is the great-granddaughter of Dona Maria, a Mãe de Santo of an Umbanda house in the interior of São Paulo state, and grew up learning the civilizational values of this community.

A lawyer for women’s rights who works against religious racism, Renata is currently responsible for the legal area of the Terreiro Resiste collective, a movement in defense of traditional communities. Today, as one of the eldest filhas de santo of a terreiro in the São Paulo state capital, she says it was this lived experience that contributed to her engagement in the struggle:

“Whoever is born into Umbanda learns from the Pombagira that gender inequality kills, annihilates and silences, and that women, above all racialized women, must occupy places of power and decision within our communities.”

The Pombagira is one of the entities worshipped in these religions; she represents the crossroads and is known for symbolizing a female figure linked to pleasure and sexual freedom. Renata explains that in many places the figure of the pombagira is feared precisely because she breaks with patriarchal logic: “a woman who poetically teaches us the autonomy of female bodies.”

Renata also draws attention to the history of these religions, which come from a culture that values ancestral, socially excluded peoples but has undergone a strong whitening in recent years. “In 1908, a white man, a military man, a spiritist from São Gonçalo, supposedly founded the religion just because he gave a name to practices that already existed in Rio’s hillside communities. How can you found something that already exists?” the lawyer asks.

The family of saint

I, the reporter of this story, grew up hearing the tales of the macumbeiras, told by Elza Mendes, a 72-year-old Black woman from Bahia and my grandmother. She has been dealing with society’s ignorance about her culture for at least 50 years. “Nobody looks on it kindly; even today people are very afraid, they think it is magic,” she says. But she always stresses the feeling, inside the terreiro, of belonging to a community. “When you embrace a terreiro, you become part of a community,” she says.

Today a practitioner of Candomblé, Elza was the first to become an Iaô on a day of feitura (initiation), receiving the title of dofona.

Glossary:

Yoruba: an ethno-linguistic group of West Africa, mainly in Nigeria and Congo. The language varies from place to place and is used in the rituals of African-derived traditions.

Feitura no santo: the initiation of someone into the worship of the orixás. The initiate may receive a new name and takes on new roles. The ritual varies according to the religion and can last up to three months.

Orixás (in Yoruba: Òrìṣà): deities represented by nature, believed to have previously existed in Orum (sky, in Yoruba).

Aborós: orixás of masculine energy. They can be incorporated by people of all genders.

Ayabás: orixás of feminine energy. They can be incorporated by people of all genders.

Dona Elza says that when you start to be part of a terreiro you also become a member of a family of saint (família de santo). “So much so that we say brother, uncle, child of saint,” she remarks. In many places, terreiros are known for welcoming all kinds of people. “A mãe de santo never fails to take in a child; even someone with nowhere to live will be well received in the terreiro.”

This welcome is closely tied to the presence of women in the religion and to the very history of Black people in Brazil, explains researcher Jacyara Silva, professor and coordinator of the center for Afro-Brazilian studies at the Federal University of Espírito Santo (UFES). “It is important to remember that the families of the Black people who arrived in Brazil were split apart as a strategy of domination.”

After the abduction of Black people from the African continent, forming “families of saint” was the way found to preserve cultural identity and rebuild the idea of family that slavery had destroyed. The main architects of these remade family ties, within the Afro-Brazilian religions, were Black women, the Yalorixás. The barracões came to be present in most of the country’s peripheral regions, taking in people stigmatized by society, such as single mothers and LGBTQIA+ people.

“That does not mean the same problems that exist outside the terreiros do not exist inside them,” Jacyara explained. African-derived religions are embedded in a society where racism, sexism and transphobia are structural, so the daily life of the terreiros is not free of these issues. Still, “it may be in the structure, but it is not institutionalized,” the researcher noted.

Debating beyond the terreiros

Maria do Carmo, Omó de Omolú Iemanjá Oxalá, known as Mãe Du, is one of the women at the head of an Umbanda terreiro in the city of Viçosa, in the interior of Minas Gerais. Despite the great respect she has earned among her own, she has had to face prejudice from the mothers and teachers at the school her daughter attended. “People were a bit wary,” she says.

The strength to carry on for more than 20 years defending the terreiro peoples comes from the belief that tomorrow will be better than today. Her path in the worship of the orixás actually goes back 50 years. “I was the first Iaô here; I walked through the whole town dressed in white.” Mãe Du is now in Umbanda, but she was initiated in Candomblé, where she had to go through many stages before truly becoming an Iaô – a filha de santo. Becoming “made in the saint” is a victory for most women of axé, because it is a process of several stages that demands a great deal of time, dedication and practice within the terreiro.

She is also a spiritualist leader and sits on the Municipal Council for the Promotion of Racial Equality of Viçosa. Positions outside the terreiro are a milestone and an important form of representation for followers of African-derived religions, but they are also risky spaces. “Defending what you are, these days, is dangerous, especially for us women.”

Prejudice ends up keeping other practitioners away from religious gatherings and debates, since they prefer to protect themselves. But Mãe Du – who in recent years has been traveling to speak about African-derived religions at universities – feels that people are now beginning to want to understand more about her culture.

Ancestral hierarchy

In much of African tradition, hierarchy is based not on gender but on experience and knowledge. “Matriarchy comes naturally to many African peoples, not least because hierarchy is not organized by gender, as the Europeans imposed, but by ancestry,” explained Candomblé practitioner Nailah Neves.

African-derived religions do not split the world into good and evil, emotion and science, body and soul, men and women. Nailah argues that this binary logic was imposed on the peoples being colonized under the influence of Christian Eurocentrism. In Umbanda and Candomblé there is another way of seeing and relating to the world. “They are not just religions; they are traditional peoples and communities, just as the quilombos are.”

The Afro-Brazilian religions we know today are the fruit of the characteristics of the many African peoples who came together in this country and, precisely for that reason, they vary according to the nation or tradition of origin, as with Candomblé, Umbanda, Batuque and Xangô.

With no official book of any kind, such as the Bible, their foundations are passed down through the generations by oral tradition, and they are not always the same everywhere. The precepts and customs are not “written in stone.”

ACTIONS AND SPACES TAKEN UP BY THE WOMEN OF AXÉ IN RECENT YEARS:

  • In Brazil, the National Day to Combat Religious Intolerance, January 21, a date that affirms religious diversity, was created in homage to a religious leader, Mãe Gilda. In 1999 her terreiro in Salvador was invaded and vandalized by religious fundamentalists, and she died the following year.
  • In 2021, the Organization of Women of Axé of Brazil (MAB) ran a campaign against menstrual violence, distributing more than 23,000 packs of sanitary pads to people in situations of economic and social vulnerability.
  • The National Forum for Food and Nutritional Security of Traditional Peoples of African Origin (FONSANPOTMA), chaired by the physician and religious leader Kato Mulanji, is one of the organizations fighting to guarantee food sovereignty for traditional peoples.
  • Since 2017, the women of axé have had the profession of baiana de acarajé officially recognized and have gained access to professional benefits. In 2005 their craft had already been recognized as Intangible Cultural Heritage of Brazil.
  • Across the country, terreiros run community outreach projects, workshops, food distribution and initiatives against violence. The Ilê Omolu Oxum, led by ialorixá Mãe Meninazinha de Oxum and active in the Baixada Fluminense since 1968, is one of those offering guidance to women who are victims of violence.

What religious racism is. And its effect on children (Nexo)

Iraci Falavina e Guilherme Gurgel

Jan. 21, 2022 (updated 01/21/2022 at 8:39 pm)

Parents who practice African-derived religions in Brazil report cases of prejudice, including losing custody of their children with the courts’ consent
Devotees of Candomblé carry baskets of flowers during a religious ceremony in Bahia

This content was produced by the authors as their final project for the Lab Nexo de Jornalismo Digital, whose theme was “Early Childhood and Inequalities” and which took place in the second half of 2021. The program is an initiative of Nexo Jornal in partnership with Fundação Maria Cecilia Souto Vidigal, with support from Porticus América Latina and Insper.

Data from the Ministry of Women, Family and Human Rights show 645 recorded violations of freedom of belief and religion in Brazil between January and December 2021, with the largest share related to African-derived religions — including Candomblé, Umbanda and others. Earlier surveys reflect the same reality.

INTOLERANCE

Recorded violations of religious freedom in Brazil, by the victim’s gender, according to data from the National Human Rights Ombudsman’s Office (Ouvidoria Nacional de Direitos Humanos)

The prejudice surrounding those who practice Candomblé, Umbanda and other Afro-Brazilian denominations is part of the phenomenon of religious racism. It is a problem that, according to specialists, has an especially damaging impact on children.

In this piece, Nexo explains what constitutes religious racism, shows what the legislation says on the subject and brings accounts that range from prejudice in the school environment to court rulings that separate children from their parents.

The concept and the legislation

The expression “religious racism” is not in the Penal Code, but the conduct falls under Law No. 7,716 of January 5, 1989, according to Gilberto Silva, a lawyer specializing in racial crimes.

That law covers crimes motivated by “discrimination or prejudice based on race, color, ethnicity, religion or national origin,” with penalties of one to three years in prison.

The term “religious racism,” then, ends up being used to underline a central feature of Brazilian society: structural racism in Brazil.

Silva says the law is still seen by many as ineffective and permissive. Alexandre Marcussi, professor of African history at UFMG (Federal University of Minas Gerais), agrees that punishment remains ineffective in cases of religious racism. “The law is extremely lenient. This has been especially true in recent years in Brazil, with the rise to power and the influence of Pentecostal religious groups, which carry out recurring attacks on African-derived religious practices,” he says.

“These acts of intolerance can be understood less as intolerance toward the practices of these religions and more as intolerance toward the segments of the population historically associated with them” – Alexandre Marcussi, professor of African history at UFMG

Brazil lived through 300 years of slavery, a period in which millions of people were forcibly taken from regions of Africa to be used and traded as merchandise. The culture and religion of these people were subjected to an attempted erasure.

Article 5 of the Brazilian Constitution of 1824, for example, established Catholicism as the official religion of the Empire. Article 276 of the Criminal Code of 1830, in turn, prohibited celebrating, at home, in public or in temples, “the worship of any religion other than that of the State.”

Abolition was only proclaimed in Brazil in 1888, and the Brazilian state only became secular in 1890, with Decree No. 119-A of January 7 of that year. The decree granted all religious denominations “the faculty to exercise their worship, to govern themselves according to their faith and not to be hindered,” and prohibited the state from establishing an official religion.

Later, in the 1988 Constitution, known as the Citizen Constitution, item 6 of Article 5 guarantees that freedom of belief and the free exercise of religious worship are inviolable.

Even so, the preamble of the current Constitution states that the document was promulgated “under the protection of God,” a trace of the still influential Christian religion in the country.

“Nobody minds a mother taking her baby to be baptized as a Christian. It’s a beautiful ceremony, celebrated, remembered. But everyone is bothered by children being initiated into Candomblé and Umbanda, even when they are accompanied by their parents. What is that, if not religious racism?” – Makota Celinha, general coordinator of Cenarab (National Center of Africanity and Afro-Brazilian Resistance)

Religious racism at school

Children from African-derived religions face prejudice at school starting with their games, according to Makota Kidoiale, leader of the Manzo N’Gunzo Kaiango quilombola community and coordinator of the Educa Quilombo program in Belo Horizonte.

“In my grandchildren’s first year of school, they would go to the playground and their games were very different from what the school’s own structure had been designed to accommodate. They kept re-enacting everything they lived inside the terreiro,” she says.

According to Kidoiale, the school administration was bothered by the children’s behavior. “They were afraid of creating a problem with other families, because the other children might reproduce it at home. I questioned this: just as my grandson brought another culture, another tradition, other kinds of knowledge into our home, why not weave everything he experienced in the community into the school?” she says.

In 2003, Law 10,639 came into force, making the teaching of African and Afro-Brazilian history and culture mandatory in primary and secondary education. But for Kidoiale the legislation has not ensured that the subject is adequately addressed in the curriculum. She believes that the fact that Brazilian education is heavily based on Christian principles ends up excluding diversity. “The school cannot even manage to teach the history of the African population, let alone the religion.”

According to psychologist Jaqueline Gomes de Jesus, of Abrapso (Brazilian Association of Social Psychology) and ABPN (Brazilian Association of Black Researchers), combating religious racism in schools is the responsibility of education professionals, parents and guardians.

“The challenge is that adults are shaped by this racist society, this society that tries to mold — especially in a Christian, fundamentalist context — children who do not fit certain standards, even of clothing and practices, and that is very violent.”

Religious racism in the courts

Beyond the various violations of the right to express African-derived rites at school, there are cases in which parents lose custody of their children for initiating them into the religion.

One case that drew wide media attention in 2021 was that of Kate Belintani, a manicurist from Araçatuba (SP), whose custody of her daughter — then 11 years old — was suspended. Kate was accused of bodily harm after shaving the girl’s head in a Candomblé religious ritual.

Another case, which was even cited by Unesco (the United Nations Educational, Scientific and Cultural Organization), is that of professor and journalist Rosiane Rodrigues. A resident of Rio das Ostras, in Rio de Janeiro state, she says she lost custody of her son in 2007 because of religious prejudice, through a court ruling.

Marcus Rodrigues, usually called Marquinhos, the youngest of Rosiane’s three children, was born in 2004. The following year she separated from the child’s father, Marcus Henriques, which set off a dispute over how many days each would spend with their son.

At one of the hearings in the case, Rosiane was “taking a saint obligation” (tomando obrigação de santo), a Candomblé religious observance that requires wearing white clothes, keeping the head covered and wearing a beaded necklace. On seeing the professor dressed this way, the judge in the case ordered that the family’s psychological evaluation be carried out urgently. According to Rosiane, “after that, the judge concluded that because I was in Candomblé I had less moral standing to raise the boy than his father did.”

Rosiane says two court officers came to take Marquinhos from home, accompanied by a police car. At the time, her son was at school, and Rosiane refused to tell them where the child was. She was taken to the police station.

Marquinhos was initially handed over to his father. But after a series of back-and-forths that lasted four years, Rosiane got custody back. She then sought help from Nudem (the Public Defender’s Office Center for the Defense of Women). Three psychologists and two social workers worked on a new psychosocial report on Rosiane and her children.

The boy saw a child psychologist for a year. “As soon as he came back to me, when we got that provisional custody, he came back very frightened, with many problems, very disturbed, a very aggressive child,” says Rosiane, who went so far as to file a police report against her ex-husband for assaulting their son.

The case was cited in the report “Direito a uma vida livre de violência” (Right to a Life Free of Violence), published in 2013 by the National Secretariat for the Promotion and Defense of Human Rights in partnership with Unesco, as an emblematic case of religious intolerance in Brazil.

The effects of religious racism on children

“Children do not know they are suffering intolerance; they do not have the discernment, the capacity to understand racism. There is a vulnerability in those who cannot defend themselves,” stresses Makota Celinha, of Cenarab.

For quilombola leader Makota Kidoiale, one important step in dealing with the clash of traditions is to listen to what the children experience. “We gradually channel everything they discovered out there toward a particular place in the community,” she says.

“For example, if they learn at school about leaves, about the stages of vegetation, of planting, here we add: ‘that lesson is related to Oxossi, who is the god of leaves, of plants. And it is also from them that we get our remedies.’ We complement what they have learned,” she explains.

In cases where religious racism is more explicit, it is hard to guarantee the child’s well-being. “We show that there are differences between religions and that each one has its own conception, and that unfortunately our right to speak about ourselves is very recent, so people know little about us. But sometimes it is very hard, very violent. Violent as in grabbing them and throwing them out, mocking them when they are dressed with their beads, or in white; people look at them in alarm,” says Kidoiale.

Psychologist Jaqueline Gomes de Jesus says that children who grow up in environments of religious discrimination become intolerant adults, with violence becoming a mark that shapes their personality.

“We have to fight so that education and health professionals, those who care for children, allow them to be who they are, so that these traumas, which stay with them for the rest of their lives, are not created,” she says.

According to psychologist Flávio Prata, a researcher in the field, it is important for the child to have a safe environment. “There is no way to measure the effects of racism specifically, but its influence lies in the mechanisms the child finds to cope with that discrimination,” he says.

Book Review: Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition by Wendy Hui Kyong Chun (LSE)

blogs.lse.ac.uk

Professor David Beer – November 22nd, 2021


In Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition, Wendy Hui Kyong Chun explores how technological developments around data are amplifying and automating discrimination and prejudice. Through conceptual innovation and historical details, this book offers engaging and revealing insights into how data exacerbates discrimination in powerful ways, writes David Beer.

Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition. Wendy Hui Kyong Chun (mathematical illustrations by Alex Barnett). MIT Press. 2021.

Going back a couple of decades, there was a fair amount of discussion of ‘the digital divide’. Uneven access to networked computers meant that a line was drawn between those who were able to switch on and those who were not. At the time there was a pressing concern about the disadvantages of a lack of access. With the massive escalation of connectivity since, the notion of a digital divide still has some relevance, but it has become a fairly blunt tool for understanding today’s extensively mediated social constellations. The divides now are not so much a product of access; they are instead a consequence of what happens to the data produced through that access.

With the escalation of data and the establishment of all sorts of analytic and algorithmic processes, the problem of uneven, unjust and harmful treatment is now the focal point for an animated and urgent debate. Wendy Hui Kyong Chun’s vibrant new book Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition makes a telling intervention. At its centre is the idea that these technological developments around data ‘are amplifying and automating – rather than acknowledging and repairing – the mistakes of a discriminatory past’ (2). Essentially this is the codification and automation of prejudice. Any ideas about the liberating aspects of technology are deflated. Rooted in a longer history of statistics and biometrics, existing ruptures are being torn open by the differential targeting that big data brings.

This is not just about bits of data. Chun suggests that ‘we need […] to understand how machine learning and other algorithms have been embedded with human prejudice and discrimination, not simply at the level of data, but also at the levels of procedure, prediction, and logic’ (16). It is not, then, just about prejudice being in the data itself; it is also how segregation and discrimination are embedded in the way this data is used. Given the scale of these issues, Chun narrows things down further by focusing on four ‘foundational concepts’, with correlation, homophily, authenticity and recognition providing the focal points for interrogating the discriminations of data.

It is the concept of correlation that does much of the gluing work within the study. The centrality of correlation is a subtext in Chun’s own overview of the book, which suggests that ‘Discriminating Data reveals how correlation and eugenic understandings of nature seek to close off the future by operationalizing probabilities; how homophily naturalizes segregation; and how authenticity and recognition foster deviation in order to create agitated clusters of comforting rage’ (27). As well as developing these lines of argument, the use of the concept of correlation also allows Chun to think in deeply historical terms about the trajectory and politics of association and patterning.

For Chun the role of correlation is both complex and performative. It is argued, for instance, that correlations ‘do not simply predict certain actions; they also form them’. This is an established position in the field of critical data studies, with data prescribing and producing the outcomes they are used to anticipate. However, Chun manages to reanimate this position through an exploration of how correlation fits into a wider set of discriminatory data practices. The other performative issue here is the way that people are made-up and grouped through the use of data. Correlations, Chun writes, ‘that lump people into categories based on their being “like” one another amplify the effects of historical inequalities’ (58). Inequalities are reinforced as categories become more obdurate, with data lending them a sense of apparent stability and a veneer of objectivity. Hence the pointed claim that ‘correlation contains within it the seeds of manipulation, segregation and misrepresentation’ (59).

Given this use of data to categorise, it is easy to see why Discriminating Data makes a conceptual link between correlation and homophily – with homophily, as Chun puts it, being the ‘principle that similarity breeds connection’ and can therefore lead to swarming and clustering. The acts of grouping within these data structures mean, for Chun, that ‘homophily not only eases conflict; it also naturalizes discrimination’ (103). Using data correlations to group informs a type of homophily that not only misrepresents and segregates; it also makes these divides seem natural and therefore fixed.
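
As a concrete, deliberately simplified illustration of the dynamic described above – this is not code from Chun’s book, and the data, feature values and thresholds are invented – the Python sketch below matches synthetic users purely by similarity and then measures how often the resulting “recommendations” stay inside each user’s original group.

```python
# Illustrative sketch only (synthetic data, hypothetical features):
# how purely similarity-based ("homophily") matching reproduces an existing divide.
import numpy as np

rng = np.random.default_rng(0)

# Two simulated groups whose feature averages differ because of a simulated
# historical inequality (e.g. unequal access to some resource).
group_a = rng.normal(loc=0.0, scale=1.0, size=(50, 2))
group_b = rng.normal(loc=3.0, scale=1.0, size=(50, 2))
users = np.vstack([group_a, group_b])          # users 0-49 from A, 50-99 from B

def nearest_neighbours(i, k=5):
    """Indices of the k users most similar to user i (smallest Euclidean distance)."""
    dists = np.linalg.norm(users - users[i], axis=1)
    return np.argsort(dists)[1:k + 1]          # position 0 is user i itself

# "Recommend" connections purely by similarity and measure how often the
# recommendation stays inside the user's original group.
within_group = np.mean([
    np.mean((nearest_neighbours(i) < 50) == (i < 50)) for i in range(len(users))
])
print(f"share of recommended ties inside the original group: {within_group:.2f}")
```

With two well-separated synthetic groups the printed share approaches 1.0: nothing in the procedure refers to group membership explicitly, yet the “like connects with like” rule keeps almost every tie inside the original cluster, which is the naturalised segregation the review describes.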

Chun anticipates that there may be some remaining remnants of faith in the seeming democratic properties of these platforms, arguing that ‘homophily reveals and creates boundaries within theoretically flat and diffuse social networks; it distinguishes and discriminates between supposedly equal nodes; it is a tool for discovering bias and inequality and for perpetuating them in the name of “comfort,” predictability, and common sense’ (85). As individuals are moved into categories or groups assumed to be like them, based upon the correlations within their data, so discrimination can readily occur. One of the key observations made by Chun is that data homophily can feel comfortable, especially when encased in predictions, yet this can distract from the actual damages of the underpinning discriminations they contain. Instead, these data ‘proxies can serve to buttress – and justify – discrimination’ (121). For Chun there is a ‘proxy politics’ unfolding in which data not only exacerbates but can also be used to lend legitimacy to discriminatory acts.

As with correlation and homophily, Chun, in a particularly novel twist, also explores how authenticity is itself becoming automated within these data structures. In stark terms, it is argued that ‘authenticity has become so central to our times because it has become algorithmic’ (144). Chun is able to show how a wider cultural push towards notions of the authentic, embodied in things like reality TV, becomes a part of data systems. A broader cultural trend is translated into something renderable in data. Chun explains that the ‘term “algorithmic authenticity” reveals the ways in which users are validated and authenticated by network algorithms’ (144). A system of validation occurs in these spaces, where actions and practices are algorithmically judged and authenticated. Algorithmic authenticity ‘trains them to be transparent’ (241). It pushes a form of openness upon us in which an ‘operationalized authenticity’ develops, especially within social media.

This emphasis upon the authentic draws people into certain types of interaction with these systems. It shows, Chun compellingly puts it, ‘how users have become characters in a drama called “big data”’ (145). The notion of a drama is, of course, not to diminish what is happening but to try to get at its vibrant and role-based nature. It also adds a strong sense of how performance plays out in relation to the broader ideas of data judgment that the book is exploring.

These roles are not something that Chun wants us to accept, arguing instead that ‘if we think through our roles as performers and characters in the drama called “big data,” we do not have to accept the current terms of our deployment’ (170). Examining the artifice of the drama is a means of transformation and challenge. Exposing the drama is to expose the roles and scripts that are in place, enabling them to be questioned and possibly undone. This is not fatalistic or absent of agency; rather, Chun’s point is that ‘we are characters, rather than marionettes’ (248).

There are some powerful cross-currents working through the discussions of the book’s four foundational concepts. The suggestion that big data brings a reversal of hegemony is a particularly telling argument. Chun explains that: ‘Power can now operate through reverse hegemony: if hegemony once meant the creation of a majority by various minorities accepting a dominant worldview […], now hegemonic majorities can emerge when angry minorities, clustered around a shared stigma, are strung together through their mutual opposition to so-called mainstream culture’ (34). This line of argument is echoed in similar terms in the book’s conclusion, clarifying further that ‘this is hegemony in reverse: if hegemony once entailed creating a majority by various minorities accepting – and identifying with – a dominant worldview, majorities now emerge by consolidating angry minorities – each attached to a particular stigma – through their opposition to “mainstream” culture’ (243). In this formulation it would seem that big data may not only be disciplinary but may also somehow gain power by upending any semblance of a dominant ideology. Data doesn’t lead to shared ideas but to the splitting of the sharing of ideas into group-based networks. It does seem plausible that the practices of targeting and patterning through data are unlikely to facilitate hegemony. Yet, it is not just that data affords power beyond hegemony but that it actually seeks to reverse it.

The reader may be caught slightly off-guard by this position. Chun generally seems to picture power as emerging and solidifying through a genealogy of the technologies that have formed into contemporary data infrastructures. In this account power seems to be associated with established structures and operates through correlations, calls for authenticity and the means of recognition. These positions on power – with infrastructures on one side and reverse hegemony on the other – are not necessarily incompatible, yet the discussion of reverse hegemony perhaps stands a little outside of that other vision of power. I was left wondering if this reverse hegemony is a consequence of these more processional operations of power or, maybe, it is a kind of facilitator of them.

Chun’s book looks to bring out the deep divisions that data-informed discrimination has already created and will continue to create. The conceptual innovation and the historical details, particularly on statistics and eugenics, lend the book a deep sense of context that feeds into a range of genuinely engaging and revealing insights and ideas. Through its careful examination of the way that data exacerbates discrimination in very powerful ways, this is perhaps the most telling book yet on the topic. The digital divide may no longer be a particularly useful term but, as Chun’s book makes clear, the role data performs in animating discrimination means that the technological facilitation of divisions has never been more pertinent.

Why Africa urgently needs its own genetic library (BBC)

bbc.com

By Elna Schutz – Nov. 1, 2021


Business reporter, South Africa

Prof Ambroise Wonkam hopes to create a vast database of African genomes

It was just a “crazy idea” to start with, says Ambroise Wonkam, professor of medical genetics at the University of Cape Town in South Africa.

He is talking about his vision of creating a huge library of genetic information about the population of Africa, outlined in the science journal Nature, earlier this year.

The Three Million African Genomes (3MAG) project emerged from his work on how genetic mutations among Africans contribute to conditions like sickle-cell disease and hearing impairments.

He points out that African genes hold a wealth of genetic variation, beyond that observed by scientists in Europe and elsewhere.

“We are all African but only a small fraction of Africans moved out of Africa about 20-40,000 years ago and settled in Europe and in Asia,” he says.

Only about 2% of the human genomes that have been mapped are African

Prof Wonkam is also concerned about equity. “Too little of the knowledge and applications from genomics have benefited the global south because of inequalities in health-care systems, a small local research workforce and lack of funding,” he says.

Only about 2% of the genomes mapped globally are African, and a good proportion of these are African American. This comes from a lack of prioritising funding, policies and training infrastructure, he says, but it also means the understanding of genetic medicine as a whole is lopsided.

Studies of African genomes will also help to correct injustices, he says: “Estimates of genetic risk scores for people of African descent that predict, say, the likelihood of cardiomyopathies or schizophrenia can be unreliable or even misleading using tools that work well in Europeans.”

To address these inequities, Prof Wonkam and other scientists are talking to governments, companies and professional bodies across Africa and internationally, in order to build up capacity over the next decade to make the vision a reality.

Estimates of genetic risk scores for people of African descent can be unreliable, says Prof Wonkam

Three million is the minimum number of genomes he estimates is needed to accurately map genetic variation across Africa. As a comparison, the UK Biobank currently aims to sequence half a million genomes in under three years, but the UK’s 68 million population is just a fraction of Africa’s 1.3 billion.

Prof Wonkam says the project will take 10 years and cost around $450m (£335m) per year, and that industry is already showing an interest in it.

Biotech firms say they welcome any expansion of the library of African genomes.

The Centre for Proteomic and Genomic Research (CPGR) in Cape Town works with biotech firm Artisan Biomed on a variety of diagnostic tests. The firm says it is affected by the gaps in the availability of genomic information relevant to local populations.

For example, it may find a genetic mutation in someone and not know for certain if that variation is associated with a disease, especially as a marker for that particular population.

The Centre for Proteomic and Genomic Research works with private firms to further their research

“The more information you have at that level, the better the diagnosis, treatment and eventually care can be for any individual, regardless of your ethnicity,” says Dr Lindsay Petersen, chief operations officer.

Artisan Biomed says the data it collects feeds back into CPGR’s research – allowing it, for instance, to design diagnostic toolkits better suited to African populations.

“Because of the limited data sets of the African genome, it needs that hand in hand connection with research and innovation, because without that it’s just another test that has been designed for a Caucasian population that may or may not have much of an effect within the African populations,” says Dr Judith Hornby Cuff.

She says the 3MAG project would help streamline processes and improve the development of research, and perhaps one day provide cheaper, more effective and more accessible health care, particularly in the strained South African system.

Dr Aron Abera hopes his company can build labs and train staff outside South Africa

One of those hoping to take part in the 3MAG project is Dr Aron Abera, genomics scientist at Inqaba Biotech in Pretoria, which offers genetic sequencing and other services to research and industry.

The firm employs over 100 people in South Africa, Ghana, Kenya, Mali, Nigeria, Senegal, Tanzania, Uganda and Zimbabwe. Currently, most of the genetic samples collected in these countries are still processed in South Africa, but Dr Abera hopes to increase the number of laboratories soon.

The gaps are not only in infrastructure, but also in staff. Over the last 20 years, Inqaba has focused on using staff and interns from the African continent – but it now has to expand its training programme as well.

Back in Cape Town, Prof Wonkam says that while the costs are huge, the project will “improve capacity in a whole range of biomedical disciplines that will equip Africa to tackle public-health challenges more equitably”.

He says: “We have to be ambitious when we are in Africa. You have so many challenges you cannot see small, you have to see big – and really big.”

Machine learning can be fair and accurate (Science Daily)

Date: October 20, 2021

Source: Carnegie Mellon University

Summary: Researchers are challenging a long-held assumption that there is a trade-off between accuracy and fairness when using machine learning to make public policy decisions.


Carnegie Mellon University researchers are challenging a long-held assumption that there is a trade-off between accuracy and fairness when using machine learning to make public policy decisions.

As the use of machine learning has increased in areas such as criminal justice, hiring, health care delivery and social service interventions, concerns have grown over whether such applications introduce new or amplify existing inequities, especially among racial minorities and people with economic disadvantages. To guard against this bias, adjustments are made to the data, labels, model training, scoring systems and other aspects of the machine learning system. The underlying theoretical assumption is that these adjustments make the system less accurate.

A CMU team aims to dispel that assumption in a new study, recently published in Nature Machine Intelligence. Rayid Ghani, a professor in the School of Computer Science’s Machine Learning Department (MLD) and the Heinz College of Information Systems and Public Policy; Kit Rodolfa, a research scientist in MLD; and Hemank Lamba, a post-doctoral researcher in SCS, tested that assumption in real-world applications and found the trade-off was negligible in practice across a range of policy domains.

“You actually can get both. You don’t have to sacrifice accuracy to build systems that are fair and equitable,” Ghani said. “But it does require you to deliberately design systems to be fair and equitable. Off-the-shelf systems won’t work.”

Ghani and Rodolfa focused on situations where in-demand resources are limited, and machine learning systems are used to help allocate those resources. The researchers looked at systems in four areas: prioritizing limited mental health care outreach based on a person’s risk of returning to jail to reduce reincarceration; predicting serious safety violations to better deploy a city’s limited housing inspectors; modeling the risk of students not graduating from high school in time to identify those most in need of additional support; and helping teachers reach crowdfunding goals for classroom needs.

In each context, the researchers found that models optimized for accuracy — standard practice for machine learning — could effectively predict the outcomes of interest but exhibited considerable disparities in recommendations for interventions. However, when the researchers applied adjustments to the outputs of the models that targeted improving their fairness, they discovered that disparities based on race, age or income — depending on the situation — could be removed without a loss of accuracy.
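The “adjustments to the outputs of the models” are described here only at a high level. As a rough, hypothetical sketch of one common family of post-processing adjustments (group-specific score thresholds chosen to equalize recall across groups), with made-up data, names and targets rather than the paper’s actual code:

```python
# Hypothetical sketch of a fairness post-processing step: pick a separate
# score threshold per group so each group's true positives are covered at
# the same rate (recall parity), then check overall precision.
# Data, names, and targets are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
scores = rng.uniform(size=n)                   # model risk scores
needs_help = rng.binomial(1, scores * 0.6)     # true need, correlated with score
group = rng.choice(["A", "B"], size=n)

def threshold_for_recall(scores, labels, target_recall):
    """Highest threshold whose recall (coverage of true positives) meets the target."""
    for t in np.unique(scores)[::-1]:          # unique() returns sorted values
        picked = scores >= t
        recall = labels[picked].sum() / max(labels.sum(), 1)
        if recall >= target_recall:
            return t
    return scores.min()

thresholds = {
    g: threshold_for_recall(scores[group == g], needs_help[group == g], 0.5)
    for g in ("A", "B")
}

selected = scores >= np.array([thresholds[g] for g in group])
print(thresholds, "precision:", round(needs_help[selected].mean(), 3))
```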

Ghani and Rodolfa hope this research will start to change the minds of fellow researchers and policymakers as they consider the use of machine learning in decision making.

“We want the artificial intelligence, computer science and machine learning communities to stop accepting this assumption of a trade-off between accuracy and fairness and to start intentionally designing systems that maximize both,” Rodolfa said. “We hope policymakers will embrace machine learning as a tool in their decision making to help them achieve equitable outcomes.”


Story Source:

Materials provided by Carnegie Mellon University. Original written by Aaron Aupperlee. Note: Content may be edited for style and length.


Journal Reference:

  1. Kit T. Rodolfa, Hemank Lamba, Rayid Ghani. Empirical observation of negligible fairness–accuracy trade-offs in machine learning for public policy. Nature Machine Intelligence, 2021; 3 (10): 896 DOI: 10.1038/s42256-021-00396-x

When white Brazilian women discover, in Europe, that their whiteness cannot mobilize privileges (Geledés)

geledes.org.br

By Fabiane Albuquerque, submitted to Portal Geledés

24/10/2021


I have lived in France for a few years. I also lived in Italy and, beyond my studies on whiteness, I spend time with Brazilian women abroad and have long experience with the frustrations, complaints and crises of white women, above all from the middle and upper classes. I have been observing white people for a long time. I think I began reflecting on them by listening to the stories of the women in my family who worked in their kitchens and on their farms, in close contact with Brazilian whiteness. So I never lacked accounts of how they behaved, thought, spoke and related to others, above all to their equals and to their Other (Black men and women).

These women, however, did not see us (and still do not) because they are too busy projecting onto Black bodies the things left unresolved in themselves. It is remarkable how they talk about poverty in Brazil, about political and social problems, about the Brazilian people’s lack of education, without even noticing the problems inside their own homes. Lourenço Cardoso writes about this in his doctoral thesis, “O branco ante a rebeldia do desejo: um estudo sobre a branquitude no Brasil” [The white person before the rebellion of desire: a study of whiteness in Brazil], and explains that Black people, even while being dehumanized by whites, still manage to see them as human; the reverse is difficult.

So I watch these women who, used to directing their gaze outward, at the other, rarely question themselves or see themselves as they really are. They do not see themselves as white, privileged, constructed and projected as superior beings on the basis of race and class belonging in Brazil. And when they arrive in Europe and discover that being white and having money does not let them take advantage of the situation the way they do in the country that deified them, they fall into crisis. The crisis of these women is one of the most interesting things my researcher’s eye has been able to observe. It is not conscious to them, just as it is not conscious that whiteness guaranteed them a comfortable place in their society of origin.

For three years my son studied in the same class as the son of a white, blonde Brazilian woman from Santa Catarina, a lawyer and Bolsonaro supporter, anti-PT, anti-Lula, with a stereotyped view of the left, of Black people and of the poor. But one thing changed in her life here: although the two of us have different social origins and races, France leveled us. She and I live in the same neighborhood and our sons attended the same school, a public one, it should be said. For her, more than for me, this was a great source of discomfort, shown in her constant attempts to demonstrate to me what set her apart from me.

Money was not the main lever of superiority here, nor did she have any particular cultural capital: while I am a lover of books, a researcher and a writer, familiar with Brazilian, French and Italian literature, among others, who goes to the theater and the cinema, she prided herself on being a regular at the gym, at Disneyland and at McDonald’s. In Brazil, it seems, the shallowness of such people is obscured by the privilege of race and class.

One day, at the school gate, she approached me like this:

– Oh girl, some days here are hard, I’m about to go crazy. The other day I went to the bank by myself and they treated me like a nobody, can you believe it?

Incredulous at the expression, since this “being just anybody” ought to be how every citizen feels, from judge to street sweeper, from teacher to doctor, from politician to banker, I nodded and played along:

– Really?

And she went on:

– I had to call my husband and have him go there to see whether it would be different with him. He is always getting investment offers from the bank because he earns well.

I kept thinking about her words. Here in France she cannot mobilize special treatment for being blonde, much less for her social class. Here, “do you know who you are talking to?” does not work the way it does in Brazil. After all, she is just one more white woman among whites. And the whites here, as the researcher Lourenço Cardoso puts it, are “whiter” than our whites, because of the fingerprint left by a colonization that ranked peoples and nations into hierarchies. The more Nordic a people, like the English, the whiter and more ideal it is taken to be.

Since I never flattered her for being white (as usually happens among Brazilians), she approached me again another time at the school gate. I said I was going for a walk and she immediately offered to come along. On the way, without the slightest embarrassment, she came out with this:

– When my son was born, my husband’s worry was the hair, whether it would come out “bad” like his. I even found it funny because, as soon as the boy was born, he ran to me and said, “it looks like it’s bad, it’s really curly.”

I, who have “bad” hair by her family’s standards, only let out an “is that so?”, and that seemed to release her most latent racism. It only comes out when a person does not feel judged or rejected, when she finds an opening and believes her interlocutor is not judging her:

– My husband (white in Brazil) shaves his head because he hates his own hair. But when he saw that the boy took after me, he relaxed.

What did this woman want by telling me all this? She wanted me to recognize her superiority, at least the racial kind; since I had not done so of my own accord, there she was reminding me of it. Equality is one of the greatest psychic sufferings for upper-class white Brazilian women who come to live here in Europe. I say women because I have little contact with white Brazilian men. And it did not stop there. Another time she made the following comment:

– Girl, I talked to my cousin who lives in England and she told me I’m crazy to put my son in a public school, to mix with these people.

She was referring to the large number of immigrant children at the school, of African and Arab origin. The public school was the space that welcomed her son, taught him to speak French, and gave him a foundation and a respectful, egalitarian coexistence with different nationalities, above all those he had never had contact with in Brazil, where he lived segregated in his little bourgeois world. But she insisted on trying to present herself as someone special.

Before anyone says I had a lot of patience: I only put up with it because I study white people, and once I discovered that it is better to give them rope so as to gather material, my affective and emotional involvement causes me less suffering.

The stupor at not being treated with distinction does not come only from people on the far right. On this point whiteness looks much the same, on the right and on the left. A white young woman from São Paulo, upper middle class by her own description, confided to me that she was surprised to be suffering discrimination inside the French university. The question she asked me was this:

– Can I compare myself to Black people, since I suffer racism?

I answered her: to Black people, never. And I went on to say that here, before anything else, she is Brazilian, and that she has some Arab features, such as her nose and the shape of her face. She was disoriented at not being able to enjoy the “invisibility” of race as she did in Brazil and perhaps, without realizing it, the visibility of being white and bourgeois when privileges were being handed out. These women are used, from childhood, to being pampered, and when that does not happen, the Self becomes fragile.

Another one, white with green eyes, seeing that I never commented on her physical appearance, as she is used to, took off her glasses in front of me after we had known each other for a while, opened her eyes wide and said:

– Everyone says I should stop wearing glasses, because they take away from my eyes. Have you ever seen my eyes?

The scene was comical: the woman standing in front of me with bulging eyes, begging for compliments. I answered her:

– So-and-so, I have already seen your eyes.

Quite embarrassed, she put her glasses back on. What did she want from me? What everyone else gave her: flattery of her white body, of her green eyes, and recognition of her worth on that basis.

Many of these women try to reproduce the same social and racial hierarchy we have in Brazil, looking for other women willing to be at the service of their egos. I met a public prosecutor from Brasília who arrived in France with her husband to do a master’s degree. Both had obtained a year’s leave from work. At our first contact she asked: “Do you know a cleaning lady you could introduce me to?” I found the request odd, since she and her husband would spend a year without working, living in a small apartment, as she described it, but there had to be someone to serve her. Outside Brazil and its relations of domination and servitude, built on the racialization of bodies, these people are lost.

I have met Brazilian women here who like to spend time with other Brazilian women because, among ourselves, we understand the codes, the hierarchies and the hidden laws of our country, and can reproduce the same logic of who adores and who is adored. Or, in other cases, they prefer to socialize only with French people because, according to them, they “don’t like to mix,” and they cling to the “whiter whites” as if to a trophy to show the world and display to family and friends in Brazil: “Look at my French friend!!!” It is a way of taking part in a whiteness “purer” (even if indirectly) than the one we have in Brazilian lands.

One thing is certain: this experience in Europe could be, for them, a great chance to change paradigms, to be reborn, to become better people. But in most cases the privilege is pursued tooth and nail. If they knew they could give it up and live more freely, perhaps they would. But someone like them, that is, someone white, would have to tell them. Because in my case, if I say it, I come across as the angry, resentful, envious Black woman who sees racism in everything. I root for change and for human emancipation, but as long as that does not happen, I will keep having them as an object of analysis and study.


Fabiane Albuquerque holds a doctorate in sociology and is the author of the book Cartas a um homem negro que amei, published by Editora Malê.

With More Freedom, Young Women in Albania Shun Tradition of ‘Sworn Virgins’ (New York Times)

nytimes.com



Gjystina Grishaj, known by her male nickname, Duni, declared herself a man nearly 40 years ago in order to avoid a forced marriage at a young age.

A centuries-old tradition in which women declared themselves men so they could enjoy male privilege is dying out as young women have more options available to them to live their own lives.

Andrew Higgins

Photographs by Laura Boushnak

LEPUSHE, Albania — As a teenager locked in a patriarchal and tradition-bound mountain village in the far north of Albania, Gjystina Grishaj made a drastic decision: She would live the rest of her life as a man.

She did not want to be married off at a young age, nor did she like cooking, ironing clothes or “doing any of the things that women do,” so she joined a gender-bending Albanian fraternity of what are known as “burrneshat,” or “female-men.” She adopted a male nickname — Duni.

“I took a personal decision and told them: I am a man and don’t want to get married,” Duni recalled telling her family.

Few women today want to become what anthropologists call Albania’s “sworn virgins,” a tradition that goes back centuries. They take an oath of lifelong celibacy and enjoy male privileges, like the right to make family decisions, smoke, drink and go out alone.

Duni said her choice was widely accepted, though her mother kept trying to get her to change her mind until the day she died in 2019. Like other burrneshat, Duni — who remains Gjystina Grishaj in official documents — is still universally referred to in a traditional way, with female pronouns and forms of address, and does not consider herself transgender.

Duni working in a field in the mountain village of Lepushe, in northern Albania. 

The fraternity that Duni joined nearly 40 years ago is dying out as change comes to Albania and its paternalistic rural areas, allowing younger women more options. Her village, which is Christian, like much of the northern part of the country, has in recent years started to shed its claustrophobic isolation, thanks to the construction of a winding road through the mountains that attracts visitors, but that also provides a way out for strong-willed local women who want to live their own lives.

Many, like Duni, took the oath so that they could escape forced marriages; some so that they could take on traditional male roles — like running a farm — in families where all of the men had died in blood feuds that plagued the region; and others because they just felt more like men.

“Society is changing, and burrneshat are dying out,” said Gjok Luli, an expert on the traditions of northern Albania. There are no precise figures for how many remain, but of the dozen or so who do, most are elderly. Duni, at 56, is perhaps the youngest, he said.

“It was an escape from the role given to women,” Mr. Luli said, “but there is no desperate need to escape anymore.”

Among those now able to choose different paths in life is Duni’s niece, Valerjana Grishaj, 20, who decided as a teenager to leave the mountains and move to Tirana, Albania’s relatively modern-minded capital. The village, Ms. Grishaj explained over coffee in a Tirana cafe, “is not a place for me.”

“All my friends there have been married since they were 16,” she said.

But Ms. Grishaj said she understood why her aunt made the decision she did. “There were no strong, independent women up there,” she said. “To be one, you had to become a man.”

She praised her parents for letting her make her own choices. “I was very lucky, but parents like mine are rare,” Ms. Grishaj said, noting that most still pressure their daughters to marry as teenagers.

Albania, which was isolated under a communist dictatorship until 1991, has seen its economy and social mores develop rapidly in recent years, and the country has become increasingly connected to the rest of Europe. But Tirana, to which Ms. Grishaj moved at 17 to study theater directing, can still be a difficult place for a young woman trying to make her own way.

“The patriarchy still exists, even here in Tirana,” Duni’s niece said. Young women who live alone, she lamented, stir nasty gossip and “are often seen as whores.”

The difference now though, she said, is that “women today have much more freedom than before, and you don’t need to become a man to live your own life.”

By declaring herself a man, Duni was not striking at conventional gender norms, but submitting to them. She also shares the strongly transphobic and homophobic views that are prevalent in Albania.

Men, everyone in her remote alpine hamlet of Lepushe believed, would always have more power and respect, so the best way for a woman to share their privilege was to join them, rather than trying to beat them.

“As a man, you get a special status in society and in the family,” Duni said, looking back on nearly four decades of dressing, behaving and being treated like a man. “I have never worn a skirt and never had any regrets about my decision,” she said.

Underpinning this tradition was the firm grip in northern Albania of “the Kanun,” a set of rules and social norms that classify women as chattel whose purpose was to serve men.

The low status afforded women did give them one advantage, though: It exempted them from the battles that for centuries decimated northern Albanian families as men from feuding clans died in a never-ending cycle of vengeance killings. Parents whose sons had all been killed often urged a daughter to take on a male identity so there would be a man to represent the family at village meetings and to manage its property.

A woman who became a sworn virgin was viewed as not entirely male, did not count in blood feuds and therefore escaped being targeted for murder by a rival clan.

Mr. Luli, the expert on local traditions, said one of his cousins, who went by the nickname Cuba instead of her original name, Tereza, was an only child and became a sworn virgin so she could avoid being married off and leaving her parents to fend for themselves. She died of old age in 1982.

He compared Cuba with a “woman who decides to become a nun.”

“It is the same kind of devotion,” Mr. Luli said, “only to the family instead of God.”

For Albanians pushing for gender equality, such devotion stirs mixed feelings. “Saying I will not take orders from a man is feminist,” said Rea Nepravishta, a women’s rights activist in Tirana. “Saying I own myself and will not be owned by a man is feminist.”

But, she added, “being forced to be a man instead of a woman is totally anti-feminist — it is horrible.”

Inequalities enshrined by the Kanun, Ms. Nepravishta said, gave women a choice “between either living like a semi-animal or having some freedom by becoming a man.” While still strong, patriarchy, she added, has lost some power and no longer confronts women with such stark choices.

Some burrneshat said they declared themselves men simply because they never felt like women. Diana Rakipi, 66, a burrnesha in the coastal city of Durres, said, “I always felt like a man, even as a boy.”

Aggressively masculine in manner, Ms. Rakipi delights in being bossy. On a stroll near her tiny one-room apartment, she kept stopping passers-by who she thought were acting improperly — like a boy she saw hitting his brother — and berating them.

Ms. Rakipi, who was raised in the north before moving south to Durres, said she took an oath of celibacy as a teenager in front of dozens of relatives and vowed to serve the family as a man. Born after her parents’ only son died from illness, Ms. Rakipi said she had grown up being told she had been sent by God to replace her dead brother.

“I was always considered the male of the family. They were all so upset by the death of my brother,” she said, sitting in a cafe where all of the other customers were men. She wore a black military beret, a red tie, men’s trousers and a safari vest, its pockets stuffed with talismans of her eclectic beliefs, including a Christian cross and a medallion with the face of Albania’s onetime dictator, Enver Hoxha.

Ms. Rakipi snorted with contempt when asked about people who undergo transition surgery. “It is not normal,” she said. “If God made you a woman, you are a woman.”

Duni, from Lepushe village, also has strong views on the subject, saying that altering the body goes “against God’s will,” and that people “should be put in jail” for doing so.

“I have not lived as a burrnesha because I want to be a man in any physical way. I have done this because I want to take on the role played by men and to get the respect of a man,” she said. “I am a man in my spirit, but having male genitals is not what makes you a man.”

Locals in Lepushe, including Manushaqe Shkoza, a server at a cafe in the village, said Duni’s decision to become a man initially came as a surprise, but it was accepted long ago. “Everyone sees it as normal,” Ms. Shkoza said.

Duni said she was sad that the tradition of sworn virgins would soon die out, but noted that her niece in Tirana had shown that there were now less drastic ways for a woman to live a full and respected life.

“Society is changing, but I think I made the right decision for my time,” Duni said. “I can’t resign from the role I have chosen. I took an oath to my family. This is a path you cannot go back on.”

Fatjona Mejdini contributed reporting.

Photo of a child exposes crisis in health care for the Yanomami (Folha de S.Paulo)

The territory suffers from rising malaria and from chronic child malnutrition

May 9, 2021, 12:00 pm; updated May 9, 2021, 8:02 pm

Fabiano Maisonnave

MANAUS – In the village of Maimasi, in Roraima, a Yanomami child lies in a hammock. Her ribs exposed by malnutrition, she was diagnosed with malaria and intestinal worms. But the first medical team to reach the area in six months did not have enough medicine to treat the whole village.

The photo of this child, and the story behind it, were obtained by the Catholic missionary Carlo Zacquini, 84, who has worked among the Yanomami since 1968. He is a co-founder of the Commission for the Creation of the Yanomami Park (CCPY), which gave visibility to the problems caused by white people, promoted health care and fought for the demarcation of the territory, concluded in 1992.

The Yanomami territory suffers from rising malaria and from chronic child malnutrition, which affects 80% of children up to age 5, according to a recent study funded by Unicef and carried out in partnership with Fiocruz and the Ministry of Health.

The Indigenous people also face a large-scale invasion by wildcat gold miners, encouraged by President Jair Bolsonaro’s promises to legalize them and by the high price of ore. Around 20,000 non-Indigenous people are living illegally in the Yanomami Indigenous Territory, contaminating the rivers with mercury and helping to spread Covid-19 and malaria, along with alcohol and prostitution.

Contacted for comment, the Yanomami Special Indigenous Health District (Dsei), part of the Ministry of Health, said that the child, a girl, was transferred to Boa Vista (Roraima) two days after the medical visit, accompanied by her parents and siblings.

She is 8 years old and weighs 12.5 kg. Hospitalized since April 23, she is being treated for pneumonia, anemia and severe malnutrition; the malaria has been cured. She is stable and being followed by social services. According to the agency, hers is an isolated case.

The Dsei denied any shortage of medicines and says quantities are set according to the demand forecast for each epidemiological week. The agency did not say how the treatment of other sick Yanomami in the same region is proceeding, but claims that health care is made difficult by the constant movement of the Indigenous population, and it attributed the high incidence of malaria to the presence of illegal mining.

Below is Zacquini’s account:

She is a child from the Maimasi village, two days on foot from the Catrimani Mission. She had been without care for a long time, with malaria and intestinal worms.

The photograph was taken around April 17. Health-team staff are afraid to denounce this situation, because they can be punished, transferred to harsher postings or dismissed. Several health posts are abandoned. There is no stock of deworming medication at the headquarters of the Dsei (Yanomami Special Indigenous Health District) in Boa Vista. Even for malaria the quantity is limited.

The health post has great difficulty obtaining medicines. There are not enough professionals to rotate shifts and there is no gasoline for travel. For three months they have been using the Yanomami’s own canoe fitted with a “rabeta” [small outboard motor].

With intestinal worms and malaria, a Yanomami child sleeps in a hammock in the Maimasi village, near the Catrimani Mission, in the Yanomami Indigenous Territory, in Roraima – Handout

Reaching Maimasi would take eight minutes by helicopter, but in principle that only happens in emergencies. Obviously, this child is an emergency!

To deliver medicine to the base health post, a plane with a medical team was dispatched, but they ended up waiting in vain for the helicopter to arrive.

No one had visited the village in six months. This time, malaria medication was brought, but there was not enough to repeat the dose. A team from Sesai (the Special Secretariat for Indigenous Health, part of the Ministry of Health), including a doctor, flew to the Catrimani Mission to deliver the medicine.

The health staff administer drug treatments, but there is no continuity when the teams change. So, when possible, they give the first dose, but after a while the patients have to start over again from the first dose.

I am outraged and my blood is boiling. It is a situation that seems to be spreading across the whole Yanomami Indigenous Territory.

The coming and going of miners is continuous, by plane, by boat, by helicopter and on foot. The invaders of the Yanomami Indigenous Territory number in the thousands, and the president of the Republic announces that he will personally go and talk to the soldiers stationed there, and to the miners too. He makes a point of saying that he will not arrest the latter, only talk.

Even for malaria the medicines are rationed, including chloroquine. There is chloroquine for Covid, but not for malaria. The malnourished child is in a village eight minutes by helicopter from a health post, but a full day’s walk away. And beyond that village there are others, whose residents at the time had gathered for a funeral ceremony in another, more distant village.

The base-post team traveled to the village on foot and found a large group of Yanomami holding a funeral ritual for a child who had died without medical care. They gave deworming medication to everyone, but it ran out and they could not give a second dose, as is the standard practice.

In fact, those villages had not received deworming treatment in over a year. The child in the photo and 16 other Indigenous people present had malaria, most of them falciparum, the most aggressive variety. The other 84 all had symptoms of flu and fever.

Sonia Guajajara: Every Brazilian today feels what it is to be treated like an Indigenous person (Folha de S.Paulo)

www1.folha.uol.com.br

Sonia Guajajara, executive coordinator of the Articulação dos Povos Indígenas do Brasil and former PSOL candidate for Vice President of the Republic (2018)

April 19, 2021


It is not always a lack of empathy that keeps us from feeling another’s pain; sometimes it is sheer ignorance. The history of Brazil has always been very badly told. We do not wish what we have been through on anyone, not even on our tormentors. It has been 520 years of practically uninterrupted persecution. But on this Dia do Índio (April 19), we are facing the greatest threat to our existence. And now I am not referring only to us, Indigenous peoples. The current federal government has made the coronavirus an ally and is putting the life of the population at large at risk. Today, everyone feels what it is to be cornered by a disease that comes from outside, against which there is no defense. Everyone, truly; now I am speaking of the whole world.

We, Indigenous peoples, are persecuted in our own country; right now, because of Covid-19. All of us Brazilians run the serious risk of being marginalized globally. No one in their right mind denies the importance of the Amazon for the health of the planet, and today science attests that the destruction of nature and climate change can cause new pandemics. But besides abusing his pen to attack the environment and our rights, as usual, President Jair Bolsonaro has been trying to co-opt and intimidate Indigenous leaders. Even Funai and Ibama are playing for the rival team. It is not just a virus.

The Articulação dos Povos Indígenas do Brasil (Apib) was created in 2005 at the first Acampamento Terra Livre (ATL), an event that used to bring thousands of people from all over the country to Brasília; because of the pandemic, it was held virtually in 2020 and this year will hold online meetings throughout the month of April. Apib is the fruit of the union and self-organization of the peoples who are the roots sustaining this country, and during the pandemic it was recognized by the Supreme Federal Court (STF) as an entity that can file direct constitutional actions before the country’s highest court.

Through regional organizations, our network is present in every region of the country: the Articulação dos Povos Indígenas do Nordeste, Minas Gerais e Espírito Santo (Apoinme), the Conselho do Povo Terena, the Articulação dos Povos Indígenas do Sudeste (Arpinsudeste), the Articulação dos Povos Indígenas do Sul (Arpinsul), the Grande Assembleia do Povo Guarani (Aty Guasu), the Coordenação das Organizações Indígenas da Amazônia Brasileira (Coiab) and the Comissão Guarani Yvyrupa.

Last year, Apib won the Letelier-Moffitt International Human Rights Award, granted by the Institute for Policy Studies in Washington. The organization has been invited to speak at UN conferences. For decades it has had an active voice in international forums, alongside bodies such as the UN and the Inter-American Commission on Human Rights. While the government, with its integrationist project, criminally neglects care for traditional peoples during the pandemic, we are securing food security, sanitary barriers and protective equipment through the Plano Emergência Indígena (Indigenous Emergency Plan), built in a participatory way with all the grassroots organizations that make up our great articulation.

We are in the networks, the villages, the universities, the cities, the municipal governments and the federal, state and municipal legislatures, and we will keep fighting racism and violence. In a sick world facing a project of death, our fight is still for life, against all the viruses that kill us! Our greatest objective is to guarantee possession of our lands in order to preserve them and to maintain our cultural identities.

Indigenous lands are assets of the Union; that is, they belong to Brazil, to all Brazilians. We have the right to their usufruct, but in order to maintain our traditional ways of life. It is all in the Constitution. We have known the lies, now called fake news, since 1500, when the Portuguese arrived here offering friendship and, as soon as we turned our backs, stabbed us. We did not trade Pindorama for mirrors, as the old history books wrongly taught. We know the real value of things and of people.

On April 6, when 4,195 fellow Brazilians were taken by Covid-19 in this country, Forbes magazine published two pieces of news that say a great deal: 11 more Brazilians joined the world’s billionaire list during the pandemic, among them, ironically, names linked to private health care, and, every single day, 116.8 million people in the country do not know whether they will have anything to eat.

The social abyss deepens; who benefits from that? Who really believes they will see the color of the money to be torn from the ruins of our lands? “We decided not to die”: this resolution, taken by us more than five centuries ago, was reaffirmed at the Acampamento Terra Livre. Not everyone knows it, but caring for the environment is a constitutional duty of every citizen; just consult Article 225.

We invite all Brazilians to make this pact with us.

How Facebook got addicted to spreading misinformation (MIT Tech Review)

technologyreview.com

Karen Hao, March 11, 2021


Joaquin Quiñonero Candela, a director of AI at Facebook, was apologizing to his audience.

It was March 23, 2018, just days after the revelation that Cambridge Analytica, a consultancy that worked on Donald Trump’s 2016 presidential election campaign, had surreptitiously siphoned the personal data of tens of millions of Americans from their Facebook accounts in an attempt to influence how they voted. It was the biggest privacy breach in Facebook’s history, and Quiñonero had been previously scheduled to speak at a conference on, among other things, “the intersection of AI, ethics, and privacy” at the company. He considered canceling, but after debating it with his communications director, he’d kept his allotted time.

As he stepped up to face the room, he began with an admission. “I’ve just had the hardest five days in my tenure at Facebook,” he remembers saying. “If there’s criticism, I’ll accept it.”

The Cambridge Analytica scandal would kick off Facebook’s largest publicity crisis ever. It compounded fears that the algorithms that determine what people see on the platform were amplifying fake news and hate speech, and that Russian hackers had weaponized them to try to sway the election in Trump’s favor. Millions began deleting the app; employees left in protest; the company’s market capitalization plunged by more than $100 billion after its July earnings call.

In the ensuing months, Mark Zuckerberg began his own apologizing. He apologized for not taking “a broad enough view” of Facebook’s responsibilities, and for his mistakes as a CEO. Internally, Sheryl Sandberg, the chief operating officer, kicked off a two-year civil rights audit to recommend ways the company could prevent the use of its platform to undermine democracy.

Finally, Mike Schroepfer, Facebook’s chief technology officer, asked Quiñonero to start a team with a directive that was a little vague: to examine the societal impact of the company’s algorithms. The group named itself the Society and AI Lab (SAIL); last year it combined with another team working on issues of data privacy to form Responsible AI.

Quiñonero was a natural pick for the job. He, as much as anybody, was the one responsible for Facebook’s position as an AI powerhouse. In his six years at Facebook, he’d created some of the first algorithms for targeting users with content precisely tailored to their interests, and then he’d diffused those algorithms across the company. Now his mandate would be to make them less harmful.

Facebook has consistently pointed to the efforts by Quiñonero and others as it seeks to repair its reputation. It regularly trots out various leaders to speak to the media about the ongoing reforms. In May of 2019, it granted a series of interviews with Schroepfer to the New York Times, which rewarded the company with a humanizing profile of a sensitive, well-intentioned executive striving to overcome the technical challenges of filtering out misinformation and hate speech from a stream of content that amounted to billions of pieces a day. These challenges are so hard that they make Schroepfer emotional, wrote the Times: “Sometimes that brings him to tears.”

In the spring of 2020, it was apparently my turn. Ari Entin, Facebook’s AI communications director, asked in an email if I wanted to take a deeper look at the company’s AI work. After talking to several of its AI leaders, I decided to focus on Quiñonero. Entin happily obliged. As not only the leader of the Responsible AI team but also the man who had made Facebook into an AI-driven company, Quiñonero was a solid choice to use as a poster boy.

He seemed a natural choice of subject to me, too. In the years since he’d formed his team following the Cambridge Analytica scandal, concerns about the spread of lies and hate speech on Facebook had only grown. In late 2018 the company admitted that this activity had helped fuel a genocidal anti-Muslim campaign in Myanmar for several years. In 2020 Facebook started belatedly taking action against Holocaust deniers, anti-vaxxers, and the conspiracy movement QAnon. All these dangerous falsehoods were metastasizing thanks to the AI capabilities Quiñonero had helped build. The algorithms that underpin Facebook’s business weren’t created to filter out what was false or inflammatory; they were designed to make people share and engage with as much content as possible by showing them things they were most likely to be outraged or titillated by. Fixing this problem, to me, seemed like core Responsible AI territory.

I began video-calling Quiñonero regularly. I also spoke to Facebook executives, current and former employees, industry peers, and external experts. Many spoke on condition of anonymity because they’d signed nondisclosure agreements or feared retaliation. I wanted to know: What was Quiñonero’s team doing to rein in the hate and lies on its platform?

Joaquin Quiñonero Candela outside his home in the Bay Area, where he lives with his wife and three kids.

But Entin and Quiñonero had a different agenda. Each time I tried to bring up these topics, my requests to speak about them were dropped or redirected. They only wanted to discuss the Responsible AI team’s plan to tackle one specific kind of problem: AI bias, in which algorithms discriminate against particular user groups. An example would be an ad-targeting algorithm that shows certain job or housing opportunities to white people but not to minorities.

By the time thousands of rioters stormed the US Capitol in January, organized in part on Facebook and fueled by the lies about a stolen election that had fanned out across the platform, it was clear from my conversations that the Responsible AI team had failed to make headway against misinformation and hate speech because it had never made those problems its main focus. More important, I realized, if it tried to, it would be set up for failure.

The reason is simple. Everything the company does and chooses not to do flows from a single motivation: Zuckerberg’s relentless desire for growth. Quiñonero’s AI expertise supercharged that growth. His team got pigeonholed into targeting AI bias, as I learned in my reporting, because preventing such bias helps the company avoid proposed regulation that might, if passed, hamper that growth. Facebook leadership has also repeatedly weakened or halted many initiatives meant to clean up misinformation on the platform because doing so would undermine that growth.

In other words, the Responsible AI team’s work—whatever its merits on the specific problem of tackling AI bias—is essentially irrelevant to fixing the bigger problems of misinformation, extremism, and political polarization. And it’s all of us who pay the price.

“When you’re in the business of maximizing engagement, you’re not interested in truth. You’re not interested in harm, divisiveness, conspiracy. In fact, those are your friends,” says Hany Farid, a professor at the University of California, Berkeley who collaborates with Facebook to understand image- and video-based misinformation on the platform.

“They always do just enough to be able to put the press release out. But with a few exceptions, I don’t think it’s actually translated into better policies. They’re never really dealing with the fundamental problems.”

In March of 2012, Quiñonero visited a friend in the Bay Area. At the time, he was a manager in Microsoft Research’s UK office, leading a team using machine learning to get more visitors to click on ads displayed by the company’s search engine, Bing. His expertise was rare, and the team was less than a year old. Machine learning, a subset of AI, had yet to prove itself as a solution to large-scale industry problems. Few tech giants had invested in the technology.

Quiñonero’s friend wanted to show off his new employer, one of the hottest startups in Silicon Valley: Facebook, then eight years old and already with close to a billion monthly active users (i.e., those who have logged in at least once in the past 30 days). As Quiñonero walked around its Menlo Park headquarters, he watched a lone engineer make a major update to the website, something that would have involved significant red tape at Microsoft. It was a memorable introduction to Zuckerberg’s “Move fast and break things” ethos. Quiñonero was awestruck by the possibilities. Within a week, he had been through interviews and signed an offer to join the company.

His arrival couldn’t have been better timed. Facebook’s ads service was in the middle of a rapid expansion as the company was preparing for its May IPO. The goal was to increase revenue and take on Google, which had the lion’s share of the online advertising market. Machine learning, which could predict which ads would resonate best with which users and thus make them more effective, could be the perfect tool. Shortly after starting, Quiñonero was promoted to managing a team similar to the one he’d led at Microsoft.

Quiñonero started raising chickens in late 2019 as a way to unwind from the intensity of his job.

Unlike traditional algorithms, which are hard-coded by engineers, machine-learning algorithms “train” on input data to learn the correlations within it. The trained algorithm, known as a machine-learning model, can then automate future decisions. An algorithm trained on ad click data, for example, might learn that women click on ads for yoga leggings more often than men. The resultant model will then serve more of those ads to women. Today at an AI-based company like Facebook, engineers generate countless models with slight variations to see which one performs best on a given problem.

Facebook’s massive amounts of user data gave Quiñonero a big advantage. His team could develop models that learned to infer the existence not only of broad categories like “women” and “men,” but of very fine-grained categories like “women between 25 and 34 who liked Facebook pages related to yoga,” and targeted ads to them. The finer-grained the targeting, the better the chance of a click, which would give advertisers more bang for their buck.
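As a toy, hypothetical illustration of the kind of correlation-learning and fine-grained targeting described in the last two paragraphs (made-up features and data, not Facebook’s actual models or pipeline):

```python
# Toy illustration of a click-prediction model: it learns from made-up
# ad-click data that a fine-grained segment (women aged 25-34 who liked
# yoga-related pages) clicks the ad more often, and scores users accordingly.
# Nothing here reflects Facebook's real features or systems.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000
age_25_34 = rng.binomial(1, 0.3, n)
is_woman = rng.binomial(1, 0.5, n)
likes_yoga_pages = rng.binomial(1, 0.2, n)

# Simulated clicks: a low base rate, boosted for the fine-grained segment.
p_click = 0.02 + 0.10 * (age_25_34 & is_woman & likes_yoga_pages)
clicked = rng.binomial(1, p_click)

X = np.column_stack([age_25_34, is_woman, likes_yoga_pages])
model = LogisticRegression().fit(X, clicked)

# Higher predicted click probability means the ad is more likely to be shown.
print(model.predict_proba([[1, 1, 1], [0, 0, 0]])[:, 1])
```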

Within a year his team had developed these models, as well as the tools for designing and deploying new ones faster. Before, it had taken Quiñonero’s engineers six to eight weeks to build, train, and test a new model. Now it took only one.

News of the success spread quickly. The team that worked on determining which posts individual Facebook users would see on their personal news feeds wanted to apply the same techniques. Just as algorithms could be trained to predict who would click what ad, they could also be trained to predict who would like or share what post, and then give those posts more prominence. If the model determined that a person really liked dogs, for instance, friends’ posts about dogs would appear higher up on that user’s news feed.

Quiñonero’s success with the news feed—coupled with impressive new AI research being conducted outside the company—caught the attention of Zuckerberg and Schroepfer. Facebook now had just over 1 billion users, making it more than eight times larger than any other social network, but they wanted to know how to continue that growth. The executives decided to invest heavily in AI, internet connectivity, and virtual reality.

They created two AI teams. One was FAIR, a fundamental research lab that would advance the technology’s state-of-the-art capabilities. The other, Applied Machine Learning (AML), would integrate those capabilities into Facebook’s products and services. In December 2013, after months of courting and persuasion, the executives recruited Yann LeCun, one of the biggest names in the field, to lead FAIR. Three months later, Quiñonero was promoted again, this time to lead AML. (It was later renamed FAIAR, pronounced “fire.”)


In his new role, Quiñonero built a new model-development platform for anyone at Facebook to access. Called FBLearner Flow, it allowed engineers with little AI experience to train and deploy machine-learning models within days. By mid-2016, it was in use by more than a quarter of Facebook’s engineering team and had already been used to train over a million models, including models for image recognition, ad targeting, and content moderation.

Zuckerberg’s obsession with getting the whole world to use Facebook had found a powerful new weapon. Teams had previously used design tactics, like experimenting with the content and frequency of notifications, to try to hook users more effectively. Their goal, among other things, was to increase a metric called L6/7, the fraction of people who logged in to Facebook six of the previous seven days. L6/7 is just one of myriad ways in which Facebook has measured “engagement”—the propensity of people to use its platform in any way, whether it’s by posting things, commenting on them, liking or sharing them, or just looking at them. Now every user interaction once analyzed by engineers was being analyzed by algorithms. Those algorithms were creating much faster, more personalized feedback loops for tweaking and tailoring each user’s news feed to keep nudging up engagement numbers.
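For concreteness, L6/7 as defined above can be computed from a log of login dates per user; a minimal sketch with hypothetical users and dates:

```python
# Minimal sketch of the L6/7 metric described above: the fraction of users
# who logged in on at least six of the previous seven days. The users and
# dates are hypothetical.
from datetime import date, timedelta

login_days = {
    "user_a": {date(2016, 6, d) for d in range(1, 8)},         # 7 of 7 days
    "user_b": {date(2016, 6, d) for d in (1, 2, 3, 5, 6, 7)},  # 6 of 7 days
    "user_c": {date(2016, 6, d) for d in (2, 5)},              # 2 of 7 days
}

def l6_of_7(login_days_by_user, as_of):
    window = {as_of - timedelta(days=i) for i in range(1, 8)}  # previous 7 days
    qualifying = sum(
        1 for days in login_days_by_user.values() if len(days & window) >= 6
    )
    return qualifying / len(login_days_by_user)

print(l6_of_7(login_days, date(2016, 6, 8)))  # 2 of 3 users qualify -> 0.666...
```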

Zuckerberg, who sat in the center of Building 20, the main office at the Menlo Park headquarters, placed the new FAIR and AML teams beside him. Many of the original AI hires were so close that his desk and theirs were practically touching. It was “the inner sanctum,” says a former leader in the AI org (the branch of Facebook that contains all its AI teams), who recalls the CEO shuffling people in and out of his vicinity as they gained or lost his favor. “That’s how you know what’s on his mind,” says Quiñonero. “I was always, for a couple of years, a few steps from Mark’s desk.”

With new machine-learning models coming online daily, the company created a new system to track their impact and maximize user engagement. The process is still the same today. Teams train up a new machine-learning model on FBLearner, whether to change the ranking order of posts or to better catch content that violates Facebook’s community standards (its rules on what is and isn’t allowed on the platform). Then they test the new model on a small subset of Facebook’s users to measure how it changes engagement metrics, such as the number of likes, comments, and shares, says Krishna Gade, who served as the engineering manager for news feed from 2016 to 2018.

If a model reduces engagement too much, it’s discarded. Otherwise, it’s deployed and continually monitored. On Twitter, Gade explained that his engineers would get notifications every few days when metrics such as likes or comments were down. Then they’d decipher what had caused the problem and whether any models needed retraining.
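The gate described in these two paragraphs amounts to comparing engagement in the candidate model’s test bucket against a control bucket and discarding models that depress it; a schematic sketch, with invented metrics and tolerance rather than Facebook’s actual criteria:

```python
# Schematic sketch of an engagement gate: deploy a candidate ranking model
# only if average engagement in its test bucket does not fall more than a
# small tolerance below the control bucket. Metrics, data, and the 1%
# tolerance are invented for illustration.

def mean_engagement(events):
    """Average likes + comments + shares per user in a bucket."""
    return sum(e["likes"] + e["comments"] + e["shares"] for e in events) / len(events)

def should_deploy(control_events, candidate_events, max_relative_drop=0.01):
    control = mean_engagement(control_events)
    candidate = mean_engagement(candidate_events)
    return (control - candidate) / control <= max_relative_drop

control = [{"likes": 5, "comments": 2, "shares": 1}] * 1000
candidate = [{"likes": 5, "comments": 2, "shares": 1}] * 990 + \
            [{"likes": 0, "comments": 0, "shares": 0}] * 10

print(should_deploy(control, candidate))  # 1% dip is within tolerance -> True
```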

But this approach soon caused issues. The models that maximize engagement also favor controversy, misinformation, and extremism: put simply, people just like outrageous stuff. Sometimes this inflames existing political tensions. The most devastating example to date is the case of Myanmar, where viral fake news and hate speech about the Rohingya Muslim minority escalated the country’s religious conflict into a full-blown genocide. Facebook admitted in 2018, after years of downplaying its role, that it had not done enough “to help prevent our platform from being used to foment division and incite offline violence.”

While Facebook may have been oblivious to these consequences in the beginning, it was studying them by 2016. In an internal presentation from that year, reviewed by the Wall Street Journal, a company researcher, Monica Lee, found that Facebook was not only hosting a large number of extremist groups but also promoting them to its users: “64% of all extremist group joins are due to our recommendation tools,” the presentation said, predominantly thanks to the models behind the “Groups You Should Join” and “Discover” features.


In 2017, Chris Cox, Facebook’s longtime chief product officer, formed a new task force to understand whether maximizing user engagement on Facebook was contributing to political polarization. It found that there was indeed a correlation, and that reducing polarization would mean taking a hit on engagement. In a mid-2018 document reviewed by the Journal, the task force proposed several potential fixes, such as tweaking the recommendation algorithms to suggest a more diverse range of groups for people to join. But it acknowledged that some of the ideas were “antigrowth.” Most of the proposals didn’t move forward, and the task force disbanded.

Since then, other employees have corroborated these findings. A former Facebook AI researcher who joined in 2018 says he and his team conducted “study after study” confirming the same basic idea: models that maximize engagement increase polarization. They could easily track how strongly users agreed or disagreed on different issues, what content they liked to engage with, and how their stances changed as a result. Regardless of the issue, the models learned to feed users increasingly extreme viewpoints. “Over time they measurably become more polarized,” he says.

The researcher’s team also found that users with a tendency to post or engage with melancholy content—a possible sign of depression—could easily spiral into consuming increasingly negative material that risked further worsening their mental health. The team proposed tweaking the content-ranking models for these users to stop maximizing engagement alone, so they would be shown less of the depressing stuff. “The question for leadership was: Should we be optimizing for engagement if you find that somebody is in a vulnerable state of mind?” he remembers. (A Facebook spokesperson said she could not find documentation for this proposal.)

But anything that reduced engagement, even for reasons such as not exacerbating someone’s depression, led to a lot of hemming and hawing among leadership. With their performance reviews and salaries tied to the successful completion of projects, employees quickly learned to drop those that received pushback and continue working on those dictated from the top down.

One such project heavily pushed by company leaders involved predicting whether a user might be at risk for something several people had already done: livestreaming their own suicide on Facebook Live. The task involved building a model to analyze the comments that other users were posting on a video after it had gone live, and bringing at-risk users to the attention of trained Facebook community reviewers who could call local emergency responders to perform a wellness check. It didn’t require any changes to content-ranking models, had negligible impact on engagement, and effectively fended off negative press. It was also nearly impossible, says the researcher: “It’s more of a PR stunt. The efficacy of trying to determine if somebody is going to kill themselves in the next 30 seconds, based on the first 10 seconds of video analysis—you’re not going to be very effective.”

Facebook disputes this characterization, saying the team that worked on this effort has since successfully predicted which users were at risk and increased the number of wellness checks performed. But the company does not release data on the accuracy of its predictions or how many wellness checks turned out to be real emergencies.

That former employee, meanwhile, no longer lets his daughter use Facebook.

Quiñonero should have been perfectly placed to tackle these problems when he created the SAIL (later Responsible AI) team in April 2018. His time as the director of Applied Machine Learning had made him intimately familiar with the company’s algorithms, especially the ones used for recommending posts, ads, and other content to users.

It also seemed that Facebook was ready to take these problems seriously. Whereas previous efforts to work on them had been scattered across the company, Quiñonero was now being granted a centralized team with leeway in his mandate to work on whatever he saw fit at the intersection of AI and society.

At the time, Quiñonero was engaging in his own reeducation about how to be a responsible technologist. The field of AI research was paying growing attention to problems of AI bias and accountability in the wake of high-profile studies showing that, for example, an algorithm was scoring Black defendants as more likely to be rearrested than white defendants who’d been arrested for the same or a more serious offense. Quiñonero began studying the scientific literature on algorithmic fairness, reading books on ethical engineering and the history of technology, and speaking with civil rights experts and moral philosophers.

Joaquin Quiñonero Candela

Over the many hours I spent with him, I could tell he took this seriously. He had joined Facebook amid the Arab Spring, a series of revolutions against oppressive Middle Eastern regimes. Experts had lauded social media for spreading the information that fueled the uprisings and giving people tools to organize. Born in Spain but raised in Morocco, where he’d seen the suppression of free speech firsthand, Quiñonero felt an intense connection to Facebook’s potential as a force for good.

Six years later, Cambridge Analytica had threatened to overturn this promise. The controversy forced him to confront his faith in the company and examine what staying would mean for his integrity. “I think what happens to most people who work at Facebook—and definitely has been my story—is that there’s no boundary between Facebook and me,” he says. “It’s extremely personal.” But he chose to stay, and to head SAIL, because he believed he could do more for the world by helping turn the company around than by leaving it behind.

“I think if you’re at a company like Facebook, especially over the last few years, you really realize the impact that your products have on people’s lives—on what they think, how they communicate, how they interact with each other,” says Quiñonero’s longtime friend Zoubin Ghahramani, who helps lead the Google Brain team. “I know Joaquin cares deeply about all aspects of this. As somebody who strives to achieve better and improve things, he sees the important role that he can have in shaping both the thinking and the policies around responsible AI.”

At first, SAIL had only five people, who came from different parts of the company but were all interested in the societal impact of algorithms. One founding member, Isabel Kloumann, a research scientist who’d come from the company’s core data science team, brought with her an initial version of a tool to measure the bias in AI models.

The team also brainstormed many other ideas for projects. The former leader in the AI org, who was present for some of the early meetings of SAIL, recalls one proposal for combating polarization. It involved using sentiment analysis, a form of machine learning that interprets opinion in bits of text, to better identify comments that expressed extreme points of view. These comments wouldn’t be deleted, but they would be hidden by default with an option to reveal them, thus limiting the number of people who saw them.

And there were discussions about what role SAIL could play within Facebook and how it should evolve over time. The sentiment was that the team would first produce responsible-AI guidelines to tell the product teams what they should or should not do. But the hope was that it would ultimately serve as the company’s central hub for evaluating AI projects and stopping those that didn’t follow the guidelines.

Former employees described, however, how hard it could be to get buy-in or financial support when the work didn’t directly improve Facebook’s growth. By its nature, the team was not thinking about growth, and in some cases it was proposing ideas antithetical to growth. As a result, it received few resources and languished. Many of its ideas stayed largely academic.

On August 29, 2018, that suddenly changed. In the ramp-up to the US midterm elections, President Donald Trump and other Republican leaders ratcheted up accusations that Facebook, Twitter, and Google had anti-conservative bias. They claimed that Facebook’s moderators in particular, in applying the community standards, were suppressing conservative voices more than liberal ones. This charge would later be debunked, but the hashtag #StopTheBias, fueled by a Trump tweet, was rapidly spreading on social media.

For Trump, it was the latest effort to sow distrust in the country’s mainstream information distribution channels. For Zuckerberg, it threatened to alienate Facebook’s conservative US users and make the company more vulnerable to regulation from a Republican-led government. In other words, it threatened the company’s growth.

Facebook did not grant me an interview with Zuckerberg, but previous reporting has shown how he increasingly pandered to Trump and the Republican leadership. After Trump was elected, Joel Kaplan, Facebook’s VP of global public policy and its highest-ranking Republican, advised Zuckerberg to tread carefully in the new political environment.

On September 20, 2018, three weeks after Trump’s #StopTheBias tweet, Zuckerberg held a meeting with Quiñonero for the first time since SAIL’s creation. He wanted to know everything Quiñonero had learned about AI bias and how to quash it in Facebook’s content-moderation models. By the end of the meeting, one thing was clear: AI bias was now Quiñonero’s top priority. “The leadership has been very, very pushy about making sure we scale this aggressively,” says Rachad Alao, the engineering director of Responsible AI who joined in April 2019.

It was a win for everybody in the room. Zuckerberg got a way to ward off charges of anti-conservative bias. And Quiñonero now had more money and a bigger team to make the overall Facebook experience better for users. They could build upon Kloumann’s existing tool in order to measure and correct the alleged anti-conservative bias in content-moderation models, as well as to correct other types of bias in the vast majority of models across the platform.

This could help prevent the platform from unintentionally discriminating against certain users. By then, Facebook already had thousands of models running concurrently, and almost none had been measured for bias. That would get it into legal trouble a few months later with the US Department of Housing and Urban Development (HUD), which alleged that the company’s algorithms were inferring “protected” attributes like race from users’ data and showing them ads for housing based on those attributes—an illegal form of discrimination. (The lawsuit is still pending.) Schroepfer also predicted that Congress would soon pass laws to regulate algorithmic discrimination, so Facebook needed to make headway on these efforts anyway.

(Facebook disputes the idea that it pursued its work on AI bias to protect growth or in anticipation of regulation. “We built the Responsible AI team because it was the right thing to do,” a spokesperson said.)

But narrowing SAIL’s focus to algorithmic fairness would sideline all Facebook’s other long-standing algorithmic problems. Its content-recommendation models would continue pushing posts, news, and groups to users in an effort to maximize engagement, rewarding extremist content and contributing to increasingly fractured political discourse.

Zuckerberg even admitted this. Two months after the meeting with Quiñonero, in a public note outlining Facebook’s plans for content moderation, he illustrated the harmful effects of the company’s engagement strategy with a simplified chart. It showed that the more likely a post is to violate Facebook’s community standards, the more user engagement it receives, because the algorithms that maximize engagement reward inflammatory content.

Chart: “natural engagement pattern,” with allowed content on the x-axis and engagement on the y-axis, showing engagement rising exponentially as content nears the policy line for prohibited content.

But then he showed another chart with the inverse relationship. Rather than rewarding content that came close to violating the community standards, Zuckerberg wrote, Facebook could choose to start “penalizing” it, giving it “less distribution and engagement” rather than more. How would this be done? With more AI. Facebook would develop better content-moderation models to detect this “borderline content” so it could be retroactively pushed lower in the news feed to snuff out its virality, he said.

Chart: “adjusted to discourage borderline content,” showing the same axes but with the curve inverted so that engagement falls to zero as content reaches the policy line.

The problem is that for all Zuckerberg’s promises, this strategy is tenuous at best.

Misinformation and hate speech constantly evolve. New falsehoods spring up; new people and groups become targets. To catch things before they go viral, content-moderation models must be able to identify new unwanted content with high accuracy. But machine-learning models do not work that way. An algorithm that has learned to recognize Holocaust denial can’t immediately spot, say, Rohingya genocide denial. It must be trained on thousands, often even millions, of examples of a new type of content before learning to filter it out. Even then, users can quickly learn to outwit the model by doing things like changing the wording of a post or replacing incendiary phrases with euphemisms, making their message illegible to the AI while still obvious to a human. This is why new conspiracy theories can rapidly spiral out of control, and partly why, even after such content is banned, forms of it can persist on the platform.
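
A toy example makes the “outwitting” mechanism concrete. The sketch below, built on a handful of invented posts, trains a bag-of-words classifier in Python; because such a model only knows the exact words it was trained on, a reworded post made of unseen euphemisms contributes no known features and falls back to the model’s prior rather than triggering a confident flag. Real moderation models are vastly larger, but the failure mode is the same in kind.

```python
# Toy illustration of why retrained moderation models miss reworded content.
# The posts and labels below are invented; real systems are far larger.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

posts = [
    "the flood was faked by the government",    # violating
    "crisis actors staged the whole attack",    # violating
    "they are vermin and should be removed",    # violating
    "lovely weather for the harvest festival",  # benign
    "our team won the match last night",        # benign
    "the new bakery downtown is wonderful",     # benign
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = violates policy, 0 = allowed

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(posts)
clf = LogisticRegression().fit(X, labels)

# A reworded version of the first post, using only words the model never saw.
reworded = ["that deluge, pure theatre from certain powers"]
X_new = vectorizer.transform(reworded)

# Every feature is zero, so the classifier falls back on its intercept,
# close to a 50/50 prior rather than a confident flag.
print("Known tokens in reworded post:", X_new.sum())
print("Flag probability:", clf.predict_proba(X_new)[0, 1])
```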

In his New York Times profile, Schroepfer named these limitations of the company’s content-moderation strategy. “Every time Mr. Schroepfer and his more than 150 engineering specialists create A.I. solutions that flag and squelch noxious material, new and dubious posts that the A.I. systems have never seen before pop up—and are thus not caught,” wrote the Times. “It’s never going to go to zero,” Schroepfer told the publication.

Meanwhile, the algorithms that recommend this content still work to maximize engagement. This means every toxic post that escapes the content-moderation filters will continue to be pushed higher up the news feed and promoted to reach a larger audience. Indeed, a study from New York University recently found that among partisan publishers’ Facebook pages, those that regularly posted political misinformation received the most engagement in the lead-up to the 2020 US presidential election and the Capitol riots. “That just kind of got me,” says a former employee who worked on integrity issues from 2018 to 2019. “We fully acknowledged [this], and yet we’re still increasing engagement.”

But Quiñonero’s SAIL team wasn’t working on this problem. Because of Kaplan’s and Zuckerberg’s worries about alienating conservatives, the team stayed focused on bias. And even after it merged into the bigger Responsible AI team, it was never mandated to work on content-recommendation systems that might limit the spread of misinformation. Nor has any other team, as I confirmed after Entin and another spokesperson gave me a full list of all Facebook’s other initiatives on integrity issues—the company’s umbrella term for problems including misinformation, hate speech, and polarization.

A Facebook spokesperson said, “The work isn’t done by one specific team because that’s not how the company operates.” It is instead distributed among the teams that have the specific expertise to tackle how content ranking affects misinformation for their part of the platform, she said. But Schroepfer told me precisely the opposite in an earlier interview. I had asked him why he had created a centralized Responsible AI team instead of directing existing teams to make progress on the issue. He said it was “best practice” at the company.

“[If] it’s an important area, we need to move fast on it, it’s not well-defined, [we create] a dedicated team and get the right leadership,” he said. “As an area grows and matures, you’ll see the product teams take on more work, but the central team is still needed because you need to stay up with state-of-the-art work.”

When I described the Responsible AI team’s work to other experts on AI ethics and human rights, they noted the incongruity between the problems it was tackling and those, like misinformation, for which Facebook is most notorious. “This seems to be so oddly removed from Facebook as a product—the things Facebook builds and the questions about impact on the world that Facebook faces,” said Rumman Chowdhury, whose startup, Parity, advises firms on the responsible use of AI, and was acquired by Twitter after our interview. I had shown Chowdhury the Quiñonero team’s documentation detailing its work. “I find it surprising that we’re going to talk about inclusivity, fairness, equity, and not talk about the very real issues happening today,” she said.

“It seems like the ‘responsible AI’ framing is completely subjective to what a company decides it wants to care about. It’s like, ‘We’ll make up the terms and then we’ll follow them,’” says Ellery Roberts Biddle, the editorial director of Ranking Digital Rights, a nonprofit that studies the impact of tech companies on human rights. “I don’t even understand what they mean when they talk about fairness. Do they think it’s fair to recommend that people join extremist groups, like the ones that stormed the Capitol? If everyone gets the recommendation, does that mean it was fair?”

“We’re at a place where there’s one genocide [Myanmar] that the UN has, with a lot of evidence, been able to specifically point to Facebook and to the way that the platform promotes content,” Biddle adds. “How much higher can the stakes get?”

Over the last two years, Quiñonero’s team has built out Kloumann’s original tool, called Fairness Flow. It allows engineers to measure the accuracy of machine-learning models for different user groups. They can compare a face-detection model’s accuracy across different ages, genders, and skin tones, or a speech-recognition algorithm’s accuracy across different languages, dialects, and accents.

Fairness Flow also comes with a set of guidelines to help engineers understand what it means to train a “fair” model. One of the thornier problems with making algorithms fair is that there are different definitions of fairness, which can be mutually incompatible. Fairness Flow lists four definitions that engineers can use according to which suits their purpose best, such as whether a speech-recognition model recognizes all accents with equal accuracy or with a minimum threshold of accuracy.
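
Facebook has not published Fairness Flow’s internals, but the two definitions mentioned here, equal accuracy across groups versus a minimum accuracy floor for every group, are simple to sketch. The Python below is a hypothetical illustration with invented per-group labels and predictions for some binary model; it is not Facebook’s code, and the group names, tolerances, and thresholds are made up.

```python
# Hypothetical sketch of per-group accuracy checks in the spirit of the two
# fairness definitions described above; not Facebook's Fairness Flow code.

from typing import Dict, List


def accuracy(y_true: List[int], y_pred: List[int]) -> float:
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)


def per_group_accuracy(groups: Dict[str, Dict[str, List[int]]]) -> Dict[str, float]:
    return {name: accuracy(d["y_true"], d["y_pred"]) for name, d in groups.items()}


def equal_accuracy(acc: Dict[str, float], tolerance: float = 0.02) -> bool:
    """Definition 1: every group's accuracy within `tolerance` of every other's."""
    return max(acc.values()) - min(acc.values()) <= tolerance


def minimum_accuracy(acc: Dict[str, float], floor: float = 0.90) -> bool:
    """Definition 2: every group's accuracy at or above a minimum threshold."""
    return min(acc.values()) >= floor


# Invented evaluation data: labels and predictions for three user groups.
groups = {
    "group_a": {"y_true": [1, 0, 1, 1, 0, 1, 0, 1, 1, 0],
                "y_pred": [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]},   # 10/10 correct
    "group_b": {"y_true": [1, 0, 1, 1, 0, 1, 0, 1, 1, 0],
                "y_pred": [1, 0, 1, 1, 0, 1, 0, 1, 0, 0]},   # 9/10 correct
    "group_c": {"y_true": [1, 0, 1, 1, 0, 1, 0, 1, 1, 0],
                "y_pred": [1, 0, 0, 1, 0, 1, 1, 1, 0, 0]},   # 7/10 correct
}

acc = per_group_accuracy(groups)
print(acc)                    # {'group_a': 1.0, 'group_b': 0.9, 'group_c': 0.7}
print(equal_accuracy(acc))    # False: a 0.3 gap exceeds the tolerance
print(minimum_accuracy(acc))  # False: group_c falls below the 0.90 floor
```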

But testing algorithms for fairness is still largely optional at Facebook. None of the teams that work directly on Facebook’s news feed, ad service, or other products are required to do it. Pay incentives are still tied to engagement and growth metrics. And while there are guidelines about which fairness definition to use in any given situation, they aren’t enforced.

This last problem came to the fore when the company had to deal with allegations of anti-conservative bias.

In 2014, Kaplan was promoted from US policy head to global vice president for policy, and he began playing a more heavy-handed role in content moderation and decisions about how to rank posts in users’ news feeds. After Republicans started voicing claims of anti-conservative bias in 2016, his team began manually reviewing the impact of misinformation-detection models on users to ensure—among other things—that they didn’t disproportionately penalize conservatives.

All Facebook users have some 200 “traits” attached to their profile. These include various dimensions submitted by users or estimated by machine-learning models, such as race, political and religious leanings, socioeconomic class, and level of education. Kaplan’s team began using the traits to assemble custom user segments that reflected largely conservative interests: users who engaged with conservative content, groups, and pages, for example. Then they’d run special analyses to see how content-moderation decisions would affect posts from those segments, according to a former researcher whose work was subject to those reviews.

The Fairness Flow documentation, which the Responsible AI team wrote later, includes a case study on how to use the tool in such a situation. When deciding whether a misinformation model is fair with respect to political ideology, the team wrote, “fairness” does not mean the model should affect conservative and liberal users equally. If conservatives are posting a greater fraction of misinformation, as judged by public consensus, then the model should flag a greater fraction of conservative content. If liberals are posting more misinformation, it should flag their content more often too.

But members of Kaplan’s team followed exactly the opposite approach: they took “fairness” to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless. “There’s no point, then,” the researcher says. A model modified in that way “would have literally no impact on the actual problem” of misinformation.
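
The two competing readings of “fairness” in this dispute can be spelled out in a few lines. In the sketch below, with invented numbers, one check asks whether the model flags each side roughly in proportion to how much misinformation that side actually posts, the reading in the Responsible AI documentation, while the other asks whether both sides are affected at the same rate, the reading attributed to Kaplan’s team. A model can easily pass the first and fail the second, which is exactly the conflict the researcher describes.

```python
# Invented numbers illustrating two incompatible readings of "fairness"
# for a misinformation classifier. Not real Facebook data.

# Hypothetical ground truth: share of each side's posts that are misinformation.
base_rate = {"conservative": 0.08, "liberal": 0.04}

# Hypothetical model behaviour: share of each side's posts that get flagged.
flag_rate = {"conservative": 0.075, "liberal": 0.038}


def proportional_fairness(base, flags, rel_tol=0.2):
    """Flagging tracks each group's actual misinformation rate
    (within a relative tolerance): the guideline's reading."""
    return all(abs(flags[g] - base[g]) / base[g] <= rel_tol for g in base)


def equal_impact(flags, abs_tol=0.005):
    """Both groups are flagged at (nearly) the same absolute rate,
    regardless of how much misinformation each actually posts."""
    rates = list(flags.values())
    return max(rates) - min(rates) <= abs_tol


print(proportional_fairness(base_rate, flag_rate))  # True: flags track the ground truth
print(equal_impact(flag_rate))                      # False: one side is flagged more
```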

“I don’t even understand what they mean when they talk about fairness. Do they think it’s fair to recommend that people join extremist groups, like the ones that stormed the Capitol? If everyone gets the recommendation, does that mean it was fair?”

Ellery Roberts Biddle, editorial director of Ranking Digital Rights

This happened countless other times—and not just for content moderation. In 2020, the Washington Post reported that Kaplan’s team had undermined efforts to mitigate election interference and polarization within Facebook, saying they could contribute to anti-conservative bias. In 2018, it used the same argument to shelve a project to edit Facebook’s recommendation models even though researchers believed it would reduce divisiveness on the platform, according to the Wall Street Journal. His claims about political bias also weakened a proposal to edit the ranking models for the news feed that Facebook’s data scientists believed would strengthen the platform against the manipulation tactics Russia had used during the 2016 US election.

And ahead of the 2020 election, Facebook policy executives used this excuse, according to the New York Times, to veto or weaken several proposals that would have reduced the spread of hateful and damaging content.

Facebook disputed the Wall Street Journal’s reporting in a follow-up blog post, and challenged the New York Times’s characterization in an interview with the publication. A spokesperson for Kaplan’s team also denied to me that this was a pattern of behavior, saying the cases reported by the Post, the Journal, and the Times were “all individual instances that we believe are then mischaracterized.” He declined to comment about the retraining of misinformation models on the record.

Many of these incidents happened before Fairness Flow was adopted. But they show how Facebook’s pursuit of fairness in the service of growth had already come at a steep cost to progress on the platform’s other challenges. And if engineers used the definition of fairness that Kaplan’s team had adopted, Fairness Flow could simply systematize behavior that rewarded misinformation instead of helping to combat it.

Often “the whole fairness thing” came into play only as a convenient way to maintain the status quo, the former researcher says: “It seems to fly in the face of the things that Mark was saying publicly in terms of being fair and equitable.”

The last time I spoke with Quiñonero was a month after the US Capitol riots. I wanted to know how the storming of Congress had affected his thinking and the direction of his work.

In the video call, it was as it always was: Quiñonero dialing in from his home office in one window and Entin, his PR handler, in another. I asked Quiñonero what role he felt Facebook had played in the riots and whether it changed the task he saw for Responsible AI. After a long pause, he sidestepped the question, launching into a description of recent work he’d done to promote greater diversity and inclusion among the AI teams.

I asked him the question again. His Facebook Portal camera, which uses computer-vision algorithms to track the speaker, began to slowly zoom in on his face as he grew still. “I don’t know that I have an easy answer to that question, Karen,” he said. “It’s an extremely difficult question to ask me.”

Entin, who’d been rapidly pacing with a stoic poker face, grabbed a red stress ball.

I asked Quiñonero why his team hadn’t previously looked at ways to edit Facebook’s content-ranking models to tamp down misinformation and extremism. He told me it was the job of other teams (though none, as I confirmed, have been mandated to work on that task). “It’s not feasible for the Responsible AI team to study all those things ourselves,” he said. When I asked whether he would consider having his team tackle those issues in the future, he vaguely admitted, “I would agree with you that that is going to be the scope of these types of conversations.”

Near the end of our hour-long interview, he began to emphasize that AI was often unfairly painted as “the culprit.” Regardless of whether Facebook used AI or not, he said, people would still spew lies and hate speech, and that content would still spread across the platform.

I pressed him one more time. Certainly he couldn’t believe that algorithms had done absolutely nothing to change the nature of these issues, I said.

“I don’t know,” he said with a halting stutter. Then he repeated, with more conviction: “That’s my honest answer. Honest to God. I don’t know.”

Corrections: We amended a line that suggested that Joel Kaplan, Facebook’s vice president of global policy, had used Fairness Flow. He has not. But members of his team have used the notion of fairness to request the retraining of misinformation models in ways that directly contradict Responsible AI’s guidelines. We also clarified when Rachad Alao, the engineering director of Responsible AI, joined the company.

‘They called her a crazy witch’: did medium Hilma af Klint invent abstract art? (The Guardian)

theguardian.com

Stuart Jeffries

Tue 6 Oct 2020 15.53 BST Last modified on Tue 6 Oct 2020 16.45 BST


In 1971, the art critic Linda Nochlin wrote an essay called Why have there been no great women artists? The question may be based on a false premise: there have been, we just didn’t get to see their work.

The visionary Swedish artist Hilma af Klint exemplifies this clearly, argues Halina Dyrschka, the German film-maker, whose beautiful film Beyond the Visible, about the painter’s astonishing work, is released on Friday. When I ask her why af Klint has been largely ignored since her death in 1944, Dyrschka tells me over video link from Berlin: “It’s easier to make a woman into a crazy witch than change art history to accommodate her. We still see a woman who is spiritual as a witch, while we celebrate spiritual male artists as geniuses.”

When Dyrschka first saw Hilma af Klint’s paintings seven years ago, “they spoke to me more profoundly than any art I have ever seen”. She was beguiled by the grids and intersecting circles, schematic flower forms, painted numbers, looping lines, pyramids and sunbursts. “It felt like a personal insult that those paintings had been hidden from me for so long.”

Af Klint had three strikes against her. She was a woman, she had no contacts in the art world, and, worst of all, she was a medium who believed her art flowed through her unmediated by ego. She worked for many years in quiet obscurity on a Swedish island where she cared for her mother as the latter went blind. Today, her work is being appreciated, but not bought up, by collectors because it is held by her descendants. As Ulla af Klint, widow of the nephew who inherited the artist’s work, says in the film: “You can’t make money out of Hilma.”

Af Klint’s mysticism hobbled her reputation long after her death. In the 1970s, her grand nephew Johan af Klint offered paintings to Sweden’s leading modern art museum, the Moderna Museet. The then-director turned them down. “When he heard that she was a medium, there was no discussion. He didn’t even look at the pictures.” Only in 2013 did the museum redeem itself with a retrospective.

“For some it’s very provocative when someone says, ‘I did this physically but it’s not by me. I was in contact with energies greater than me,’” says Iris Müller-Westermann, who curated that show. But, she adds, Kandinsky, Mondrian and Malevich were all influenced by contemporary spiritual movements such as theosophy and anthroposophy too, as they sought to transcend the physical world and the constraints of representational art.

It’s striking that many female artists have been mediums but, unlike, say, the late British pianist and dinner lady Rosemary Brown, who claimed to have transcribed new works from the beyond by Rachmaninov, Beethoven and Liszt, Hilma af Klint said she was directed not by dead artists but by forces from a higher realm. In one notebook, she described how she was inspired. “I registered their magnitude within me. Above the easel I saw the Jupiter symbol which [shone] brightly and persisted for several seconds. I started the work immediately proceeding in such a way that the pictures were painted directly through me with great power.”

When she died, Af Klint left more than 1,300 works, which had only been seen by a handful of people. She also left 125 notebooks, in one of which she stipulated that her work should not be publicly displayed until 20 years after her death. The “Higher Ones” she was in contact with through seances told Af Klint that the world was not ready yet for her work. Maybe they had a point.

Group X, No 2, Altarpiece, Group X, No 3, Altarpiece and Group X, No 1, Altarpiece (left to right) by painter Hilma af Klint (1862-1944) hanging in the Guggenheim. Photograph: Dpa Picture Alliance/Alamy Stock Photo

In 1944, three great pioneers of abstract art died: Wassily Kandinsky, Piet Mondrian and af Klint. Kandinsky claimed to have created the first abstract painting in 1911. And when in 2012 New York’s Museum of Modern Art staged its show Inventing Abstraction 1910-1925, Af Klint was not even included as a footnote. And yet, as Frankfurter Allgemeine Zeitung art critic Julia Voss argues in the film, the Swedish artist had the jump on Kandinsky by five years in producing the first abstract painting in 1906.

For her film, Dyrschka contacted MoMA to find out why Af Klint had been erased from art history and was told “they weren’t so sure Hilma af Klint’s art worked as abstract art. After all, she hadn’t exhibited in her lifetime so how could one tell?” In the film, Dyrschka tries to answer that question by juxtaposing paintings by Af Klint with those of famous 20th-century male artists. Her golden square from 1916 is placed alongside a similar image by Josef Albers from 1971; her automatic writing doodles from 1896 are pitted against Cy Twombly’s 1967 squiggles. They make the rhetorical point strongly: whatever the men were doing, af Klint had probably done it first.

Hilma af Klint was born in Stockholm in 1862. Thanks to the family fortune she was able to study at the Royal Academy in Stockholm, from which she graduated in 1887. She went on to support herself by painting landscapes and portraits as well as very beautiful botanical works. She joined the Theosophical Society in 1889 and in 1896 established a group of female artists called the Five, who each Friday met to pray, practise automatic writing and attempt to communicate with other worlds through seances. Theosophists believe that all forms of life are part of the same cosmic whole. “It was a women’s liberation philosophy,” argues Voss. “It said: ‘Sure you can be priestesses.’”

But Af Klint was not just a conduit for occult spirits. She was also attuned to the scientific developments of the day. As Dyrschka argues in her film, the years in which the artist was creatively active were a time in which science was discovering worlds beyond the visible – including subatomic particles and electromagnetic radiation. Af Klint’s art involved making the invisible visible, be it that which science disclosed or that which the Higher Powers commissioned her to depict.

Wonderful energy … Hilma af Klint’s Altarbild, 1915. Photograph: Albin Dahlström/© Stiftelsen Hilma af Klints Verk

But at those Friday meetings, she encountered supernatural beings beyond science’s remit. The Five claimed to receive messages from other worlds. Af Klint recorded one message in her notebook: “‘Accept,’ says the angel ‘that a wonderful energy follows from the heavenly to the earthly.’” The Five called these spirit guides High Masters and gave them names: Amaliel, Ananda, Clemens, Esther, Georg and Gregor. In 1904, these High Masters called for a temple to be built, filled with paintings that the Five would make. Only Af Klint accepted this strange commission and in November 1906 set to work on what grew over the next 11 years to become a series of 193 paintings.

The philosopher and occultist Rudolf Steiner, whose anthroposophical society she would join, saw the early paintings in this series in 1908 but was uncomprehending. Strikingly, in the next four years Af Klint did little painting, but retreated to the obscure island of Munsö in Lake Mälaren, near her family’s estate on neighbouring Adelsö – in part because she was caring for her ailing mother, but also because Steiner’s patriarchal dismissal stung.

“She was treated locally as a crazy witch,” says Dyrschka. “The locals used to wonder what she did with all the eggs that were delivered to her studio.” They were used for her favoured material, tempera, which critics have noted gives her work on paper a luminous quality.

In a sense this retreat from the world was creatively sensible. Surrounded by water and spirits, Af Klint worked at the service of her occult beliefs. She had great hopes that Steiner would help her build a temple to house her art on a Swedish island that would glorify his philosophy. In 1932 she wrote to him: “Should the paintings which I created between 1902 and 1920, some of which you saw for yourself, be destroyed. Or can one do something with them?”

It sounds like a threat; happily, she didn’t destroy the work even though nothing came of her dream temple. Af Klint did sketch out what the temple should look like – it should be made of alabaster and have an astronomical tower with an internal spiral staircase. Poignantly, in her film Dyrschka juxtaposes this description with images of the Guggenheim in New York where Af Klint’s oeuvre was belatedly given pride of place last year. The skylight and the ramps look like the temple that Hilma af Klint died without seeing realised.

Sensation … the 2018 exhibition at the Guggenheim, New York. Photograph: Dpa Picture Alliance/Alamy Stock Photo

True, the 1986 touring exhibition The Spiritual in Art: Abstract Paintings 1890–1985 at LACMA in Los Angeles marked the beginning of Af Klint’s international recognition. But it was the Guggenheim show that, more than a century after Af Klint arguably invented abstract art and painted some of the most beguiling if neglected canvases in art history, finally gave her the recognition she deserved.

For the science historian Ernst Peter Fischer, quoted in the film, it is us rather than Af Klint who require reviving. We need her vision in our disenchanted age. “We know that the universe is made up of 95% dark matter, but the strange thing is nobody gets excited about this. I think our world has become blurred, stupid, dulled, unless somewhere out there there’s a Hilma af Klint painting it all, so in a hundred years we will see what we’ve missed. In 1900 we still knew how to marvel. Today we sit in front of our iPhones and media and are bored.” Hilma af Klint’s paintings, just maybe, give us the opportunity to escape the everyday and marvel anew.

  • Beyond the Visible – Hilma af Klint is released on 9 October.

People with extremist views less able to do complex mental tasks, research suggests (The Guardian)

theguardian.com

Natalie Grover, 22 Feb 2021


Cambridge University team say their findings could be used to spot people at risk from radicalisation
A key finding of the psychologists was that people with extremist attitudes tended to think about the world in a black and white way. Photograph: designer491/Getty Images/iStockphoto

Our brains hold clues for the ideologies we choose to live by, according to research, which has suggested that people who espouse extremist attitudes tend to perform poorly on complex mental tasks.

Researchers from the University of Cambridge sought to evaluate whether cognitive disposition – differences in how information is perceived and processed – sculpt ideological world-views such as political, nationalistic and dogmatic beliefs, beyond the impact of traditional demographic factors like age, race and gender.

The study, built on previous research, included more than 330 US-based participants aged 22 to 63 who were exposed to a battery of tests – 37 neuropsychological tasks and 22 personality surveys – over the course of two weeks.

The tasks were engineered to be neutral, not emotional or political – they involved, for instance, memorising visual shapes. The researchers then used computational modelling to extract information from that data about the participant’s perception and learning, and their ability to engage in complex and strategic mental processing.

Overall, the researchers found that ideological attitudes mirrored cognitive decision-making, according to the study published in the journal Philosophical Transactions of the Royal Society B.

A key finding was that people with extremist attitudes tended to think about the world in black and white terms, and struggled with complex tasks that required intricate mental steps, said lead author Dr Leor Zmigrod at Cambridge’s department of psychology.

“Individuals or brains that struggle to process and plan complex action sequences may be more drawn to extreme ideologies, or authoritarian ideologies that simplify the world,” she said.

She said another feature of people with tendencies towards extremism appeared to be that they were not good at regulating their emotions, meaning they were impulsive and tended to seek out emotionally evocative experiences. “And so that kind of helps us understand what kind of individual might be willing to go in and commit violence against innocent others.”

Participants who are prone to dogmatism – stuck in their ways and relatively resistant to credible evidence – actually have a problem with processing evidence even at a perceptual level, the authors found.

“For example, when they’re asked to determine whether dots [as part of a neuropsychological task] are moving to the left or to the right, they just took longer to process that information and come to a decision,” Zmigrod said.

In some cognitive tasks, participants were asked to respond as quickly and as accurately as possible. People who leant towards the politically conservative tended to go for the slow and steady strategy, while political liberals took a slightly more fast and furious, less precise approach.

“It’s fascinating, because conservatism is almost a synonym for caution,” she said. “We’re seeing that – at the very basic neuropsychological level – individuals who are politically conservative … simply treat every stimuli that they encounter with caution.”

The “psychological signature” for extremism across the board was a blend of conservative and dogmatic psychologies, the researchers said.

The study, which looked at 16 different ideological orientations, could have profound implications for identifying and supporting people most vulnerable to radicalisation across the political and religious spectrum.

“What we found is that demographics don’t explain a whole lot; they only explain roughly 8% of the variance,” said Zmigrod. “Whereas, actually, when we incorporate these cognitive and personality assessments as well, suddenly, our capacity to explain the variance of these ideological world-views jumps to 30% or 40%.”
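
The 8% and 30–40% figures refer to explained variance (R²): how much of the spread in ideological attitudes a regression can account for from a given set of predictors. The Python sketch below uses synthetic data invented purely to illustrate the calculation; it is not the Cambridge dataset. By construction, adding cognitive features on top of demographics raises the explained variance substantially, mirroring the jump Zmigrod describes.

```python
# Illustration of "variance explained" (R^2) on synthetic data.
# None of this is the Cambridge dataset; it only shows how the comparison works.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000

demographics = rng.normal(size=(n, 3))  # stand-ins for age, gender, education
cognitive = rng.normal(size=(n, 5))     # stand-ins for task-derived scores

# Synthetic "ideological attitude", driven mostly by the cognitive features.
attitude = (0.35 * demographics[:, 0]
            + cognitive @ np.array([0.3, -0.3, 0.25, 0.25, -0.3])
            + rng.normal(size=n))

r2_demo = LinearRegression().fit(demographics, attitude).score(demographics, attitude)
both = np.hstack([demographics, cognitive])
r2_both = LinearRegression().fit(both, attitude).score(both, attitude)

print(f"Variance explained by demographics alone:       {r2_demo:.0%}")
print(f"Variance explained by demographics + cognition: {r2_both:.0%}")
```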

Developing Algorithms That Might One Day Be Used Against You (Gizmodo)

gizmodo.com

Ryan F. Mandelbaum, Jan 24, 2021


Brian Nord is an astrophysicist and machine learning researcher. Photo: Mark Lopez/Argonne National Laboratory

Machine learning algorithms serve us the news we read, the ads we see, and in some cases even drive our cars. But there’s an insidious layer to these algorithms: They rely on data collected by and about humans, and they spit our worst biases right back out at us. For example, job candidate screening algorithms may automatically reject names that sound like they belong to nonwhite people, while facial recognition software is often much worse at recognizing women or nonwhite faces than it is at recognizing white male faces. An increasing number of scientists and institutions are waking up to these issues, and speaking out about the potential for AI to cause harm.

Brian Nord is one such researcher weighing his own work against the potential to cause harm with AI algorithms. Nord is a cosmologist at Fermilab and the University of Chicago, where he uses artificial intelligence to study the cosmos, and he’s been researching a concept for a “self-driving telescope” that can write and test hypotheses with the help of a machine learning algorithm. At the same time, he’s struggling with the idea that the algorithms he’s writing may one day be biased against him—and even used against him—and is working to build a coalition of physicists and computer scientists to fight for more oversight in AI algorithm development.

This interview has been edited and condensed for clarity.

Gizmodo: How did you become a physicist interested in AI and its pitfalls?

Brian Nord: My Ph.D. is in cosmology, and when I moved to Fermilab in 2012, I moved into the subfield of strong gravitational lensing. [Editor’s note: Gravitational lenses are places in the night sky where light from distant objects has been bent by the gravitational field of heavy objects in the foreground, making the background objects appear warped and larger.] I spent a few years doing strong lensing science in the traditional way, where we would visually search through terabytes of images, through thousands of candidates of these strong gravitational lenses, because they’re so weird, and no one had figured out a more conventional algorithm to identify them. Around 2015, I got kind of sad at the prospect of only finding these things with my eyes, so I started looking around and found deep learning.

Here we are a few years later—myself and a few other people popularized this idea of using deep learning—and now it’s the standard way to find these objects. People are unlikely to go back to using methods that aren’t deep learning to do galaxy recognition. We got to this point where we saw that deep learning is the thing, and really quickly saw the potential impact of it across astronomy and the sciences. It’s hitting every science now. That is a testament to the promise and peril of this technology, with such a relatively simple tool. Once you have the pieces put together right, you can do a lot of different things easily, without necessarily thinking through the implications.

Gizmodo: So what is deep learning? Why is it good and why is it bad?

BN: Traditional mathematical models (like the F=ma of Newton’s laws) are built by humans to describe patterns in data: We use our current understanding of nature, also known as intuition, to choose the pieces, the shape of these models. This means that they are often limited by what we know or can imagine about a dataset. These models are also typically smaller and are less generally applicable for many problems.

On the other hand, artificial intelligence models can be very large, with many, many degrees of freedom, so they can be made very general and able to describe lots of different data sets. Also, very importantly, they are primarily sculpted by the data that they are exposed to—AI models are shaped by the data with which they are trained. Humans decide what goes into the training set, which is then limited again by what we know or can imagine about that data. It’s not a big jump to see that if you don’t have the right training data, you can fall off the cliff really quickly.

The promise and peril are highly related. In the case of AI, the promise is in the ability to describe data that humans don’t yet know how to describe with our ‘intuitive’ models. But, perilously, the data sets used to train them incorporate our own biases. When it comes to AI recognizing galaxies, we’re risking biased measurements of the universe. When it comes to AI recognizing human faces, when our data sets are biased against Black and Brown faces for example, we risk discrimination that prevents people from using services, that intensifies surveillance apparatus, that jeopardizes human freedoms. It’s critical that we weigh and address these consequences before we imperil people’s lives with our research.
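
Nord’s point about falling off the cliff with the wrong training data can be shown with a deliberately extreme toy model. Everything below is synthetic and invented for illustration: a classifier trained almost entirely on one group learns that group’s pattern and then performs badly on an underrepresented group whose pattern differs.

```python
# Deliberately extreme toy example of training-set bias.
# All data is synthetic; it only illustrates the mechanism described above.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)


def make_group(n, flipped):
    """One feature; the label is 1 when the feature is positive,
    except in the 'flipped' group, where the relationship is inverted."""
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] > 0).astype(int)
    return x, (1 - y) if flipped else y


# Training set: 95% group A, 5% group B (whose pattern is inverted).
xa, ya = make_group(950, flipped=False)
xb, yb = make_group(50, flipped=True)
X_train = np.vstack([xa, xb])
y_train = np.concatenate([ya, yb])

clf = LogisticRegression().fit(X_train, y_train)

# The model learns group A's rule, so it does well on A and badly on B.
for name, flipped in [("group A", False), ("group B", True)]:
    x_test, y_test = make_group(1000, flipped)
    print(name, "accuracy:", round(clf.score(x_test, y_test), 2))
```

In real systems the mismatch is subtler than an inverted label rule, but the mechanism is the one Nord describes: the model is sculpted by whatever data it is shown.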

Gizmodo: When did the light bulb go off in your head that AI could be harmful?

BN: I gotta say that it was with the Machine Bias article from ProPublica in 2016, where they discuss recidivism and sentencing procedure in courts. At the time of that article, there was a closed-source algorithm used to make recommendations for sentencing, and judges were allowed to use it. There was no public oversight of this algorithm, which ProPublica found was biased against Black people; people could use algorithms like this willy nilly without accountability. I realized that as a Black man, I had spent the last few years getting excited about neural networks, then saw it quite clearly that these applications that could harm me were already out there, already being used, and already becoming embedded in our social structure through the criminal justice system. Then I started paying attention more and more. I realized countries across the world were using surveillance technology, incorporating machine learning algorithms, for widespread oppressive uses.

Gizmodo: How did you react? What did you do?

BN: I didn’t want to reinvent the wheel; I wanted to build a coalition. I started looking into groups like Fairness, Accountability and Transparency in Machine Learning, plus Black in AI, which is focused on building communities of Black researchers in the AI field but also has a unique awareness of the problem because we are the people who are affected. I started paying attention to the news and saw that Meredith Whittaker had started a think tank to combat these things, and Joy Buolamwini had helped found the Algorithmic Justice League. I brushed up on what computer scientists were doing and started to look at what physicists were doing, because that’s my principal community.

It became clear to folks like me and Savannah Thais that physicists needed to realize that they have a stake in this game. We get government funding, and we tend to take a fundamental approach to research. If we bring that approach to AI, then we have the potential to affect the foundations of how these algorithms work and impact a broader set of applications. I asked myself and my colleagues what our responsibility in developing these algorithms was and in having some say in how they’re being used down the line.

Gizmodo: How is it going so far?

BN: Currently, we’re going to write a white paper for SNOWMASS, this high-energy physics event. The SNOWMASS process determines the vision that guides the community for about a decade. I started to identify individuals to work with, fellow physicists, and experts who care about the issues, and develop a set of arguments for why physicists from institutions, individuals, and funding agencies should care deeply about these algorithms they’re building and implementing so quickly. It’s a piece that’s asking people to think about how much they are considering the ethical implications of what they’re doing.

We’ve already held a workshop at the University of Chicago where we’ve begun discussing these issues, and at Fermilab we’ve had some initial discussions. But we don’t yet have the critical mass across the field to develop policy. We can’t do it ourselves as physicists; we don’t have backgrounds in social science or technology studies. The right way to do this is to bring physicists together from Fermilab and other institutions with social scientists and ethicists and science and technology studies folks and professionals, and build something from there. The key is going to be through partnership with these other disciplines.

Gizmodo: Why haven’t we reached that critical mass yet?

BN: I think we need to show people, as Angela Davis has said, that our struggle is also their struggle. That’s why I’m talking about coalition building. The thing that affects us also affects them. One way to do this is to clearly lay out the potential harm beyond just race and ethnicity. Recently, there was this discussion of a paper that used neural networks to try and speed up the selection of candidates for Ph.D programs. They trained the algorithm on historical data. So let me be clear, they said here’s a neural network, here’s data on applicants who were denied and accepted to universities. Those applicants were chosen by faculty and people with biases. It should be obvious to anyone developing that algorithm that you’re going to bake in the biases in that context. I hope people will see these things as problems and help build our coalition.

Gizmodo: What is your vision for a future of ethical AI?

BN: What if there were an agency or agencies for algorithmic accountability? I could see these existing at the local level, the national level, and the institutional level. We can’t predict all of the future uses of technology, but we need to be asking questions at the beginning of the processes, not as an afterthought. An agency would help ask these questions and still allow the science to get done, but without endangering people’s lives. Alongside agencies, we need policies at various levels that make a clear decision about how safe the algorithms have to be before they are used on humans or other living things. If I had my druthers, these agencies and policies would be built by an incredibly diverse group of people. We’ve seen instances where a homogeneous group develops an app or technology and didn’t see the things that another group who’s not there would have seen. We need people across the spectrum of experience to participate in designing policies for ethical AI.

Gizmodo: What are your biggest fears about all of this?

BN: My biggest fear is that people who already have access to technology resources will continue to use them to subjugate people who are already oppressed; Pratyusha Kalluri has also advanced this idea of power dynamics. That’s what we’re seeing across the globe. Sure, there are cities that are trying to ban facial recognition, but unless we have a broader coalition, unless we have more cities and institutions willing to take on this thing directly, we’re not going to be able to keep this tool from exacerbating the white supremacy, racism, and misogyny that already exist inside structures today. If we don’t push policy that puts the lives of marginalized people first, then they’re going to continue being oppressed, and it’s going to accelerate.

Gizmodo: How has thinking about AI ethics affected your own research?

BN: I have to question whether I want to do AI work and how I’m going to do it; whether or not it’s the right thing to do to build a certain algorithm. That’s something I have to keep asking myself… Before, it was like, how fast can I discover new things and build technology that can help the world learn something? Now there’s a significant piece of nuance to that. Even the best things for humanity could be used in some of the worst ways. It’s a fundamental rethinking of the order of operations when it comes to my research.

I don’t think it’s weird to think about safety first. We have OSHA and safety groups at institutions who write down lists of things you have to check off before you’re allowed to take out a ladder, for example. Why are we not doing the same thing in AI? A part of the answer is obvious: Not all of us are people who experience the negative effects of these algorithms. But as one of the few Black people at the institutions I work in, I’m aware of it, I’m worried about it, and the scientific community needs to appreciate that my safety matters too, and that my safety concerns don’t end when I walk out of work.

Gizmodo: Anything else?

BN: I’d like to re-emphasize that when you look at some of the research that has come out, like vetting candidates for graduate school, or when you look at the biases of the algorithms used in criminal justice, these are problems being repeated over and over again, with the same biases. It doesn’t take a lot of investigation to see that bias enters these algorithms very quickly. The people developing them should really know better. Maybe there needs to be more educational requirements for algorithm developers to think about these issues before they have the opportunity to unleash them on the world.

This conversation needs to be raised to the level where individuals and institutions consider these issues a priority. Once you’re there, you need people to see that this is an opportunity for leadership. If we can get a grassroots community to help an institution to take the lead on this, it incentivizes a lot of people to start to take action.

And finally, people who have expertise in these areas need to be allowed to speak their minds. We can’t allow our institutions to quiet us so we can’t talk about the issues we’re bringing up. The fact that I have experience as a Black man doing science in America, and the fact that I do AI—that should be appreciated by institutions. It gives them an opportunity to have a unique perspective and take a unique leadership position. I would be worried if individuals felt like they couldn’t speak their mind. If we can’t get these issues out into the sunlight, how will we be able to build out of the darkness?

Ryan F. Mandelbaum – Former Gizmodo physics writer and founder of Birdmodo, now a science communicator specializing in quantum computing and birds

Some Brazilians long considered themselves White. Now many identify as Black as fight for equity inspires racial redefinition. (Washington Post)

washingtonpost.com

Terrence McCoy and Heloísa Traiano, November 15, 2020 at 5:23 p.m. GMT-3


RIO DE JANEIRO — For most of his 57 years, to the extent that he thought about his race, José Antônio Gomes used the language he was raised with. He was “pardo” — biracial — which was how his parents identified themselves. Or maybe “moreno,” as people back in his hometown called him. Perhaps “mestiço,” a blend of ethnicities.

It wasn’t until this year, when protests for racial justice erupted across the United States after George Floyd’s killing in police custody, that Gomes’s own uncertainty settled. Watching television, he saw himself in the thousands of people of color protesting amid the racially diverse crowds. He saw himself in Floyd.

Gomes realized he wasn’t mixed. He was Black.

So in September, when he announced his candidacy for city council in the southeastern city of Turmalina, Gomes officially identified himself that way. “In reality, I’ve always been Black,” he said. “But I didn’t think I was Black. But now we have more courage to see ourselves that way.”

Brazil is home to more people of African heritage than any country outside Africa. But it is rarely identified as a Black nation, or as closely identifying with any race, really. It has seen itself as simply Brazilian — a tapestry of European, African and Indigenous backgrounds that has defied the more rigid racial categories used elsewhere. Some were darker, others lighter. But almost everyone was a mix.

Now, however, as affirmative action policies diversify Brazilian institutions and the struggle for racial equality in the United States inspires a similar movement here, a growing number of people are redefining themselves. Brazilians who long considered themselves to be White are reexamining their family histories and concluding that they’re pardo. Others who thought of themselves as pardo now say they’re Black.

In Brazil, which still carries the imprint of colonization and slavery, where class and privilege are strongly associated with race, the racial reconfiguration has been striking. Over the past decade, the percentage of Brazilians who consider themselves White has dropped from 48 percent to 43 percent, according to the Brazilian Institute of Geography and Statistics, while the number of people who identify as Black or mixed has risen from 51 percent to 56 percent.

“We are clearly seeing more Black people publicly declare themselves as Black, as they would in other countries,” said Kleber Antonio de Oliveira Amancio, a social historian at the Federal University of Recôncavo da Bahia. “Racial change is much more fluid here than it is in the United States.”

One of the clearest illustrations of that fluidity — and the growing movement to identify as Black — was the registration process for the 5,500 or so municipal elections held here Sunday. Candidates were required to identify as White, Black, mixed, Indigenous or Asian. And that routine bureaucratic step yielded fairly stunning results.

More than a quarter of the 168,000 candidates who also ran in 2016 have changed their race, according to a Washington Post analysis of election registration data. Nearly 17,000 who said they were White in 2016 are now mixed. Around 6,000 who said they were mixed are now Black. And more than 14,000 who said they were mixed now identify as White.

For some candidates, the jump was even further. Nearly 900 went from White to Black, and nearly 600 went from Black to White.
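As a rough illustration of the kind of tallying behind a comparison like this, the sketch below shows one way race-change transitions could be counted across two years of candidate registration records; it is only a sketch over assumed inputs (the tiny inline table, the candidate_id key and the race_2016/race_2020 columns are hypothetical), not the Post's actual methodology.

    # Hypothetical sketch: counting changes in declared race between two
    # registration years. The tiny inline data below stands in for the real
    # registration extracts; column names are assumptions, not the Post's data.
    import pandas as pd

    c2016 = pd.DataFrame({
        "candidate_id": [1, 2, 3, 4],
        "race_2016": ["White", "mixed", "mixed", "White"],
    })
    c2020 = pd.DataFrame({
        "candidate_id": [1, 2, 3, 4],
        "race_2020": ["White", "Black", "White", "mixed"],
    })

    # Keep only candidates who ran in both years, matched on a stable identifier.
    both = c2016.merge(c2020, on="candidate_id")

    # Tally every transition between declared racial categories.
    transitions = (
        both.groupby(["race_2016", "race_2020"])
            .size()
            .reset_index(name="count")
            .sort_values("count", ascending=False)
    )

    changed = both[both["race_2016"] != both["race_2020"]]
    print(f"{len(changed) / len(both):.0%} of repeat candidates changed their declared race")
    print(transitions)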

How to explain it?

Some say they’re simply correcting bureaucratic error: A party official charged with registering candidates saw their picture and recorded their race inaccurately. One woman joked that she’d gotten a lot less sun this year while quarantined and decided to declare herself White. Another candidate told the Brazilian newspaper O Globo that he was Black but was a “fan” of the Indigenous, and so has now joined them. Some believed candidates were taking advantage of a recent court decision that requires parties to dispense campaign funds evenly among racial categories.

And others said they didn’t see what all of the fuss was about.

“Race couldn’t exist,” reasoned Carlos Lacerda, a city council candidate in the southeastern city of Araçatuba, who described himself as White in 2016 and Black this year. “It’s nationalism, and that’s it. Race is something I’d never speak about.”

“We have way more important things to talk about than my race,” said Ribamar Antônio da Silva, a city council member seeking reelection in the southeastern city of Osasco.

But others looked at the racial registration as a chance to fulfill a long-denied identity.

Cristovam Andrade, 36, a city council candidate in the northeastern city of São Felipe, was raised on a farm in rural Bahia, where the influence of West Africa never felt far away. With limited access to information outside his community — let alone Brazil — he grew up believing he was White. That was how his parents had always described him.

“I didn’t have any idea about race in North America or in Europe,” he said. “But I knew a lot of people who were darker than me, so I saw myself as White.”

As he began to see himself as Black, Brazil did, too. For much of its history, Brazil’s intellectual elite described Latin America’s largest country as a “racial democracy,” saying its history of intermixing had spared it the racism that plagues other countries. Around 5 million enslaved Africans were shipped to Brazil — more than 10 times the number that ended up in North America — and the country was the last in the Western Hemisphere to abolish slavery, in 1888. Its history since has been one of profound racial inequality: White people earn nearly twice as much as Black people on average, and more than 75 percent of the 5,800 people killed by police last year were Black.

But Brazil never adopted prohibitions on intermarrying or draconian racial distinctions. Race became malleable.

The Brazilian soccer player Neymar famously said he wasn’t Black. Former president Fernando Henrique Cardoso famously said he was, at least in part. The 20th-century Brazilian sociologist Gilberto de Mello Freyre wrote in the 1930s that all Brazilians — “even the light-skinned fair-haired one” — carried Indigenous or African lineage.

“The self-declaration as Black is a very complex question in Brazilian society,” said Wlamyra Albuquerque, a historian at the Federal University of Bahia. “And one of the reasons for this is that the myth of a racial democracy is still in political culture in Brazil. The notion that we’re all mixed, and because of this, racism couldn’t exist in the country, is still dominant.”

Given the choice, many Afro-Brazilians, historians and sociologists argue, have historically chosen not to identify as Black — whether consciously or not — to distance themselves from the enduring legacy of slavery and societal inequality. Wealth and privilege allowed some to separate even further from their skin color.

“In Brazilian schools, we didn’t learn who was an African person, who was an Indigenous person,” said Bartolina Ramalho Catanante, a historian at the Federal University of Mato Grosso do Sul. “We only learned who was a European person and how they came here. To be Black wasn’t valued.”

But over the past two decades, as diversity efforts elevated previously marginalized voices into newscasts, telenovelas and politics, people such as Andrade have begun to think of themselves differently. To Andrade’s mother, he was White. But he wasn’t so sure. His late father had been Black. His grandparents had been Black. Just because his skin color was lighter, did that make his African roots, and his family’s experience of slavery, any less a part of his history?

In 2016, when Andrade ran for office, an official with the leftist Workers’ Party asked him what race he would like to declare. He had a decision to make.

“I am going to mark Black as a way to recognize my ancestry and origin,” he thought. “Outside of Brazil, we would never be considered White. We live in a bubble in this country.”

But this year, when he ran again, no one asked him which race he preferred. Someone saw his picture and made the decision for him. He was put down as White. For Andrade, it felt like an erasure.

“It’s easy for some to say they’re Black or mixed or White, but for me it’s not easy,” he said. “And I’m not going to be someone who isn’t White all over the world but is White only in Brazil. If I’m not White elsewhere in the world, I’m not White.”

He’s Black. And if he seeks public office again in 2024, he said, he’ll make sure that’s how he will be known.

Indigenous knowledge still undervalued – study (EurekAlert!)

News Release 3-Sep-2020

Respondents describe a power imbalance in environmental decision-making

Anglia Ruskin University

New research has found that Indigenous knowledge is regularly underutilised and misunderstood when making important environmental decisions.

Published in a special edition of the journal People and Nature, the study investigates how to improve collaborations between Indigenous knowledge holders and scientists, and concludes that greater equity is necessary to better inform decision-making and advance common environmental goals.

The research, led by Dr Helen Wheeler of Anglia Ruskin University (ARU), involved participants from the Arctic regions of Norway, Sweden, Greenland, Russia, Canada, and the United States.

Indigenous peoples inhabit 25% of the land surface and have strong links to their environment, meaning they can provide unique insights into natural systems. However, the greater resources available to scientists often create a power imbalance when environmental decisions are made.

The study’s Indigenous participants identified numerous problems, including that Indigenous knowledge is often perceived as less valuable than scientific knowledge and added as anecdotes to scientific studies.

They also felt that Indigenous knowledge was being forced into frameworks that did not match Indigenous people’s understanding of the world and was often misinterpreted through scientific validation. One participant stressed the importance of Indigenous knowledge being reviewed by Indigenous knowledge holders, rather than by scientists.

Another concern was that while funding for Arctic science was increasing, the same was not happening for research rooted in Indigenous knowledge or conducted by Indigenous peoples.

Gunn-Britt Retter, Head of the Arctic and Environmental Unit of the Saami Council, said: “Although funding for Arctic science is increasing, we are not experiencing this same trend for Indigenous knowledge research.

“Sometimes Indigenous organisations feel pressured to agree to requests for collaboration with scientists so that we can have some influence in decision-making, even when these collaborations feel tokenistic and do not meet the needs of our communities. This is because there is a lack of funding for Indigenous-led research.”

Victoria Buschman, an Inupiaq Inuit wildlife and conservation biologist at the University of Washington, said: “Much of the research community has not made adequate space for Indigenous knowledge and continues to undermine its potential for informing decision-making. We must let go of the narrative that working with Indigenous knowledge is too challenging.”

The study concludes that values, laws, institutions, funding and mechanisms of support that create equitable power-relations between collaborators are necessary for successful relationships between scientists and Indigenous groups.

Lead author Dr Helen Wheeler, Lecturer in Zoology at Anglia Ruskin University (ARU), said: “The aim of this study was to understand how to work better with Indigenous knowledge. For those who do research on Indigenous people’s land, such as myself, I think this is an important question to ask.

“Our study suggests there are still misconceptions about Indigenous knowledge, particularly around the idea that it is limited in scope or needs verifying by science to be useful. Building capacity for research within Indigenous institutions is also a high priority, which will ensure Indigenous groups have greater power when it comes to informed decision-making.

“Indigenous knowledge is increasingly used in decision-making at many levels from developing international policy on biodiversity to local decisions about how to manage wildlife. However, as scientists and decision-makers use knowledge, they must do so in a way that reflects the needs of Indigenous knowledge holders. This should lead to better decisions and more equitable and productive partnerships.”

Related Journal Article

http://dx.doi.org/10.1002/pan3.10131

Higher-class individuals are worse at reading emotions and assuming the perspectives of others, study finds (PsyPost)

Eric W. Dolan – September 4, 2020

New research provides evidence that people from higher social classes are worse at understanding the minds of others compared to those from lower social classes. The study has been published in the Personality and Social Psychology Bulletin.

“My co-author and I set out to examine a question that we deemed important given the trend of rising economic inequality in American society today: How does access to resources (e.g., money, education) influence the way we process information about other human beings?” said study author Pia Dietze, a postdoctoral scholar at the University of California, Irvine.

“We tried to answer this question by examining two essential components within the human repertoire to understand each other’s minds: the way in which we read emotional states from other people’s faces and how inclined we are to take the visual perspective of another person.”

For their study, the researchers recruited 300 U.S. individuals from Amazon’s Mechanical Turk platform and another 452 U.S. individuals from the Prolific Academic platform. The participants completed a test of cognitive empathy called the Reading the Mind in the Eyes Test, which assesses the ability to recognize or infer someone else’s state of mind from looking only at their eyes and surrounding areas.

The researchers also had 138 undergraduates at New York University complete a test of visual perspective-taking known as the Director Task, in which they were required to move objects on a computer screen based on the perspective of a virtual avatar.

The researchers found that lower-class people tended to perform better on the Reading the Mind in the Eyes Test and Director Task than their higher-class counterparts.

“We find that individuals from lower social class backgrounds are better at identifying emotions from other people’s faces and are more likely to spontaneously take another person’s visual perspective. This is in line with a large body of work documenting a tendency for lower-class people to be more socially attuned to others. In addition, our research shows that this can happen at a very basic level; within seconds or milliseconds of encountering a new face or person,” Dietze told PsyPost.

But like all research, the new study includes some limitations.

“This research is based on correlational data. As such, we need to see this research as part of a larger body of work to answer the question of causality. However, the insights gained from our study allow us to speculate about how and why we think these tendencies develop,” Dietze explained.

“We theorize that social class can influence social information processing (i.e., the processing of information about other people) at such a basic level because social classes can be conceptualized as a form of culture. As such, social class cultures (like other forms of culture, for example, national cultures) have a pervasive psychological influence that impacts many aspects of life, at times even at spontaneous levels.”

The study, “Social Class Predicts Emotion Perception and Perspective-Taking Performance in Adults,” was authored by Pia Dietze and Eric D. Knowles.

Students produce the biographical dictionary Excluídos da História (Agência Brasil)

The Brazilian history olympiad was created in 2009 at Unicamp

Published on 15/08/2020 – 18:49 By Akemi Nitahara – Agência Brasil reporter – Rio de Janeiro


From the chief Tibiriçá, born before 1500 and baptized by the Jesuits as Martim Afonso de Sousa, who played an important role in the founding of the city of São Paulo, to Jackson Viana de Paula dos Santos, a young writer born in Rio Branco (AC) in 2000, founder of the Academia Juvenil de Letras and representative of the northern region at the Brazil Conference at Harvard.

These are the two ends of a timeline that seeks to tell the story of important Brazilian figures who are missing from the official books, in a total of 2,251 entries, now published as the biographical dictionary Excluídos da História (Excluded from History).

The work was done by the 6,753 students who took part in the fifth phase of last year’s Olimpíada Nacional em História do Brasil (ONHB), between June 3 and 8, 2019, divided into teams of three participants each.

The olympiad was created in 2009 by the Universidade Estadual de Campinas (Unicamp) and currently brings together more than 70,000 primary and secondary school students in a marathon in pursuit of knowledge about Brazilian history. The competition has five online phases, each lasting one week, plus a final exam for the finalists from the highest-scoring teams to decide the medalists.

It began with samba

The Olimpíada Nacional em História do Brasil (ONHB) is a project that began in 2009 within the Museu Exploratório de Ciências of the Universidade Estadual de Campinas (Unicamp) and continues to be developed by faculty members and graduate students.

The biographical dictionary Excluídos da História was produced by the students who took part in the fifth phase of the Olimpíada Nacional em História do Brasil (ONHB), an initiative created in 2009 by Unicamp. Photo: Divulgação/Unicamp/Direitos Reservados

The coordinator of the Olimpíada Nacional em História do Brasil, Cristina Meneguello, explains that the dictionary’s story began with the samba-enredo of Estação Primeira de Mangueira, champion of last year’s Rio carnival, which brought to the Sapucaí the theme História para Ninar Gente Grande.

The verses opened the way for the “heroes of the barracões” with “verses the book erased,” to tell “the history that history does not tell” and to show “a country that is not in the portrait” and the “flip side of the same place.” The lyrics won over the public even before the official parade, being played in street blocos and samba circles around the city.

According to Cristina, the discussion about those excluded from history was intense among historians after last year’s carnival, and the theme ran through the whole competition, which began on May 6.

“Right in the first phase of the exam we asked a question using Mangueira’s own samba-enredo. We use a variety of documents: song lyrics, advertisements, more classic historical documents, images and so on. We had already decided that this would be the theme of their task for the fifth phase, and we kept posing questions so they would come to understand the theme from the first phase on,” she recalls.

According to the professor, there was originally no intention of publishing the material produced by the students. However, given the richness and diversity of the research presented, the coordinators decided to share it with teachers, students and anyone interested, making the content available online.

“We already knew the assignment would turn out very well, because the knowledge they produce from the school setting is always very surprising. But there was a series of factors. The first was that the work done by the participants really did turn out very well. Then there was the template that was created, with those four pages laid out like a textbook: the design was very good and won the silver medal at the Brasil Design Award last year, for educational system design.”

Unknown figures

Students were free to choose their subject, provided the person was important to the history of Brazil and not remembered in textbooks. Cristina says the result surprised the organizers, with entries on people of local and regional importance, many of them still alive, showing that the participants understood that history is continuously built by a wide range of figures, including those not singled out by historians.

“It exceeded our expectations. We noticed that these unknown figures are Black figures, women important to the history of Brazil, Black women, local leaders. Many wrote entries on people who are still alive. There are Indigenous leaders, people persecuted under the military dictatorship, teachers who were censored under the military dictatorship. The entries range from figures of colonial Brazil to people who are alive today.”

Some figures were remembered by more than one group, so there are repeated entries in the dictionary, but they bring different approaches to the same person.

The group of student Juliana Kreitlon Pereira was one of two that wrote about Mercedes Baptista, the first Black ballerina of the Theatro Municipal do Rio de Janeiro.

The subject was suggested by Juliana, who was in her final year at the Escola Estadual de Dança Maria Olenewa and had learned Mercedes Baptista’s story from the dance history teacher Paulo Melgaço just weeks before the olympiad challenge.

“Mercedes always made a point of bringing Brazilian dance to the stage. That was one of the things that struck me most. She worked with Katherine Dunham, a movement researcher and choreographer from the United States. Mercedes saw how much we needed that kind of study in Brazil too. She drew on various cultural movements, things that were already happening in Brazil but had no spotlight. And she always wanted to bring real attention to that.”

Mercedes died in 2014; a statue of her was unveiled in 2016 at Largo da Prainha, on the Pequena África circuit in Rio de Janeiro’s port district.

Juliana says she is very happy that the dictionary was published online. “I didn’t know it would be published. We worked so hard; I read her whole book, not least because it was very interesting. I thought, well, nothing is going to come of this. When it was published I was very happy, because more people would get to know this ballerina.”

The team of student Lucas do Herval Costa Teles de Menezes, in turn, decided to write about a figure who represented Rio de Janeiro and was present in everyday life yet went unnoticed, someone who had not been completely erased from history. Their choice has a municipal holiday in his honor in Niterói and gives his name to the station for the ferries arriving from Rio de Janeiro and to the square in front of it, where his statue stands: the Temiminó Indigenous leader Araribóia.

“I found the dynamic this figure had with the foreign peoples interesting, in this case the Portuguese and the French. Because usually, when we learn about the relationship between Indigenous peoples and the invading European peoples, we don’t think much about identifying those Indigenous peoples; we never learn the individual story of an Indigenous figure. I thought he had a very interesting individual story: he was a leadership figure, he was involved in more than one political narrative of that time, and that caught my attention.”

Lucas’s group was the only one to remember Araribóia, known as the founder of Niterói and a key figure in the dispute between the Portuguese and the French that led to the latter’s expulsion.

The olympiad

Registration for the 12th edition of the Olimpíada Nacional em História do Brasil is open until September 7. Teams of three students from the 8th and 9th years of primary school and from any year of secondary school, at public or private schools and guided by a teacher, may enter.

Unlike most science olympiads, the ONHB encourages the pursuit of knowledge about history rather than assessing, through a test, what students already know.

“Taking part in olympiads is a learning system. It is very demanding, and it does not want to gauge whether students already know the material; it gives them time to study, to ask their teacher, to ask one another. There may be a question about something a student has never heard of and never saw at school. But alongside it there is a text: they read it, inform themselves, search the internet and come back to answer. In that process they have learned history. I am not very interested in whether they already knew it, but in whether they learned it at that moment; that is our pedagogical goal,” says Cristina Meneguello.

The first edition of the ONHB, in 2009, had 15,000 participants. Last year the number reached 73,000. Because of the covid-19 pandemic, this year’s competition will be held online, without the in-person final exam for finalists that is normally administered at Unicamp.

The phases consist of multiple-choice questions and a task that is graded by other groups. Four hundred finalist teams will be chosen, twice the usual number, and 20 gold, 30 silver and 40 bronze medals will be awarded and sent to the schools.

Listen on Radioagência Nacional

Editing: Lílian Beraldo

Elio Gaspari: A single queue for Covid-19 is on the table (Folha de S.Paulo)

The barons of private medicine have kept a viral silence

Folha de S.Paulo

May 3, 2020

The public health physician Gonzalo Vecina Neto has defended creating a single queue for the care of Covid-19 patients in public and private hospitals. In his words: “It hurts, but it has to be done. Because otherwise poor Brazilians will die and rich Brazilians will be saved. That makes no sense.”

A former director of the Agência de Vigilância Sanitária and former superintendent of the Sírio-Libanês hospital, Vecina has the authority to say what he said. The single queue is not his idea alone. It was proposed in early April by study groups at the University of São Paulo and the Federal University of Rio de Janeiro.

On Wednesday (the 29th), the president of the Conselho Nacional de Saúde, Fernando Zasso Pigatto, sent Minister Nelson Teich and the state health secretaries his Recommendation 26, urging them to take charge of “the allocation of existing care resources, including privately owned hospital beds, requisitioning their use when necessary and regulating access according to the health priorities of each case.”

Why? Because the private network has 15,898 ICU beds, half of them sitting idle, while the public network has 14,876 and is one step from collapse.

Former minister Luiz Henrique Mandetta (a former director of a Unimed) never touched the subject. Nor did his successor, Nelson Teich (whose appointment to the ministry was lobbied for by agents of the barony). After the council’s recommendation, four guilds of private medicine broke their silence, condemned the idea and presented four alternative proposals. One of them, testing the population, is laughable, and two are delaying tactics (building field hospitals and publishing tenders to contract beds and services). The fourth is actually a good idea: revitalizing public beds. It could have been offered in March.

Since the start of the epidemic, the barons of private medicine have kept a viral silence. They lived in the enchanted world of designer healthcare, signing renowned doctors as if they were soccer players, opening hospitals with starred hotel-style accommodations and serving the clients of billionaire health plans. Then Covid-19 arrived, and they found themselves in a country with 40 million invisible workers and 12 million unemployed.

Had the virus been confronted with New Zealand’s energy, the silence would have worked. Since that was impossible, they woke up in a Brazil with 90,000 infected and more than 6,000 dead.

The Agência Nacional de Saúde offered health plans access to the resources of a fund if they agreed to keep serving clients in arrears (through July). Not a chance. Of the 780 plans, only 9 signed on.

The viral silence turned into a cough when the Conselho Nacional de Saúde issued its recommendation. The single queue is a medicine with toxic side effects. If the bureaucracy is put in charge of organizing it, it risks being ready only in 2021. Moreover, it is debatable whether someone who paid dearly for access to a hospital should stand in line behind someone who did not pay. At the other end of that debate stands Vecina’s sentence: “Poor Brazilians will die and rich Brazilians will be saved.” The epidemic’s numbers show that the barony needs to come out of its den.

Covid-19 has thrown the Brazilian health system into the trap of that ship whose name must not be spoken (the one with Leonardo DiCaprio starring in the film). The ocean liner carried 2,200 passengers, but its lifeboats could hold only 1,200 people. Of the men in first class, 34% were saved.

In third class, only 12%.

How Spanish flu helped create Sweden’s modern welfare state (The Guardian)

The 1918 pandemic ravaged the remote city of Östersund. But its legacy is a city – and country – well-equipped to deal with 21st century challenges

Brian Melican

Wed 29 Aug 2018 07.15 BST Last modified on Mon 3 Feb 2020 12.47 GMT

Spanish flu reached Östersund a century ago. Photograph: Alamy

On 15 September 1918, a 12-year-old boy named Karl Karlsson who lived just outside Östersund, Sweden, wrote a short diary entry: “Two who died of Spanish flu buried today. A few snowflakes in the air.”

For all its brevity and matter-of-fact tone, Karlsson’s journal makes grim reading. It is 100 years since a particularly virulent strain of avian flu, known as the Spanish flu despite probably originating in America, ravaged the globe, killing somewhere between 50 million and 100 million people. While its effects were felt everywhere, it struck particularly hard in Östersund, earning the city the nickname “capital of the Spanish flu”.

“Looking back through contemporaneous accounts was quite creepy,” says Jim Hedlund at the city’s state archive. “As many people died in two months as generally died in a whole year. I even found out that three of my forbears were buried on the same day.”

There were three main reasons why the flu hit this remote city so hard: Östersund had speedy railway connections, several army regiments stationed in close quarters and a malnourished population living in cramped accommodation. As neutral Sweden kept its armed forces on high alert between 1914 and 1918, the garrison town’s population swelled from 9,000 to 13,000.

By 1917, when navvies poured in and construction started on an inland railway to the north, widespread food shortages had led to violent workers’ demonstrations and a near mutiny among the army units.

The city became a hotbed of political activism. Its small size put the unequal distribution of wealth in early industrial society under the microscope. While working-class families crowded into insalubrious accommodation, wealthy tourists from other parts of Sweden and further afield came for the fresh mountain air and restorative waters – as well as the excellent fishing and elk hunting (passionate angler Winston Churchill was a regular visitor).


“Many of the demonstrators’ concerns seem strikingly modern,” says Hedlund, pointing to a copy of a political poster that reads: “Tourists out of our buildings in times of crisis. Butter, milk and potatoes for workers!”

It wasn’t just the urban proletariat demanding better accommodation. At Sweden’s first ever national convention of the indigenous Sami peoples held in Östersund in early 1918, delegates demanded an end to discriminatory policies that forced them to live in tents.

Social inequality in the city meant the Spanish flu hit all the harder.

As the epidemic raged in late August, when around 20 people were dying daily, the city’s bank director Carl Lignell withdrew funds from Stockholm without authorisation and requisitioned a school for use as a hospital (the city didn’t have one).

‘You can drop your kids off at kindergarten on the way to work and be out hiking or skiing by late afternoon.’ Photograph: Sergei Bobylev/TASS

“If it hadn’t been for him, Östersund might quite literally have disappeared,” says Hedlund. For a brief period, Lignell worked like a benevolent dictator, quarantining suspected cases in their homes – and revealing the squalor in which they lived.

As his hastily convened medical team moved through Östersund, they found whole families crowded into wooden shacks, just a few streets away from the proud, stone-built civic structures. In some homes, sick children lay on the floor for want of beds.

The local newspaper Östersunds-Posten asked rhetorically: “Who would have thought that in our fine city there could be such awful destitution?”

People of all political convictions and stations in life started cooperating in a city otherwise riven by the class divisions of early industrial society. Östersunds-Posten itself moved from simply reporting on the epidemic to helping to organise relief, publishing calls for money, food and clothing, and opening its offices for use as storerooms. The state had proven itself inadequate, as historian Hans Jacobsson wrote: “The catastrophic spread of the Spanish flu in 1918 was in no small part down to the authorities’ bewilderment and often clumsy reactions.”


He cites the fact that Stockholm High Command refused to halt planned military exercises for weeks, despite the fact that regimental sickbays were overflowing. “What is interesting is that, after the epidemic, the state dropped investigations against Lignell and made tentative steps towards a cooperative approach to social reform. Issues such as poor nutrition and housing were on the political agenda,” says Hedlund. Anyone trying to date the inception of Sweden’s welfare state cannot overlook the events of autumn 1918.

One hundred years on, there are few better places than Östersund to see the effects of Sweden’s much-vaunted social model. The city is once again growing rapidly, but nothing could seem further away than epidemics and political radicalism. The left of centre Social Democrats have been in power in city hall since 1994, and council leader AnnSofie Andersson has made housing a priority – new developments are spacious, well-ordered and equipped with schools and playgrounds.

“There’s nothing that shows confidence like building stuff,” she says. “In fact, our local authority building partnership should, in my view, keep a small excess of flats in hand, because without a reserve people won’t move here.”

Östersund attracts a net inflow of people from southern Sweden. “It’s partly a quality of life issue,” says Andersson. “You can drop your kids off at kindergarten in the morning on the way to work and be out hiking or skiing by late afternoon.”

The city has recovered from the relocation of the Swedish armed forces fighter jet squadron in the 1990s by playing to its strengths: sports and tourism. A university now occupies the old barracks with a special focus on sports materials and technology. The airbase has become a thriving airport, handling half a million passengers a year.

Despite the net inflow of working-age people, however, Östersund is facing a demographic challenge as baby boomers begin to retire. The shortages are being felt most acutely at the regional health authority, which occupies the Epidemisjukhus – the building hastily converted into wards during the Spanish flu by Carl Lignell. Clinical staff are proving hard to find and retain, and the region’s health service is underfunded. Some residents still suggest solving that lack of funding from central government “the Jämtland way”, like Lignell once did.

History doesn’t repeat itself identically, though. Sweden’s consensus-orientated political model now tends to defuse conflict even in proud cities with a liking for mavericks. One of Andersson’s strategies for dealing with the approaching lack of labour, for instance, is cooperating with local and national institutions to train up the young refugees the city has welcomed since 2015.

“School starts tomorrow – for the last time,” confides Karl Karlsson to his journal on 4 September 1918. “I leave in spring and it feels melancholy. I like farming, but I would still prefer to continue at school and study. But it’s impossible.” Ten days later, he notes that his family’s food stores are running low. “We’re almost out of flour and bread, the barley hasn’t dried yet, and we shan’t get any more rations, everything is being requisitioned.”

One hundred years later, a city – and a society – once unable to educate or even feed its youth is now one of the world’s wealthiest and fairest.

Coronavirus and the quebradas: 16 still-unanswered questions about the pandemic’s impact on the peripheries (Periferia em Movimento)

Published by Thiago Borges –

We need to talk about the new coronavirus, but without panic.

This Thursday (March 12), Brazil woke up with 52 people infected with the coronavirus and went to bed with 69 confirmed cases. Worldwide, there are 122,000 confirmed cases and more than 4,500 recorded deaths. The World Health Organization (WHO) has declared a pandemic, meaning the virus is no longer restricted to particular regions and has become a global public health issue.

The mortality rate of the new virus, which still has no vaccine, is considered low, around 3% of cases, and it mainly strikes the most vulnerable, such as the elderly and people with pre-existing conditions (diabetes, cancer, etc.).

With more than 50 cases in the country, the Health Ministry of Jair Bolsonaro’s government warns that transmission is expected to become geometric; that is, it stops being limited to people infected in other parts of the world and starts to happen within the country itself.

According to the Instituto Pensi of the Hospital Infantil Sabará, after reaching 50 confirmed cases the total number of infected people in Brazil may rise to 4,000 cases within 15 days and to around 30,000 after 21 days.

As a result, the virus is expected to spread rapidly in the coming weeks, and the Sistema Único de Saúde (SUS) would need 3,200 new ICU (intensive care unit) beds to meet demand; 95% of today’s 16,000 beds are already occupied.
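To make the arithmetic behind these projections concrete, here is a minimal sketch assuming simple geometric growth fitted to the figures quoted above (roughly 50 to 4,000 cases in 15 days) and to the stated 95% occupancy of 16,000 ICU beds; the implied daily growth factor is an inference for illustration, not a number from the article.

    # Geometric-growth arithmetic implied by the projection quoted above
    # (about 50 -> 4,000 confirmed cases over 15 days). Illustrative only.
    cases_now, cases_15d, days = 50, 4_000, 15
    daily_factor = (cases_15d / cases_now) ** (1 / days)   # about 1.34, i.e. ~34% growth per day

    def projected_cases(day: int) -> int:
        """Cases projected `day` days after the 50-case mark, assuming constant growth."""
        return round(cases_now * daily_factor ** day)

    print(f"implied daily growth factor: {daily_factor:.2f}")
    print(f"projected cases after 21 days: {projected_cases(21):,}")   # ~23,000, the same order as the ~30,000 cited

    # ICU slack implied by "95% of today's 16,000 beds are already occupied".
    total_icu_beds = 16_000
    free_icu_beds = round(total_icu_beds * (1 - 0.95))                 # about 800 beds currently free
    print(f"ICU beds currently free: {free_icu_beds:,}")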

That said, we who live in the urban peripheries, forest peoples and marginalized people in general need to pay attention to prevention measures (see the graphic below) but also to the side effects of this pandemic on our daily lives.

Much is said about the pandemic’s impact on the global economy. But in a country marked by social inequality, sexism, racism and LGBTphobia, with cuts to public policies and record unemployment, the coronavirus has the potential to affect not only our health but also our fragile life together in society. We need solidarity and vigilance at this moment.

That is why Periferia em Movimento poses 16 questions that still have no answers (the list is still being updated) about this new scenario:

1. Will the peripheries receive health funding in proportion to our needs?

2. Will the government adopt confinement measures or restrictions on the movement of people?

3. How do you quarantine in crowded areas such as peripheries and favelas?

4. Will officials deploy the Military Police to control the population in the peripheries?

5. If a quarantine happens, who will drive the buses, bake the daily bread and deliver the ifood orders to middle-class apartments?

6. With record unemployment and the informal market on the rise, how will people who live off odd jobs earn money?

7. If classes are suspended, who will look after the children who attend full-time daycare?

8. No classes, no school meals: will students facing food insecurity go hungry if they do not go to school?

9. Still on the suspension of classes: what is the risk of an explosion of cases of sexual violence against children and adolescents, who will be spending more time at home?

10. Does more time at home also increase the risk of women suffering violence from their partners?

11. And with more people under restricted movement, does the risk of conflicts in communities also increase?

12. How are officials assessing the possibility of an increase in every kind of violence during this pandemic?

13. How will elderly people in vulnerable situations be assisted by the government?

14. How is the pandemic likely to affect the homeless population?

15. What about prisoners, who already live amid overcrowding, torture and diseases that are under control in the outside world?

16. And how will Indigenous peoples be cared for, given that they need specific health strategies because of their lower immunity to diseases transmitted since the European invasion of the American continent?
