Tag archive: Scientific method

The rise and fall of peer review (Experimental History)

experimentalhistory.substack.com

Adam Mastroianni

Dec 13, 2022


Photo cred: my dad

For the last 60 years or so, science has been running an experiment on itself. The experimental design wasn’t great; there was no randomization and no control group. Nobody was in charge, exactly, and nobody was really taking consistent measurements. And yet it was the most massive experiment ever run, and it included every scientist on Earth.

Most of those folks didn’t even realize they were in an experiment. Many of them, including me, weren’t born when the experiment started. If we had noticed what was going on, maybe we would have demanded a basic level of scientific rigor. Maybe nobody objected because the hypothesis seemed so obviously true: science will be better off if we have someone check every paper and reject the ones that don’t pass muster. They called it “peer review.”

This was a massive change. From antiquity to modernity, scientists wrote letters and circulated monographs, and the main barriers stopping them from communicating their findings were the cost of paper, postage, or a printing press, or on rare occasions, the cost of a visit from the Catholic Church. Scientific journals appeared in the 1600s, but they operated more like magazines or newsletters, and their processes of picking articles ranged from “we print whatever we get” to “the editor asks his friend what he thinks” to “the whole society votes.” Sometimes journals couldn’t get enough papers to publish, so editors had to go around begging their friends to submit manuscripts, or fill the space themselves. Scientific publishing remained a hodgepodge for centuries.

(Only one of Einstein’s papers was ever peer-reviewed, by the way, and he was so surprised and upset that he published his paper in a different journal instead.)

That all changed after World War II. Governments poured funding into research, and they convened “peer reviewers” to ensure they weren’t wasting their money on foolish proposals. That funding turned into a deluge of papers, and journals that previously struggled to fill their pages now struggled to pick which articles to print. Reviewing papers before publication, which was “quite rare” until the 1960s, became much more common. Then it became universal.

Now pretty much every journal uses outside experts to vet papers, and papers that don’t please reviewers get rejected. You can still write to your friends about your findings, but hiring committees and grant agencies act as if the only science that exists is the stuff published in peer-reviewed journals. This is the grand experiment we’ve been running for six decades.

The results are in. It failed. 

Peer review was a huge, expensive intervention. By one estimate, scientists collectively spend 15,000 years reviewing papers every year. It can take months or years for a paper to wind its way through the review system, which is a big chunk of time when people are trying to do things like cure cancer and stop climate change. And universities fork over millions for access to peer-reviewed journals, even though much of the research is taxpayer-funded, and none of that money goes to the authors or the reviewers.

Huge interventions should have huge effects. If you drop $100 million on a school system, for instance, hopefully it will be clear in the end that you made students better off. If you show up a few years later and you’re like, “hey so how did my $100 million help this school system” and everybody’s like “uhh well we’re not sure it actually did anything and also we’re all really mad at you now,” you’d be really upset and embarrassed. Similarly, if peer review improved science, that should be pretty obvious, and we should be pretty upset and embarrassed if it didn’t.

It didn’t. In all sorts of different fields, research productivity has been flat or declining for decades, and peer review doesn’t seem to have changed that trend. New ideas are failing to displace older ones. Many peer-reviewed findings don’t replicate, and most of them may be straight-up false. When you ask scientists to rate 20th century discoveries in physics, medicine, and chemistry that won Nobel Prizes, they say the ones that came out before peer review are just as good or even better than the ones that came out afterward. In fact, you can’t even ask them to rate the Nobel Prize-winning physics discoveries from the 1990s and 2000s because there aren’t enough of them.

Of course, a lot of other stuff has changed since World War II. We did a terrible job running this experiment, so it’s all confounded. All we can say from these big trends is that we have no idea whether peer review helped, it might have hurt, it cost a ton, and the current state of the scientific literature is pretty abysmal. In this biz, we call this a total flop.

What went wrong?

Here’s a simple question: does peer review actually do the thing it’s supposed to do? Does it catch bad research and prevent it from being published?

It doesn’t. Scientists have run studies where they deliberately add errors to papers, send them out to reviewers, and simply count how many errors the reviewers catch. Reviewers are pretty awful at this. In one study, reviewers caught 30% of the major flaws; in another, they caught 25%; and in a third, they caught 29%. These were critical issues, like “the paper claims to be a randomized controlled trial but it isn’t” and “when you look at the graphs, it’s pretty clear there’s no effect” and “the authors draw conclusions that are totally unsupported by the data.” Reviewers mostly didn’t notice.

In fact, we’ve got knock-down, real-world data that peer review doesn’t work: fraudulent papers get published all the time. If reviewers were doing their job, we’d hear lots of stories like “Professor Cornelius von Fraud was fired today after trying to submit a fake paper to a scientific journal.” But we never hear stories like that. Instead, pretty much every story about fraud begins with the paper passing review and being published. Only later does some good Samaritan—often someone in the author’s own lab!—notice something weird and decide to investigate. That’s what happened with this paper about dishonesty that clearly has fake data (ironic), these guys who have published dozens or even hundreds of fraudulent papers, and this debacle:

Why don’t reviewers catch basic errors and blatant fraud? One reason is that they almost never look at the data behind the papers they review, which is exactly where the errors and fraud are most likely to be. In fact, most journals don’t require you to make your data public at all. You’re supposed to provide them “on request,” but most people don’t. That’s how we’ve ended up in sitcom-esque situations like ~20% of genetics papers having totally useless data because Excel autocorrected the names of genes into months and years.

(When one editor started asking authors to add their raw data after they submitted a paper to his journal, half of them declined and retracted their submissions. This suggests, in the editor’s words, “a possibility that the raw data did not exist from the beginning.”)

The invention of peer review may have even encouraged bad research. If you try to publish a paper showing that, say, watching puppy videos makes people donate more to charity, and Reviewer 2 says “I will only be impressed if this works for cat videos as well,” you are under extreme pressure to make a cat video study work. Maybe you fudge the numbers a bit, or toss out a few outliers, or test a bunch of cat videos until you find one that works and then you never mention the ones that didn’t. 🎶 Do a little fraud // get a paper published // get down tonight 🎶

Here’s another way that we can test whether peer review worked: did it actually earn scientists’ trust? 

Scientists often say they take peer review very seriously. But people say lots of things they don’t mean, like “It’s great to e-meet you” and “I’ll never leave you, Adam.” If you look at what scientists actually do, it’s clear they don’t think peer review really matters.

First: if scientists cared a lot about peer review, when their papers got reviewed and rejected, they would listen to the feedback, do more experiments, rewrite the paper, etc. Instead, they usually just submit the same paper to another journal. This was one of the first things I learned as a young psychologist, when my undergrad advisor explained there is a “big stochastic element” in publishing (translation: “it’s random, dude”). If the first journal didn’t work out, we’d try the next one. Publishing is like winning the lottery, she told me, and the way to win is to keep stuffing the box with tickets. When very serious and successful scientists proclaim that your supposed system of scientific fact-checking is no better than chance, that’s pretty dismal.

Second: once a paper gets published, we shred the reviews. A few journals publish reviews; most don’t. Nobody cares to find out what the reviewers said or how the authors edited their paper in response, which suggests that nobody thinks the reviews actually mattered in the first place. 

And third: scientists take unreviewed work seriously without thinking twice. We read “preprints” and working papers and blog posts, none of which have been published in peer-reviewed journals. We use data from Pew and Gallup and the government, also unreviewed. We go to conferences where people give talks about unvetted projects, and we do not turn to each other and say, “So interesting! I can’t wait for it to be peer reviewed so I can find out if it’s true.”

Instead, scientists tacitly agree that peer review adds nothing, and they make up their minds about scientific work by looking at the methods and results. Sometimes people say the quiet part loud, like Nobel laureate Sydney Brenner:

I don’t believe in peer review because I think it’s very distorted and as I’ve said, it’s simply a regression to the mean. I think peer review is hindering science. In fact, I think it has become a completely corrupt system.

I used to think about all the ways we could improve peer review. Reviewers should look at the data! Journals should make sure that papers aren’t fraudulent! 

It’s easy to imagine how things could be better—my friend Ethan and I wrote a whole paper on it—but that doesn’t mean it’s easy to make things better. My complaints about peer review were a bit like looking at the ~35,000 Americans who die in car crashes every year and saying “people shouldn’t crash their cars so much.” Okay, but how? 

Lack of effort isn’t the problem: remember that our current system requires 15,000 years of labor every year, and it still does a really crappy job. Paying peer reviewers doesn’t seem to make them any better. Neither does training them. Maybe we can fix some things on the margins, but remember that right now we’re publishing papers that use capital T’s instead of error bars, so we’ve got a long, long way to go.

What if we made peer review way stricter? That might sound great, but it would make lots of other problems with peer review way worse. 

For example, you used to be able to write a scientific paper with style. Now, in order to please reviewers, you have to write it like a legal contract. Papers used to begin like, “Help! A mysterious number is persecuting me,” and now they begin like, “Humans have been said, at various times and places, to exist, and even to have several qualities, or dimensions, or things that are true about them, but of course this needs further study (Smergdorf & Blugensnout, 1978; Stikkiwikket, 2002; von Fraud et al., 2018b)”. 

This blows. And as a result, nobody actually reads these papers. Some of them are like 100 pages long with another 200 pages of supplemental information, and all of it is written like it hates you and wants you to stop reading immediately. Recently, a friend asked me when I last read a paper from beginning to end; I couldn’t remember, and neither could he. “Whenever someone tells me they loved my paper,” he said, “I say thank you, even though I know they didn’t read it.” Stricter peer review would mean even more boring papers, which means even fewer people would read them.

Making peer review harsher would also exacerbate the worst problem of all: just knowing that your ideas won’t count for anything unless peer reviewers like them makes you worse at thinking. It’s like being a teenager again: before you do anything, you ask yourself, “BUT WILL PEOPLE THINK I’M COOL?” When getting and keeping a job depends on producing popular ideas, you can get very good at thought-policing yourself into never entertaining anything weird or unpopular at all. That means we end up with fewer revolutionary ideas, and unless you think everything’s pretty much perfect right now, we need revolutionary ideas real bad.

On the off chance you do figure out a way to improve peer review without also making it worse, you can try convincing the nearly 30,000 scientific journals in existence to apply your magical method to the ~4.7 million articles they publish every year. Good luck!

Peer review doesn’t work and there’s probably no way to fix it. But a little bit of vetting is better than none at all, right?

I say: no way. 

Imagine you discover that the Food and Drug Administration’s method of “inspecting” beef is just sending some guy (“Gary”) around to sniff the beef and say whether it smells okay or not, and the beef that passes the sniff test gets a sticker that says “INSPECTED BY THE FDA.” You’d be pretty angry. Yes, Gary may find a few batches of bad beef, but obviously he’s going to miss most of the dangerous meat. This extremely bad system is worse than nothing because it fools people into thinking they’re safe when they’re not.

That’s what our current system of peer review does, and it’s dangerous. That debunked theory about vaccines causing autism comes from a peer-reviewed paper in one of the most prestigious journals in the world, and it stayed there for twelve years before it was retracted. How many kids haven’t gotten their shots because one rotten paper made it through peer review and got stamped with the scientific seal of approval?

If you want to sell a bottle of vitamin C pills in America, you have to include a disclaimer that says none of the claims on the bottle have been evaluated by the Food and Drug Administration. Maybe journals should stamp a similar statement on every paper: “NOBODY HAS REALLY CHECKED WHETHER THIS PAPER IS TRUE OR NOT. IT MIGHT BE MADE UP, FOR ALL WE KNOW.” That would at least give people the appropriate level of confidence.

Why did peer review seem so reasonable in the first place?

I think we had the wrong model of how science works. We treated science like it’s a weak-link problem where progress depends on the quality of our worst work. If you believe in weak-link science, you think it’s very important to stamp out untrue ideas—ideally, prevent them from being published in the first place. You don’t mind if you whack a few good ideas in the process, because it’s so important to bury the bad stuff.

But science is a strong-link problem: progress depends on the quality of our best work. Better ideas don’t always triumph immediately, but they do triumph eventually, because they’re more useful. You can’t land on the moon using Aristotle’s physics, you can’t turn mud into frogs using spontaneous generation, and you can’t build bombs out of phlogiston. Newton’s laws of physics stuck around; his recipe for the Philosopher’s Stone didn’t. We didn’t need a scientific establishment to smother the wrong ideas. We needed it to let new ideas challenge old ones, and time did the rest.

If you’ve got weak-link worries, I totally get it. If we let people say whatever they want, they will sometimes say untrue things, and that sounds scary. But we don’t actually prevent people from saying untrue things right now; we just pretend to. In fact, right now we occasionally bless untrue things with big stickers that say “INSPECTED BY A FANCY JOURNAL,” and those stickers are very hard to get off. That’s way scarier.

Weak-link thinking makes scientific censorship seem reasonable, but all censorship does is make old ideas harder to defeat. Remember that it used to be obviously true that the Earth is the center of the universe, and if scientific journals had existed in Copernicus’ time, geocentrist reviewers would have rejected his paper and patted themselves on the back for preventing the spread of misinformation. Eugenics used to be hot stuff in science—do you think a bunch of racists would give the green light to a paper showing that Black people are just as smart as white people? Or any paper at all by a Black author? (And if you think that’s ancient history: this dynamic is still playing out today.) We still don’t understand basic truths about the universe, and many ideas we believe today will one day be debunked. Peer review, like every form of censorship, merely slows down truth.

Nobody was in charge of our peer review experiment, which means nobody has the responsibility of saying when it’s over. Seeing no one else, I guess I’ll do it: 

We’re done, everybody! Champagne all around! Great work, and congratulations. We tried peer review and it didn’t work.

Honestly, I’m so relieved. That system sucked! Waiting months just to hear that an editor didn’t think your paper deserved to be reviewed? Reading long walls of text from reviewers who for some reason thought your paper was the source of all evil in the universe? Spending a whole day emailing a journal begging them to let you use the word “years” instead of always abbreviating it to “y” for no reason (this literally happened to me)? We never have to do any of that ever again.

I know we all might be a little disappointed we wasted so much time, but there’s no shame in a failed experiment. Yes, we should have taken peer review for a test run before we made it universal. But that’s okay—it seemed like a good idea at the time, and now we know it wasn’t. That’s science! It will always be important for scientists to comment on each other’s ideas, of course. It’s just this particular way of doing it that didn’t work.

What should we do now? Well, last month I published a paper, by which I mean I uploaded a PDF to the internet. I wrote it in normal language so anyone could understand it. I held nothing back—I even admitted that I forgot why I ran one of the studies. I put jokes in it because nobody could tell me not to. I uploaded all the materials, data, and code where everybody could see them. I figured I’d look like a total dummy and nobody would pay any attention, but at least I was having fun and doing what I thought was right.

Then, before I even told anyone about the paper, thousands of people found it, commented on it, and retweeted it. 

Total strangers emailed me thoughtful reviews. Tenured professors sent me ideas. NPR asked for an interview. The paper now has more views than the last peer-reviewed paper I published, which was in the prestigious Proceedings of the National Academy of Sciences. And I have a hunch far more people read this new paper all the way to the end, because the final few paragraphs got a lot of comments in particular. So I dunno, I guess that seems like a good way of doing it?

I don’t know what the future of science looks like. Maybe we’ll make interactive papers in the metaverse or we’ll download datasets into our heads or whisper our findings to each other on the dance floor of techno-raves. Whatever it is, it’ll be a lot better than what we’ve been doing for the past sixty years. And to get there, all we have to do is what we do best: experiment.

The Complicated Legacy of E. O. Wilson (Scientific American)

scientificamerican.com

Monica R. McLemore

We must reckon with his and other scientists’ racist ideas if we want an equitable future

December 29, 2021


American biologist E. O. Wilson in Lexington, Mass., on October 21, 2021. Credit: Gretchen Ertl/Reuters/Alamy

With the death of biologist E. O. Wilson on Sunday, I find myself again reflecting on the complicated legacies of scientists whose works are built on racist ideas and how these ideas came to define our understanding of the world.

After a long clinical career as a registered nurse, I became a laboratory-trained scientist as researchers mapped the first draft of the human genome. It was during this time that I intimately familiarized myself with Wilson’s work and his dangerous ideas on what factors influence human behavior.

His influential text Sociobiology: The New Synthesis contributed to the false dichotomy of nature versus nurture and spawned an entire field of behavioral psychology grounded in the notion that differences among humans could be explained by genetics, inheritance and other biological mechanisms. Finding out that Wilson thought this way was a huge disappointment, because I had enjoyed his novel Anthill, which was published much later and written for the public.

Wilson was hardly alone in his problematic beliefs. His predecessors—mathematician Karl Pearson, anthropologist Francis Galton, Charles Darwin, Gregor Mendel and others—also published works and spoke of theories fraught with racist ideas about distributions of health and illness in populations without any attention to the context in which these distributions occur.

Even modern geneticists and genome scientists struggle with inherent racism in the way they gather and analyze data. In his memoir A Life Decoded: My Genome: My Life, geneticist J. Craig Venter writes, “The complex provenance of ideas means their origin is often open to interpretation.”

To put the legacy of their work in the proper perspective, a more nuanced understanding of problematic scientists is necessary. It is true that work can be both important and problematic; the two can coexist. Therefore it is necessary to evaluate and critique these scientists, considering specifically the value of their work and, at the same time, their contributions to scientific racism.

First, the so-called normal distribution of statistics assumes that there are default humans who serve as the standard that the rest of us can be accurately measured against. The fact that we don’t adequately take into account differences between experimental and reference group determinants of risk and resilience, particularly in the health sciences, has been a hallmark of inadequate scientific methods based on theoretical underpinnings of a superior subject and an inferior one. Commenting on COVID and vaccine acceptance in an interview with PBS NewsHour, recently retired director of the National Institutes of Health Francis Collins pointed out, “You know, maybe we underinvested in research on human behavior.”

Second, the application of the scientific method matters: what works for ants and other nonhuman species is not always relevant for health and/or human outcomes. For example, the associations of Black people with poor health outcomes, economic disadvantage and reduced life expectancy can be explained by structural racism, yet Blackness or Black culture is frequently cited as the driver of those health disparities. Ant culture is hierarchical and matriarchal, based on human understandings of gender. And the descriptions and importance of ant societies existing as colonies are a component of Wilson’s work that should have been critiqued. Context matters.

Lastly, examining nurture versus nature without any attention to externalities, such as opportunities and potential (financial structures, religiosity, community resources and other societal structures), that deeply influence human existence and experiences is both a crude and cruel lens. This dispassionate query will lead to individualistic notions of the value and meaning of human lives while, as a society, our collective fates are inextricably linked.

As we are currently seeing in the COVID-19 pandemic, public health and prevention measures are colliding with health services delivery and individual responsibility. Approaches that take both of these into account are interrelated and necessary.

So how do we engage with the problematic work of scientists whose legacy is complicated? I would suggest three strategies to move toward a more nuanced understanding of their work in context.

First, truth and reconciliation are necessary in the scientific record, including attention to citational practices when using or reporting on problematic work. This approach includes thinking critically about where and when to include historically problematic work and the context necessary for readers to understand the limitations of the ideas embedded in it. This will require commitments from journal editors, peer reviewers and the scientific community to invest in retrofitting existing publications with this expertise. They can do so by employing humanities scholars, journalists and other science communicators with the appropriate expertise to evaluate health and life sciences manuscripts submitted for publication.

Second, diversifying the scientific workforce is crucial not only to asking new types of research questions and unlocking new discoveries but also to conducting better science. Other scholars have pointed out that feminist standpoint theory is helpful in understanding white empiricism and who is eligible to be a worthy observer of the human condition and our world. We can apply the same approach to scientific research. All of society loses when there are limited perspectives that are grounded in faulty notions of one or another group of humans’ potential. As my work and that of others have shown, the people most burdened by poor health conditions are more often the ones trying to address the underlying causes with innovative solutions and strategies that can be scientifically tested.

Finally, we need new methods. One of the many gifts of the Human Genome Project was the creativity it spawned beyond revealing the secrets of the genome, such as new rules about public availability and use of data. Multiple labs and trainees were able to collaborate and share work while establishing independent careers. New rules of engagement emerged around the ethical, legal and social implications of the work. Undoing scientific racism will require commitments from the entire scientific community to determine the portions of historically problematic work that are relevant and to let the scientific method function the way it was designed—to allow for dated ideas to be debunked and replaced.

The early work of Venter and Collins was foundational to my dissertation, which examined tumor markers of ovarian cancer. I spent time during my training at the NIH learning from these iconic clinicians and scholars and had occasion to meet and question both of them. As a person who uses science as one of many tools to understand the world, it is important to remain curious in our work. Creative minds should not be resistant to change when rigorous new data are presented. How we engage with old racist ideas is no exception.

Weaving Indigenous knowledge into the scientific method (Nature)

nature.com

Saima May Sidik

11 January 2022; Correction 24 January 2022


Dominique David-Chavez works with Randal Alicea, an Indigenous farmer, in his tobacco-drying shed in Cidra, Borikén (Puerto Rico). Credit: Norma Ortiz

Many scientists rely on Indigenous people to guide their work — by helping them to find wildlife, navigate rugged terrain or understand changing weather trends, for example. But these relationships have often felt colonial, extractive and unequal. Researchers drop into communities, gather data and leave — never contacting the locals again, and excluding them from the publication process.

Today, many scientists acknowledge the troubling attitudes that have long plagued research projects in Indigenous communities. But finding a path to better relationships has proved challenging. Tensions surfaced last year, for example, when seven University of Auckland academics argued that planned changes to New Zealand’s secondary school curriculum, to “ensure parity between mātauranga Māori”, or Maori knowledge, and “other bodies of knowledge”, could undermine trust in science.

Last month, the University of Auckland’s vice-chancellor, Dawn Freshwater, announced a symposium to be held early this year, at which different viewpoints can be discussed. In 2016, the US National Science Foundation (NSF) launched Navigating the New Arctic — a programme that encouraged scientists to explore the wide-reaching consequences of climate change in the north. A key sentence in the programme description reflected a shift in perspective: “Given the deep knowledge held by local and Indigenous residents in the Arctic, NSF encourages scientists and Arctic residents to collaborate on Arctic research projects.” The Natural Sciences and Engineering Research Council of Canada and New Zealand’s Ministry of Business, Innovation and Employment have made similar statements. So, too, have the United Nations cultural organization UNESCO and the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services.

But some Indigenous groups feel that despite such well-intentioned initiatives, their inclusion in research is only a token gesture to satisfy a funding agency.

There’s no road map out of science’s painful past. Nature asked three researchers who belong to Indigenous communities in the Americas and New Zealand, plus two funders who work closely with Northern Indigenous communities, how far we’ve come toward decolonizing science — and how researchers can work more respectfully with Indigenous groups.

DANIEL HIKUROA: Weave folklore into modern science

Daniel Hikuroa is an Earth systems and environmental humanities researcher at Te Wānanga o Waipapa, University of Auckland, New Zealand, and a member of the Māori community.

We all have a world view. Pūrākau, or traditional stories, are a part of Māori culture with great potential for informing science. But what you need to understand is that they’re codified according to an Indigenous world view.

For example, in Māori tradition, we have these things called taniwha that are like water serpents. When you think of taniwha, you think, danger, risk, be on your guard! Taniwha as physical entities do not exist. Taniwha are a mechanism for describing how rivers behave and change through time. For example, pūrākau say that taniwha live in a certain part of the Waikato River, New Zealand’s longest, running for 425 kilometres through the North Island. That’s the part of the river that tends to flood. Fortunately, officials took knowledge of taniwha into account when they were designing a road near the Waikato river in 2002. Because of this, we’ve averted disasters.

Sometimes, it takes a bit of explanation to convince non-Indigenous scientists that pūrākau are a variation on the scientific method. They’re built on observations and interpretations of the natural world, and they allow us to predict how the world will function in the future. They’re repeatable, reliable, they have rigour, and they’re accurate. Once scientists see this, they have that ‘Aha!’ moment where they realize how well Western science and pūrākau complement each other.

We’re very lucky in New Zealand because our funding agencies help us to disseminate this idea. In 2005, the Ministry of Research, Science and Technology (which has since been incorporated into the Ministry of Business, Innovation and Employment) developed a framework called Vision Mātauranga. Mātauranga is the Māori word for knowledge, but it also includes the culture, values and world view of Māori people. Whenever a scientist applies for funding, they’re asked whether their proposal addresses a Māori need or can draw on Māori knowledge. The intent of Vision Mātauranga is to broaden the science sector by unlocking the potential of Māori mātauranga.

In the early days of Vision Mātauranga, some Indigenous groups found themselves inundated with last-minute requests from researchers who just wanted Indigenous people to sign off on their proposals to make their grant applications more competitive. It was enormously frustrating. These days, most researchers are using the policy with a higher degree of sophistication.

Vision Mātauranga is at its best when researchers develop long-term relationships with Indigenous groups so that they know about those groups’ dreams and aspirations and challenges, and also about their skill sets. Then the conversation can coalesce around where those things overlap with the researchers’ own goals. The University of Waikato in Hamilton has done a great job with this, establishing a chief-to-chief relationship in which the university’s senior management meets maybe twice a year with the chiefs of the Indigenous groups in the surrounding area. This ongoing relationship lets the university and the Indigenous groups have high-level discussions that build trust and can inform projects led by individual labs.

We’ve made great progress towards bridging Māori culture and scientific culture, but attitudes are still evolving — including my own. In 2011, I published my first foray into using Māori knowledge in science, and I used the word ‘integrate’ to describe the process of combining the two. I no longer use that word, because I think weaving is a more apt description. When you weave two strands together, the integrity of the individual components can remain, but you end up with something that’s ultimately stronger than what you started with.

DOMINIQUE DAVID-CHAVEZ: Listen and learn with humility

Dominique David-Chavez is an Indigenous land and data stewardship researcher at Colorado State University in Fort Collins, and a member of the Arawak Taíno community.

People often ask how can we integrate Indigenous knowledge into Western science. But framing the question in this way upholds the unhealthy power dynamic between Western and Indigenous scientists. It makes it sound as though there are two singular bodies of knowledge, when in fact Indigenous knowledge — unlike Western science — is drawn from thousands of different communities, each with its own knowledge systems.

At school, I was taught this myth that it was European and American white men who discovered all these different physical systems on Earth — on land, in the skies and in the water. But Indigenous people have been observing those same systems for hundreds or thousands of years. When Western scientists claim credit for discoveries that Indigenous people made first, they’re stealing Indigenous people’s contributions to science. This theft made me angry, but it also drove me. I decided to undertake graduate studies so that I could look critically at how we validate who creates knowledge, who creates science and who are the scientists.

To avoid perpetuating harmful power dynamics, researchers who want to work in an Indigenous people’s homeland should first introduce themselves to the community, explain their skills and convey how their research could serve the community. And they should begin the work only if the community invites them to. That invitation might take time to come! The researchers should also build in time to spend in the community to listen, be humbled and learn.

If you don’t have that built-in relational accountability, then maybe you’re better off in a supporting role.

Overall, my advice to Western researchers is this: always be questioning your assumptions about where science came from, where it’s going and what part you should be playing in its development.

MARY TURNIPSEED: Fund relationship building and follow-ups

Mary Turnipseed is an ecologist and grantmaker at the Gordon and Betty Moore Foundation, Palo Alto, California.

I’ve been awarding grants in the Arctic since 2015, when I became a marine-conservation programme officer at the Gordon and Betty Moore Foundation. A lesson I learnt early on about knowledge co-production — the term used for collaborations between academics and non-academics — is to listen. In the non-Indigenous parts of North America, we’re used to talking, but flipping that on its end helps us to work better with Indigenous communities.

Listening to our Indigenous Alaskan Native partners is often how I know whether a collaboration is working well or not. If the community is supportive of a particular effort, that means they’ve been able to develop a healthy relationship with the researchers. We have quarterly check-ins with our partners about how projects are going; and, in non-pandemic times, I frequently travelled to Alaska to talk directly with our partners.

One way in which we help to spur productive relationships is by giving research teams a year of preliminary funding — before they even start their research — so that they can work with Indigenous groups to identify the questions their research will address and decide how they’re going to tackle them. We really need more funding agencies to set aside money for this type of early relationship-building, so that everyone goes into a project with the same expectations, and with a level of trust for one another.

Members of the Ikaaġvik Sikukun collaboration cutting ice-core samples in the snow at the Native Village of Kotzebue, Alaska. Credit: Sarah Betcher/Farthest North Films

Developing relationships takes time, so it’s easiest when Indigenous communities have a research coordinator, such as Alex Whiting (environmental programme director for the Native Village of Kotzebue), to handle all their collaborations. I think the number of such positions could easily be increased tenfold, and I’d love to see the US federal government offer more funding for these types of position.

Funding agencies should provide incentives for researchers to go back to the communities that they’ve worked with and share what they’ve found. There’s always talk among Indigenous groups about researchers who come in, collect data, get their PhDs and never show up again. Every time that happens, it hurts the community, and it hurts the next researchers to come. I think it’s essential for funding agencies to prevent this from happening.

ALEX WHITING: Develop a toolkit to decolonize relationships

Alex Whiting is an environmental specialist in Kotzebue, Alaska, and a formally adopted member of the Qikiktagrukmiut community.

A lot of the time, researchers who operate in a colonial way aren’t aware of the harm they’re doing. But many people are realizing that taking knowledge without involving local people is not only unethical, but inefficient. In 1997, the Native Village of Kotzebue — a federally recognized seat of tribal government representing the Qikiktagrukmiut, northwest Alaska’s original inhabitants — hired me as its environmental programme director. I helped the community to develop a research protocol that lays out our expectations of scientists who work in our community, and an accompanying questionnaire.

By filling in the one-page questionnaire, researchers give us a quick overview of what they plan to do; its relevance and potential benefit to our community; the need for local involvement; and how we’ll be compensated financially. This provides us with a tool through which to develop relationships with researchers, make sure that our priorities and rights are addressed, and hold researchers accountable. Making scientists think about how they’ll engage with us has helped to make research a more equitable, less extractive activity.

We cannot force scientists to deal with us. It’s a free country. But the Qikiktagrukmiut are skilled at activities such as boating, travelling on snow and capturing animals — and those skills are extremely useful for fieldwork, as is our deep historical knowledge of the local environment. It’s a lot harder for scientists to accomplish their work without our involvement. Many scientists realize this, so these days we get 6–12 research proposals per year. We say yes to most of them.

The NSF’s Navigating the New Arctic programme has definitely increased the number of last-minute proposals that communities such as ours get swamped with a couple of weeks before the application deadline. Throwing an Indigenous component into a research proposal at the last minute is definitely not an ideal way to go about things, because it doesn’t give us time to fully consider the research before deciding whether we want to participate. But at least the NSF has recognized that working with Indigenous people is a thing! They’re just in the growing-pains phase.

Not all Indigenous groups have had as much success as we have, and some are still experiencing the extractive side of science. But incorporating Indigenous knowledge into science can create rapid growths in understanding, and we’re happy we’ve helped some researchers do this in a respectful way.

NATAN OBED: Fund research on Indigenous priorities

Natan Obed is president of Inuit Tapiriit Kanatami, and a member of the Inuit community.

Every year, funding agencies devote hundreds of millions of dollars to work that occurs in the Inuit homeland in northern Canada. Until very recently, almost none of those agencies considered Inuit peoples’ priorities.

These Indigenous communities face massive social and economic challenges. More than 60% of Inuit households are food insecure, meaning they don’t always have enough food to maintain an active, healthy life. On average, one-quarter as many doctors serve Inuit communities as serve urban Canadian communities. Our life expectancy is ten years less than the average non-Indigenous Canadian’s. The list goes on. And yet, very little research is devoted to addressing these inequities.

Last year, the Inuit advocacy organization Inuit Tapiriit Kanatami (the name means ‘Inuit are united in Canada’) collaborated with the research network ArcticNet to start its own funding programme, which is called the Inuit Nunangat Research Program (INRP). Funding decisions are led entirely by Inuit people to ensure that all grants support research on Inuit priorities. Even in the programme’s first year, we got more requests than we could fund. We selected 11 proposals that all relate directly to the day-to-day lives of Inuit people. For example, one study that we’re funding aims to characterize a type of goose that has newly arrived in northern Labrador; another focuses on how social interactions spread disease in Inuit communities.

Our goal with the INRP is twofold: first, we want to generate knowledge that addresses Inuit concerns, and second, we want to create an example of how other granting agencies can change so that they respect the priorities of all groups. We’ve been moderately successful in getting some of the main Canadian granting agencies, such as the Canadian Institutes of Health Research, to allocate more resources to things that matter to Inuit people. I’d like to think that the INRP gives them a model for how to become even more inclusive.

We hope that, over the next ten years, it will become normal for granting agencies to consider the needs of Indigenous communities. But we also know that institutions change slowly. Looking back at where we’ve been, we have a lot to be proud of, but we still have a huge task ahead of us.

These interviews have been edited for length and clarity.

For better science, increase Indigenous participation in publishing (Nature)

10 January 2022

Amending long-established processes to include fresh perspectives is challenging, but journal editor Lisa Loseto is trying to find a path forward.

Saima May Sidik

Lisa Loseto stands by a campfire while shutting down a research site at a traditional whaling camp. Credit: Oksana Schimnowski

Lisa Loseto is a research scientist at Fisheries and Oceans Canada, a federal government department whose regional offices include one in Winnipeg, where she is based. Some of Northern Canada’s Indigenous people have shaped her research into how beluga whales (Delphinapterus leucas) interact with their environments, and have taught her to rethink her own part in the scientific method. As co-editor-in-chief of the journal Arctic Science since 2017, she is looking at ways to increase Indigenous representation in scientific publishing, including the editorial and peer-review processes.

What got you thinking about the role of Indigenous people in scientific publishing?

In 2020, Arctic Science published a special issue centred on knowledge co-produced by Western scientists and Indigenous people. As production of that issue progressed, the peer-review and editorial processes stuck out as aspects lacking Indigenous representation. We were soliciting papers to highlight the contributions of Indigenous knowledge — but the peer-review process was led by non-Indigenous editors like myself, and the articles were reviewed by academics. A few members of the editorial board thought, ‘Let’s talk about this and think about ways to provide more balance.’ We discussed the issue in a workshop that included representatives from several groups that are indigenous to Canada’s Arctic.

What did the workshop reveal about the Indigenous participants’ perceptions of scientific publishing?

For a lot of people, publishing seemed like a distant concept, so we explained how the editorial and peer-review processes work. We described peer review as a method for validating knowledge before it’s published, and many Indigenous participants recognized similarities between that process and one in their own lives: in the Arctic, each generation passes down knowledge of how to live in a harsh environment, and over time this knowledge is tested and refined. The Indigenous workshop participants said, “We would die if we didn’t have the peer-review process.”

The scientific method used by Westerners is colonial: it emphasizes objectivity and performing experiments in the absence of outside influences. This mindset can feel alienating for many Indigenous people, who see themselves as integral parts of nature. This makes me think scientific publishing doesn’t fit an Indigenous framework.

The dense jargon and idiosyncratic structures of scientific publications make them difficult for people without a formal scientific education to jump into. Even people training to become scientists often don’t get involved in publishing until they’re in graduate school because there’s so much background knowledge that they need to have first.

If a journal article draws on Indigenous knowledge, should it include an Indigenous peer reviewer?

Perhaps, but trying to force Indigenous perspectives into a process that was created to advance Western priorities can come with its own problems. Scientific publications serve the dual purposes of disseminating information and acting as a tool of measure for scientists’ careers. Most members of Indigenous groups aren’t concerned with building up their academic CVs; in fact, some are uncomfortable with being named as authors because they see their knowledge as part of a collective body, rather than belonging solely to themselves. So do publications have the same weight for Indigenous people? Maybe not. In light of this, is participating in this system really the best use of time for Indigenous people who aren’t in academia — especially when their communities are already overtaxed with researchers’ requests for guidance through prepublication aspects of performing research in remote areas?

In Arviat, Nunavut, Canada, a local woman demonstrates historic tools used by Inuit, with a polar tent in the background. Indigenous communities hold a wealth of knowledge that can advance science. Credit: Galaxiid/Alamy

As an alternative to contributing to research articles, we’re considering starting a commentary section of Arctic Science. This could give more Indigenous people a venue to publish their views on the scientific process, and their observations of natural trends, in a less technical format.

Can Indigenous journal editors help to bridge the divide between Indigenous people and academic publications?

Yes, but there are very few Indigenous journal editors. Historically, editor positions have been reserved for senior scientists, and many senior scientists are white men. I’m trying to bring on more early-career scientists as editors, as this group is often more diverse. By moving away from offering these positions to only the most senior scientists, I think we’ll see a shift in demographics. At the same time, I don’t want to put the burden of bridging current divides entirely on Indigenous people. That job is for all of us.

What is Arctic Science planning to do moving forward?

My hope is to build an Indigenous advisory group that can advise Arctic Science on the peer-review process generally and consider, on a case-by-case basis, whether articles could benefit from an Indigenous peer reviewer. Beyond that, we’re still figuring out how to engage more people without being prescriptive about how they’re engaged.

What do you hope these actions will achieve?

Publications are power. Policy decisions are based on things that are written down and tangible: peer-reviewed papers and reports. Not only do scientific publications guide policy decisions, they also determine who gets money. The more you publish, and the better the journals you publish in, the more power you have.

Indigenous communities have tremendous knowledge, but much of it is passed down orally rather than published in written form. I think the fact that Indigenous representation is weak in academia, including in publishing, upholds the power imbalance that exists between Indigenous people and settlers. I want to find a better balance.

doi: https://doi.org/10.1038/d41586-022-00058-x

This interview has been edited for length and clarity.

The Petabyte Age: Because More Isn’t Just More — More Is Different (Wired)

WIRED Staff, Science, 06.23.2008 12:00 PM


Illustration credit: Marian Bantjes

Introduction:

Sensors everywhere. Infinite storage. Clouds of processors. Our ability to capture, warehouse, and understand massive amounts of data is changing science, medicine, business, and technology. As our collection of facts and figures grows, so will the opportunity to find answers to fundamental questions. Because in the era of big data, more isn’t just more. More is different.

The End of Theory:

The Data Deluge Makes the Scientific Method Obsolete

Feeding the Masses:
Data In, Crop Predictions Out

Chasing the Quark:
Sometimes You Need to Throw Information Away

Winning the Lawsuit:
Data Miners Dig for Dirt

Tracking the News:
A Smarter Way to Predict Riots and Wars

Spotting the Hot Zones:
Now We Can Monitor Epidemics Hour by Hour

Sorting the World:
Google Invents New Way to Manage Data

Watching the Skies:
Space Is Big — But Not Too Big to Map

Scanning Our Skeletons:
Bone Images Show Wear and Tear

Tracking Air Fares:
Elaborate Algorithms Predict Ticket Prices

Predicting the Vote:
Pollsters Identify Tiny Voting Blocs

Pricing Terrorism:
Insurers Gauge Risks, Costs

Visualizing Big Data:
Bar Charts for Words

Big data and the end of theory? (The Guardian)

theguardian.com

Mark Graham, Fri 9 Mar 2012 14.39 GMT

Does big data have the answers? Maybe some, but not all, says Mark Graham

In 2008, Chris Anderson, then editor of Wired, wrote a provocative piece titled The End of Theory. Anderson was referring to the ways that computers, algorithms, and big data can potentially generate more insightful, useful, accurate, or true results than specialists or domain experts who traditionally craft carefully targeted hypotheses and research strategies.

This revolutionary notion has now entered not just the popular imagination, but also the research practices of corporations, states, journalists and academics. The idea being that the data shadows and information trails of people, machines, commodities and even nature can reveal secrets to us that we now have the power and prowess to uncover.

In other words, we no longer need to speculate and hypothesise; we simply need to let machines lead us to the patterns, trends, and relationships in social, economic, political, and environmental relationships.

It is quite likely that you yourself have been the unwitting subject of a big data experiment carried out by Google, Facebook and many other large Web platforms. Google, for instance, has been able to collect extraordinary insights into what specific colours, layouts, rankings, and designs make people more efficient searchers. They do this by slightly tweaking their results and website for a few million searches at a time and then examining the often subtle ways in which people react.

Most large retailers similarly analyse enormous quantities of data from their databases of sales (which are linked to you by credit card numbers and loyalty cards) in order to make uncanny predictions about your future behaviours. In a now famous case, the American retailer Target upset a Minneapolis man by knowing more about his teenage daughter’s sex life than he did. Target was able to predict his daughter’s pregnancy by monitoring her shopping patterns and comparing that information to an enormous database detailing billions of dollars of sales.

More significantly, national intelligence agencies are mining vast quantities of non-public Internet data to look for weak signals that might indicate planned threats or attacks.

There can be no denying the significant power and potential of big data. And the huge resources being invested in both the public and private sectors to study it are a testament to this.

However, crucially important caveats are needed when using such datasets: caveats that, worryingly, seem to be frequently overlooked.

The raw informational material for big data projects is often derived from large user-generated or social media platforms (e.g. Twitter or Wikipedia). Yet, in all such cases we are necessarily only relying on information generated by an incredibly biased or skewed user-base.

Gender, geography, race, income, and a range of other social and economic factors all play a role in how information is produced and reproduced. People from different places and different backgrounds tend to produce different sorts of information. And so we risk ignoring a lot of important nuance if we rely on big data as a social/economic/political mirror.

We can of course account for such bias by segmenting our data. Take the case of using Twitter to gain insights into last summer’s London riots. About a third of all UK Internet users have a Twitter profile; a subset of that group are the active tweeters who produce the bulk of content; and then a tiny subset of that group (about 1%) geocode their tweets (essential information if you want to know where your information is coming from).

Despite the fact that we have a database of tens of millions of data points, we are necessarily working with subsets of subsets of subsets. Big data no longer seems so big. Such data thus serves to amplify the information produced by a small minority (a point repeatedly made by UCL’s Muki Haklay), and skew, or even render invisible, ideas, trends, people, and patterns that aren’t mirrored or represented in the datasets that we work with.
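A rough back-of-the-envelope sketch makes that shrinkage concrete. Only the "about a third" and "about 1%" figures come from the text; the overall user count and the active-tweeter share below are illustrative assumptions, not reported numbers:

```python
# Illustrating the "subsets of subsets of subsets" effect described above.
uk_internet_users = 45_000_000  # assumed ballpark figure, for illustration only
twitter_share = 1 / 3           # roughly a third have a Twitter profile (from the text)
active_share = 0.2              # assumed fraction of profiles producing most content
geocode_share = 0.01            # about 1% geocode their tweets (from the text)

# Each filter multiplies away most of the population.
sample = uk_internet_users * twitter_share * active_share * geocode_share
print(f"{sample:,.0f} geocoded active users")  # a tiny sliver of tens of millions
```

Under these assumed inputs, a population of tens of millions collapses to a few tens of thousands of voices, which is exactly why the resulting "big" dataset can amplify a small minority.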

Big data is undoubtedly useful for addressing and overcoming many important issues faced by society. But we need to ensure that we aren’t seduced by the promises of big data to render theory unnecessary.

We may one day get to the point where sufficient quantities of big data can be harvested to answer all of the social questions that most concern us. I doubt it though. There will always be digital divides; always be uneven data shadows; and always be biases in how information and technology are used and produced.

And so we shouldn’t forget the important role of specialists to contextualise and offer insights into what our data do, and maybe more importantly, don’t tell us.

Mark Graham is a research fellow at the Oxford Internet Institute and is one of the creators of the Floating Sheep blog

The Paradox of the Proof (Project Wordsworth)

By Caroline Chen

MAY 9, 2013


On August 31, 2012, Japanese mathematician Shinichi Mochizuki posted four papers on the Internet.

The titles were inscrutable. The volume was daunting: 512 pages in total. The claim was audacious: he said he had proved the ABC Conjecture, a famed, beguilingly simple number theory problem that had stumped mathematicians for decades.

Then Mochizuki walked away. He did not send his work to the Annals of Mathematics. Nor did he leave a message on any of the online forums frequented by mathematicians around the world. He just posted the papers, and waited.

Two days later, Jordan Ellenberg, a math professor at the University of Wisconsin-Madison, received an email alert from Google Scholar, a service which scans the Internet looking for articles on topics he has specified. On September 2, Google Scholar sent him Mochizuki’s papers: You might be interested in this.

“I was like, ‘Yes, Google, I am kind of interested in that!’” Ellenberg recalls. “I posted it on Facebook and on my blog, saying, ‘By the way, it seems like Mochizuki solved the ABC Conjecture.’”

The Internet exploded. Within days, even the mainstream media had picked up on the story. “World’s Most Complex Mathematical Theory Cracked,” announced the Telegraph. “Possible Breakthrough in ABC Conjecture,” reported the New York Times, more demurely.

On MathOverflow, an online math forum, mathematicians around the world began to debate and discuss Mochizuki’s claim. The question which quickly bubbled to the top of the forum, encouraged by the community’s “upvotes,” was simple: “Can someone briefly explain the philosophy behind his work and comment on why it might be expected to shed light on questions like the ABC conjecture?” asked Andy Putman, assistant professor at Rice University. Or, in plainer words: I don’t get it. Does anyone?

The problem, as many mathematicians were discovering when they flocked to Mochizuki’s website, was that the proof was impossible to read. The first paper, entitled “Inter-universal Teichmuller Theory I: Construction of Hodge Theaters,” starts out by stating that the goal is “to establish an arithmetic version of Teichmuller theory for number fields equipped with an elliptic curve…by applying the theory of semi-graphs of anabelioids, Frobenioids, the etale theta function, and log-shells.”

This is not just gibberish to the average layman. It was gibberish to the math community as well.

“Looking at it, you feel a bit like you might be reading a paper from the future, or from outer space,” wrote Ellenberg on his blog.

“It’s very, very weird,” says Columbia University professor Johan de Jong, who works in a related field of mathematics.

Mochizuki had created so many new mathematical tools and brought together so many disparate strands of mathematics that his paper was populated with vocabulary that nobody could understand. It was totally novel, and totally mystifying.

As Tufts professor Moon Duchin put it: “He’s really created his own world.”

It was going to take a while before anyone would be able to understand Mochizuki’s work, let alone judge whether or not his proof was right. In the ensuing months, the papers weighed like a rock in the math community. A handful of people approached it and began examining it. Others tried, then gave up. Some ignored it entirely, preferring to observe from a distance. As for the man himself, the man who had claimed to solve one of mathematics’ biggest problems, there was not a sound.

For centuries, mathematicians have strived towards a single goal: to understand how the universe works, and describe it. To this objective, math itself is only a tool — it is the language that mathematicians have invented to help them describe the known and query the unknown.

This history of mathematical inquiry is marked by milestones that come in the form of theorems and conjectures. Simply put, a theorem is an observation known to be true. The Pythagorean theorem, for example, makes the observation that for all right-angled triangles, the relationship between the lengths of the three sides, a, b and c, is expressed in the equation a² + b² = c². Conjectures are predecessors to a theorem — they are proposals for theorems, observations that mathematicians believe to be true, but are yet to be confirmed. When a conjecture is proved, it becomes a theorem and when that happens, mathematicians rejoice, and add the new theorem to their tally of the understood universe.

“The point is not to prove the theorem,” explains Ellenberg. “The point is to understand how the universe works and what the hell is going on.”

Ellenberg is doing the dishes while talking to me over the phone, and I can hear the sound of a small infant somewhere in the background. Ellenberg is passionate about explaining mathematics to the world. He writes a math column for Slate magazine and is working on a book called How Not To Be Wrong, which is supposed to help laypeople apply math to their lives.

The sounds of the dishes pause as Ellenberg explains what motivates him and his fellow mathematicians. I imagine him gesturing in the air with soapy hands: “There’s a feeling that there’s a vast dark area of ignorance, but all of us are pushing together, taking steps together to pick at the boundaries.”

The ABC Conjecture probes deep into the darkness, reaching at the foundations of math itself. First proposed by mathematicians David Masser and Joseph Oesterle in the 1980s, it makes an observation about a fundamental relationship between addition and multiplication. Yet despite its deep implications, the ABC Conjecture is famous because, on the surface, it seems rather simple.

It starts with an easy equation: a + b = c.

The variables a, b, and c, which give the conjecture its name, have some restrictions. They need to be whole numbers, and a and b cannot share any common factors; that is, they cannot be divisible by the same prime number. So, for example, if a was 64, which equals 2⁶, then b could not be any number that is a multiple of two. In this case, b could be 81, which is 3⁴. Now a and b do not share any factors, and we get the equation 64 + 81 = 145.

It isn’t hard to come up with combinations of a and b that satisfy the conditions. You could come up with huge numbers, such as 3,072 + 390,625 = 393,697 (3,072 = 2¹⁰ × 3 and 390,625 = 5⁸, no overlapping factors there), or very small numbers, such as 3 + 125 = 128 (125 = 5 × 5 × 5).
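These conditions are easy to check mechanically. A minimal Python sketch, using the three example triples from the text, verifies that each pair sums as claimed and shares no common factor:

```python
from math import gcd

# The three (a, b, c) triples used as examples in the text.
examples = [(64, 81, 145), (3072, 390625, 393697), (3, 125, 128)]

for a, b, c in examples:
    assert a + b == c      # the additive relation holds
    assert gcd(a, b) == 1  # a and b share no prime factor
print("all three triples satisfy the ABC conditions")
```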

What the ABC conjecture then says is that the properties of a and b affect the properties of c. To understand the observation, it first helps to rewrite these a + b = c equations into versions made up of their prime factors:

Our first equation, 64 + 81 = 145, is equivalent to 2⁶ + 3⁴ = 5 × 29.

Our second example, 3,072 + 390,625 = 393,697, is equivalent to 2¹⁰ × 3 + 5⁸ = 393,697 (which happens to be prime!)

Our last example, 3 + 125 = 128, is equivalent to 3 + 5³ = 2⁷.

The first two equations are not like the third, because in the first two equations, you have lots of prime factors on the left hand side of the equation, and very few on the right hand side. The third example is the opposite — there are more primes on the right hand side (seven) of the equation than on the left (only four). As it turns out, in all the possible combinations of a, b, and c, situation three is pretty rare. The ABC Conjecture essentially says that when there are lots of prime factors on the left hand of the equation then, usually, there will be not very many on the right side of the equation.

Of course, “lots of,” “not very many,” and “usually” are very vague words, and in a formal version of the ABC Conjecture, all these terms are spelled out in more precise math-speak. But even in this watered-down version, one can begin to appreciate the conjecture’s implications. The equation is based on addition, but the conjecture’s observation is more about multiplication.
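For the curious: the precise statement is usually phrased with the “radical” of a number, the product of its distinct prime factors. For coprime a + b = c, the conjecture says that c > rad(a·b·c)^(1+ε) can happen only finitely often, for any ε > 0. Here is a minimal sketch of that comparison (the `radical` helper is illustrative, not from Mochizuki’s papers):

```python
from math import gcd

def radical(n):
    """Product of the distinct primes dividing n, by trial division."""
    rad, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            rad *= d
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:  # the leftover factor is prime
        rad *= n
    return rad

# Compare c with rad(a*b*c) for the three example triples from the text.
for a, b in [(64, 81), (3072, 390625), (3, 125)]:
    c = a + b
    assert gcd(a, b) == 1  # the coprimality restriction
    print(f"c = {c}, rad(abc) = {radical(a * b * c)}")
```

The first two triples give a radical far larger than c (rad = 870 against c = 145, for instance), while the rare third triple gives rad = 30 against c = 128, the kind of exception the conjecture says must eventually run out.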

“It really is about something very, very basic, about a tight constraint that relates multiplicative and additive properties of numbers,” says Minhyong Kim, professor at Oxford University. “If there’s something new to discover about that, you might expect it to be very influential.”

This is not intuitive. Though mathematicians came up with addition and multiplication in the first place, given their current knowledge of mathematics there is no reason for them to presume that the additive properties of numbers would somehow influence or affect their multiplicative properties.

“There’s very little evidence for it,” says Peter Sarnak, professor at Princeton University, who is a self-described skeptic of the ABC conjecture. “I’ll only believe it when it’s proved.”

But if it were true? Mathematicians say that it would reveal a deep relationship between addition and multiplication that they never knew of before.

Even Sarnak, the skeptic, acknowledges this.

“If it’s true, then it will be the most powerful thing we have,” he says.

It would be so powerful, in fact, that it would automatically unlock many legendary math puzzles. One of these would be Fermat’s last theorem, an infamous math problem that was proposed in 1637, and solved only recently by Andrew Wiles in 1993. Wiles’ proof earned him more than 100,000 Deutsche marks in prize money (equivalent to about $50,000 in 1997), a reward that was offered almost a century before, in 1908. Wiles did not solve Fermat’s Last Theorem via the ABC conjecture — he took a different route — but if the ABC conjecture were to be true, then the proof for Fermat’s Last Theorem would be an easy consequence.

Because of its simplicity, the ABC Conjecture is well-known to all mathematicians. CUNY professor Lucien Szpiro says that “every professional has tried at least one night” to theorize about a proof. Yet few people have seriously attempted to crack it. Szpiro, whose eponymous conjecture is a precursor of the ABC Conjecture, presented a proof in 2007, but it was soon found to be problematic. Since then, nobody had dared to touch it. Nobody, that is, until Mochizuki.

When Mochizuki posted his papers, the math community had much reason to be enthusiastic. They were excited not just because someone had claimed to prove an important conjecture, but because of who that someone was.

Mochizuki was known to be brilliant. Born in Tokyo, he moved to New York with his parents, Kiichi and Anne Mochizuki, when he was 5 years old. He left home for high school, attending Phillips Exeter Academy, a selective prep school in New Hampshire. There, he whipped through his academics with lightning speed, graduating after two years, at age 16, with advanced placements in mathematics, physics, American and European history, and Latin.

Then Mochizuki enrolled at Princeton University where, again, he finished ahead of his peers, earning his bachelor’s degree in mathematics in three years and moving quickly on to his Ph.D., which he received at age 23. After lecturing at Harvard University for two years, he returned to Japan, joining the Research Institute for Mathematical Sciences at Kyoto University. In 2002, he became a full professor at the unusually young age of 33. His early papers were widely acknowledged to be very good work.

Academic prowess is not the only characteristic that sets Mochizuki apart from his peers. His friend, Oxford professor Minhyong Kim, says that Mochizuki’s most outstanding characteristic is his intense focus on work.

“Even among many mathematicians I’ve known, he seems to have an extremely high tolerance for just sitting and doing mathematics for long, long hours,” says Kim.

Mochizuki and Kim met in the early 1990s, when Mochizuki was still an undergraduate student at Princeton. Kim, on exchange from Yale University, recalls Mochizuki making his way through the works of French mathematician Alexander Grothendieck, whose books on algebraic and arithmetic geometry are a must-read for any mathematician in the field.

“Most of us gradually come to understand [Grothendieck’s works] over many years, after dipping into it here and there,” said Kim. “It adds up to thousands and thousands of pages.”

But not Mochizuki.

“Mochizuki…just read them from beginning to end sitting at his desk,” recalls Kim. “He started this process when he was still an undergraduate, and within a few years, he was just completely done.”

A few years after returning to Japan, Mochizuki turned his focus to the ABC Conjecture. Over the years, word got around that he believed he had cracked the puzzle, and Mochizuki himself said that he expected results by 2012. So when the papers appeared, the math community was waiting, and eager. But then the enthusiasm stalled.

“His other papers – they’re readable, I can understand them and they’re fantastic,” says de Jong, who works in a similar field. Pacing in his office at Columbia University, de Jong shook his head as he recalled his first impression of the new papers. They were different. They were unreadable. After working in isolation for more than a decade, Mochizuki had built up a structure of mathematical language that only he could understand. To even begin to parse the four papers posted in August 2012, one would have to read through hundreds, maybe even thousands, of pages of previous work, none of which had been vetted or peer-reviewed. It would take at least a year to read and understand everything. De Jong, who was about to go on sabbatical, briefly considered spending his year on Mochizuki’s papers, but when he saw the height of the mountain, he quailed.

“I decided, I can’t possibly work on this. It would drive me nuts,” he said.

Soon, frustration turned into anger. Few professors were willing to directly critique a fellow mathematician, but almost every person I interviewed was quick to point out that Mochizuki was not following community standards. Usually, they said, mathematicians discuss their findings with their colleagues. Normally, they publish pre-prints to widely respected online forums. Then they submit their papers to a journal such as the Annals of Mathematics, where they are refereed by eminent mathematicians before publication. Mochizuki was bucking the trend. He was, according to his peers, “unorthodox.”

But what roused their ire most was Mochizuki’s refusal to lecture. Usually, after publication, a mathematician lectures on his papers, travelling to various universities to explain his work and answer questions from his colleagues. Mochizuki has turned down multiple invitations.

“A very prominent research university has asked him, ‘Come explain your result,’ and he said, ‘I couldn’t possibly do that in one talk,’” says Cathy O’Neil, de Jong’s wife, a former math professor better known as the blogger “Mathbabe.”

“And so they said, ‘Well then, stay for a week,’ and he’s like, ‘I couldn’t do it in a week.’

“So they said, ‘Stay for a month. Stay as long as you want,’ and he still said no.

“The guy does not want to do it.”

Kim sympathizes with his frustrated colleagues, but suggests a different reason for the rancor. “It really is painful to read other people’s work,” he says. “That’s all it is… All of us are just too lazy to read them.”

Kim is also quick to defend his friend. He says Mochizuki’s reticence is due to being a “slightly shy character” as well as his assiduous work ethic. “He’s a very hard working guy and he just doesn’t want to spend time on airplanes and hotels and so on.”

O’Neil, however, holds Mochizuki accountable, saying that his refusal to cooperate places an unfair burden on his colleagues.

“You don’t get to say you’ve proved something if you haven’t explained it,” she says. “A proof is a social construct. If the community doesn’t understand it, you haven’t done your job.”

Today, the math community faces a conundrum: the proof of a very important conjecture hangs in the air, yet nobody will touch it. For a brief moment in October, heads turned when Yale graduate student Vesselin Dimitrov pointed out a potential contradiction in the proof, but Mochizuki quickly responded, saying he had accounted for the problem. Dimitrov retreated, and the flicker of activity subsided.

As the months pass, the silence has also begun to call into question a basic premise of mathematical academia. Duchin explains the mainstream view this way: “Proofs are right or wrong. The community passes verdict.”

This foundational stone is one that mathematicians are proud of. The community works together; they are not cut-throat or competitive. Colleagues check each other’s work, spending hours upon hours verifying that a peer got it right. This behavior is not just altruistic, but also necessary: unlike in medical science, where you know you’re right if the patient is cured, or in engineering, where the rocket either launches or it doesn’t, theoretical math, better known as “pure” math, has no physical, visible standard. It is entirely based on logic. To know you’re right means you need someone else, preferably many other people, to walk in your footsteps and confirm that every step was made on solid ground. A proof in a vacuum is no proof at all.

Even an incorrect proof is better than no proof, because if the ideas are novel, they may still be useful for other problems, or inspire another mathematician to figure out the right answer. So the most pressing question isn’t whether or not Mochizuki is right — the more important question is, will the math community fulfill their promise, step up to the plate and read the papers?

The prospects seem thin. Szpiro is among the few who have made attempts to understand short segments of the paper. He holds a weekly workshop with his post-doctoral students at CUNY to discuss the paper, but he says they are limited to “local” analysis and do not understand the big picture yet. The only other known candidate is Go Yamashita, a colleague of Mochizuki at Kyoto University. According to Kim, Mochizuki is holding a private seminar with Yamashita, and Kim hopes that Yamashita will then go on to share and explain the work. If Yamashita does not pull through, it is unclear who else might be up to the task.

For now, all the math community can do is wait. While they wait, they tell stories, and recall great moments in math — the year Wiles cracked Fermat’s Last Theorem; how Perelman proved the Poincaré Conjecture. Columbia professor Dorian Goldfeld tells the story of Kurt Heegner, a high school teacher in Berlin, who solved a classic problem proposed by Gauss. “Nobody believed it. All the famous mathematicians pooh-poohed it and said it was wrong.” Heegner’s paper gathered dust for more than a decade until finally, four years after his death, mathematicians realized that Heegner had been right all along. Kim recalls Yoichi Miyaoka’s proposed proof of Fermat’s Last Theorem in 1988, which garnered a lot of media attention before serious flaws were discovered. “He became very embarrassed,” says Kim.

As they tell these stories, Mochizuki and his proofs hang in the air. All these stories are possible outcomes. The only question is – which?

Kim is one of the few people who remains optimistic about the future of this proof. He is planning a conference at Oxford University this November, and hopes to invite Yamashita to come and share what he has learned from Mochizuki. Perhaps more will be made clear then.

As for Mochizuki, who has refused all media requests, who seems so reluctant to promote even his own work, one has to wonder if he is even aware of the storm he has created.

On his website, one of the only photos of Mochizuki available on the Internet shows a middle-aged man with old-fashioned ’90s-style glasses, staring up and out, somewhere over our heads. A self-given title runs over his head. It is not “mathematician” but, rather, “Inter-universal Geometer.”

What does it mean? His website offers no clues. There are his papers, thousands of pages long, reams upon reams of dense mathematics. His resume is spare and formal. He reports his marital status as “Single (never married).” And then there is a page called Thoughts of Shinichi Mochizuki, which has only 17 entries. “I would like to report on my recent progress,” he writes, February 2009. “Let me report on my progress,” October 2009. “Let me report on my progress,” April 2010, June 2011, January 2012. Then follows math-speak. It is hard to tell if he is excited, daunted, frustrated, or enthralled.

Mochizuki has reported all this progress for years, but where is he going? This “inter-universal geometer,” this possible genius, may have found the key that would redefine number theory as we know it. He has, perhaps, charted a new path into the dark unknown of mathematics. But for now, his footsteps are untraceable. Wherever he is going, he seems to be travelling alone.