Monthly archive: January 2015

Do we need “the Anthropocene?” (Inhabiting the Anthropocene)

Zev Trachtenberg | January 5, 2015 at 7:00 am

As 2014 came to a close I received a wonderfully provocative e-mail from my friend and colleague in the Environmental Political Theory community John Meyer. He wrote that he has been led to

ask — out loud — a question that may seem either naive or cynical, but is not meant as either: so what’s the big deal about the Anthropocene? . . . To be clear, I get why it’s a big deal in geological terms. But what I’m wondering is: in what ways does it alter our understanding/approach/argument as philosophers, political theorists, political ecologists, environmental humanists, etc., that have already been working on environmental/sustainability concerns?

Does it add to or modify established critiques of “nature”? Does it convey an urgency that might otherwise be lacking? Does it alter our sense of human/more-than-human relations? Is it primarily a vehicle that might convey a set of concerns to a broader public? I know that none of these questions are original, but I pose them b/c I’m fascinated with the explosion of attention to the concept over the past couple years and yet genuinely struggling to make sense of the impetus/es for it.

This strikes me as a really good question. So as 2015 begins, here are some (I hope) seasonally appropriate reflections–not direct answers to John–on whether speaking about the Anthropocene adds some distinctive value to preexisting conversations about anthropogenic environmental change.

An immediate issue has to do with the status of the word as a term in Geology; in that context of course the Anthropocene is a proposed period in the geological time-scale, and it is an open question whether it will be formally adopted by the International Commission on Stratigraphy (the “ICS”—the decision is anticipated in 2016; here is the website for the working group handling the proposal). But the “explosion of attention” John mentions is due to the informal usage of the term to refer to the massive transformation of Earth systems by human beings. Reference to the Anthropocene lends a kind of scientific prestige; it may be that work in the Humanities (my own area) is particularly prone to the urge to bolster its relevance and credibility by affiliating itself with a scientific endorsement of the project of discussing human-induced environmental change. And that appeal (made explicitly or implicitly) to Geology seems to vindicate the sense that anthropogenic change is really happening.

There is, no doubt, a degree of “wow factor” to the idea that humanity has become a force of nature, akin to geological phenomena like volcanoes and earthquakes, and potentially just as cataclysmic. Reference to the Anthropocene seems to ground this amazing thought in the sober authority of dispassionate geologists attuned to processes that shape the Earth itself. To speak of the Anthropocene is thus to hitch one’s claims to a fundamental understanding of nature, which can help justify one’s own demands on one’s audience for belief, and for action. It is not impossible, therefore, that we are experiencing a bandwagon effect–that the term “Anthropocene” is functioning as a buzzword in what will turn out to be a passing wave of academic fashion. Its passage might be accelerated if people find that, after all, adding the term to studies of particular examples of anthropogenic environmental change does not in fact add any value. And I can’t help but wonder what would happen if the ICS ends up rejecting the term next year. Will that deflate an academic bubble? Or will there be an intensification of C.P. Snow’s split between two cultures?

My own sense is that the “buzzier” sense of “Anthropocene” in fact does have some value—though I want to acknowledge that it is probably not the best word for the job I want to approve. As a geological term “Anthropocene” refers to a hypothesized condition or set of facts about the Earth; it is the task of the ICS to decide whether that hypothesis is, in its best scientific judgment, true. But the informal usage of the word seems to connote a meaning over and above the idea that the present condition of the Earth has been profoundly shaped by human activity. On this additional meaning the word refers not to a condition, but to a broad intellectual approach. In this sense “Anthropocene” can be taken to name something like a paradigm: an intellectual framework which provides a consistent way of understanding diverse phenomena. The framework brings together a range of ideas and outlooks which harmonize around the theme that human activity has led to a distinctive condition of the Earth; it might therefore be called “Anthropocenism.” Thankfully I’ve not seen that word before—and hope never to again. But the absence of a viable name leaves the imprecise usage—of the name for the condition—in place as the label for the approach, i.e. for the cluster of views that overlap by attending to anthropogenic environmental change.

In other words, the recent “explosion of attention” to the Anthropocene John notices might reflect the emergence of a consensus across a fairly wide range of disciplines on how to think about the relationship between human beings and the physical environment. The concept may not add any new information to any given field—many of which have well-established traditions of examining that relationship. But, by redescribing ideas that are already available, it facilitates the recognition that disparate fields indeed address a common theme. The shared term holds out at least the potential that researchers with profoundly different interests can see in each other’s work ideas that can advance their own. At the risk of sounding Pollyannaish, I believe that the possibility that the Anthropocene proposal might facilitate disciplinary cross-fertilization means that the value it adds to existing work is not negligible.

What I’ve said so far is pretty general; I have not given much detail about the content of the “paradigm” I’ve suggested the term the Anthropocene should be taken to name. One hope for this blog is that that content might emerge out of the readings we are presenting in our reading posts. But I will conclude with a highly compressed (and too general) statement of what I take to be the core notions.

As the name of an outlook, the Anthropocene articulates the idea that human beings are natural: human life is embedded in the natural world. I draw two key implications from this starting point. First, while it is a commonplace of environmental thinking that our embeddedness means that human beings are essentially dependent on the causal processes at work in the natural world, embeddedness equally means that human actions have effects in the natural world; this fact is also essential to our status as natural beings. The causal continuity here points to a systemic understanding, whereby there is no clear conceptual distinction between human and natural domains. Second, the human character of the causal processes by which human beings affect the world is associated with technology. An image from the beginning of Stanley Kubrick’s 2001 conveys my point here. The proto-human creature becomes human by using a tool—the bone it uses as a weapon. It then tosses the bone in the air, and we next see a spacecraft. But the human character of human causality is at the same time social—and technology can only be understood in terms of the social and economic structures and processes through which it is developed and deployed.

As a matter of shorthand I interpret the Anthropocene (in the precise sense of a condition of the Earth) as the consequence of these two implications of naturalism: the socially organized deployment of technology so amplifies and concentrates human causal power that human activity can redirect or disrupt planetary-scale Earth system processes, yielding a state of the system best characterized by reference to human influence. But I am suggesting that we also use the term Anthropocene in a less precise way, to point to something like a paradigm. In that sense it gathers together empirical research that describes and explains the socially and technologically mediated effects human beings have on the world. Within this paradigm the project of understanding observations involves interpreting them in terms of the traces of human causal influence they might reveal. And that is why, I believe, this paradigm can successfully link normative inquiries to descriptive ones. For, by attending centrally to the structure and dynamics of human causal power within the natural world, it keeps in clear focus the issue of moral responsibility.

The Surprising Link Between Gut Bacteria And Anxiety (Huff Post)

Posted: 01/04/2015 10:05 am EST 

In recent years, neuroscientists have become increasingly interested in the idea that there may be a powerful link between the human brain and gut bacteria. And while a growing body of research has provided evidence of the brain-gut connection, most of these studies so far have been conducted on animals.

Now, promising new research from neurobiologists at Oxford University offers some preliminary evidence of a connection between gut bacteria and mental health in humans. The researchers found that supplements designed to boost healthy bacteria in the gastrointestinal tract (“prebiotics”) may have an anti-anxiety effect insofar as they alter the way that people process emotional information.

While probiotics consist of strains of good bacteria, prebiotics are carbohydrates that act as nourishment for those bacteria. With increasing evidence that gut bacteria may exert some influence on brain function and mental health, probiotics and prebiotics are being increasingly studied for the potential alleviation of anxiety and depression symptoms.

“Prebiotics are dietary fibers (short chains of sugar molecules) that good bacteria break down, and use to multiply,” the study’s lead author, Oxford psychiatrist and neurobiologist Dr. Philip Burnet, told The Huffington Post. “Prebiotics are ‘food’ for good bacteria already present in the gut. Taking prebiotics therefore increases the numbers of all species of good bacteria in the gut, which will theoretically have greater beneficial effects than [introducing] a single species.”

To test the efficacy of prebiotics in reducing anxiety, the researchers asked 45 healthy adults between the ages of 18 and 45 to take either a prebiotic or a placebo every day for three weeks. After the three weeks had passed, the participants completed several computer tests assessing how they processed emotional information, such as positively and negatively charged words.

The results of one of the tests revealed that subjects who had taken the prebiotic paid less attention to negative information and more attention to positive information, compared to the placebo group, suggesting that the prebiotic group had less anxiety when confronted with negative stimuli. This effect is similar to that which has been observed among individuals who have taken antidepressants or anti-anxiety medication.

The researchers also found that the subjects who took the prebiotics had lower levels of cortisol — a stress hormone which has been linked with anxiety and depression — in their saliva when they woke up in the morning.

While previous research has documented that altering gut bacteria has a similarly anxiety-reducing effect in mice, the new study is one of the first to examine this phenomenon in humans. As of now, research on humans is in its early stages. A study conducted last year at UCLA found that women who consumed probiotics through regularly eating yogurt exhibited altered brain function in both a resting state and when performing an emotion-recognition task.

“Time and time again, we hear from patients that they never felt depressed or anxious until they started experiencing problems with their gut,” Dr. Kirsten Tillisch, the study’s lead author, said in a statement. “Our study shows that the gut–brain connection is a two-way street.”

So are we moving towards a future in which mental illness can be treated (or at least managed) using targeted probiotic cocktails? Burnet says it’s possible, although such cocktails are unlikely to replace conventional treatment.

“I think pre/probiotics will only be used as ‘adjuncts’ to conventional treatments, and never as mono-therapies,” Burnet tells HuffPost. “It is likely that these compounds will help to manage mental illness… they may also be used when there are metabolic and/or nutritional complications in mental illness, which may be caused by long-term use of current drugs.”

The findings were published in the journal Psychopharmacology.

How Mathematicians Used A Pump-Action Shotgun to Estimate Pi (The Physics arXiv Blog)

The Physics arXiv Blog

If you’ve ever wondered how to estimate pi using a Mossberg 500 pump-action shotgun, a sheet of aluminium foil and some clever mathematics, look no further

Imagine the following scenario. The end of civilisation has occurred, zombies have taken over the Earth and all access to modern technology has ended. The few survivors suddenly need to know the value of π and, being a mathematician, they turn to you. What do you do?

If ever you find yourself in this situation, you’ll be glad of the work of Vincent Dumoulin and Félix Thouin at the Université de Montréal in Canada. These guys have worked out how to calculate an approximate value of π using the distribution of pellets from a Mossberg 500 pump-action shotgun, which they assume would be widely available in the event of a zombie apocalypse.

The principle is straightforward. Imagine a square with sides of length 1 and which contains an arc drawn between two opposite corners to form a quarter circle. The area of the square is 1 while the area of the quarter circle is π/4.

Next, sprinkle sand or rice over the square so that it is covered with a random distribution of grains. Then count the number of grains inside the quarter circle and the total number that cover the entire square.

The ratio of these two numbers is an estimate of the ratio between the area of the quarter circle and the square, in other words π/4.

So multiplying this ratio by 4 gives you π, or at least an estimate of it. And that’s it.

This technique is known as a Monte Carlo approximation (after the casino where the uncle of the physicist who developed it used to gamble). And it is hugely useful in all kinds of simulations.

Of course, the accuracy of the technique depends on the distribution of the grains on the square. If they are truly random, then a mere 30,000 grains can give you an estimate of π which is within 0.07 per cent of the actual value.
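
A minimal sketch, in Python with NumPy, of the sand-grain version of the estimate just described. The grain count, the seed and the library choice are illustrative assumptions, not details from the paper:

import numpy as np

# Scatter "grains" uniformly over the unit square and count how many land
# inside the quarter circle of radius 1 centred on one corner.
rng = np.random.default_rng(0)   # fixed seed, purely for reproducibility
n_grains = 30_000                # same order as the counts quoted above
x = rng.random(n_grains)
y = rng.random(n_grains)
inside = (x**2 + y**2) <= 1.0

# The fraction of grains inside the quarter circle estimates pi/4, so multiply by 4.
pi_estimate = 4.0 * inside.sum() / n_grains
print(pi_estimate)               # typically within a few tenths of a per cent of pi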

Dumoulin and Thouin’s idea is to use the distribution of shotgun pellets rather than sand or rice (which would presumably be in short supply in the post-apocalyptic world). So these guys set up an experiment consisting of a 28-inch barrel Mossberg 500 pump-action shotgun aimed at a sheet of aluminium foil some 20 metres away.

They loaded the gun with cartridges composed of 3 dram equivalent of powder and 32 grams of #8 lead pellets. When fired from the gun, these pellets have an average muzzle velocity of around 366 metres per second.

Dumoulin and Thouin then fired 200 shots at the aluminium foil, peppering it with 30,857 holes. Finally, they used the position of these holes in the same way as the grains of sand or rice in the earlier example, to calculate the value of π.

They immediately have a problem, however. The distribution of pellets is influenced by all kinds of factors, such as the height of the gun, the distance to the target, wind direction and so on. So this distribution is not random.

To get around this, they fall back on a technique known as importance sampling. This is a trick that allows mathematicians to estimate the properties of one type of distribution while using samples generated by a different distribution.

Of their roughly 30,000 pellet holes, they chose 10,000 at random to perform this estimation trick. They then used the remaining 20,000 pellet holes to get an estimate of π, safe in the knowledge that importance sampling allows the calculation to proceed as if the distribution of pellets had been random.
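
The paper’s own model of the pellet distribution is more careful than this, but purely as an illustration of the importance-sampling identity they rely on, here is a toy Python sketch. The simulated, clustered cloud of “holes”, the histogram density estimate, the seed and the bin count are all assumptions made for the example, not their method:

import numpy as np

# Stand-in for the pellet pattern: a non-uniform (clustered) cloud of "holes"
# on the unit square. In the paper the points come from the shotgun blast.
rng = np.random.default_rng(1)
holes = rng.normal(loc=0.45, scale=0.25, size=(30_000, 2))
holes = holes[(holes >= 0.0).all(axis=1) & (holes <= 1.0).all(axis=1)]

# Split the holes: one part to estimate the unknown sampling density q,
# the rest to compute the importance-sampled estimate of pi.
fit, use = holes[:10_000], holes[10_000:]

# Crude density estimate for q: a normalised 2D histogram over the unit square.
bins = 20
q_hist, _, _ = np.histogram2d(fit[:, 0], fit[:, 1], bins=bins,
                              range=[[0.0, 1.0], [0.0, 1.0]], density=True)

# Look up q(x, y) for each remaining hole.
ix = np.minimum((use[:, 0] * bins).astype(int), bins - 1)
iy = np.minimum((use[:, 1] * bins).astype(int), bins - 1)
q = q_hist[ix, iy]

# Importance sampling: the average of 1{inside} / q over samples drawn from q
# approximates the quarter-circle area pi/4, even though q is not uniform.
inside = (use[:, 0] ** 2 + use[:, 1] ** 2) <= 1.0
weights = np.zeros_like(q)
weights[q > 0] = 1.0 / q[q > 0]
pi_estimate = 4.0 * np.mean(inside * weights)
print(pi_estimate)   # a rough estimate of pi; the fitted model in the paper does better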

The result? Their value of π is 3.131, which is just 0.33 per cent off the true value. “We feel confident that ballistic Monte Carlo methods constitute reliable ways of computing mathematical constants should a tremendous civilization collapse occur,” they conclude.

Quite! Other methods are also available.

Ref: arxiv.org/abs/1404.1499: A Ballistic Monte Carlo Approximation of π

Quantum Experiment Shows How Time ‘Emerges’ from Entanglement (The Physics arXiv Blog)

Time is an emergent phenomenon that is a side effect of quantum entanglement, say physicists. And they have the first experimental results to prove it

The Physics arXiv Blog

When the new ideas of quantum mechanics spread through science like wildfire in the first half of the 20th century, one of the first things physicists did was to apply them to gravity and general relativity. The results were not pretty.

It immediately became clear that these two foundations of modern physics were entirely incompatible. When physicists attempted to meld the approaches, the resulting equations were bedeviled with infinities, making it impossible to make sense of the results.

Then in the mid-1960s, there was a breakthrough. The physicists John Wheeler and Bryce DeWitt successfully combined the previously incompatible ideas in a key result that has since become known as the Wheeler-DeWitt equation. This is important because it avoids the troublesome infinities—a huge advance.

But it didn’t take physicists long to realise that while the Wheeler-DeWitt equation solved one significant problem, it introduced another. The new problem was that time played no role in this equation. In effect, it says that nothing ever happens in the universe, a prediction that is clearly at odds with the observational evidence.

This conundrum, which physicists call ‘the problem of time’, has proved to be a thorn in the flesh of modern physicists, who have tried to ignore it but with little success.

Then in 1983, the theorists Don Page and William Wootters came up with a novel solution based on the quantum phenomenon of entanglement. This is the exotic property in which two quantum particles share the same existence, even though they are physically separated.

Entanglement is a deep and powerful link and Page and Wootters showed how it can be used to measure time. Their idea was that the way a pair of entangled particles evolve is a kind of clock that can be used to measure change.

But the results depend on how the observation is made. One way to do this is to compare the change in the entangled particles with an external clock that is entirely independent of the universe. This is equivalent to a god-like observer outside the universe measuring the evolution of the particles using an external clock.

In this case, Page and Wootters showed that the particles would appear entirely unchanging—that time would not exist in this scenario.

But there is another way to do it that gives a different result. This is for an observer inside the universe to compare the evolution of the particles with the rest of the universe. In this case, the internal observer would see a change, and this difference in the evolution of entangled particles compared with everything else is an important measure of time.

This is an elegant and powerful idea. It suggests that time is an emergent phenomenon that comes about because of the nature of entanglement. And it exists only for observers inside the universe. Any god-like observer outside sees a static, unchanging universe, just as the Wheeler-DeWitt equations predict.

Of course, without experimental verification, Page and Wootters’ ideas are little more than a philosophical curiosity. And since it is never possible to have an observer outside the universe, there seemed little chance of ever testing the idea.

Until now. Today, Ekaterina Moreva at the Istituto Nazionale di Ricerca Metrologica (INRIM) in Turin, Italy, and a few pals have performed the first experimental test of Page and Wootters’ ideas. And they confirm that time is indeed an emergent phenomenon for ‘internal’ observers but absent for external ones.

The experiment involves the creation of a toy universe consisting of a pair of entangled photons and an observer that can measure their state in one of two ways. In the first, the observer measures the evolution of the system by becoming entangled with it. In the second, a god-like observer measures the evolution against an external clock which is entirely independent of the toy universe.

The experimental details are straightforward. The entangled photons each have a polarisation that can be changed by passing the photon through a birefringent plate. In the first set-up, the observer measures the polarisation of one photon, thereby becoming entangled with it. He or she then compares this with the polarisation of the second photon. The difference is a measure of time.

In the second set-up, the photons again both pass through the birefringent plates, which change their polarisations. However, in this case, the observer only measures the global properties of both photons by comparing them against an independent clock.

In this case, the observer cannot detect any difference between the photons without becoming entangled with one or the other. And if there is no difference, the system appears static. In other words, time does not emerge.

“Although extremely simple, our model captures the two, seemingly contradictory, properties of the Page-Wootters mechanism,” say Moreva and co.

That’s an impressive experiment. Emergence is a popular idea in science. In particular, physicists have recently become excited about the idea that gravity is an emergent phenomenon. So it’s a relatively small step to think that time may emerge in a similar way.

What emergent gravity has lacked, of course, is an experimental demonstration that shows how it works in practice. That’s why Moreva and co’s work is significant. It places an abstract and exotic idea on firm experimental footing for the first time.

Perhaps most significant of all is the implication that quantum mechanics and general relativity are not so incompatible after all. When viewed through the lens of entanglement, the famous ‘problem of time’ just melts away.

The next step will be to extend the idea further, particularly to the macroscopic scale. It’s one thing to show how time emerges for photons; it’s quite another to show how it emerges for larger things such as humans and train timetables.

And therein lies another challenge.

Ref: arxiv.org/abs/1310.4691: Time From Quantum Entanglement: An Experimental Illustration

Why Physicists Are Saying Consciousness Is A State Of Matter, Like a Solid, A Liquid Or A Gas (The Physics arXiv Blog)

A new way of thinking about consciousness is sweeping through science like wildfire. Now physicists are using it to formulate the problem of consciousness in concrete mathematical terms for the first time

The Physics arXiv Blog

There’s a quiet revolution underway in theoretical physics. For as long as the discipline has existed, physicists have been reluctant to discuss consciousness, considering it a topic for quacks and charlatans. Indeed, the mere mention of the ‘c’ word could ruin careers.

That’s finally beginning to change thanks to a fundamentally new way of thinking about consciousness that is spreading like wildfire through the theoretical physics community. And while the problem of consciousness is far from being solved, it is finally being formulated mathematically as a set of problems that researchers can understand, explore and discuss.

Today, Max Tegmark, a theoretical physicist at the Massachusetts Institute of Technology in Cambridge, sets out the fundamental problems that this new way of thinking raises. He shows how these problems can be formulated in terms of quantum mechanics and information theory. And he explains how thinking about consciousness in this way leads to precise questions about the nature of reality that the scientific process of experiment might help to tease apart.

Tegmark’s approach is to think of consciousness as a state of matter, like a solid, a liquid or a gas. “I conjecture that consciousness can be understood as yet another state of matter. Just as there are many types of liquids, there are many types of consciousness,” he says.

He goes on to show how the particular properties of consciousness might arise from the physical laws that govern our universe. And he explains how these properties allow physicists to reason about the conditions under which consciousness arises and how we might exploit it to better understand why the world around us appears as it does.

Interestingly, the new approach to consciousness has come from outside the physics community, principally from neuroscientists such as Giulio Tononi at the University of Wisconsin in Madison.

In 2008, Tononi proposed that a system demonstrating consciousness must have two specific traits. First, the system must be able to store and process large amounts of information. In other words consciousness is essentially a phenomenon of information.

And second, this information must be integrated in a unified whole so that it is impossible to divide into independent parts. That reflects the experience that each instance of consciousness is a unified whole that cannot be decomposed into separate components.

Both of these traits can be specified mathematically allowing physicists like Tegmark to reason about them for the first time. He begins by outlining the basic properties that a conscious system must have.

Given that it is a phenomenon of information, a conscious system must be able to store information in a memory and retrieve it efficiently.

It must also be able to process this data, like a computer but one that is much more flexible and powerful than the silicon-based devices we are familiar with.

Tegmark borrows the term computronium to describe matter that can do this and cites other work showing that today’s computers underperform the theoretical limits of computing by some 38 orders of magnitude.

Clearly, there is more than enough room for improvement to allow for the performance of conscious systems.

Next, Tegmark discusses perceptronium, defined as the most general substance that feels subjectively self-aware. This substance should not only be able to store and process information but in a way that forms a unified, indivisible whole. That also requires a certain amount of independence in which the information dynamics is determined from within rather than externally.

Finally, Tegmark uses this new way of thinking about consciousness as a lens through which to study one of the fundamental problems of quantum mechanics known as the quantum factorisation problem.

This arises because quantum mechanics describes the entire universe using three mathematical entities: an object known as a Hamiltonian that describes the total energy of the system; a density matrix that describes the relationship between all the quantum states in the system; and Schrodinger’s equation which describes how these things change with time.
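
For reference, the time evolution alluded to here is the standard textbook one (nothing specific to Tegmark’s paper). In LaTeX notation:

i\hbar\,\frac{\partial}{\partial t}\lvert\psi(t)\rangle = H\,\lvert\psi(t)\rangle ,
\qquad
i\hbar\,\frac{\partial \rho}{\partial t} = [H,\rho]

where H is the Hamiltonian and ρ the density matrix; the second equation is simply the density-matrix (von Neumann) form of Schrödinger’s equation.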

The problem is that when the entire universe is described in these terms, there are an infinite number of mathematical solutions that include all possible quantum mechanical outcomes and many other even more exotic possibilities.

So the problem is why we perceive the universe as the semi-classical, three dimensional world that is so familiar. When we look at a glass of iced water, we perceive the liquid and the solid ice cubes as independent things even though they are intimately linked as part of the same system. How does this happen? Out of all possible outcomes, why do we perceive this solution?

Tegmark does not have an answer. But what’s fascinating about his approach is that it is formulated using the language of quantum mechanics in a way that allows detailed scientific reasoning. And as a result it throws up all kinds of new problems that physicists will want to dissect in more detail.

Take for example, the idea that the information in a conscious system must be unified. That means the system must contain error-correcting codes that allow any subset of up to half the information to be reconstructed from the rest.

Tegmark points out that any information stored in a special network known as a Hopfield neural net automatically has this error-correcting facility. However, he calculates that a Hopfield net about the size of the human brain, with 10^11 neurons, can store only 37 bits of integrated information.

“This leaves us with an integration paradox: why does the information content of our conscious experience appear to be vastly larger than 37 bits?” asks Tegmark.

That’s a question that many scientists might end up pondering in detail. For Tegmark, this paradox suggests that his mathematical formulation of consciousness is missing a vital ingredient. “This strongly implies that the integration principle must be supplemented by at least one additional principle,” he says. Suggestions please in the comments section!

And yet the power of this approach is in the assumption that consciousness does not lie beyond our ken; that there is no “secret sauce” without which it cannot be tamed.

At the beginning of the 20th century, a group of young physicists embarked on a quest to explain a few strange but seemingly small anomalies in our understanding of the universe. In deriving the new theories of relativity and quantum mechanics, they ended up changing the way we comprehend the cosmos. These physicists, at least some of them, are now household names.

Could it be that a similar revolution is currently underway at the beginning of the 21st century?

Ref: arxiv.org/abs/1401.1219: Consciousness as a State of Matter

Incoming head of Sabesp says the state must be ‘prepared for the worst’ (Folha de S.Paulo)

ARTUR RODRIGUES

FROM SÃO PAULO

02/01/2015 02h00

Chosen by the Geraldo Alckmin administration to run Sabesp, Jerson Kelman, former president of the ANA (Agência Nacional de Águas, Brazil’s national water agency), says the state must be “prepared for the worst”.

“The situation is worrying, it is serious, and we have to hope for the best but be prepared for the worst,” he told Folha.

In November, in an interview with the newspaper “Brasil Econômico”, he had said that the government had not acted in the best possible way and had not been as transparent as the Fernando Henrique Cardoso (PSDB) administration was in 2001, at the time of the energy rationing. Kelman was president of the ANA between 2001 and 2004.

“Alckmin must have had his reasons, because in 2002 it was Lula who won the elections, not Serra. That must have been his reasoning. But from the point of view of government, of the public interest, the response was not the best: water was wasted and the reservoirs are emptier than they should be,” he said in that interview.

After being re-elected, Alckmin began adopting tougher measures to reduce consumption, such as the surcharge on those who increase their usage.

The new secretary of Sanitation and Water Resources, Benedito Braga, says he will evaluate the results of the extra charge and is even considering additional measures.

The appointments of Kelman and of Ricardo Borsari, as superintendent of the Daee (Departamento de Águas e Energia Elétrica), were announced this Thursday (the 1st) by Braga.

The new team will face the worst water crisis in the history of the Cantareira system. On Thursday the system remained stable, at 7.2% of its capacity.

Officially, Kelman’s name still has to go through a Sabesp board before he takes office.

He is expected to replace Dilma Pena, whose standing was worn down by the crisis.

Kelman is a professor of Water Resources at COPPE-UFRJ. He has also been president of the Light group and director-general of Aneel (Agência Nacional de Energia Elétrica).

Below, his conversation with Folha.

Folha – In an earlier interview, you spoke of the need for more transparency around the water crisis.
Jerson Kelman – The concept of transparency is a universal one, which I always defend; there is no criticism implied there. I think the population of the state of São Paulo is capable of understanding the current hydrological conditions. The best policy is one of absolute transparency, of sharing with the population as clearly as possible.

You also said that the government’s handling of the crisis could perhaps have been better, and that the electoral question got in the way.
I am accepting this position looking forward; I don’t want to sit here throwing stones at the past. Looking forward, the situation in São Paulo is worrying, it is serious, and we have to hope for the best but be prepared for the worst.

You arrive during the rainy season, but preparing for the dry season. How are we going to reach that period?
In this rainy period now, which sometimes brings very strong storms, it has to be made clear, and the population kept constantly informed, that these short, intense rains do not solve the problem.
The water supply depends on the stock of water in the Cantareira system.
The bad thing that could happen would be a flood, cars floating away, the things that happen every summer, with the undesirable side effect of the population imagining that the problem is solved and relaxing the economical use of water it has been practising.

Telepathic particles (Folha de S.Paulo)

CASSIO LEITE VIEIRA

illustration by JOSÉ PATRÍCIO

28/12/2014 03h08

SUMMARY Fifty years ago, the Northern Irish physicist John Bell (1928-90) arrived at a result that demonstrates the “spooky” nature of reality in the atomic and subatomic world. His theorem is now seen as the most effective weapon against espionage, something that will guarantee, in a perhaps not-so-distant future, the absolute privacy of information.

*

A South American country wants to keep its strategic information private, but finds itself obliged to buy the equipment for that task from a far more technologically advanced country. Those devices, however, may be “bugged”.

The almost obvious question then arises: will there, in the future, be 100% guaranteed privacy? Yes. And that holds even for a country that buys its anti-espionage technology from the “enemy”.

What makes that affirmative answer possible is a result that has been called the most profound in science: Bell’s theorem, which deals with one of the sharpest and most penetrating philosophical questions ever asked, one that underpins knowledge itself: what is reality? The theorem, which this year marked its 50th anniversary, guarantees that reality, in its most intimate dimension, is unimaginably strange.

The story of the theorem, of its experimental confirmation and of its modern applications has several beginnings. Perhaps the most appropriate one here is a paper published in 1935 by the German-born physicist Albert Einstein (1879-1955) and two collaborators, the Russian Boris Podolsky (1896-1966) and the American Nathan Rosen (1909-95).

Known as the EPR paradox (after the initials of the authors’ surnames), the thought experiment described there summed up Einstein’s long-standing dissatisfaction with the direction quantum mechanics, the theory of phenomena at the atomic scale, had taken. At first, what left a bitter taste for the author of relativity was the fact that this theory, developed in the 1920s, provides only the probability that a phenomenon will occur. That contrasted with the “certainty” (determinism) of so-called classical physics, which governs macroscopic phenomena.

Einstein was, in truth, uneasy with his own creature, for he had been one of the fathers of quantum theory. After some initial reluctance, he ended up digesting the indeterminism of quantum mechanics. One thing, however, he could never swallow: non-locality, that is, the exceedingly strange fact that something here can instantaneously influence something there, even if that “there” is very far away. Einstein believed that distant things had independent realities.

Einstein went so far as to compare non-locality to a kind of telepathy (it is worth stressing that this is only an analogy). But the most famous name Einstein gave this strangeness was “spooky action at a distance”.

ENTANGLEMENT

The essence of the EPR argument is this: under special conditions, two particles that have interacted and then separated end up in a state called entangled, as if they were “telepathic twins”. Less pictorially, the particles are said to be connected (or correlated, as physicists prefer) and remain so even after the interaction.

The greater strangeness comes now: if one of the particles in such a pair is disturbed (that is, subjected to a measurement of any kind, as physicists say), the other one “feels” that disturbance instantaneously. And this is independent of the distance between the two particles. They could be separated by light-years.

The authors of the EPR paradox argued that it was impossible to imagine nature allowing an instantaneous connection between the two objects. And, through a complex chain of logical argument, Einstein, Podolsky and Rosen concluded: quantum mechanics must be incomplete. And therefore provisional.

FASTER THAN LIGHT?

A hasty (but very common) reading of the EPR paradox is to say that instantaneous action (non-local, in the vocabulary of physics) is impossible because it would violate Einstein’s relativity: nothing can travel faster than light in a vacuum, 300,000 km/s.

However, non-locality would act only at the microscopic level; it cannot be used, for example, to send or receive messages. In the macroscopic world, if we want to do that, we have to use signals that never travel faster than light in a vacuum. In other words, relativity is preserved.

Non-locality has to do with persistent (and mysterious) connections between two objects: interfering with (altering, changing, etc.) one of them interferes with (alters, changes, etc.) the other. Instantaneously. The mere act of observing one of them interferes with the state of the other.

Einstein did not like the final version of the 1935 paper, which he only saw once it was in print (the writing was left to Podolsky); he had imagined a less philosophical text. A few months later came the reply to EPR from the Danish physicist Niels Bohr (1885-1962). A few years earlier, Einstein and Bohr had starred in what many consider one of the most important philosophical debates in history; the theme was the “soul of nature”, in the words of one philosopher of physics.

In his reply to EPR, Bohr reaffirmed both the completeness of quantum mechanics and his anti-realist view of the atomic universe: one cannot say that a quantum entity (electron, proton, photon, etc.) has a property before that property is measured. In other words, such a property would not be real; it would not be hidden away, waiting for a measuring device or any interference (even a glance) from the observer. Einstein would later quip about this: “Does the Moon only exist when we look at it?”

AUTHORITY

One way to understand what a deterministic theory is goes as follows: it is a theory that presupposes that the property to be measured is present (or “hidden”) in the object and can be determined with certainty. Physicists give this kind of theory a very apt name: hidden-variable theory.

In a hidden-variable theory, the property in question (known or not) exists; it is real. Hence philosophers sometimes describe this scenario as realism. Einstein liked the term “objective reality”: things exist without needing to be observed.

But in the 1930s a theorem had proved that a version of quantum mechanics as a hidden-variable theory would be impossible. The feat belonged to one of the greatest mathematicians of all time, the Hungarian John von Neumann (1903-57). And, as is not rare in the history of science, the argument from authority prevailed over the authority of the argument.

Von Neumann’s theorem was perfect from a mathematical point of view, but “wrong, silly” and “childish” (as it came to be described) as physics, because it started from a mistaken premise. We know today that Einstein was suspicious of that premise: “Do we have to accept this as true?”, he asked two colleagues. But he went no further.

Von Neumann’s theorem served, however, to all but trample the deterministic (and therefore hidden-variable) version of quantum mechanics put forward in 1927 by the French nobleman Louis de Broglie (1892-1987), winner of the 1929 Nobel Prize in Physics, who ended up abandoning that line of research.

For exactly two decades, Von Neumann’s theorem and the ideas of Bohr, who had formed around himself an influential school of notable young physicists, discouraged attempts to seek a deterministic version of quantum mechanics.

But in 1952 the American physicist David Bohm (1917-92), inspired by De Broglie’s ideas, presented a hidden-variable version of quantum mechanics, today called Bohmian quantum mechanics in homage to the researcher, who worked in the 1950s at the Universidade de São Paulo (USP) while persecuted in the US by McCarthyism.

Bohmian quantum mechanics had two essential characteristics: 1) it was deterministic (that is, a hidden-variable theory); 2) it was non-local (that is, it admitted action at a distance), which made Einstein, a committed localist, lose his initial interest in it.

PROTAGONIST

Enter the main character of this story: the Northern Irish physicist John Stewart Bell, who, on learning of Bohmian mechanics, became certain of one thing: the “impossible had been done”. And more: Von Neumann was wrong.

Bohm’s quantum mechanics, ignored from the outset by the physics community, had just fallen on fertile ground: ever since university Bell had been mulling over, as a “hobby”, the philosophical foundations of quantum mechanics (EPR, Von Neumann, De Broglie, etc.). And he had taken sides in those debates: he was an avowed Einsteinian and found Bohr obscure.

Bell was born on 28 June 1928 in Belfast, into an Anglican family of modest means. He should have stopped studying at 14, but at the insistence of his mother, who saw the intellectual gifts of the second of her four children, he was sent to a technical secondary school, where he learned practical things (carpentry, construction, librarianship, etc.).

Having finished at 16, he tried office jobs, but fate had him end up as a technician preparing experiments in the physics department of Queen’s University, also in Belfast.

The lecturers soon noticed the technician’s interest in physics and began to encourage him, recommending readings and classes. With a scholarship, Bell graduated in 1948 in experimental physics and, the following year, in mathematical physics. In both cases, with honours.

From 1949 to 1960 Bell worked at the AERE (Atomic Energy Research Establishment) in Harwell, in the United Kingdom. There he would meet his future wife, the physicist Mary Ross, his interlocutor in several of his works on physics. “When I look through these papers again, I see her everywhere,” Bell said at a tribute in 1987, three years before his death from a cerebral haemorrhage.

He defended his doctorate in 1956, after a period at the University of Birmingham under the supervision of the German-British physicist Rudolf Peierls (1907-95). The thesis includes a proof of a very important theorem of physics (the CPT theorem), which had been discovered shortly before by a contemporary of his.

THE THEOREM

Disagreeing with the direction research was taking at the AERE, the couple decided to exchange stable jobs for temporary positions at CERN, the European Organization for Nuclear Research, in Geneva, Switzerland. He went to the theoretical physics division; she, to the accelerator division.

Bell spent 1963 and 1964 working in the US. There he found time to devote himself to his intellectual “hobby” and to gestate the result that would mark his career and, decades later, bring him fame.

He asked himself the following question: is the non-locality of Bohm’s hidden-variable theory a feature of any realist theory of quantum mechanics? In other words, if things exist without being observed, must they necessarily establish among themselves that spooky action at a distance?

Bell’s theorem, published in 1964, is also known as Bell’s inequality. Its mathematics is not complex. In a very simplified form, we can think of the theorem as an inequality: x ≤ 2 (x less than or equal to two), where “x” represents, for our purposes here, the results of an experiment.

The most interesting consequences of Bell’s theorem would arise if such an experiment violated the inequality, that is, showed that x > 2 (x greater than two). In that case, we would have to give up one of two assumptions: 1) realism (things exist without being observed); 2) locality (the quantum world does not allow faster-than-light connections).
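
The article does not spell out which inequality is meant; the standard modern form, the CHSH inequality (the one tested in the best-known photon experiments described below), can be written in LaTeX as

S = E(a,b) - E(a,b') + E(a',b) + E(a',b') , \qquad \lvert S \rvert \le 2 \ \ \text{(local realism)}

where a, a' and b, b' are measurement settings on the two particles and E(\cdot,\cdot) is the measured correlation. Quantum mechanics allows values of |S| up to 2\sqrt{2} \approx 2.83, so the article’s “x” can be read as playing the role of |S|.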

The paper containing the theorem did not make much of an impact at first. Bell had written another paper before it, essential for reaching the result, but owing to an error by the journal’s editor it ended up being published only in 1966.

REBELLION

The revival of Bell’s ideas, and with them EPR and Bohm, gained momentum from factors outside physics. Many years after the turbulent late 1960s, the American physicist John Clauser would recall the period: “The Vietnam War dominated the political thoughts of my generation. Being a young physicist in that revolutionary period, I naturally wanted to shake the world.”

Science, like the rest of the world, ended up marked by the spirit of the peace-and-love generation; by the struggle for civil rights; by May 1968; by Eastern philosophies; by psychedelic drugs; by telepathy. In a word: by rebellion. Which, translated into physics, meant devoting oneself to an area regarded as heretical in academia: the interpretations (or foundations) of quantum mechanics. But doing so considerably increased the chances of a young physicist ruining his career: EPR, Bohm and Bell were considered philosophical subjects, not physics.

The final element that gave this taboo field of study some breathing room was the 1973 oil crisis, which reduced the supply of positions for young researchers, physicists included. To rebellion was added recession.

Clauser, together with three colleagues, Abner Shimony, Richard Holt and Michael Horne, published his first ideas on the subject in 1969, under the title “Proposed Experiment to Test Local Hidden-Variable Theories”. The quartet did so in part because they had noticed that Bell’s inequality could be tested with photons, which are easier to generate. Until then, more complicated experimental arrangements had been envisaged.

In 1972 that proposal became an experiment, carried out by Clauser and Stuart Freedman (1944-2012), and Bell’s inequality was violated.

The world seemed to be non-local (ironically, Clauser was a localist!). But only seemed: for about a decade the experiment remained misunderstood and therefore disregarded by the physics community. Still, those results helped reinforce something important: the foundations of quantum mechanics were not just philosophy. They were also experimental physics.

CHANGE OF SCENE

Improvements in optical equipment (including lasers) allowed an experiment carried out in 1982 to become a classic of the field.

Shortly before, the French physicist Alain Aspect had decided to begin a late doctorate, despite already being an experienced experimental physicist. He chose Bell’s theorem as his topic and went to meet his Northern Irish colleague at CERN. In an interview with the physicist Ivan dos Santos Oliveira, of the Centro Brasileiro de Pesquisas Físicas in Rio de Janeiro, and with the author of this article, Aspect recounted the following exchange with Bell. “Do you have a permanent position?”, Bell asked. “Yes,” said Aspect. Otherwise, “you would be under a lot of pressure not to do the experiment,” said Bell.

The exchange recounted by Aspect lets us say that, almost two decades after the seminal 1964 paper, the subject was still cloaked in prejudice.

In an experiment with pairs of entangled photons, nature once again showed its non-local character: Bell’s inequality was violated. The data showed x > 2. In 2007, for example, the group of the Austrian physicist Anton Zeilinger verified the violation of the inequality using photons separated by… 144 km.

In the interview in Brazil, Aspect said that until then the theorem had been barely known among physicists, but that it would become famous after his doctoral thesis, on whose examining committee, incidentally, Bell sat.

STRANGE

After all, why does nature allow Einsteinian “telepathy”? It is strange, to say the least, to think that a particle disturbed here can somehow alter the state of its companion at the far reaches of the universe.

There are several ways to interpret the consequences of what Bell did. To begin with, some (very) mistaken ones: 1) non-locality cannot exist, because it would violate relativity; 2) hidden-variable theories of quantum mechanics (Bohm, De Broglie, etc.) are completely ruled out; 3) quantum mechanics really is indeterministic; 4) irrealism (that is, things only exist when observed) is the final word. The list is long.

When the theorem was published, a shallow (and erroneous) reading held that it did not matter, since Von Neumann’s theorem had already ruled out hidden variables and quantum mechanics would therefore indeed be indeterministic. Among those who do not accept non-locality, there are even some who go so far as to say that Einstein, Bohm and Bell did not understand what they had done.

The American philosopher of physics Tim Maudlin, of New York University, offers a long list of such misunderstandings in two excellent articles, “What Bell Did” (arxiv.org/abs/1408.1826) and “Reply to Werner” (in which he responds to comments on the previous text, arxiv.org/abs/1408.1828).

For Maudlin, renowned in his field, Bell’s theorem and its violation mean one thing only: nature is non-local (“spooky”), and so there is no hope for locality, as Einstein would have wished; in that sense, one can say that Bell showed that Einstein was wrong. Thus any deterministic (realist) theory that reproduces the experimental results obtained so far by quantum mechanics (incidentally, the most precise theory in the history of science) will necessarily have to be non-local.

From Aspect to the present day, important technological developments have made possible something unthinkable a few decades ago: studying a single quantum entity (atom, electron, photon, etc.) in isolation. And that gave rise to the field of quantum information, which encompasses the study of quantum cryptography, the kind that will allow absolute data security, and of quantum computers, extremely fast machines. In a way, it is philosophy turned into experimental physics.

Many of these advances are owed, at bottom, to the rebelliousness of a generation of young physicists who wanted to defy the “system”.

A delicious account of that period can be found in “How the Hippies Saved Physics” (W. W. Norton & Company, 2011), by the American historian of physics David Kaiser. A detailed historical analysis is in “Quantum Dissidents: Research on the Foundations of Quantum Theory circa 1970” (bit.ly/1xyipTJ, subscribers only), by the historian of physics Olival Freire Jr., of the Universidade Federal da Bahia.

For those more interested in the philosophical angle, there are the two award-winning volumes of “Conceitos de Física Quântica” (Editora Livraria da Física, 2003), by the physicist and philosopher Osvaldo Pessoa Jr., of USP.

PRIVACY

By this point the reader may be wondering what Bell’s theorem has to do with 100% guaranteed privacy.

In the future, it is (quite) likely that information will be sent and received in the form of entangled photons. Recent research in quantum cryptography guarantees that it would suffice to subject these particles of light to the test of Bell’s inequality. If the inequality is violated, then there is no possibility that the message has been improperly snooped on. And the test does not depend on the equipment used to send or receive the photons. The theoretical basis for this can be found, for example, in “The Ultimate Physical Limits of Privacy”, by Artur Ekert and Renato Renner (bit.ly/1gFjynG, subscribers only).

In a not-too-distant future, perhaps, Bell’s theorem will become the most powerful weapon against espionage. That is a tremendous comfort for a world that seems to be heading toward zero privacy. It is also an immense outgrowth of a philosophical question that, according to the American physicist Henry Stapp, a specialist in the foundations of quantum mechanics, became “the most profound result of science”. Deservedly so. After all, why did nature opt for “spooky action at a distance”?

The answer is a mystery. A pity that the question is not even mentioned in undergraduate physics courses in Brazil.

CÁSSIO LEITE VIEIRA, 54, a journalist at Instituto Ciência Hoje (RJ), is the author of “Einstein – O Reformulador do Universo” (Odysseus).
JOSÉ PATRÍCIO, 54, a visual artist from Pernambuco, is taking part in the exhibition “Asas a Raízes” at Caixa Cultural do Rio, from 17/1 to 15/3.

How to get published in an academic journal: top tips from editors (The Guardian)

Journal editors share their advice on how to structure a paper, write a cover letter – and deal with awkward feedback from reviewers.

 How to negotiate the many hurdles that stand between a draft paper and publication. Photograph: Clint Hughes/PA

Writing for academic journals is highly competitive. Even if you overcome the first hurdle and generate a valuable idea or piece of research – how do you then sum it up in a way that will capture the interest of reviewers?

There’s no simple formula for getting published – editors’ expectations can vary both between and within subject areas. But there are some challenges that will confront all academic writers regardless of their discipline. How should you respond to reviewer feedback? Is there a correct way to structure a paper? And should you always bother revising and resubmitting? We asked journal editors from a range of backgrounds for their tips on getting published.

The writing stage

1) Focus on a story that progresses logically, rather than chronologically

Take some time before even writing your paper to think about the logic of the presentation. When writing, focus on a story that progresses logically, rather than the chronological order of the experiments that you did.
Deborah Sweet, editor of Cell Stem Cell and publishing director at Cell Press

2) Don’t try to write and edit at the same time

Open a file on the PC and put in all your headings and sub-headings and then fill in under any of the headings where you have the ideas to do so. If you reach your daily target (mine is 500 words) put any other ideas down as bullet points and stop writing; then use those bullet points to make a start the next day.

If you are writing and can’t think of the right word (eg for elephant) don’t worry – write (big animal long nose) and move on – come back later and get the correct term. Write, don’t edit; otherwise you lose flow.
Roger Watson, editor-in-chief, Journal of Advanced Nursing

3) Don’t bury your argument like a needle in a haystack

If someone asked you on the bus to quickly explain your paper, could you do so in clear, everyday language? This clear argument should appear in your abstract and in the very first paragraph (even the first line) of your paper. Don’t make us hunt for your argument as for a needle in a haystack. If it is hidden on page seven that will just make us annoyed. Oh, and make sure your argument runs all the way through the different sections of the paper and ties together the theory and empirical material.
Fiona Macaulay, editorial board, Journal of Latin American Studies

4) Ask a colleague to check your work

One of the problems that journal editors face is badly written papers. It might be that the writer’s first language isn’t English and they haven’t gone the extra mile to get it proofread. It can be very hard to work out what is going on in an article if the language and syntax are poor.
Brian Lucey, editor, International Review of Financial Analysis

5) Get published by writing a review or a response 

Writing reviews is a good way to get published – especially for people who are in the early stages of their career. It’s a chance to practise writing a piece for publication, and get a free copy of a book that you want. We publish more reviews than papers so we’re constantly looking for reviewers.

Some journals, including ours, publish replies to papers that have been published in the same journal. Editors quite like to publish replies to previous papers because it stimulates discussion.
Yujin Nagasawa, co-editor and review editor of the European Journal for Philosophy of Religion, philosophy of religion editor of Philosophy Compass

6) Don’t forget about international readers

We get people who write from America who assume everyone knows the American system – and the same happens with UK writers. Because we’re an international journal, we need writers to include that international context.
Hugh McLaughlin, editor in chief, Social Work Education – the International Journal

7) Don’t try to cram your PhD into a 6,000 word paper

Sometimes people want to throw everything in at once and hit too many objectives. We get people who try to tell us their whole PhD in 6,000 words and it just doesn’t work. More experienced writers will write two or three papers from one project, using a specific aspect of their research as a hook.
Hugh McLaughlin, editor in chief, Social Work Education – the International Journal

Submitting your work

8) Pick the right journal: it’s a bad sign if you don’t recognise any of the editorial board

Check that your article is within the scope of the journal that you are submitting to. This seems so obvious but it’s surprising how many articles are submitted to journals that are completely inappropriate. It is a bad sign if you do not recognise the names of any members of the editorial board. Ideally look through a number of recent issues to ensure that it is publishing articles on the same topic and that are of similar quality and impact.
Ian Russell, editorial director for science at Oxford University Press

9) Always follow the correct submissions procedures

Often authors don’t spend the 10 minutes it takes to read the instructions to authors, which wastes enormous quantities of time for both the author and the editor and stretches the process when it does not need to be.
Tangali Sudarshan, editor, Surface Engineering

10) Don’t repeat your abstract in the cover letter

We look to the cover letter for an indication from you about what you think is most interesting and significant about the paper, and why you think it is a good fit for the journal. There is no need to repeat the abstract or go through the content of the paper in detail – we will read the paper itself to find out what it says. The cover letter is a place for a bigger picture outline, plus any other information that you would like us to have.
Deborah Sweet, editor of Cell Stem Cell and publishing director at Cell Press

11) A common reason for rejections is lack of context

Make sure that it is clear where your research sits within the wider scholarly landscape, and which gaps in knowledge it’s addressing. A common reason for articles being rejected after peer review is this lack of context or lack of clarity about why the research is important.
Jane Winters, executive editor of the Institute of Historical Research’s journal, Historical Research and associate editor of Frontiers in Digital Humanities: Digital History

12) Don’t over-state your methodology

Ethnography seems to be the trendy method of the moment, so lots of articles submitted claim to be based on it. However, closer inspection reveals quite limited and standard interview data. A couple of interviews in a café do not constitute ethnography. Be clear – early on – about the nature and scope of your data collection. The same goes for the use of theory. If a theoretical insight is useful to your analysis, use it consistently throughout your argument and text.
Fiona Macaulay, editorial board, Journal of Latin American Studies

Dealing with feedback

13) Respond directly (and calmly) to reviewer comments

When resubmitting a paper following revisions, include a detailed document summarising all the changes suggested by the reviewers, and how you have changed your manuscript in light of them. Stick to the facts, and don’t rant. Don’t respond to reviewer feedback as soon as you get it. Read it, think about it for several days, discuss it with others, and then draft a response.
Helen Ball, editorial board, Journal of Human Lactation 

14) Revise and resubmit: don’t give up after getting through all the major hurdles

You’d be surprised how many authors who receive the standard “revise and resubmit” letter never actually do so. But it is worth doing – some authors who get asked to do major revisions persevere and end up getting their work published, yet others, who had far less to do, never resubmit. It seems silly to get through the major hurdles of writing the article, getting it past the editors and back from peer review only to then give up.
Fiona Macaulay, editorial board, Journal of Latin American Studies

15) It is acceptable to challenge reviewers, with good justification

It is acceptable to decline a reviewer’s suggestion to change a component of your article if you have a good justification, or can (politely) argue why the reviewer is wrong. A rational explanation will be accepted by editors, especially if it is clear you have considered all the feedback received and accepted some of it.
Helen Ball, editorial board of Journal of Human Lactation

16) Think about how quickly you want to see your paper published

Some journals rank more highly than others and so your risk of rejection is going to be greater. People need to think about whether or not they need to see their work published quickly – because certain journals will take longer. Some journals, like ours, also do advance access so once the article is accepted it appears on the journal website. This is important if you’re preparing for a job interview and need to show that you are publishable.
Hugh McLaughlin, editor in chief, Social Work Education – the International Journal

17) Remember: when you read published papers you only see the finished article

Publishing in top journals is a challenge for everyone, but it may seem easier for other people. When you read published papers you see the finished article, not the first draft, nor the first revise and resubmit, nor any of the intermediate versions – and you never see the failures.
Philip Powell, managing editor of the Information Systems Journal