All posts by renzotaddei

About renzotaddei

Anthropologist, professor at the Federal University of São Paulo

Mixed Methods Should Be a Valued Practice in Anthropology (Anthropology News)

METHODS

By Thomas S Weisner

1 May 2012

Methods are systematic, socially agreed upon ways to represent the world. Mixed methods integrate qualitative and quantitative evidence through intentional efforts to focus “on research questions that call for real-life contextual understandings, multi-level perspectives, and cultural influences” (Creswell et al., 2011, Best Practices for Mixed Methods Research in the Health Sciences, p. 4).

Good anthropology will always benefit from the widest variety of data. High quality examples of combining qualitative and quantitative methods abound in anthropology today and have done so throughout our history. Although ethnography and qualitative methods remain central, it has always been true that other methods are commonly used as well in every field of anthropology.

SOME EXAMPLES
Elinor Ochs and colleagues at UCLA assembled what is arguably the richest family database in the world today (combining video, sociolinguistic, ethnographic, questionnaire, daily diary, material possession, stress hormone and other evidence) in their study of the everyday lives of two-parent, middle class working Los Angeles families and their children (www.celf.ucla.edu). Robert LeVine and collaborators combined sociolinguistic, ethnographic, systematic observational, demographic, historical and child assessment methods in their study of the connections between women’s gains in literacy, lower completed family size, improved health and changes in maternal care in communities around the world (Literacy and Mothering: How Women’s Schooling Changes the Lives of the World’s Children, 2012). The New Hope community based work and family support study (Duncan, Huston and Weisner, Higher Ground: New Hope for the Working Poor and their Children, 2007) used a random-assignment social experiment, survey, questionnaire, child assessment and qualitative ethnographic fieldwork to discover why the program was successful in improving the well-being of parents and children, and yet why sometimes only selectively so.

Andrew Fuligni, Nancy Gonzalez and I currently collaborate on a study of the daily activities, family responsibilities and obligations, and academic and behavioral outcomes of 428 Mexican American immigrant teens and parents in Los Angeles (first, second and later generations, documented and not). Methods include 14-day consecutive daily diaries, survey and questionnaire data, and school and behavior assessments. In addition, a 10% nested random sample of parents and teens from this larger sample participates in a qualitative study conducted in the families’ homes. We gave cameras to adolescents in ninth and tenth grades with instructions to take 25 pictures of people, places, events and activities important to them. We plugged the cameras into our laptops and talked with the teens about their photos. We asked questions such as: Who are these friends; oh, you have a boyfriend? Tell me more about your soccer team. That’s your Mom cooking; what do you do for chores? That’s one of your teachers? What class is it; how is school going? Teens also photograph pictures of other family members, such as grandparents they cannot visit in Mexico; one took a photo of the moon, mentioning the film Under the Same Moon (La Misma Luna).

The narratives can then be recorded, transcribed and uploaded to a web-based mixed methods software tool such as Dedoose (www.Dedoose.com). Indexing and coding are a matter of dragging and dropping codes onto the relevant portions of the text. Quantitative data from the larger study are also uploaded and linked to adolescent and parent narratives and photos. Narratives can be coded; patterns in quantitative data can be enriched qualitatively. The same fieldworkers who went to the homes and did the interviews also often worked on analyses of the quantitative data.
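
For readers who want to see what this kind of linking looks like in practice, here is a minimal sketch of the underlying idea, written in Python with pandas rather than in Dedoose itself; the participant IDs, codes, and variable names are invented for illustration only.

```python
# Illustrative sketch only: this is not Dedoose's API, just the underlying idea of
# linking coded qualitative excerpts to quantitative survey measures by participant ID.
# All field names and values below are hypothetical.
import pandas as pd

# Coded narrative excerpts produced by fieldworkers
excerpts = pd.DataFrame([
    {"participant_id": 101, "code": "family_obligation", "excerpt": "I help my mom cook after school."},
    {"participant_id": 101, "code": "peer_network",      "excerpt": "My soccer team practices on Saturdays."},
    {"participant_id": 102, "code": "family_obligation", "excerpt": "I translate at the clinic for my grandmother."},
])

# Quantitative measures from the larger survey study
survey = pd.DataFrame([
    {"participant_id": 101, "gpa": 3.4, "diary_chore_hours": 1.5},
    {"participant_id": 102, "gpa": 2.9, "diary_chore_hours": 2.75},
])

# Link each coded excerpt to that participant's survey measures,
# so patterns in the numbers can be read alongside the narratives.
linked = excerpts.merge(survey, on="participant_id", how="left")

# Simple mixed-methods query: GPA among teens with at least one excerpt
# coded "family_obligation".
subset = linked.loc[linked["code"] == "family_obligation", ["participant_id", "gpa"]].drop_duplicates()
print(subset)
print("Mean GPA:", subset["gpa"].mean())
```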

STRENGTH OF INTEGRATED METHODS
Methods and research designs are languages understood across the social sciences. To the extent that we can speak those languages in our work, we are more likely to draw those in other disciplines into conversation with us. A study that creatively integrates quantitative and qualitative methods sends a positive message to those fluent in only qualitative or quantitative methods that we take their methods (and so their identities and ideas) seriously. The increased credibility of our own and others’ work that often results is itself a criterion for successful mixed methods research. The use of integrated methods is growing across the social sciences; psychology (eg, Yoshikawa, et al, Developmental Psychology 44[344–54]), sociology (eg, Mario Small in Annual Review of Sociology 37[57–86]), psychiatry (Palinkas, et al, Psychiatric Services 62[3]), public health (Plano-Clark, Qualitative Inquiry 16[6]), political science, education, economics and other fields are benefiting and sometimes looking to anthropology for collaboration. Policy and practice research benefits hugely from integrating qualitative and quantitative methods. Funders increasingly see integrated methods as a strength in grant proposals.

The stark binary contrast of the “two Q’s”—qualitative vs quantitative—is not very useful; it restricts our thinking and limits our conversations. The two-Q framing oversimplifies the debates and obscures important goals shared by all methods. A better narrative and discourse about methods would use a richer conceptual framework. The proper contrast with quantitative levels of measurement (ordinal, interval, and ratio scales) is nominal or categorical levels (words, categories, narratives, themes, patterns); both are useful. The contrast with naturalistic research should not be experimental research but research that is contrived or controlled in some systematic way to aid understanding. A useful framework for anthropology would distinguish person- and experience-centered, or context-centered, methods from variable-centered methods, not impose a qualitative/quantitative binary. Such a methods conversation could then focus on the most important Q—our common questions.

Many of us use ethnographic settings, events or activities as our units of analysis to be sure we do not bracket out context that provides essential meaning. However, inquiry across levels of analysis beyond settings and beyond projects often requires mixed methods. We often deal with suspicions about the “bias” of ethnographic and qualitative methods. Mixed methods do not necessarily lead to common findings; there is method variance just as there is expectable heterogeneity, conflict and inconsistency in cultural beliefs and practices themselves. A more useful question is whether our methods have been systematically context-examined or remain context-unexamined—since all methods (whether qualitative or quantitative) entail a context or a set of presumptions and methods effects of some kinds.

Quantitative methods and statistical analyses have guidelines and procedures (not uncontested, of course) for deciding whether they are done well—whether they meet accepted standards and should be published and disseminated, for example. These include judgments of reliability, validity, sample size and representativeness or generalizability, power, and so forth. Qualitative and ethnographic work can and should have recognized criteria as well, such as breadth, depth, holism, veridicality, specificity of context, meaning-centeredness, narrative and behavioral coherence, shared cognitions, interpretive richness, and others. These are of course more variable, and not so easy to define, yet they are valuable and defensible if carefully described. These should be in addition to explicit descriptions of sampling, setting, and so forth. Reasonable, flexible mixed methods criteria are being developed in these respects (Weisner and Fiese in Journal of Family Psychology 25[6]). Recent NIH guidelines have been developed for the use of mixed methods in health research and in applications for funding (Creswell et al., 2011).

METHODS PLURALISM IN ANTHROPOLOGY
I would guess—or at least hope—that most anthropologists are fairly tolerant pluralists regarding methods. Most of us appreciate the vast range of qualitative and ethnographic methods and their integration, as in Russ Bernard’s Research Methods in Anthropology: Qualitative and Quantitative Approaches (2011). I suspect many if not most of us generally agree with this view or use mixed methods in our own research and teaching, and regularly cite such work even if we don’t do it ourselves. If we don’t do quantitative research, we may have partnered with others who do and are interested in similar questions, or we may have taught courses using books and papers with quantitative evidence. And yet it is fair to say that those who critique quantitative methods, or dismiss systematic methods altogether, including mixed methods, sometimes seek, without justification in my view, to claim the dominant position. To the contrary: the future of our field and the social sciences is far more likely to be characterized by interdisciplinary methodological pluralism, often including integrated mixed methods. Anthropology should be at the forefront of such research and practice, not critiquing from the margins or simply ignoring important methodological and research design innovations.

Donald Campbell long ago described this more modest, pluralist, pragmatic, skeptical, empirically based approach to methods: he argued that all methods are valuable and important, but that all methods are also weak in the sense that they are incomplete representations of the incredibly complex world we hope to understand. Hence we should use the widest range of methods, so that the weaknesses of one method can be complemented by the strengths of another, so that phenomena best or only represented by narrative, text, photos or sound are represented that way, and phenomena best or only represented with numbers, variables and models are represented quantitatively. As a result, we will get closer to understanding the world, and to persuading others of the truth of what we discover and believe.

Thomas S Weisner (www.tweisner.com) is a professor of anthropology in the departments of psychiatry and anthropology at UCLA and director of the Center for Culture & Health. His research and teaching interests are in culture and human development; medical, psychological and cultural studies of families and children at risk; mixed methods; and evidence-informed policy.

Brazilian soap operas project the image of a white country, Mozambican writer criticizes (Agência Brasil)

17/04/2012 – 15h35

Alex Rodrigues
Reporter, Agência Brasil

Brasília – “We are afraid of Brazil.” It was with this unexpected outburst that the Mozambican novelist Paulina Chiziane caught the attention of the audience at the seminar Contemporary African Literature, part of the program of the 1st Biennial of the Book and of Reading, in Brasília (DF). She was referring to the effects of the presence in Mozambique of Brazilian churches and temples and of cultural products such as the telenovelas, which, in her view, convey a false image of the country.

“For us Mozambicans, the image of Brazil is that of a white or, at most, mixed-race country. The only successful black Brazilian we recognize as such is Pelé. In the telenovelas, which are what define the image we have of Brazil, we only see black people as porters or domestic servants. At the top [of the social representation] are the whites. This is the image Brazil is selling to the world,” the author criticized, stressing that these representations help perpetuate the racial and social inequalities that exist in her country.

“From seeing, over and over in the telenovelas, the white person giving orders and the black person sweeping and carrying, Mozambicans come to see that situation as apparently normal,” Paulina argues, pointing to the same social arrangement in her own country.

The presence of Brazilian churches on Mozambican soil also has negative impacts on the country’s culture, in the writer’s assessment. “When one or several churches arrive and tell us that our way of believing is not correct, that the best belief is the one they bring, that means destroying a cultural identity. There is no respect for local beliefs. In African culture, a healer is not only the traditional doctor but also the keeper of part of popular history and culture,” Paulina noted, criticizing the governments of both countries for allowing the intervention of these institutions.

The first woman to publish a book in Mozambique, Paulina tries to escape stereotypes in her work, above all those that confine women to the role of dependents, incapable of thinking for themselves, conditioned only to serve.

“I am very fond of the poets of my country, but I have never found, in the literature that men write, the portrait of a whole woman. It is always the mouth, the legs, a single aspect. Never the infinite wisdom that comes from women,” said Paulina, recalling that, until European colonization, it fell to women to perform the narrative role and to transmit knowledge.

“Before colonialism, art and literature were feminine. It was up to women to tell the stories and thus socialize the children. With the colonial system and the use of the imperial system of education, men began to learn to write and to tell the stories. For that very reason, even today there are few women writers in Mozambique,” said Paulina.

“Even after independence [in 1975], we went on writing from the European education we had received, carrying the stereotypes and prejudices that had been passed on to us. African wisdom properly speaking, the wisdom known by women, remains excluded. Not to mention that more than half of the Mozambican population does not speak Portuguese, and few authors write in other Mozambican languages,” said Paulina.

During the biennial, the Mozambican writer’s book Niketche, uma história de poligamia (Niketche: A Story of Polygamy) was relaunched.

The U.S. Has Fallen Behind in Numerical Weather Prediction: Part I

March 28, 2012 – 05:00 AM
By Dr. Cliff Mass (Twitter @CliffMass)

It’s a national embarrassment. It has resulted in large unnecessary costs for the U.S. economy and needless endangerment of our citizens. And it shouldn’t be occurring.

What am I talking about? The third rate status of numerical weather prediction in the U.S. It is a huge story, an important story, but one the media has not touched, probably from lack of familiarity with a highly technical subject. And the truth has been buried or unavailable to those not intimately involved in the U.S. weather prediction enterprise. This is an issue I have mentioned briefly in previous blogs, and one many of you have asked to learn more about. It’s time to discuss it.

Weather forecasting today is dependent on numerical weather prediction, the numerical solution of the equations that describe the atmosphere. The technology of weather prediction has improved dramatically during the past decades as faster computers, better models, and much more data (mainly satellites) have become available.

Supercomputers are used for numerical weather prediction.

U.S. numerical weather prediction has fallen to third or fourth place worldwide, with the clear leader in global numerical weather prediction (NWP) being the European Centre for Medium-Range Weather Forecasts (ECMWF). And we have also fallen behind in ensembles (using many model runs to give probabilistic predictions) and in high-resolution operational forecasting. We used to be the world leader in numerical weather prediction: NWP began and was perfected here in the U.S. Ironically, we have the largest weather research community in the world and the largest collection of universities doing cutting-edge NWP research (like the University of Washington!). Something is very, very wrong, and I will talk about some of the issues here. And our nation needs to fix it.

But to understand the problem, you have to understand the competition and the players. And let me apologize upfront for the acronyms.

In the U.S., numerical weather prediction mainly takes place at the National Weather Service’s Environmental Modeling Center (EMC), a part of NCEP (National Centers for Environmental Prediction). They run a global model (GFS) and regional models (e.g., NAM).

The Europeans banded together decades ago to form the European Centre for Medium-Range Weather Forecasts (ECMWF), which runs a very good global model. Several European countries run regional models as well.

The United Kingdom Met Office (UKMET) runs an excellent global model and regional models. So does the Canadian Meteorological Center (CMC).

There are other major global NWP centers, such as the Japan Meteorological Agency (JMA), the U.S. Navy (FNMOC), the Australian center, and one in Beijing, among others. All of these centers collect worldwide data and do global NWP.

The problem is that both objective and subjective comparisons indicate that the U.S. global model is number 3 or number 4 in quality, resulting in forecasts that are noticeably inferior to the competition. Let me show you a rather technical graph (produced by the NWS) that illustrates this. The figure shows the quality of the 500 hPa forecast (about halfway up in the troposphere, approximately 18,000 ft) at day 5. The top graph is a measure of forecast skill (closer to 1 is better) from 1996 to 2012 for several models (U.S. GFS: black; ECMWF: red; Canadian CMC: blue; UKMET: green; Navy FNG: orange). The bottom graph shows the difference between the U.S. model’s skill and that of the other centers.

You first notice that forecasts are all getting better. That’s good. But you will notice that the most skillful forecast (closest to one) is clearly the red one…the European Center. The second best is the UKMET office. The U.S. (GFS model) is third…roughly tied with the Canadians.

Here is a global model comparison done by the Canadian Meteorological Center, for various global models from 2009-2012 for the 120 h forecast. This is a plot of error (RMSE, root mean square error) again for 500 hPa, and only for North America. Guess who is best again (lowest error)?–the European Center (green circle). UKMET is next best, and the U.S. (NCEP, blue triangle) is back in the pack.

Let’s look at short-term errors. Here is a plot from a paper by Garrett Wedam, Lynn McMurdie and me comparing various models at 24, 48, and 72 hr for sea level pressure along the West Coast. A bigger bar means more error. Guess who has the lowest errors by far? You guessed it: ECMWF.

I could show you a hundred of these plots, but the answers are very consistent. ECMWF is the worldwide gold standard in global prediction, with the British (UKMET) second. We are third or fourth (with the Canadians). One way to describe this is that the ECMWF model is not only better at short range, but has about one day of additional predictability: their 8-day forecast is about as skillful as our 7-day forecast. Another way to look at it is that, given the current upward trend in skill, they are 5-7 years ahead of the U.S.
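
For readers curious how such comparisons are scored, here is a minimal sketch of the two standard verification measures behind plots like these: root mean square error and the anomaly correlation used in the skill figure above. The fields below are synthetic placeholders, not real 500 hPa analyses.

```python
# A minimal verification sketch, assuming forecast, verifying analysis, and climatology
# fields of 500 hPa geopotential height on the same grid. Values are synthetic.
import numpy as np

def rmse(forecast, verifying):
    """Root mean square error over the grid (same units as the field, e.g. meters)."""
    return np.sqrt(np.mean((forecast - verifying) ** 2))

def anomaly_correlation(forecast, verifying, climatology):
    """Anomaly correlation coefficient: 1.0 is a perfect forecast; values near 0.6
    are often taken as the rough limit of useful skill."""
    f_anom = forecast - climatology
    v_anom = verifying - climatology
    return np.sum(f_anom * v_anom) / np.sqrt(np.sum(f_anom ** 2) * np.sum(v_anom ** 2))

# Synthetic example fields (500 hPa heights in meters on a small global grid)
rng = np.random.default_rng(0)
climatology = 5500 + 50 * rng.standard_normal((73, 144))
truth = climatology + 80 * rng.standard_normal((73, 144))
forecast = truth + 30 * rng.standard_normal((73, 144))   # an imperfect forecast

print("RMSE (m):", round(float(rmse(forecast, truth)), 1))
print("Anomaly correlation:", round(float(anomaly_correlation(forecast, truth, climatology)), 3))
```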

Most forecasters understand the frequent superiority of the ECMWF model. If you read the NWS forecast discussion, which is available online, you will frequently read how forecasters depend not on the U.S. model but on the ECMWF. And during the January western WA snowstorm, it was the ECMWF model that first indicated the correct solution. Recently, I talked to the CEO of a weather/climate-related firm that was moving up to Seattle. I asked what model they were using: the U.S. GFS? He laughed: of course not. They were using the ECMWF.

A lot of U.S. firms are using the ECMWF, and this is very costly, because the Europeans charge a lot for access to their gridded forecasts (hundreds of thousands of dollars per year). Can you imagine how many millions of dollars are being spent by U.S. companies to secure ECMWF predictions? But the cost of the inferior NWS forecasts is far greater than that, because many users cannot afford the ECMWF grids, and the NWS uses its global predictions to drive the higher-resolution regional models, which are NOT duplicated by the Europeans. All of U.S. NWP is dragged down by these second-rate forecasts, and the costs for the nation have to be huge, since so much of our economy is weather sensitive. Inferior NWP must be costing billions of dollars, perhaps many billions.

The question all of you must be wondering is why this bad situation exists. How did the most technologically advanced country in the world, with the largest atmospheric sciences community, end up with third-rate global weather forecasts? I believe I can tell you…in fact, I have been working on this issue for several decades (with little to show for it). Some reasons:

1. The U.S. has inadequate computer power available for numerical weather prediction. The ECMWF is running models with substantially higher resolution than ours because they have more resources available for NWP. This is simply ridiculous: the U.S. can afford the processors and disk space it would take. We are talking about millions, or at most tens of millions, of dollars to have the hardware we need. Part of the problem has been NWS procurement, which is not forward-leaning and relies on heavy-metal IBM machines at very high cost.

2. The U.S. has used inferior data assimilation. A key aspect of NWP is to assimilate the observations to create a good description of the atmosphere. The European Centre, the UKMET Office, and the Canadians use 4DVAR, an advanced approach that requires lots of computer power. We use an older, inferior approach (3DVAR). The Europeans have been using 4DVAR for 20 years! Right now, the U.S. is working on another advanced approach (ensemble-based data assimilation), but it is not operational yet.

3. The NWS numerical weather prediction effort has been isolated and has not taken advantage of the research community. NCEP’s Environmental Modeling Center (EMC) is well known for its isolation and “not invented here” attitude. While the European Centre has lots of visitors and workshops, such things are a rarity at EMC. Interactions with the university community have been limited, and EMC has been reluctant to use the models and approaches developed by the U.S. research community. (True story: some of the advances in probabilistic weather prediction at the UW have been adopted by the Canadians, while the NWS had little interest.) The National Weather Service has invested very little in extramural research, and when its budget is under pressure, university research is the first thing it reduces. And the U.S. NWP center has been housed in a decaying building outside of D.C., one too small for its needs as well. (Good news: a new building should be available soon.)

4. The NWS approach to weather-related research has been ineffective and divided. Government weather research is NOT in the NWS but rather in other parts of NOAA. Thus, the head of the NWS and his leadership team do not have authority over the people doing research in support of their mission. This has been an extraordinarily ineffective and wasteful system, with the NOAA research teams doing work that often has marginal benefit for the NWS.

5. Lack of leadership. This is the key issue. The folks in NCEP, NWS, and NOAA leadership have been willing to accept third-class status, providing lots of excuses, but not making the fundamental changes in organization and priority that could deal with the problem. Lack of resources for NWP is another issue…but that is a decision made by NOAA/NWS/Dept of Commerce leadership.

This note is getting long, so I will wait to talk about the other problems in the NWS weather modeling efforts, such as our very poor ensemble (probabilistic) prediction systems. One could write a paper on this…and I may.

I should stress that I am not alone in saying these things. A blue-ribbon panel did a review of NCEP in 2009 and came to similar conclusions (found here). And these issues are frequently noted at conferences, workshops, and meetings.

Let me note that the above is about the modeling aspects of the NWS, NOT the many people in the local forecast offices. That part of the NWS is first-rate. They suffer from inferior U.S. guidance and fortunately have access to the ECMWF global forecasts. And there are some very good people at NCEP who have lacked the resources and the suitable organization necessary to push forward effectively.

This problem at the National Weather Service is not a weather prediction problem alone, but an example of a deeper national malaise. It is related to other U.S. issues, like our inferior K-12 education system. Our nation, having gained world leadership in almost all areas, became smug, self-satisfied, and a bit lazy. We lost the impetus to be the best. We were satisfied to coast. And this attitude must end…in weather prediction, education, and everything else… or we will see our nation sink into mediocrity.

The U.S. can reclaim leadership in weather prediction, but I am not hopeful that things will change quickly without pressure from outside of the NWS. The various weather user communities and our congressional representatives must deliver a strong message to the NWS that enough is enough, that the time for accepting mediocrity is over. And the Weather Service requires the resources to be first rate, something it does not have at this point.

*  *  *

Saturday, April 7, 2012

Lack of Computer Power Undermines U.S. Numerical Weather Prediction (Revised)

In my last blog on this subject, I provided objective evidence of how U.S. numerical weather prediction (NWP), and particularly our global prediction skill, lags behind major international centers, such as the European Centre for Medium-Range Weather Forecasts (ECMWF), the UKMET office, and the Canadian Meteorological Center (CMC). I mentioned briefly how the problem extends to high-resolution weather prediction over the U.S. and to the use of ensemble (many model runs) weather prediction, both globally and over the U.S. Our nation is clearly number one in meteorological research, and we certainly have the knowledge base to lead the world in numerical weather prediction, but for a number of reasons we are not. The cost of inferior weather prediction is huge: in lives lost, injuries sustained, and economic impacts unmitigated. Truly, a national embarrassment. And one we must change.

In this blog, I will describe in some detail one major roadblock in giving the U.S. state-of-the-art weather prediction:  inadequate computer resources.   This situation should clearly have been addressed years ago by leadership in the National Weather Service, NOAA, and the Dept of Commerce, but has not, and I am convinced will not without outside pressure.  It is time for the user community and our congressional representatives to intervene.  To quote Samuel L. Jackson, enough is enough. (…)

In the U.S. we are trying to use fewer computer resources to do more tasks than the global leaders in numerical weather prediction. (Note: U.S. NWP is done by the National Centers for Environmental Prediction’s (NCEP) Environmental Modeling Center (EMC).) This chart tells the story:
Courtesy of Bill Lapenta, EMC.
ECMWF does global high resolution and ensemble forecasts, and seasonal climate forecasts.  UKMET office also does regional NWP (England is not a big country!) and regional air quality.  NCEP does all of this plus much, much more (high resolution rapid update modeling, hurricane modeling, etc.).   And NCEP has to deal with prediction over a continental-size country.

If you expect that the U.S. has a lot more computer power to balance all these responsibilities and tasks, you would be very wrong. Right now the U.S. NWS has two IBM supercomputers, each with 4,992 processors (IBM Power6 processors). One computer does the operational work; the other is for backup (research and testing runs are done on the backup). That is about 70 teraflops (trillion floating-point operations per second) for each machine.

NCEP (U.S.) Computer
The European Centre has a newer IBM machine with 8,192 much faster processors that achieves 182 teraflops (yes, over twice as fast, and with far fewer tasks to do).

The UKMET office, serving a far, far smaller country, has two newer IBM machines, each with 7680 processors for 175 teraflops per machine.

Here is a figure, produced at NCEP, that compares the relative computer power of NCEP’s machine with the European Centre’s. The shading indicates computational activity, and the x-axis for each represents a 24-h period. The relative heights allow you to compare computer resources. Not only does the ECMWF have much more computer power, but they are more efficient in using it, packing useful computations into every available minute.

Courtesy of Bill Lapenta, EMC
Recently, NCEP issued a request for proposals for a replacement computer system. You may not believe this, but the specifications were ONLY for a system at least equal to the one they have. A report in a computer magazine suggests that this new system (IBM got the contract) might be slightly less powerful (around 150 teraflops) than one of the UKMET office systems, but that is not known at this point.

The Canadians?  They have TWO machines like the European Centre’s!

So what kind of system does NCEP require to serve the nation in a reasonable way?

To start, we need to double the resolution of our global model to bring it into line with ECMWF (they are now at 15 km globally). Such resolution allows the global model to resolve regional features (such as our mountains). Doubling horizontal resolution requires 8 times more computer power. We need to use better physics (descriptions of things like cloud processes and radiation). Double again. And we need better data assimilation (better use of observations to provide an improved starting point for the model). Double once more. So we need 32 times more computer power for the high-resolution global runs to allow us to catch up with ECMWF. Furthermore, we must do the same thing for the ensembles (running many lower-resolution global simulations to get probabilistic information): 32 times more computer resources for that (we can use some of the gaps in the schedule of the high-resolution runs to fit some of this in; that is what ECMWF does). There are some potential ways NCEP can work more efficiently as well. Right now NCEP runs our global model out to 384 hours four times a day (every six hours). To many of us this seems excessive; perhaps the longest periods (180 hr plus) could be done twice a day. So let’s begin with a computer 32 times faster than the current one.
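
To make the arithmetic explicit, here is the same back-of-the-envelope calculation written out; the factors of two are the rough assumptions stated above, not measured engineering costs.

```python
# A restatement of the scaling argument above, under the stated assumptions:
# doubling horizontal resolution costs roughly 8x (2x in each horizontal direction
# plus a shorter time step), and better physics and better data assimilation each
# cost roughly another 2x. These factors are the article's rough estimates.
resolution_factor = 2 * 2 * 2    # finer grid in x and y, plus a halved time step
physics_factor = 2               # more expensive cloud/radiation physics
assimilation_factor = 2          # more expensive data assimilation

deterministic = resolution_factor * physics_factor * assimilation_factor
print("High-resolution global runs:", deterministic, "x current computer power")   # 32

ensemble = deterministic         # the same scaling applied to the ensemble system
print("Ensemble system:", ensemble, "x current computer power")                    # 32
```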

Many workshops and meteorological meetings (such as one on improvements in model physics that was held at NCEP last summer—I was the chair) have made a very strong case that the U.S. requires an ensemble prediction system that runs at 4-km horizontal resolution. The current national ensemble system has a horizontal resolution of about 32 km, and the NWS plans to get to about 20 km in a few years; both are inadequate. Here is an example of the ensemble output (mean of the ensemble members) for the NWS and UW (4 km) ensemble systems: the difference is huge. The NWS system does not even get close to modeling the impacts of the mountains. It is similarly unable to simulate large convective systems.

Current NWS (NCEP) “high resolution” ensembles (32 km)
4 km ensemble mean from UW system
Let me make one thing clear. Probabilistic prediction based on ensemble forecasts and reforecasting (running models back over past years to get statistics of performance) is the future of weather prediction. The days of giving a single number for, say, temperature at day 5 are over. We need to let people know about uncertainty and probabilities. The NWS needs a massive increase in computer power to do this. It lacks this computer power now and does not seem destined to get it soon.
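
As a simple illustration of what “probabilities instead of a single number” means in practice, here is a minimal sketch using an invented 21-member ensemble; a real system would use calibrated, bias-corrected members.

```python
# Illustrative only: a hypothetical 21-member ensemble of day-5 temperature
# forecasts for one location (deg C). Values are made up.
import numpy as np

members = np.array([1.1, -0.4, 2.3, 0.8, -1.5, 0.2, 1.9, 3.0, -0.8, 0.6,
                    1.4, 2.2, -0.2, 0.9, 1.7, 0.3, 2.6, -1.1, 1.2, 0.1, 0.7])

threshold = 0.0                           # e.g., probability of freezing
prob_below = np.mean(members < threshold)  # fraction of members below the threshold

print(f"Ensemble mean: {members.mean():.1f} C")
print(f"10th-90th percentile range: {np.percentile(members, 10):.1f} to {np.percentile(members, 90):.1f} C")
print(f"Probability of temperature below {threshold} C: {prob_below:.0%}")
```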

A real champion within NOAA of the need for more computer power is Tom Hamill, an expert on data assimilation and model post-processing.   He and colleagues have put together a compelling case for more NWS computer resources for NWP.  Read it here.

Back-of-the-envelope calculations indicate that a good first step, 4-km national ensembles, would require about 20,000 processors to run in a timely manner, but it would revolutionize weather prediction in the U.S., including forecasting of convection and in mountainous areas. This high-resolution ensemble effort would meld with data assimilation over the long term.

And then there is running super-high resolution numerical weather prediction to get fine-scale details right.  Here in the NW my group runs a 1.3 km horizontal resolution forecast out twice a day for 48h.   Such capability is needed for the entire country.  It does not exist now due to inadequate computer resources.

The bottom line is that the NWS numerical modeling effort needs a huge increase in computer power to serve the needs of the country, and the potential impacts would be transformative. We could go from having a third-place effort, which is slipping back into the pack, to being a world leader. Furthermore, the added computer power would finally allow NOAA to complete Observing System Simulation Experiments (OSSEs) and Observing System Experiments (OSEs) to make rational decisions about acquisitions of very expensive satellite systems. The fact that this is barely done today is really amazing and a potential waste of hundreds of millions of dollars on unnecessary satellite systems.

But to do so will require a major jump in computational power, a jump our nation can easily afford. I would suggest that NWS’s EMC should begin by securing at least a 100,000-processor machine, and down the road something considerably larger. Keep in mind that my department has about 1,000 processors in our computational clusters, so this is not as large a jump as you might think.

For a country with several billion-dollar weather disasters a year, investment in reasonable computer resources for NWP is obvious.
The cost? Well, I asked Art Mann of Silicon Mechanics (a really wonderful local vendor of computer clusters) to give me a rough quote: using fast AMD chips, you could have such a 100K-core machine for 11 million dollars. (This is without any discount!) OK, this is the U.S. government and they like expensive, heavy-metal machines, so let’s go for 25 million dollars. The National Center for Atmospheric Research (NCAR) is getting a new machine with around 75,000 processors, and the cost will be around 25-35 million dollars. NCEP will want two machines, so let’s budget 60 million dollars. We spend this much money on a single jet fighter, but we can’t invest this amount to greatly improve forecasts and public safety in the U.S.? We have machines far larger than this for breaking codes, doing simulations of thermonuclear explosions, and simulating climate change.

Yes, a lot of money, but I suspect the cost of the machine would be paid back in a few months from improved forecasts.   Last year we had quite a few (over ten) billion-dollar storms….imagine the benefits of forecasting even a few of them better.  Or the benefits to the wind energy and utility industries, or U.S. aviation, of even modestly improved forecasts.   And there is no doubt such computer resources would improve weather prediction.  The list of benefits is nearly endless.   Recent estimates suggest that  normal weather events cost the U.S. economy nearly 1/2 trillion dollars a year.  Add to that hurricanes, tornadoes, floods, and other extreme weather.  The business case is there.

As someone with an insider’s view of the process, it is clear to me that the current players are not going to move effectively without some external pressure. In fact, the budgetary pressure on the NWS is very intense right now, and they are cutting away muscle and bone at this point (like reducing IT staff in the forecast offices by over 120 people and cutting back on extramural research). I believe it is time for weather-sensitive industries and local government, together with the general public, to let NOAA management and our congressional representatives know that this acute problem needs to be addressed, and addressed soon. We are acquiring huge computer resources for climate simulations, but only a small fraction of that for weather prediction, which can clearly save lives and help the economy. Enough is enough.

Posted by Cliff Mass Weather Blog at 8:38 PM

Best Practices Are the Worst (Education Next)

SUMMER 2012 / VOL. 12, NO. 3 – http://educationnext.org/

As reviewed by Jay P. Greene

“Best practices” is the worst practice. The idea that we should examine successful organizations and then imitate what they do if we also want to be successful is something that first took hold in the business world but has now unfortunately spread to the field of education. If imitation were the path to excellence, art museums would be filled with paint-by-number works.

The fundamental flaw of a “best practices” approach, as any student in a half-decent research-design course would know, is that it suffers from what is called “selection on the dependent variable.” If you only look at successful organizations, then you have no variation in the dependent variable: they all have good outcomes. When you look at the things that successful organizations are doing, you have no idea whether each one of those things caused the good outcomes, had no effect on success, or was actually an impediment that held organizations back from being even more successful. An appropriate research design would have variation in the dependent variable; some have good outcomes and some have bad ones. To identify factors that contribute to good outcomes, you would, at a minimum, want to see those factors more likely to be present where there was success and less so where there was not.
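
To see why selection on the dependent variable is fatal, consider a small simulation (a sketch with made-up numbers): one practice genuinely raises the chance of success and another is irrelevant, yet among the “successful” organizations alone the two look identical.

```python
# A hypothetical simulation of the design flaw described above. "practice_a" truly
# raises the probability of success; "practice_b" is irrelevant. Looking only at
# winners cannot distinguish them; comparing winners with losers can.
import random

random.seed(1)

def simulate_org():
    practice_a = random.random() < 0.5          # causal practice, adopted by half of orgs
    practice_b = random.random() < 0.7          # irrelevant practice, adopted by most orgs
    p_success = 0.7 if practice_a else 0.3      # only practice_a actually matters
    return {"a": practice_a, "b": practice_b, "success": random.random() < p_success}

orgs = [simulate_org() for _ in range(10000)]
winners = [o for o in orgs if o["success"]]
losers = [o for o in orgs if not o["success"]]

def rate(group, key):
    return 100 * sum(o[key] for o in group) / len(group)

# "Best practices" view: only the winners, no variation in the outcome
print("Successful orgs:   practice A %.0f%%, practice B %.0f%%" % (rate(winners, "a"), rate(winners, "b")))

# Proper comparison: winners vs. losers reveals which practice tracks success
print("Unsuccessful orgs: practice A %.0f%%, practice B %.0f%%" % (rate(losers, "a"), rate(losers, "b")))
```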

“Best practices” lacks scientific credibility, but it has been a proven path to fame and fortune for pop-management gurus like Tom Peters, with In Search of Excellence, and Jim Collins, with Good to Great. The fact that many of the “best” companies they featured subsequently went belly-up—like Atari and Wang Computers, lauded by Peters, and Circuit City and Fannie Mae, by Collins—has done nothing to impede their high-fee lecture tours. Sometimes people just want to hear a confident person with shiny teeth tell them appealing stories about the secrets to success.

With Surpassing Shanghai, Marc Tucker hopes to join the ranks of the “best practices” gurus. He, along with a few of his colleagues at the National Center on Education and the Economy, has examined the education systems in some other countries with successful outcomes so that the U.S. can become similarly successful. Tucker coauthors the chapter on Japan, as well as an introductory and two concluding chapters. Tucker’s collaborators write chapters featuring Shanghai, Finland, Singapore, and Canada. Their approach to greatness in American education, as Linda Darling-Hammond phrases it in the foreword, is to ensure that “our strategies must emulate the best of what has been accomplished in public education both from here and abroad.”

But how do we know what those best practices are? The chapters on high-achieving countries describe some of what those countries are doing, but the characteristics they feature may have nothing to do with success or may even be a hindrance to greater success. Since the authors must pick and choose what characteristics they highlight, it is also quite possible that countries have successful education systems because of factors not mentioned at all. Since there is no scientific method to identifying the critical features of success in the best-practices approach, we simply have to trust the authority of the authors that they have correctly identified the relevant factors and have properly perceived the causal relationships.

But Surpassing Shanghai is even worse than the typical best-practices work, because Tucker’s concluding chapters, in which he summarizes the common best practices and draws policy recommendations, have almost no connection to the preceding chapters on each country. That is, the case studies of Shanghai, Finland, Japan, Singapore, and Canada attempt to identify the secrets to success in each country, a dubious-enough enterprise, and then Tucker promptly ignores all of the other chapters when making his general recommendations.

Tucker does claim to be drawing on the insights of his coauthors, but he never actually references the other chapters in detail. He never names his coauthors or specifically draws on them for his conclusions. In fact, much of what Tucker claims as common lessons of what his coauthors have observed from successful countries is contradicted in chapters that appear earlier in the book. And some of the common lessons they do identify, Tucker chooses to ignore.

For example, every country case study in Surpassing Shanghai, with the exception of the one on Japan coauthored by Marc Tucker, emphasizes the importance of decentralization in producing success. In Shanghai the local school system “received permission to create its own higher education entrance examination. This heralded a trend of exam decentralization, which was key to localized curricula.” The chapter on Finland describes the importance of the decision “to devolve increasing levels of authority and responsibility for education from the Ministry of Education to municipalities and schools…. [T]here were no central initiatives that the government was trying to push through the system.” Singapore is similarly described: “Moving away from the centralized top-down system of control, schools were organized into geographic clusters and given more autonomy…. It was felt that no single accountability model could fit all schools. Each school therefore set its own goals and annually assesses its progress toward meeting them…” And the chapter on Canada teaches us that “the most striking feature of the Canadian system is its decentralization.”

Tucker makes no mention of this common decentralization theme in his conclusions and recommendations. Instead, he claims the opposite as the common lesson of successful countries: “students must all meet a common basic education standard aligned to a national or provincial curriculum… Further, in these countries, the materials prepared by textbook publishers and the publishers of supplementary materials are aligned with the national curriculum framework.” And “every high-performing country…has a unit of government that is clearly in charge of elementary and secondary education…In such countries, the ministry has an obligation to concern itself with the design of the system as a whole…”

Conversely, Tucker emphasizes that “the dominant elements of the American education reform agenda” are noticeably absent from high-performing countries, including “the use of market mechanisms, such as charter schools and vouchers….” But if Tucker had read the chapter on Shanghai, he would have found a description of a system by which “students choose schools in other neighborhoods by paying a sponsorship fee. It is the Chinese version of school choice, a hot issue in the United States.” And although the chapter on Canada fails to make any mention of it, Canada has an extensive system of school choice, offering options that vary by language and religious denomination. According to recently published research by David Card, Martin Dooley, and Abigail Payne, competition among these options is a significant contributor to academic achievement in Canada.

There is a reason that promoters of best-practices approaches are called “gurus.” Their expertise must be derived from a mystical sphere, because it cannot be based on a scientific appraisal of the evidence. Marc Tucker makes no apology for his nonscientific approach. In fact, he denounces “the clinical research model used in medical research” when assessing education policies. The problem, he explains, is that no country would consent to “randomly assigning entire national populations to the education systems of another country or to certain features of the education system of another country.” On the contrary, countries, states, and localities can and do randomly assign “certain features of the education system,” and we have learned quite a lot from that scientific process. In the international arena, Tucker may want to familiarize himself with the excellent work being done by Michael Kremer and Karthik Muralidharan utilizing random assignment around the globe.

In addition, social scientists have developed practices to observe and control for differences in the absence of random assignment that have allowed extensive and productive analyses of the effectiveness of educational practices in different countries. In particular, the recent work of Ludger Woessmann, Martin West, and Eric Hanushek has utilized the PISA and TIMSS international test results that Tucker finds so valuable, but they have done so with the scientific methods that Tucker rejects. Even well-constructed case study research, like that done by Charles Glenn, can draw useful lessons across countries. The problem with the best-practices approach is not entirely that it depends on case studies, but that by avoiding variation in the dependent variable it prevents any scientific identification of causation.

Tucker’s hostility to scientific approaches is more understandable, given that his graduate training was in theater rather than a social science. Perhaps that is also why Tucker’s book reminds me so much of The Music Man. Tucker is like “Professor” Harold Hill come to town to sell us a bill of goods. His expertise is self-appointed, and his method, the equivalent of “the think system,” is obvious quackery. And the Gates Foundation, which has for some reason backed Tucker and his organization with millions of dollars, must be playing the residents of River City, because they have bought this pitch and are pouring their savings into a band that can never play music except in a fantasy finale.

Best practices really are the worst.

Jay P. Greene is professor of education reform at the University of Arkansas and a fellow at the George W. Bush Institute.

Surpassing Shanghai: An Agenda for American Education Built on the World’s Leading Systems
Edited by Marc Tucker
Harvard Education Press, 2011, $49.99; 288 pages.

For the first time in Brazil, anthropologist Roy Wagner talks with Indigenous people of the Amazon (A Crítica)

Author of “The Invention of Culture,” Roy Wagner met Indigenous people of South America for the first time and took part in a ritual

Manaus, August 8, 2011
ELAÍZE FARIAS

North American anthropologist talks with Indians of the Amazon. PHOTO: ALEXANDRE FONSECA/ACRITICA

“Every understanding of another culture is an experiment with our own,” says the American Roy Wagner, one of the leading names in contemporary anthropology worldwide, in the book “The Invention of Culture.”

It was exactly this equivalence between cultures that Roy Wagner experienced on his first visit to the Amazon, last week.

In Manaus, Wagner delivered the opening lecture of the academic year, took part in a round table with Indigenous undergraduate and graduate students of the Federal University of Amazonas (Ufam), visited two malocas (longhouses) of Indigenous groups living in the rural area of the Amazonian capital, and witnessed what he called “multi-perspectives.”

Author of the theory of “the invention and the notion of culture,” which gave rise to the concept of “reverse anthropology,” Wagner became known for the studies he has carried out since the 1960s in Melanesia and New Guinea (Oceania). But only now, at 73, has he had the opportunity to meet the native peoples of South America.

On Saturday (the 6th), his last day in Manaus, Roy Wagner met and took part in a ritual of the Tukano, Tuyuka, and Dessana Indians, in a maloca located four hours from Manaus by riverboat.

At the maloca, the Tuyuka Indian Higino Tuyuka, who came from São Gabriel da Cachoeira (851 kilometers from Manaus), a town where 90% of the population is Indigenous, just to take part in the activities and talk with Roy Wagner at the events, gave a demonstration of an initiation ritual and presented the anthropologist with a traditional drink called kahpí, which has hallucinogenic effects and is intended for men only.

Perspectives

“There are many perspectives meeting here. I do not consider it an encounter between a native culture and an anthropologist, but between cultures sharing the same spaces,” Wagner told the acrítica.com portal at the end of the experience with the Indigenous people.

This was the first time Wagner had contact with the native peoples of South America since he began his work as an ethnographer and anthropologist.

In the activities held in Manaus, he took part in “an exchanged conversation about cosmologies” and identified similarities between the Amerindians and the peoples he studied in Oceania.

The main one concerns the relationship between humans and animals. “In Australia, the Aborigines have a relationship, in their cosmology, with crows. Animals are incorporated into the human world. Here, we see that the Indigenous people have an association with fish. They are the fish-people,” he said.

In his dialogue with the Brazilian Indigenous people, however, Wagner says he found one specific characteristic: a preference for “origins.” “The peoples here talk a great deal about the beginning, about origins, about the morning star, in contrast, for example, with the Aboriginal peoples, who speak more of the setting sun, toward death,” he observed.

Intellectuals

Roy Wagner came to Manaus through an initiative of the Brasil Plural Institute, which links the Federal University of Amazonas and the Federal University of Santa Catarina.

His visit to Amazonas was not initially planned. Invited by professors of the Graduate Program in Social Anthropology (PPGAS) at Ufam, he accepted the invitation to talk with Indigenous intellectuals, both professors and students.

The anthropologist’s schedule includes lectures in Florianópolis (SC), Brasília (DF), Rio de Janeiro (RJ), and São Paulo (SP).

“Indigenous intellectuals are those who hold their own specific forms of knowledge. Some are not necessarily people who have been through university, but they hold deep knowledge,” explained Professor Carlos Dias of the PPGAS.

Carlos Dias said that Roy Wagner was very impressed by the experience in Amazonas, above all by the exchange with the Indigenous people he had the opportunity to talk with.

Dias recounted that on Sunday (the 8th), Wagner’s graduate advisee contacted the Ufam professors and said that “the anthropologist’s great moment in Brazil was his visit to the Amazon.”

According to Carlos, in his contact with the Indigenous people, Wagner found a great number of cosmological parallels between the Amerindians and the peoples he had studied in the past.

“Roy Wagner creates a new theory of the notion of culture when he takes these new ways of thinking seriously. Capturing the other through the other’s own knowledge.”

João Paulo Barreto, a Tukano Indian and a master’s student in anthropology at Ufam, commented that Wagner was surprised by the presentation of perspectives in the Indigenous view. This happened when the leader Higino Tuyuka, during the ritual, related the headdress worn by João Paulo to the structures of the maloca.

Estévão Barreto, also Tukano and holder of a master’s degree in Society and Culture in the Amazon, stressed that Roy Wagner’s presence pointed to the need to promote dialogue between “Indigenous science and scientific knowledge.”

Equality

Roy Wagner trained in English literature, history, astronomy, and anthropology.

His best-known work was carried out among the Daribi, in New Guinea, and among the Aborigines, in Australia. His best-known book, “The Invention of Culture,” was published in 1975 and revised in 1981. In Brazil, it was translated only in 2010.

In Brazil, his main interlocutor is the anthropologist Eduardo Viveiros de Castro, author of the concept of Perspectivism.

In “The Invention of Culture,” Wagner says that “the anthropologist uses his own culture to investigate others, and to study culture in general.” That is, “the idea of culture places the researcher on an equal footing with his objects of study: each one ‘belongs to a culture.’”

For Roy Wagner, “an anthropologist ‘experiences,’ in one way or another, his object of study; he does so through the universe of his own meanings, and then draws on this meaning-laden experience to communicate an understanding to the members of his own culture.”

*  *  *

Anthropologist and author of “The Invention of Culture” delivers opening lecture at Ufam this Thursday

Roy Wagner is one of the most important anthropologists working today. The American is visiting Brazil for the first time

Manaus, August 3, 2011

ACRITICA.COM

One of the most renowned anthropologists of our time, the American Roy Wagner delivers the opening lecture of the semester for the master’s program in Anthropology of the Federal University of Amazonas (Ufam) this Thursday (the 4th), at 9 am, in the Rio Solimões auditorium of the Institute of Human Sciences and Letters (ICHL/Ufam).

Gilton Mendes, a professor in the Graduate Program in Social Anthropology (PPGAS), said that Roy Wagner was drawn to the invitation to come to Manaus by the idea of talking with “those knowledgeable about Amazonian Indigenous anthropology.”

Author of “The Invention of Culture,” Roy Wagner studied astronomy, English literature, and history at Harvard University, and did his graduate work in anthropology at the University of Chicago.

The book “The Invention of Culture” was published in 1975 but only received a Brazilian edition last year. It was one of the works most eagerly awaited by the country’s anthropological community in recent years.

On August 5, Roy Wagner will take part in a round table entitled ‘Conversações Melanésias e Amazônia’ (Melanesian and Amazonian Conversations) with Indigenous researchers, organized by the Graduate Program in Social Anthropology together with the Center for Studies of the Indigenous Amazon (Neai).

The event will take place at 3 pm at Rua Coronel Sérgio Pessoa, 147, at Praça dos Remédios, in downtown Manaus. The round table will feature the special participation of Justin Shaffner, of the University of Cambridge (United Kingdom).

Indigenous peoples

Roy Wagner began his fieldwork among the Daribi at Mount Karimui, in New Guinea, about whom he wrote and published his monograph on Daribi principles of clan definition and alliance.

Building on the Daribi ethnography, Wagner developed a general theory of the invention of meaning and of the notion of culture, published in “The Invention of Culture,” which received a new, revised and expanded edition in 1981.

The work radicalizes the reflection on the controversial concept of culture in anthropology: by taking native modes of conceptualization into account, it reformulates the anthropological discipline itself.

For Wagner, the point is not to understand what other peoples produce as “culture” against a universal given (“nature”), but rather what other populations conceive of as given. With this, the very notion of “nature” as a universal given, and of “culture,” comes under suspicion.

His visit to Brazil is part of the planned initiatives of the Brasil Plural Institute, a network of researchers organized by the graduate programs of the Federal University of Santa Catarina (UFSC) and the Federal University of Amazonas (Ufam), funded by CNPq, Fapesc, and Fapeam.

Lead Dust Is Linked to Violence, Study Suggests (Science Daily)

ScienceDaily (Apr. 17, 2012) — Childhood exposure to lead dust has been linked to lasting physical and behavioral effects, and now lead dust from vehicles using leaded gasoline has been linked to instances of aggravated assault two decades after exposure, says Tulane toxicologist Howard W. Mielke.

Vehicles using leaded gasoline that contaminated cities’ air decades ago have increased aggravated assault in urban areas, researchers say.

The new findings are published in the journal Environment International by Mielke, a research professor in the Department of Pharmacology at the Tulane University School of Medicine, and demographer Sammy Zahran at the Center for Disaster and Risk Analysis at Colorado State University.

The researchers compared the amount of lead released during the years 1950-1985 in six cities: Atlanta, Chicago, Indianapolis, Minneapolis, New Orleans and San Diego. This period saw an increase in airborne lead dust exposure due to the use of leaded gasoline. There were correlating spikes in the rates of aggravated assault approximately two decades later, after the exposed children grew up.

After controlling for other possible causes such as community and household income, education, policing effort and incarceration rates, Mielke and Zahran found that for every one percent increase in tonnages of environmental lead released 22 years earlier, the present rate of aggravated assault was raised by 0.46 percent.

“Children are extremely sensitive to lead dust, and lead exposure has latent neuroanatomical effects that severely impact future societal behavior and welfare,” says Mielke. “Up to 90 per cent of the variation in aggravated assault across the cities is explained by the amount of lead dust released 22 years earlier.” Tons of lead dust were released between 1950 and 1985 in urban areas by vehicles using leaded gasoline, and improper handling of lead-based paint also has contributed to contamination.
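As an illustration only, the short sketch below uses invented numbers (not the study's data) to show how an elasticity of roughly 0.46 would be read off a log-log regression of assault rates on lagged lead emissions.

```python
# Minimal sketch, not the authors' code: invented city-level numbers illustrating
# how an "elasticity" of assault with respect to 22-year-lagged lead emissions
# is read off the slope of a log-log regression.
import numpy as np

lead_tons = np.array([120.0, 340.0, 560.0, 800.0, 1100.0, 1500.0])   # tons released in year t-22 (hypothetical)
assault_rate = np.array([210.0, 330.0, 410.0, 480.0, 560.0, 640.0])  # assaults per 100,000 in year t (hypothetical)

# Fit log(assault_rate) = a + b * log(lead_tons); the slope b is the elasticity.
b, a = np.polyfit(np.log(lead_tons), np.log(assault_rate), 1)
print(f"estimated elasticity: {b:.2f}")
# Reading: a 1% increase in lead tonnage released 22 years earlier is associated
# with roughly a b% increase in the present aggravated-assault rate.
```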

Violence in Men Caused by Unequal Wealth and Competition, Study Suggests (Science Daily)

ScienceDaily (Apr. 17, 2012) — Violence in men can be explained by traditional theories of sexual selection. In a review of the literature, Professor John Archer from the University of Central Lancashire, a Fellow of the British Psychological Society, points to a range of evidence that suggests that high rates of physical aggression and assaults in men are rooted in inter-male competition.

These findings are presented April 18 at the British Psychological Society Annual Conference held at the Grand Connaught Rooms, London (18-20 April).

Professor Archer describes evidence showing that differences between men and women in the use of physical aggression peak when men and women are in their twenties. In their twenties, men are more likely to report themselves as high in physical aggression, and to be arrested for engaging in assaults and the use of weapons, than at any other age. They also engage in these activities at a phenomenally higher rate than women.

Professor Archer highlights that sex differences in aggression are not observed in relation to indirect forms of aggression but become larger with the severity of violence. Indeed, at the extreme end of violence, there are a minimal number of female-female homicides in the face of a high male-male homicide rate. Interestingly, men are also much more likely to engage in risky behaviour in the presence of other men.

Professor Archer says that a range of male features that develop during adolescence arising from hormonal changes in testosterone accentuate aggressive behaviour. Examples include the growth of facial hair, voice pitch and facial changes such as brow ridge and chin size. He implicates height, weight and strength differences between men and women as further evidence of male adaptation to engage in fighting.

How does the environment influence aggression and violence? Professor Archer suggests there are two key principles — unequal wealth and a high ratio of sexually active men to women — that may increase physical aggression and violence in young men.

Professor Archer says: “The research evidence highlights that societal issues such as inequality of wealth and competition between males may contribute to the violence we see in today’s society.”

Relationship between scientists and journalists debated at seminar (FAPESP)

Science communication gains weight in academia and the relationship between the two professions grows closer, say experts at a meeting held by FAPESP

18/04/2012

By Karina Toledo

Agência FAPESP – With science communication activities carrying ever more weight in academia, the relationship between journalists and researchers seems to be changing for the better. But it must be kept in mind that eminent scientists are not authorities on every subject.

The warning came from biologist Thomas Lewinsohn, a professor at the State University of Campinas (Unicamp), during his participation in the seminar Ciência na Mídia (Science in the Media), held by FAPESP on April 16.

"In the past, researchers gave great weight to publication in scientific journals, which guaranteed them academic prestige and funding, and paid almost no attention to science communication, which served only to increase their popularity. Today we are close to a balance between the two," he said.

It became clear that, beyond popularity, media exposure also affected influence and decision-making power within academia, increasing the chances of having a project funded and, consequently, raising academic prestige.

A clear example of the new paradigm, according to Lewinsohn, is the change in the system used by the Coordination for the Improvement of Higher Education Personnel (Capes) to evaluate graduate programs. "Today, greater weight is given to the visibility of the work of the scientists on their faculty," he said.

Another sign is the transformation that the most important scientific journals, among them Science and Nature, have undergone in recent years, gaining new sections with news content and more accessible language.

"It is becoming impossible for scientists to ignore the media. Many now court journalists, and this opens the door to distortions. There is a notion that a scientist will always have a rational, well-grounded opinion about everything, and that is not true," the biologist said.

For that reason, he recommended, journalists should resist the temptation, amid the rush of the newsroom, to always turn to the source who has answers for every topic. "Some have a personal agenda that does not always have to do with science."

During his presentation, physician Paulo Saldiva, of the USP School of Medicine, complained that most of the journalists who contact him want to talk about subjects unrelated to his area of research: the effects of air pollution on health.

Another problem he raised was the little time devoted to each topic and the risk of superficiality. "You talk for half an hour and only ten seconds make it to air. That is scientists' greatest fear," Saldiva added.

For biologist Fernando Reinach, who became well known after taking part in the FAPESP-funded Genome Project and now writes a science column in the newspaper O Estado de S. Paulo, the great problem of science journalism is "telling the miracle without telling the saint."

"Much emphasis is placed on the discovery, and the methods used are not explored well. That makes it hard to assess whether what is being said is true," he said.

Reinach said that after leaving academic life he kept the habit of reading scientific papers and conceived the newspaper column because he felt there were many interesting topics hidden behind obscure titles. "The scientist is my character. I try to give the research a human dimension," he said.

Reinaldo José Lopes, science editor of the newspaper Folha de S. Paulo, spoke about the shrinking space in newspapers for news in general and for science in particular. "How do you pack the news, the methodology and the human side into half a page? We sense a reader impatience that is frightening, and that ends up leading to superficiality," he said.

The meeting also featured Roberto Wertman, editor of the program Espaço Aberto Ciência & Tecnologia on Globonews, who commented on the limitations of science coverage on television, which depends heavily on the availability of images; and Sonia López, former editor of AlphaGalileo, one of the largest portals for academic news.

The opening talk was given by Clive Cookson, science editor of the Financial Times, who listed the three main problems that, in his view, affect the quality of science journalism.

First, Cookson mentioned the tendency to report research results in an exaggerated, sensationalist way. "The reporter needs to convince the editor that the data are worth publishing, and scientific accuracy sometimes takes second place. And when the subeditor writes the headline, the story becomes even more exaggerated," he said.

Another problem is the tendency to frame findings negatively, which can cause distortions. "The idea is that bad news sells better," he said.

Finally, Cookson mentioned the publication of news that is not objective, permeated by political interests. "Scientists should stick to science. But even in controversial situations they should take the opportunity to get their message across. If they leave a vacuum, politically motivated sources may take advantage of it."

UK aid helps to fund forced sterilisation of India’s poor [climate change](The Guardian)

Money from the Department for International Development has helped pay for a controversial programme that has led to miscarriages and even deaths after botched operations

Gethin Chamberlain
The Observer, Sunday 15 April 2012

Sterilisation remains the most common method of family planning in India’s bid to curb its burgeoning population of 1.2 billion. Photograph: Mustafa Quraishi/AP

Tens of millions of pounds of UK aid money have been spent on a programme that has forcibly sterilised Indian women and men, the Observer has learned. Many have died as a result of botched operations, while others have been left bleeding and in agony. A number of pregnant women selected for sterilisation suffered miscarriages and lost their babies.

The UK agreed to give India £166m to fund the programme, despite allegations that the money would be used to sterilise the poor in an attempt to curb the country’s burgeoning population of 1.2 billion people.

Sterilisation has been mired in controversy for years. With officials and doctors paid a bonus for every operation, poor and little-educated men and women in rural areas are routinely rounded up and sterilised without having a chance to object. Activists say some are told they are going to health camps for operations that will improve their general wellbeing and only discover the truth after going under the knife.

Court documents filed in India earlier this month claim that many victims have been left in pain, with little or no aftercare. Across the country, there have been numerous reports of deaths and of pregnant women suffering miscarriages after being selected for sterilisation without being warned that they would lose their unborn babies.

Yet a working paper published by the UK’s Department for International Development in 2010 cited the need to fight climate change as one of the key reasons for pressing ahead with such programmes. The document argued that reducing population numbers would cut greenhouse gases, although it warned that there were “complex human rights and ethical issues” involved in forced population control.

The latest allegations centre on the states of Madhya Pradesh and Bihar, both targeted by the UK government for aid after a review of funding last year. In February, the chief minister of Madhya Pradesh had to publicly warn off his officials after widespread reports of forced sterilisation. A few days later, 35-year-old Rekha Wasnik bled to death in the state after doctors sterilised her. The wife of a poor labourer, she was pregnant with twins at the time. She began bleeding on the operating table and a postmortem cited the operation as the cause of death.

Earlier this month, India’s supreme court heard how a surgeon operating in a school building in the Araria district of Bihar in January carried out 53 operations in two hours, assisted by unqualified staff, with no access to running water or equipment to clean the operating equipment. A video shot by activists shows filthy conditions and women lying on the straw-covered ground.

Human rights campaigner Devika Biswas told the court that “inhuman sterilisations, particularly in rural areas, continue with reckless disregard for the lives of poor women”. Biswas said 53 poor and low-caste women were rounded up and sterilised in operations carried out by torchlight that left three bleeding profusely and led to one woman who was three months pregnant miscarrying. “After the surgeries, all 53 women were crying out in pain. Though they were in desperate need of medical care, no one came to assist them,” she said.

The court gave the national and state governments two months to respond to the allegations.

Activists say that it is India’s poor – and particularly tribal people – who are most frequently targeted and who are most vulnerable to pressure to be sterilised. They claim that people have been threatened with losing their ration cards if they do not undergo operations, or bribed with as little as 600 rupees (£7.34) and a sari. Some states run lotteries in which people can win cars and fridges if they agree to be sterilised.

Despite the controversy, an Indian government report shows that sterilisation remains the most common method of family planning used in its Reproductive and Child Health Programme Phase II, launched in 2005 with £166m of UK funding. According to the DfID, the UK is committed to the project until next year and has spent £34m in 2011-12. Most of the money – £162m – has been paid out, but no special conditions have been placed on the funding.

Funding varies from state to state, but in Bihar private clinics receive 1,500 rupees for every sterilisation, with a bonus of 500 rupees a patient if they carry out more than 30 operations on a particular day. NGO workers who convince people to have the operations receive 150 rupees a person, while doctors get 75 rupees for each patient.

A 2009 Indian government report said that nearly half a million sterilisations had been carried out the previous year but warned of problems with quality control and financial management.

In 2006, India’s ministry of health and family welfare published a report into sterilisation, which warned of growing concerns, and the following year an Indian government audit of the programme warned of continuing problems with sterilisation camps. “Quality of sterilisation services in the camps is a matter of concern,” it said. It also said the quality of services was affected because much of the work was crammed into the final part of the financial year.

When it announced changes to aid for India last year, the DfID promised to improve the lives of more than 10 million poor women and girls. It said: “We condemn forced sterilisation and have taken steps to ensure that not a penny of UK aid could support it. The UK does not fund sterilisation centres anywhere.

“The coalition government has completely changed the way that aid is spent in India to focus on three of the poorest states, and our support for this programme is about to end as part of that change. Giving women access to family planning, no matter where they live or how poor they are, is a fundamental tenet of the coalition’s international development policy.”

See Dan read: Baboons can learn to spot real words (Guardian)

AP foreign, Saturday April 14 2012 (The Guardian)

SETH BORENSTEIN
AP Science Writer

WASHINGTON (AP) — Dan the baboon sits in front of a computer screen. The letters BRRU pop up. With a quick and almost dismissive tap, the monkey signals it’s not a word. Correct. Next comes ITCS. Again, not a word. Finally KITE comes up.

He pauses and hits a green oval to show it’s a word. In the space of just a few seconds, Dan has demonstrated a mastery of what some experts say is a form of pre-reading and walks away rewarded with a treat of dried wheat.

Dan is part of new research that shows baboons are able to pick up the first step in reading — identifying recurring patterns and determining which four-letter combinations are words and which are just gobbledygook.

The study shows that reading’s early steps are far more instinctive than scientists first thought and it also indicates that non-human primates may be smarter than we give them credit for.

“They’ve got the hang of this thing,” said Jonathan Grainger, a French scientist and lead author of the research.

Baboons and other monkeys are good pattern finders and what they are doing may be what we first do in recognizing words.

It’s still a far cry from real reading. They don’t understand what these words mean, and are just breaking them down into parts, said Grainger, a cognitive psychologist at the Aix-Marseille University in France.

In 300,000 tests, the six baboons distinguished between real and fake words about three-out-of-four times, according to the study published in Thursday’s journal Science.

The 4-year-old Dan, the star of the bunch and about the equivalent age of a human teenager, got 80 percent of the words right and learned 308 four-letter words.

The baboons are rewarded with food when they press the right spot on the screen: A blue plus sign for bogus combos or a green oval for real words.

Even though the experiments were done in France, the researchers used English words because it is the language of science, Grainger said.

The key is that these animals not only learned by trial and error which letter combinations were correct, but they also noticed which letters tend to go together to form real words, such as SH but not FX, said Grainger. So even when new words were sprung on them, they did a better job at figuring out which were real.

Grainger said a pre-existing capacity in the brain may allow them to recognize patterns and objects, and perhaps that’s how we humans also first learn to read.
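A rough way to picture the letter-pair learning described here: the toy sketch below (my own illustration, not the researchers' procedure) scores four-letter strings by how often their bigrams occur in a small set of real words, so that a new word like KITE scores higher than nonwords like BRRU or ITCS.

```python
# Toy illustration, not the study's method: score strings by the frequency of
# their letter pairs (bigrams) in a tiny training set of real English words.
from collections import Counter

training_words = ["WASP", "DONE", "LAND", "VAST", "THEM", "SHIP", "SITE", "KIND"]
bigram_counts = Counter(w[i:i + 2] for w in training_words for i in range(len(w) - 1))

def bigram_score(s: str) -> float:
    """Average training-set frequency of the string's bigrams."""
    pairs = [s[i:i + 2] for i in range(len(s) - 1)]
    return sum(bigram_counts[p] for p in pairs) / len(pairs)

for candidate in ["KITE", "BRRU", "ITCS"]:
    print(candidate, round(bigram_score(candidate), 2))
# KITE scores above the nonwords even though it never appeared in training,
# because its bigrams (KI, IT, TE) occur in words like KIND and SITE.
```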

The study’s results were called “extraordinarily exciting” by another language researcher, psychology professor Stanislas Dehaene at the College of France, who wasn’t part of this study. He said Grainger’s finding makes sense. Dehaene’s earlier work says a distinct part of the brain visually recognizes the forms of words. The new work indicates this is also likely in a non-human primate.

This new study also tells us a lot about our distant primate relatives.

“They have shown repeatedly amazing cognitive abilities,” said study co-author Joel Fagot, a researcher at the French National Center for Scientific Research.

Bill Hopkins, a professor of psychology at the Yerkes Primate Center in Atlanta, isn’t surprised.

“We tend to underestimate what their capacities are,” said Hopkins, who wasn’t part of the French research team. “Non-human primates are really specialized in the visual domain and this is an example of that.”

This raises interesting questions about how the complex primate mind works without language or what we think of as language, Hopkins said. While we use language to solve problems in our heads, such as deciphering words, it seems that baboons use a “remarkably sophisticated” method to attack problems without language, he said.

Key to the success of the experiment was a change in the testing technique, the researchers said. The baboons weren’t put in the computer stations and forced to take the test. Instead, they could choose when they wanted to work, going to one of the 10 computer booths at any time, even in the middle of the night.

The most ambitious baboons test 3,000 times a day; the laziest only 400.

The advantage of this type of experiment setup, which can be considered more humane, is that researchers get far more trials in a shorter time period, he said.

“They come because they want to,” Fagot said. “What do they want? They want some food. They want to solve some task.”

Growing appreciation for science communication (FAPESP)

For Clive Cookson, science editor of the Financial Times, the quality of coverage of scientific topics improved when researchers became aware of the importance of working in partnership with journalists to publicize their work

17/04/2012 – By Fábio de Castro

Agência FAPESP – Science editor of the Financial Times for two decades, British journalist Clive Cookson believes that scientific topics have become more familiar to and more valued by the public, thanks to journalistic coverage that is gradually proving deeper and more accurate than in the past.

That transformation, according to Cookson, is due in part to new technologies that have made the journalist's work easier in recent years. But, he says, the main reason science news has improved in quality lies in a change of attitude among scientists themselves, who have realized the importance of communication.

Cookson, who has covered science and technology for more than 30 years, in several countries and in different outlets and contexts, took part this Monday (April 16) in the seminar "Ciência na Mídia" (Science in the Media), organized by FAPESP at the Foundation's headquarters in São Paulo.

The event aimed to encourage everyone involved in the production and communication of science to reflect on ways of creating a space for exchanging knowledge and proposing new ways of thinking about how these topics are presented to society. In an exclusive interview with Agência FAPESP, Cookson discussed these issues.

Agência FAPESP – How has journalistic coverage of science evolved over your 30 years of experience in the field?
Clive Cookson – Despite the many science blogs and websites, people still get most of their information about what is happening in the scientific world through the traditional media: print newspapers, magazines, TV and radio. So scientists communicate with the public through these outlets, which are not specialized in science. That is not a trivial relationship. But I am very optimistic, because, looking at it from this 30-year perspective, I can see that scientists are getting much better at the task of communicating with the media.

Agência FAPESP – What has changed in that relationship, from the scientists' perspective?
Clive Cookson – They are becoming much more proactive, more open. They have lost their fear of contact with reporters. It is a very big change if you look at it over the long term. And I believe it is, to some extent, widespread. Here in Brazil I noticed that scientists are very open.

Agência FAPESP – What might have caused this transformation?
Clive Cookson – Scientists have realized – certainly in the United States and Europe, but I think in Brazil too – that they are more likely to obtain public investment and grants for their research to the extent that they become good communicators. In Britain, the research councils explicitly include the communication of scientific results among the important criteria for obtaining funding. Broadly speaking, it is easier to get funding if you are prepared to communicate. That is true for individual researchers, but also in a more general sense: researchers know that science as a whole will enjoy more public support if scientists spend a little time and effort talking to journalists.

Agência FAPESP – Beyond these changes on the side of the scientific community, has there also been progress on the news-production side? Has the quality of journalism improved?
Clive Cookson – There has been improvement, but nothing that would justify a big increase in researchers' trust in journalists. The quality of journalism has improved, but I don't think that happened because journalists became better. What happened is that it became much easier to write a science story, now that we can access scientific papers on the internet, get comments by e-mail and so on. When I started in the trade, if we wanted access to a paper we had to go to the libraries, and for a simple comment we had to be very lucky and reach the researchers by telephone at the right moment.

Agência FAPESP – In Brazil, science journalists often have a background in journalism but not in science. What is the profile of science writers in England?
Clive Cookson – In England there is a mix. Most science journalists have a science background. I, for example, have a degree in chemistry. But there are other excellent science journalists whose background is in the arts or humanities, who later began working with science and were exceptionally drawn to the field. I think there are pros and cons in both cases.

Agência FAPESP – A hypothetical situation: if you had to hire a reporter, would you prefer someone with a scientific background who writes well but has no previous experience in journalism, or a capable, talented journalist with no involvement with science and no experience in science journalism?
Clive Cookson – If I were hiring that person for a science reporting job at a newspaper, for example, I would not hesitate: I would choose the journalist with reporting experience rather than the scientist. I think the ability to be a good journalist really is the most important thing. It is not enough to be a good scientist who writes correctly, because science really requires a different, vivid kind of writing. I would rather have an excellent journalist than an excellent scientist for that job.

Agência FAPESP – Has the public's perception of the importance of science also changed?
Clive Cookson – My impression is that knowledge of science among the general public has indeed improved. It is still not enough, but I think that, in general, the population is more scientifically literate than it was a few years ago. Many people have come to understand the basics of science better. People are more familiar with topics and terms that are central to the scientific world. To some extent the internet has contributed to this, but I don't know whether there is much potential for further improvement, because the web is also full of noise and misinformation.

Agência FAPESP – Journalists try to make science more attractive to the public. At the same time, they tend to show only the successful results, pushing the process of producing science into the background. Doesn't this risk mystifying science in the eyes of the public?
Clive Cookson – You are absolutely right; this is a fundamental problem in the relationship between journalism and science. In the news there is neither the time nor the space to describe every step in the production of science, showing the public that it is not magic but a difficult process, punctuated by setbacks and momentary failures. What makes the situation worse is that even if you focus on high-quality research published in prestigious journals, the scientific papers themselves will not give you clues about how science actually works. You could only give the public a scientific education if it were possible to follow the work in the laboratory for months on end. Usually that is impossible.

Agência FAPESP – Besides, failures are rarely published, aren't they?
Clive Cookson – Yes, that is another issue. Publications, particularly in the health field, normally describe only positive results. Negative results almost never find space in journals. One must keep that in mind so as not to give the false impression that science consists only of successes.

Agência FAPESP – When reporting the results of a new study, it can be difficult to get reactions from other scientists, because they often say they have not yet seen the paper. How do you handle that situation?
Clive Cookson – It is an extremely difficult situation. First, because scientists usually do not point you to competitors working in the same area who could contribute a comment. Moreover, it is generally hard to get a comment on a paper that has just come out and that almost no one has read. In England we have an organization that is very useful for health journalists in this respect: the Science Media Centre.

Agência FAPESP – How does it work?
Clive Cookson – It is a service created exactly 10 years ago that brings together scientists who act as a kind of press officer. They take any study and assess whether it is controversial, or interesting enough to make a headline. We then use their contacts, who provide comments of great quality. I think the SMC has done more than any other institution to improve science coverage in England. They have excellent databases and an incredible list of specialist contacts. It is very efficient.

Agência FAPESP – Many people see science reporters as translators of a specialized language into everyday language. What do you think of that notion?
Clive Cookson – Part of what we do can be seen as a kind of translation, but I hope our work is something more creative and complex than that. I think journalists are able to offer new ways of looking at science that scientists themselves could not provide. It is more than simply translating. We can generate images and comparisons that scientists would not conceive. It is not just a matter of simplifying a language, but of providing a new interpretation of ideas, contexts and views. And even on the level of language, I think this work goes beyond simple translation: we should be authors capable of making knowledge more vivid, more interesting to the public.

Agência FAPESP – What was your path? Why did you become interested in science?
Clive Cookson – I was always interested in science and took a degree in chemistry at Oxford. But two things changed my path. One was that I noticed that science journalism in England was not good. At the same time, I realized that I would not be brilliant enough to do a good doctorate in chemistry. I knew that if I was not that brilliant, a doctorate in chemistry could turn into something not very creative, a kind of drudge work for a supervisor. I knew I was not really good enough to become a great scientist. But I realized I could write well about science.

Agência FAPESP – And how did you actually start working as a journalist?
Clive Cookson – I was accepted into a training program at a local newspaper in London. After two years, I had the chance to go to Washington, in the United States, for four years, to work on the Times Higher Education Supplement. It was a fantastic experience; I wrote about American universities and research institutes. Then I returned to London to become a technology reporter for the Times. In the 1980s I began working at BBC radio as a health correspondent. And from there I went to the Financial Times, where I have been science editor for the past 20 years.

A Sharp Rise in Retractions Prompts Calls for Reform (N.Y. Times)

Dr. Ferric Fang argues that science has changed in worrying ways. Photo: Matthew Ryan Williams for The New York Times
By CARL ZIMMER – Published: April 16, 2012

In the fall of 2010, Dr. Ferric C. Fang made an unsettling discovery. Dr. Fang, who is editor in chief of the journal Infection and Immunity, found that one of his authors had doctored several papers.

It was a new experience for him. “Prior to that time,” he said in an interview, “Infection and Immunity had only retracted nine articles over a 40-year period.”

The journal wound up retracting six of the papers from the author, Naoki Mori of the University of the Ryukyus in Japan. And it soon became clear that Infection and Immunity was hardly the only victim of Dr. Mori’s misconduct. Since then, other scientific journals have retracted two dozen of his papers, according to the watchdog blog Retraction Watch.

“Nobody had noticed the whole thing was rotten,” said Dr. Fang, who is a professor at the University of Washington School of Medicine.

Dr. Fang became curious how far the rot extended. To find out, he teamed up with a fellow editor at the journal, Dr. Arturo Casadevall of the Albert Einstein College of Medicine in New York. And before long they reached a troubling conclusion: not only that retractions were rising at an alarming rate, but that retractions were just a manifestation of a much more profound problem — “a symptom of a dysfunctional scientific climate,” as Dr. Fang put it.

Dr. Casadevall, now editor in chief of the journal mBio, said he feared that science had turned into a winner-take-all game with perverse incentives that lead scientists to cut corners and, in some cases, commit acts of misconduct.

“This is a tremendous threat,” he said.

Dr. Arturo Casadevall of the Albert Einstein College of Medicine in New York teamed up with Dr. Ferric C. Fang to study a raft of retractions. Photo: Ángel Franco/The New York Times

Last month, in a pair of editorials in Infection and Immunity, the two editors issued a plea for fundamental reforms. They also presented their concerns at the March 27 meeting of the National Academies of Sciences committee on science, technology and the law.

Members of the committee agreed with their assessment. “I think this is really coming to a head,” said Dr. Roberta B. Ness, dean of the University of Texas School of Public Health. And Dr. David Korn of Harvard Medical School agreed that “there are problems all through the system.”

No one claims that science was ever free of misconduct or bad research. Indeed, the scientific method itself is intended to overcome mistakes and misdeeds. When scientists make a new discovery, others review the research skeptically before it is published. And once it is, the scientific community can try to replicate the results to see if they hold up.


But critics like Dr. Fang and Dr. Casadevall argue that science has changed in some worrying ways in recent decades — especially biomedical research, which consumes a larger and larger share of government science spending.

In October 2011, for example, the journal Nature reported that published retractions had increased tenfold over the past decade, while the number of published papers had increased by just 44 percent. In 2010 The Journal of Medical Ethics published a study finding the new raft of recent retractions was a mix of misconduct and honest scientific mistakes.

Several factors are at play here, scientists say. One may be that because journals are now online, bad papers are simply reaching a wider audience, making it more likely that errors will be spotted. “You can sit at your laptop and pull a lot of different papers together,” Dr. Fang said.

But other forces are more pernicious. To survive professionally, scientists feel the need to publish as many papers as possible, and to get them into high-profile journals. And sometimes they cut corners or even commit misconduct to get there.

To measure this claim, Dr. Fang and Dr. Casadevall looked at the rate of retractions in 17 journals from 2001 to 2010 and compared it with the journals’ “impact factor,” a score based on how often their papers are cited by scientists. The higher a journal’s impact factor, the two editors found, the higher its retraction rate.

The highest “retraction index” in the study went to one of the world’s leading medical journals, The New England Journal of Medicine. In a statement for this article, it questioned the study’s methodology, noting that it considered only papers with abstracts, which are included in a small fraction of studies published in each issue. “Because our denominator was low, the index was high,” the statement said.
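For readers wondering what a "retraction index" amounts to in practice, here is a small sketch (with made-up journal figures, not the editors' data) of the kind of calculation involved: retractions scaled by articles published over the same decade, then compared with impact factor.

```python
# Hypothetical illustration, not Fang and Casadevall's data: a "retraction index"
# (retractions per 1,000 published articles, 2001-2010) compared with impact factor.
import numpy as np

journals = {
    # name: (retractions, articles published, impact factor); all numbers invented
    "Journal A": (4, 12000, 3.5),
    "Journal B": (9, 15000, 9.0),
    "Journal C": (20, 18000, 30.0),
    "Journal D": (25, 9000, 50.0),
}

retraction_index = np.array([1000.0 * r / n for r, n, _ in journals.values()])
impact_factor = np.array([imp for _, _, imp in journals.values()])

for name, idx in zip(journals, retraction_index):
    print(f"{name}: retraction index = {idx:.2f}")

# Correlation between impact factor and retraction index across the journals.
print("correlation:", round(float(np.corrcoef(impact_factor, retraction_index)[0, 1]), 2))
# As the NEJM statement above notes, a small denominator (few counted papers)
# mechanically inflates the index.
```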

Monica M. Bradford, executive editor of the journal Science, suggested that the extra attention high-impact journals get might be part of the reason for their higher rate of retraction. “Papers making the most dramatic advances will be subject to the most scrutiny,” she said.

Dr. Fang says that may well be true, but adds that it cuts both ways — that the scramble to publish in high-impact journals may be leading to more and more errors. Each year, every laboratory produces a new crop of Ph.D.’s, who must compete for a small number of jobs, and the competition is getting fiercer. In 1973, more than half of biologists had a tenure-track job within six years of getting a Ph.D. By 2006 the figure was down to 15 percent.

Yet labs continue to have an incentive to take on lots of graduate students to produce more research. “I refer to it as a pyramid scheme,” said Paula Stephan, a Georgia State University economist and author of “How Economics Shapes Science,” published in January by Harvard University Press.

In such an environment, a high-profile paper can mean the difference between a career in science or leaving the field. “It’s becoming the price of admission,” Dr. Fang said.

The scramble isn’t over once young scientists get a job. “Everyone feels nervous even when they’re successful,” he continued. “They ask, ‘Will this be the beginning of the decline?’ ”

University laboratories count on a steady stream of grants from the government and other sources. The National Institutes of Health accepts a much lower percentage of grant applications today than in earlier decades. At the same time, many universities expect scientists to draw an increasing part of their salaries from grants, and these pressures have influenced how scientists are promoted.

“What people do is they count papers, and they look at the prestige of the journal in which the research is published, and they see how many grant dollars scientists have, and if they don’t have funding, they don’t get promoted,” Dr. Fang said. “It’s not about the quality of the research.”

Dr. Ness likens scientists today to small-business owners, rather than people trying to satisfy their curiosity about how the world works. “You’re marketing and selling to other scientists,” she said. “To the degree you can market and sell your products better, you’re creating the revenue stream to fund your enterprise.”

Universities want to attract successful scientists, and so they have erected a glut of science buildings, Dr. Stephan said. Some universities have gone into debt, betting that the flow of grant money will eventually pay off the loans. “It’s really going to bite them,” she said.

With all this pressure on scientists, they may lack the extra time to check their own research — to figure out why some of their data doesn’t fit their hypothesis, for example. Instead, they have to be concerned about publishing papers before someone else publishes the same results.

“You can’t afford to fail, to have your hypothesis disproven,” Dr. Fang said. “It’s a small minority of scientists who engage in frank misconduct. It’s a much more insidious thing that you feel compelled to put the best face on everything.”

Adding to the pressure, thousands of new Ph.D. scientists are coming out of countries like China and India. Writing in the April 5 issue of Nature, Dr. Stephan points out that a number of countries — including China, South Korea and Turkey — now offer cash rewards to scientists who get papers into high-profile journals. She has found these incentives set off a flood of extra papers submitted to those journals, with few actually being published in them. “It clearly burdens the system,” she said.

To change the system, Dr. Fang and Dr. Casadevall say, start by giving graduate students a better understanding of science’s ground rules — what Dr. Casadevall calls “the science of how you know what you know.”

They would also move away from the winner-take-all system, in which grants are concentrated among a small fraction of scientists. One way to do that may be to put a cap on the grants any one lab can receive.

Such a shift would require scientists to surrender some of their most cherished practices — the priority rule, for example, which gives all the credit for a scientific discovery to whoever publishes results first. (Three centuries ago, Isaac Newton and Gottfried Leibniz were bickering about who invented calculus.) Dr. Casadevall thinks it leads to rival research teams’ obsessing over secrecy, and rushing out their papers to beat their competitors. “And that can’t be good,” he said.

To ease such cutthroat competition, the two editors would also change the rules for scientific prizes and would have universities take collaboration into account when they decide on promotions.

Ms. Bradford, of Science magazine, agreed. “I would agree that a scientist’s career advancement should not depend solely on the publications listed on his or her C.V.,” she said, “and that there is much room for improvement in how scientific talent in all its diversity can be nurtured.”

Even scientists who are sympathetic to the idea of fundamental change are skeptical that it will happen any time soon. “I don’t think they have much chance of changing what they’re talking about,” said Dr. Korn, of Harvard.

But Dr. Fang worries that the situation could become much more dire if nothing happens soon. “When our generation goes away, where is the new generation going to be?” he asked. “All the scientists I know are so anxious about their funding that they don’t make inspiring role models. I heard it from my own kids, who went into art and music respectively. They said, ‘You know, we see you, and you don’t look very happy.’ ”

4th CIFAS Field School in Ethnographic Research Methods in Xalapa, Mexico

Summer Field School in Ethnographic Methods in Mexico

July 23 to August 10, 2012 – Xalapa, Mexico

The Comitas Institute for Anthropological Study (CIFAS) is pleased to announce the 4th CIFAS Field School in Ethnographic Research Methods, in Xalapa (Jalapa), Mexico.

The goal of the Field School is to offer training in the foundations and practice of ethnographic methods. The faculty works closely with participants to identify the required field methods needed to address their academic or professional needs. The Field School is designed for people with little or no experience in ethnographic research, or those who want a refresher course. It is suitable for graduate and undergraduate students in social sciences and other fields of study that use qualitative approaches (such as education, communication, cultural studies, health, social work, human ecology, development studies, consumer behavior, among others), applied social scientists, professionals, and researchers who have an interest in learning more about ethnographic methods and their applications.

Program:

· Foundations of ethnographic research

· Social theories in the field & research design

· Planning the logistics of field research

· Data collection techniques

· Principles of organization and indexation of field data

· Analyzing field data

· Qualitative analysis software: basic principles

· Individual, one-on-one discussion of research projects

· Field trips

Coordinators:

Renzo Taddei (Assistant Professor, Federal University of Rio de Janeiro/Affiliated Researcher, Columbia University). CV: http://bit.ly/nueNbu.

Ana Laura Gamboggi (Postdoctoral fellow, University of Brasilia). CV: http://bit.ly/psuVyw.

Zulma Amador (Faculty member of the Centro de EcoAlfabetización y Diálogo de Saberes of Universidad Veracruzana, Mexico). CV: http://bit.ly/J1VGVA

Registration and other costs: Places are limited. The registration fee is US$900, which covers the full three weeks of program activities. The registration fee should be paid by July 1, through a deposit to the CIFAS bank account. Pre-registration should be completed online at http://bit.ly/Jr0kvU. The deadline for pre-registration is June 30, 2012.

The registration fee does not cover accommodation, meals or transportation. If needed, the organizers of the Field School can recommend reasonably priced hotels and places to eat during the program. In Xalapa, accommodation, meals and local transportation costs should be no more than US$100 per day in total.

Course venue: Classes will take place in the Centro de EcoAlfabetización y Diálogo de Saberes of Universidad Veracruzana (refer to http://www.uv.mx/transdisciplina). For more information on Xalapa, please see “Xalapa: Mexico’s best kept secret”.

Other information:

Language: The Field School activities will be carried out in English. Special sections of the Field School can be offered in Spanish, depending on the number of interested individuals.

Visa requirements: Citizens of the U.S. and some European and Latin American countries don’t need visas to enter Mexico, but do need valid passports. You can check whether you need a visa here: http://www.inm.gob.mx/index.php/page/Paises_Visa/en.html.

Insurance: Participants are required to have travel insurance that covers medical and repatriation costs. Proof of purchase of travel insurance must be presented on the first day of activities.

The average temperature in Xalapa in July is 25 °C (77 °F) during the day and 16 °C (61 °F) at night. Xalapa's rainy season goes from June to November, so participants should expect some rain during the field school.

For more information, please see the link http://bit.ly/Jr0kvU or write to Renzo Taddei at taddei@iri.columbia.edu.

Charting Hybridised Realities (Tactical Media Files)

Posted on April 15, 2012 by Eric Kluitenberg

This text was originally written for the Re-Public on-line journal, which focuses on innovative developments in contemporary political theory and practice, and is published from Greece. As the journal has ground to a (hopefully just temporary) halt under severe austerity pressures we decided to post the current first draft of the text on the Tactical Media Files blog. This posting is one of two, the second of which will follow shortly. Both texts build on my recent Network Notebook on the ‘Legacies of Tactical Media‘.

The second text is a collection of preliminary notes that expand on recent discussions following Marco Deseriis and Jodi Dean’s essay “A Movement Without Demands”. It is conceivable that both texts will merge into a more substantive essay in the future, but I haven’t made up my mind about that as yet.

Hope this will be of interest,
Eric

Charting Hybridised Realities

Tactical Cartographies for a densified present

In the midst of an enquiry into the legacies of Tactical Media – the fusion of art, politics, and media which had been recognised in the middle 1990s as a particularly productive mix for cultural, social and political activism [1], the year 2011 unfolded. The enquiry had started as an extension of the work on the Tactical Media Files, an on-line documentation resource for tactical media practices worldwide [2], which grew out of the physical archives of the infamous Next 5 Minutes festival series on tactical media (1993 – 2003) housed at the International Institute of Social History in Amsterdam. After making much of tactical media’s history accessible again on-line, our question, as editors of the resource, had been what the current significance of the term and the thinking and practices around it might be.

Prior to 2011 this was something emphatically under question. The Next 5 Minutes festival series ended with the 2003 edition, following a year that had started on September 11, 2002, convening local activist gatherings named Tactical Media Labs across six continents. [3] Two questions were at the heart of the fourth and last edition of the Next 5 Minutes: How has the field of media activism diversified since it was first named ‘tactical media’ in the middle 1990s? And what could be the significance and efficacy of tactical media’s symbolic interventions in the midst of the semiotic corruption of the media landscape after the 9/11 terrorist attacks?

This ‘crash of symbols’ for obvious reasons took centre stage during this fourth and last edition of the festival. Naomi Klein had famously claimed in her speedy response to the horrific events of 9/11 that the activist lever of symbolic intervention had been contaminated and rendered useless in the face of the overpowering symbolic power of the terrorist attacks and their real-time mediation on a global scale. [4] The attacks left behind an “utterly transformed semiotic landscape” (Klein) in which the accustomed tactics of culture jammers had been ‘blown away’ by the symbolic power of the terrorist atrocities. Instead ‘we’ (Klein appealing to an imaginary community of social activists) should move from symbols to substance. What Klein overlooked in this response in ‘shock and awe’, however, was that while the semiotic landscape had indeed been dramatically transformed (and corrupted) in the wake of the 9/11 attacks, it still remained a semiotic landscape – symbols were still the only lever and entry point into the wider real-time mediated public domain.

Therefore, as unlikely as it may have seemed at the time, the question about the diversification of the terrain and the practices of media activism(s) was ultimately of far greater importance. What the 9/11 crash of symbols and the semiotic corruption debate contributed here was ‘merely’ an added layer of complexity. In a society permeated by media flows, social activism necessarily had to become media activism, and thus had to operate in a significantly more complex and contested environment. The diversification of the media and information landscape, however, also implied that a radical diversification of activist strategies was needed to address these increasingly hybridised conditions.

To name but a few of the emerging concerns: Witnessing of human rights abuses around the world, and creating public visibility and debate around them, remained a pivotal concern for many tactical media practitioners, as it had been right from the early days of camcorder activism. But now new concerns over privacy in networked media environments, coupled with security and secrecy regimes of information control, entered the scene. Critical media arts spread in different directions, claiming new terrains as diverse as life sciences and bio-engineering, as well as ‘contestational robotics’, interventions into the space of computer games, and even on-line role playing environments. Meanwhile the free software movement made strides in developing more autonomous toolsets and infrastructures for a variety of social and cultural needs – adding a more strategic dimension to what had hitherto been mostly an interventionist practice. In a parallel movement, on-line discussion groups, mailing lists, and activity on various social media platforms started to coalesce slowly into what media theorist Geert Lovink has described as ‘organised networks’. [5] And finally there was the rapid development of wireless transmission technologies, smart phones and other wireless network clients, which introduced a paradoxical superimposition of mediated and embodied spatial logics, best captured in the multilayered concept of Hybrid Space. [6]

It was therefore entirely justified to ask how the term ‘tactical media’ could possibly bring together such a diversified, heterogeneous, and hybridised set of practices in a meaningful way. It had become clear that more sophisticated cartographies would be necessary to begin charting this intensely hybridised landscape.

A digital conversion of public space

If the events in 2011 have made one thing clear it is that the ominous claim of Critical Art Ensemble that “the streets are dead capital” [7] has been declared null and void by an astounding resurgence of street protest, whatever its longer-term political significance and fallout might be. These protests staged in the streets and squares, ranging from anti-austerity protests in Southern Europe to the various uprisings in Arab countries in North Africa and the Middle East, to the Occupy protests in the US and Northern Europe, have by no means been staged in physical spaces out of a rejection of the semiotic corruption of the media space. Rather, the streets and squares have acted as a platform for the digital and networked multiplication of protest across a plethora of distribution channels, cutting right across the spectrum of alternative and mainstream, broadcast and networked media outlets.

What remained true to the origin of the term ‘tactical media’ was to build on Michel de Certeau’s insight that the ‘tactics of the weak’ operate on the terrain of strategic power through highly agile displacements and temporary interventions [8], creating a continuous nomadic movement, giving voice to the voiceless by means of ‘any media necessary’ (Critical Art Ensemble). However, the radical dispersal of wireless and mobile media technologies meant that mediated and embodied public spaces increasingly started to coincide, creating a new hybridised logic for social contestation. As witnessed in the remarkable series of public square occupations in 2011, through the digital conversion of public space the streets have become networks and the squares the medium for collective expression in a transnationally interconnected but still highly discontinuous media network.

Horizontal networks / lateral connections

One of the remarkable characteristics of the various protests is not simply the adoption of similar tactics (most notably occupations of public city squares), but the conscious interlinking of events as they unfold. Italian activists of the Unicommons movement physically linked up with revolting students in Tunisia, Egyptian bloggers and occupiers of Tahrir Square linked up with the ‘take the square’ activists in Spain, who in turn expressed solidarity and even co-initiated transnational actions with #occupy activists in the United States and elsewhere. It is the first time that the new organisational logic of transnational horizontal networks, theorised for instance in the seminal work “Territory, Authority, Rights” by sociologist Saskia Sassen, has become so evidently visible in activist practices across a set of radically dispersed geographic assemblages.

Horizontal networks by-pass traditional vertically integrated hierarchies of the local / national / international to create specific spatio-temporal transnational linkages around common interests, but also around affective ties. By and large these ties and linkages are still extra-institutional, largely informal, and because of their radically dispersed make up and their ‘affective’ constitution highly unstable. Political institutions have not even begun assembling an adequate response to these new emergent political constellations (other than traditional repressive instruments of strategic power, i.e. evictions, arrests, prohibitions). Given the structural inequalities that fuel the different strands of protest the longer term effectiveness of these measures remains highly uncertain. The institutional linkages at the moment seem mostly limited to anti-institutional contestation on the part of protestors and repressive gestures of strategic authority. The truly challenging proposition these new transnational linkages suggest, however, is their movement to bypass the nested hierarchies of vertically integrated power structures in a horizontal configuration of social organisation. They link up a bewildering array of local groups, sites, networks, geographies, and cultural contexts and sensitivities, taking seriously for the first time the networked space as a new ‘frontier zone’ (Sassen) where the new constellations of lateral transnational politics are going to be constructed.

Charting the layered densities of hybrid space

Hybrid Space is discontinuous. Its density is always variable, from place to place, from moment to moment. Presence of carrier signals can be interrupted or restored at any moment. Coverage is never guaranteed. The economics of the wireless network space is a matter of continuous contestation, and transmitters are always accompanied by their own forms of electromagnetic pollution (electrosmog). Charting and navigating this discontinuous and unstable space, certainly for social and political activists, is therefore always a challenge. Some prominent elements in this cartography are emerging more clearly, however:

– connectivity: presence or absence of the signal carrier wave is becoming an increasingly important factor in staging and mediating protest. Exclusive reliance on state and corporate controlled infrastructures thus becomes increasingly perilous.

– censorship: censorship these days comes in many guises. Besides the continued forms of overt repression (arrests, confiscations, closures) of media outlets, new forms include the excessive application of intellectual property rights regimes to weed out unwarranted voices from the media landscape, as well as highly effective forms of disinformation and information overflow, something that has called the political efficacy of a project like WikiLeaks emphatically into question.

– circumvention: Great Information Fire Walls and information blockages are obvious forms of censorship, widely used during the Arab protests and common practice in China, now also spreading throughout the EU (under the guise of anti-piracy laws). These necessitate an ever more sophisticated understanding and deployment of internet censorship circumvention techniques, an understanding that should become common practice for contemporary activists. [9]

– attention economies: attention is a sought-after commodity in the informational society. It is also fleeting. (Media) activists need to become masters at seizing and displacing public attention. Agility and mobility are indispensable here.

– public imagination management: strategic operators try to manage public opinion. Activists cannot rely on this strategy. They do not have the means to hold public opinion in favour of their temporary goals. Instead, activists should focus on ‘public imagination management’ – the continuous remembrance that another world is possible.

Beyond semiotic corruption: A perverse subjectivity

The immersion in extended networks of affect that now permeate both embodied and mediated spaces introduces a new and inescapable corruption of subjectivity. Critical theory already taught us that we cannot trust subjectivity. However, the excessive self-mediation of protestors on the public square has shown that a deep desire for subjective articulation drives the manifestation in public. The dynamic is further underscored by the upload statistics of video platforms such as YouTube, which continue to outpace the possibility for the global population to actually see and witness these materials.

Rather than dismissed, subjectivity should be embraced. This requires a new attitude ‘beyond good and evil’, beyond critique and submission. A new perverse subjectivity is able to straddle the seemingly impossible divide between willing submission to various forms of corporate, state and social coercion, and vital social and political critique and contestation. Its maxim: relish your own commodification, embrace your perverse subjectivity, in order to escape the perversion of subjectivity.

Eric Kluitenberg
Amsterdam, April 15, 2012.

References:

1 – See: David Garcia & Geert Lovink, The ABC of Tactical Media, May 1997, a.o.:
www.tacticalmediafiles.net/article.jsp?objectnumber=37996

2 – www.tacticalmediafiles.net

3 – Documentation of the Tactical Media Labs events can be found at:
www.n5m4.org

4 – Naomi Klein – Signs of the Times, in The Nation, October 5, 2001.
Archived at: www.tacticalmediafiles.net/article.jsp?objectnumber=46632

5 – Geert Lovink and Ned Rossiter, Dawn of the Organised Networks, in: Fibreculture Journal, Issue 5, 2005.
http://five.fibreculturejournal.org/fcj-029-dawn-of-the-organised-networks/

6 – See my article The Network of Waves, and the theme issue Hybrid Space of Open – Journal for Art and the Public Domain, Amsterdam, 2006;
www.tacticalmediafiles.net/article.jsp?objectnumber=48405
(the complete issue is linked as pdf file to the article).

7 – Critical Art Ensemble, Digital Resistance, Autonomedia, New York, 2001.
www.critical-art.net/books/digital/

8 – Michel de Certeau, The Practice of Everyday Life, University of California Press, 1984.

9 – A useful manual can be found here: www.flossmanuals.net/bypassing-censorship/

Doubtful significance (World Economics Association)

by G M Peter Swann [gmpswann@yahoo.co.uk]
World Economics Association Newsletter 2(2), April 2012, page 6.

In the February issue of this newsletter, Steve Keen (2012) makes some very good points about the use of mathematics in economics. Perhaps we should say that the problem is not so much the use of mathematics as the abuse of mathematics.

A particular issue that worries me is when econometricians make liberal use of assumptions, without realising how strong these are.

Consider the following example. First, you are shown a regression summary of the relationship between Y and X, estimated from 402 observations. The conventional t-statistic for the coefficient on X is 3.0. How would you react to that?

Most economists would remark that t = 3.0 implies significance at the 1% level, which is a strong confirmation of the relationship. Indeed, many researchers mark significance at the 1% level with three stars!
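As a quick arithmetic check of that conventional reading (my own illustration, not part of Swann's text), the two-sided p-value for t = 3.0 with roughly 400 degrees of freedom can be computed directly; the snippet below is a minimal sketch that assumes Python with scipy is available.

```python
# Illustrative check (not from the original article): two-sided p-value
# for t = 3.0 with 400 degrees of freedom (402 observations, 2 parameters).
from scipy import stats

t_stat = 3.0
dof = 400

p_two_sided = 2 * stats.t.sf(t_stat, df=dof)
print(f"two-sided p-value: {p_two_sided:.4f}")  # about 0.003, i.e. below 0.01
```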

Second, consider the scatter diagram below. This also shows two variables Y and X, and is also based on 402 observations. What does this say about the relationship between Y and X?

Figure 1: scatter plot of Y against X (402 observations)

I have shown this diagram to several colleagues and students, and typical reactions are either that there is no relationship, or that the relationship could be almost anything.

But the surprising fact is that the data in Figure 1 are exactly the same data as used to estimate the regression summary described earlier. How can such an amorphous scatter of points represent a statistically significant relationship? It is the result of a standard assumption of OLS regression: that the explanatory variable(s) X is/are independent of the noise term u.

So long as this independence assumption is true, we can estimate the relationship with surprising precision. To see this, rewrite the conventional t-statistic as

t = ψ √(N-k),

where ψ is a signal to noise ratio (describing the clarity of the scatter-plot) and N-k is the number of degrees of freedom (Swann, 2012). This formula can be used for bivariate and multivariate models.

In Figure 1, ψ is 0.15, which is quite low, but N-k = 400, which is large enough to make t = 3.0. More generally, even if the signal to noise ratio is very low, so that the relationship between Y and X is imperceptible from a scatter-plot, we can always estimate a significant t-statistic – so long as we have a large enough number of observations, and so long as the independence assumption is true. But there is something doubtful about this ‘significance’.
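To make the point concrete, here is a minimal simulation in the spirit of Figure 1. It assumes, as my own reading rather than a definition given in the article, that ψ corresponds to β·sd(X)/sd(u), the ratio of explained to residual standard deviation; with ψ ≈ 0.15 and 402 observations the scatter looks like noise, yet OLS typically returns t close to 3.

```python
# Illustrative simulation (assumes psi = beta * sd(X) / sd(u)); not Swann's code.
import numpy as np

rng = np.random.default_rng(0)
n, psi = 402, 0.15

x = rng.standard_normal(n)      # explanatory variable, sd approximately 1
u = rng.standard_normal(n)      # noise, sd approximately 1, independent of x
beta = psi                      # with sd(x) = sd(u) = 1, beta equals psi
y = beta * x + u

# OLS slope (with intercept) and its conventional t-statistic
xc, yc = x - x.mean(), y - y.mean()
b_hat = (xc @ yc) / (xc @ xc)
resid = yc - b_hat * xc
se = np.sqrt((resid @ resid) / (n - 2) / (xc @ xc))
print(f"t = {b_hat / se:.2f}")  # typically close to psi * sqrt(n - 2) = 3.0
```

Plotting y against x from this simulation produces exactly the kind of amorphous cloud described above.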

Is the independence assumption justified? In a context where data are noisy, where rough proxy variables are used, where endogeneity is pervasive, and so on, it does seem an exceptionally strong assumption.

What happens if we relax the independence assumption? When the signal to noise ratio is very low, the estimated relationship depends entirely on the assumption that replaces it. Swann (2012) shows that the relationship in Figure 1 could indeed be almost anything – depending on what we assume about the noise variable(s).
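A standard way to illustrate why the estimate ‘could be almost anything’ (my own sketch, not taken from Swann 2012) is to let X and the noise term be correlated: the OLS slope then converges to β + Cov(X,u)/Var(X), so when ψ is tiny even a modest correlation dominates the true coefficient.

```python
# Illustration only: with correlation rho between x and u, the estimated
# slope drifts towards beta + rho (here sd(x) = sd(u) = 1), so it ranges
# from roughly 0 to 2 * beta when |rho| is comparable to psi.
import numpy as np

rng = np.random.default_rng(1)
n, beta = 402, 0.15

for rho in (-0.15, 0.0, 0.15):
    x = rng.standard_normal(n)
    u = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)  # corr(x, u) = rho
    y = beta * x + u
    xc, yc = x - x.mean(), y - y.mean()
    b_hat = (xc @ yc) / (xc @ xc)
    print(f"rho = {rho:+.2f}  ->  estimated slope {b_hat:.2f}  (true beta = {beta})")
```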

Some have suggested that this is not a problem in practice, because signal to noise ratios are usually large enough to avoid this difficulty. But, on the contrary, some evidence suggests the problem is generally worse than indicated by Figure 1.

Swann (2012) examined 100 econometric studies taken from 20 leading economics journals, yielding a sample of 2220 parameter estimates and the corresponding signal to noise ratios. Focussing on the parameter estimates that are significant (at the 5% level or better), we find that almost 80% of those have a signal to noise ratio even lower than that in Figure 1.

In summary, it appears that the problem of ‘doubtful significance’ is pervasive. The great majority of ‘significant relationships’ in this sample would be imperceptible from the corresponding scatter-plot. The ‘significance’ indicated by a high t-statistic derives from the large number of observations and the (very strong) independence assumption.

References

Keen S. (2012) “Maths for Pluralist Economics”, World Economics Association Newsletter 2 (1), 10-11

Swann G.M.P. (2012) Doubtful Significance, Working paper available at: https://sites.google.com/site/gmpswann/doubtful-significance

[Editor’s note: If you are interested in this topic, you may also wish to read D.A. Hollanders, “Five methodological fallacies in applied econometrics”, real-world economics review, issue no. 57, 6 September 2011, pp. 115-126, http://www.paecon.net/PAEReview/issue57/Hollanders57.pdf]

Detribalising Economics (World Economics Association)

By Rob Garnett [r.garnett@tcu.edu]
World Economics Association Newsletter 2(2), April 2012, page 4.

In “Why Pluralism?” (2011), Stuart Birks calls for “greater discussion, deliberation, and cross-fertilization of ideas” among schools of economic thought as an antidote to each school’s autarkic tendency to “see itself as owning the ‘truth’ for its area.” As a philosophical postscript, I want to underscore the catholic reach of Birks’s remarks — his genial reminder, properly addressed to all economists, of the minimal requirements for academic inquiry.

The case for academic pluralism in economics is motivated by the ubiquity of “myside bias” (Klein 2011). Whether methodological, ideological, paradigmatic, or all of the above, such groupthink fuels intellectual segregation and bigotry. It turns schools into echo chambers, sealed off from the critical feedback loops that check hubris and propel scholarly progress.

Pluralists know that “The causes of faction cannot be removed . . . Relief is only to be sought in the means of containing its effects” (Hamilton, Madison, and Jay [1788] 2001, 45). So even as they celebrate paradigmatic diversity, they insist that scholars observe two liberal precepts:

1. academic discourse is a commons, no ‘area’ of which can be owned by any school; and

2. within these spaces of inquiry, scholars bear certain ethical duties as academic citizens.

Academic pluralism is the duty to practice “methodological awareness and toleration” (Backhouse 2001, 163) and “to constantly [seek] to learn from those who [do] not share [one’s] ideological or methodological perspective” (Boettke 2004, 379). It is “academic” because it coincides with the epistemological and ethical norms of modern academic freedom (American Association of University Professors 1940). It is “pluralist” because it entails a commitment to conduct one’s scholarly business in a non-sectarian manner.

Could a critical mass of economists ever be persuaded to enact these scholarly virtues? Yes! But admirers of these virtues must be prepared to teach by example. When Warren Samuels passed away last August, he was eulogized as a first-rate scholar who advanced pluralism by enacting it consistently over his long career. As the Austrian economist Peter Boettke recalls:

Prior to meeting Warren, I think it would be accurate to say that I divided the world neatly into those who are stupid, those who are evil, and those who are smart and good enough to agree with me. . . . Warren destroyed that simple intellectual picture of the world. . . . He didn’t overturn my intellectual commitments . . . but he made [me] more self-critical and less self-satisfied, and hopefully a better scholar [and] teacher (Boettke 2011).

The pluralism Warren Samuels personified can be achieved by most economic scholars, teachers, and students to a reasonable degree. If we want economics to regain its standing as a serious and humane social science, we must find more ways to activate these dormant capabilities.

References

American Association of University Professors (1940) Statement of Principles on Academic Freedom and Tenure. Washington, DC.

Backhouse, R. E. (2001) On the Credentials of Methodological Pluralism. In J. E. Biddle, J. B. Davis, and S. G. Medema (Eds.), Economics Broadly Considered: Essays in Honor of Warren J. Samuels, 161-181. London: Routledge.

Boettke, P. J. (2011) “Warren Samuels (1933-2011)”, http://www.coordinationproblem.org/2011/08/warren-samuels-1933-2011.html Accessed August 18, 2011.

Boettke, P. J. (2004) Obituary: Don Lavoie (1950-2001). Journal of Economic Methodology 11 (3): 377-379.

Birks, S. (2011) “Why Pluralism?” World Economics Association Newsletter, vol. 1, no. 1.

Hamilton, A., Madison, J., and Jay, J. (2001) [1788] The Federalist. Gideon edition. G. W. Carey and J. McClellan (eds.) Indianapolis, IN: Liberty Fund.

Klein, D. B. (2011) “I Was Wrong, and So Are You.” The Atlantic, December.

[Editor’s note: Readers may also be interested in Garnett, R. F. (Ed.). (1999). What do economists know? London: Routledge]

Are You Prepared for Zombies? (American Anthropological Association)

by Joslyn O.

Today’s guest blog post is by cultural anthropologist and AAA member Chad Huddleston. He is an Assistant Professor at St. Louis University in the Sociology, Anthropology and Criminal Justice department.

Recently, a host of new shows, such as Doomsday Preppers on NatGeo and Doomsday Bunkers on Discovery Channel, has focused on people with a wide array of concerns about possible events that may threaten their lives. Both of these shows focus on what are called ‘preppers.’ While the people who performed these behaviors in the past might have been called ‘survivalists,’ many ‘preppers’ have distanced themselves from that term due to its cultural baggage: the stereotype of anti-government, gun-loving, racist extremists most often associated with the fundamentalist (politically and religiously) right side of the spectrum.

I’ve been doing fieldwork with preppers for the past two years, focusing on a group called Zombie Squad. It is ‘the nation’s premier non-stationary cadaver suppression task force,’ as well as a grassroots, 501(c)(3) charity organization. Zombie Squad’s story is that while the zombie removal business is generally slow, there is no reason to be unprepared. So, while it is waiting for the “zombpocalypse,” it focuses its time on disaster preparedness education for its membership and the community.

The group’s position is that being prepared for zombies means that you are prepared for anything, especially those events that are much more likely than a zombie uprising – tornadoes, an interruption in services, ice storms, flooding, fires, and earthquakes.

For many in this group, Hurricane Katrina was the event that solidified their resolve to prep. They saw what we all saw – a natural disaster in which services were not available for most, leading to violence, death and chaos. Their argument is that the more prepared the public is before a disaster occurs, the fewer resources it will require from first responders and the agencies that come after them.

In fact, instead of being a victim of natural disaster, you can be an active responder yourself, if you are prepared. Prepare they do. Members are active in gaining knowledge of all sorts – first aid, communications, tactical training, self-defense, first responder disaster training, as well as many outdoor survival skills, like making fire, building shelters, hunting and filtering water.

This education is individual, feeding directly into the online forum they maintain (which has just under 30,000 active members from all over the world), and it continues through monthly local meetings all over the country, as well as annual national gatherings in southern Missouri, where they socialize, learn survival skills and practice sharpshooting.

Sound like those survivalists of the past? Emphatically no. Zombie Squad’s message is one of public education and awareness, very successful charity drives for a wide array of organizations, and inclusion of all ethnicities, genders, religions and politics. Yet the group is adamant about leaving politics and religion out of discussions of the group and of prepping. You will not find exclusionary language on their forum or in their media. That is not to say that the individuals in the group do not have opinions on one side or the other of these issues, but it is a fact that those issues are not to be discussed within the community of Zombie Squad.

Considering that the ‘future doom’ scenarios and the types of fears being pushed on the shows mentioned above usually involve protecting yourself first from the disaster and then from other people who have survived it, Zombie Squad is a refreshing twist on the ‘prepper’ discourse. After all, if a natural disaster were to befall your region, whom would you rather have knocking at your door: ‘raiders’ or your neighborhood Zombie Squad member?

And the answer is no: they don’t really believe in zombies.

The internet is increasingly political (Folha de S.Paulo)

JC e-mail 4464, March 27, 2012.

source: http://www.jornaldaciencia.org.br/Detalhe.jsp?id=81741

Lawyer Marcel Leonardi was one of the main contributors to the public consultation that drafted the Marco Civil da Internet, a bill proposed by the Ministry of Justice to establish principles such as neutrality and privacy for the Brazilian internet. Some time later, Leonardi was called to take up the post of director of public policy at Google in Brazil.

In other words, he is responsible for talking to the government, organizing the defence of users in cases such as the fees charged by the Escritório Central de Arrecadação e Distribuição (Ecad) on YouTube videos embedded in blogs, and bringing basic principles of the internet into the public sphere.

So much so that he is constantly travelling back and forth to Brasília and takes part in public hearings to present Google's opinion, and his own, on bills under discussion that affect the way people use the internet, such as the Consumer Protection Code, the Copyright Law and the Marco Civil da Internet itself.

The lawyer also answers questions on Google's behalf. Recently, the Ministry of Justice demanded explanations about the changes to the privacy rules. The company, after all, is funded by advertising, and in this model users' personal data are very valuable. It is at this point that the interests of the company and those of its users diverge. Leonardi says it is a matter of making users aware of the new rules.

Wearing a T-shirt and jeans, without the usual suit, Google's negotiator makes it clear: today companies also do politics. More and more.

The Ministry of Justice questioned the changes to Google's privacy policy. How did you respond?

We are willing to work with the authorities. There is a lot of apprehension about what we do with regard to privacy, but little understanding. Google used to have separate policies for each product. But all of them, with two exceptions, already said that data from one service could be used in other services. So the unification did not change anything. The data we collect are the same. The exceptions were YouTube, which had its own policy, and search history, which now can expressly be used in other Google products.

Which is worrying.

We do not consider it frightening, because we give users the tools to control it. Users can go to the dashboard and say whether or not they want to keep their search history. They can disable it completely. It would be frightening if it happened without users knowing what was going on. Every company in the sector adopts this model.

Personal data are valuable, and people have no idea what is done with the information.

The change involved the biggest notification effort in Google's history. We announced it on January 24, and the new rules only came into force on March 1. Throughout that period there was a notice on every page. The idea was to reduce the "legalese", because the internet industry has always heard that policies and terms of use needed to be clearer. We trimmed them down radically, but then you run into this problem: at what point can you force someone to read? People always say they are worried about privacy, but they act differently.

Google was recently convicted because of a post on Orkut. Is holding companies liable for user content a recurring issue?

It is an old debate. Worldwide there is the notion that the platform is not liable. In the US and in Europe the law says so expressly. Brazil does not yet have a specific law. One of the proposals is the Marco Civil da Internet, which says that liability will only arise from failure to comply with a court order. In the absence of laws, the courts analyse it case by case. Google always appeals, to show that, by logic and common sense, the platform is not liable.

How does the content removal process work, for example for a blog post?

In copyright cases, Google receives a notification from someone who demonstrates that they hold that right and that the use was not authorized, and there is a check on whether or not it is an infringement. But there are some requirements. Under American law, there are the requirements of the DMCA (Digital Millennium Copyright Act, the copyright law enacted in 1998). In Brazil, those of the copyright law.

Google itself does the checking?

There are internal teams that evaluate it. If there is an infringement, removal happens without judicial intervention, because it is in line with our policy of not allowing copyright violation.

Do you agree with the Ministry of Culture's proposal, in the new Copyright Law, to institutionalize a notification mechanism?

It is still controversial. They intended to include a mechanism that turns into law a practice that many companies already adopt. The problem with this model is that it leaves room for a lot of abuse. We see a lot of that in the US. Everyone tries to frame their own situation as an infringement in order to justify a removal.

Why did you take a stand against Ecad's charges on YouTube videos?

We saw a distortion in Ecad's stance. We thought it was extremely important to make public our position that we did not go along with it, that the interpretation of the law was wrong. The big problem is that new business models want to flourish, but they run into an outdated interpretation of copyright law, and that prevents them from growing. Spotify is an example. You pay 10 euros and get access to millions of songs. Very often piracy is nothing more than pent-up demand that the market is not meeting.

Is the copyright law reform a step forward?

It is an open question. My impression is that the intermediate version is a little more open and friendly to these models. It had the compulsory licence, which was interesting, and language that would allow more flexible use.

Did you weigh in on that text?

We take part in the debates, but after the public consultation the process becomes closed. In Congress it is possible to talk. That is important. In fact, if it were not for the activists, a lot of internet regulation in Brazil would have turned out differently. All the opposition to the Azeredo bill, all the pressure for the Marco Civil, is the fruit of that engagement. In the US, the SOPA case was interesting. The fact that Wikipedia went offline scared a lot of people. Only then did awareness of the law's risks emerge.

That law in the US sparked a movement in defence of internet principles. Are companies taking on a political stance?

There is no way we cannot think politically today. You cannot just navel-gaze and think that as long as business is going well there is no need to talk, because there are larger issues at stake. That is what thinking politically means: all the companies in the sector tend to talk and to understand better how this works.

Is there a need for an updated cybercrime law?

There is a need for judges and those who work with criminal law to understand the internet better, because most of what is already in the law works. We cannot run the risk of adopting a text so generic that you could be poking around on your phone, accidentally get into a system, and be told you committed a crime.

Is Brazil still the leader in content removal requests?

Yes. Our transparency report lists all the requests from the government or the courts for content removal. Brazil leads in removals because here it is easy. You can go to a small claims court, at no cost and without a lawyer, and request an injunction to take a blog offline. In addition, many people are used to the culture of "when in doubt, ask for removal".

Which can amount to censorship.

Yes. We have already come across frightening cases. The number of companies criticized by consumers that file lawsuits to remove any negative reference is growing.

(Folha de São Paulo)