Tag archive: Research methodologies

Ethnography: A Scientist Discovers the Value of the Social Sciences (The Scholarly Kitchen)


Picture from an early ethnographic study

I have always liked to think of myself as a good listener. Whether you are in therapy (or should be), conversing with colleagues, working with customers, embarking on strategic planning, or collaborating on a task, a dose of emotional intelligence – that is, embracing patience and the willingness to listen — is essential.

At the American Mathematical Society, we recently embarked on an ambitious strategic planning effort across the organization. On the publishing side we have a number of electronic products, pushing us to consider how we position these products for the next generation of mathematicians. We quickly realized that it is easy to be complacent. In our case we have a rich history online, and yet – have we really moved with the times? Does a young mathematician need our products?

We came to a sobering and rather exciting realization: In fact, we do not have a clear idea how mathematicians use online resources to do their research, teaching, hiring, and job hunting. We of course have opinions, but these are not informed by anything other than anecdotal evidence from conversations here and there.

To gain a sense of how mathematicians are using online resources, we embarked on an effort to gather more systematic intelligence, embracing a qualitative approach to the research – ethnography. The concept of ethnographic qualitative research was new to me – and it felt right. I quickly felt like a graduate student again, reading the literature and thinking through with colleagues how we might apply qualitative research methods to understanding mathematicians’ behavior. Two excellent books are worth a look: Just Enough Research by Erika Hall, and Practical Ethnography: A Guide to Doing Ethnography in the Private Sector by Sam Ladner.

What do we mean by ethnographic research? In essence we are talking about a rich, multi-factorial descriptive approach. While quantitative research uses pre-existing categories in its analysis, qualitative research is open to new ways of categorizing data – in this case, mathematicians’ behavior in using information. The idea is that one observes the subject (“key informant” in technical jargon) in their natural habitat. Imagine you are David Attenborough, exploring an “absolutely marvelous” new species – the mathematician – as they operate in the field. The concept is really quite simple. You just want to understand what your key informants are doing, and preferably why they are doing it. You have to do it in a setting that allows them to behave naturally – this really requires an interview with one person, not a group (because group members may influence each other’s actions).

Perhaps the hardest part is the interview itself. If you are anything like me, you will go charging in saying something along the lines of “look at these great things we are doing. What do you think? Great, right?” Well, of course this is plain wrong. While you have a goal going in, perhaps to see how an individual behaves with respect to a specific product, your questions need to be agnostic in flavor. The idea is to have the key informant do what they normally do, not just say what they think they do – the two may be quite different. The questions need to be carefully crafted so as not to lead, but to enable gentle probing and discussion as the interview progresses. It is a good idea to record the interview – both in audio form, and ideally with screen capture software such as Camtasia. When I was involved with this I went out and bought a good but inexpensive audio recorder.

We decided that rather than approach mathematicians directly, we should work with the library at an academic institution. Libraries are our customers. The remarkable thing about academic libraries is that at many institutions ethnography is becoming part of the service they provide to their stakeholders. We began with a remarkable librarian at Rice University – Debra Kolah. She is the head of the user experience office at Rice’s Fondren Library in Texas, and also happens to be the physics, math, and statistics librarian there. Debra has become an expert in the ethnographic study of academic user experience. She has multiple projects underway at Rice, working with a range of stakeholders, aiming to foster the library’s activity in the academic community she directly serves. She is a picture of enthusiasm when it comes to serving her community and gaining insights into the cultural patterns of academic user behavior. Debra was our key to understanding how important it is to work with the library to reach the mathematical community at an institution. The relationship is trusted and symbiotic. This triangle of an institution’s library, its academics, and an outside entity such as a society or publisher may represent the future of the library.

So the interviews are done – then what? Analysis. You have to try to make sense of all of this material you’ve gathered. First, transcribing audio interviews is no easy task. You have a range of voices and much technical jargon. The best bet is to get one of the many services out there to take the files and do a first-pass transcription. They will get most of it right. Perhaps they will write “archive” instead of “arXiv”, but that can be dealt with later. Once you have all this interview text, you need to group it into meaningful categories – what’s called “coding”. The idea is that you try to look at the material with a fresh, unbiased eye, to see what themes emerge from the data. Once these themes are coded, you can start to think about patterns in the data. Interestingly, qualitative researchers have developed a host of software programs to aid in doing this. We settled on a relatively simple, web-based solution – Dedoose.
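The mechanics of coding are simple enough to sketch in a few lines. Below is a toy illustration of the idea – tagging excerpts with themes and tallying which themes recur. (In practice a tool like Dedoose handles this; the excerpts and code labels here are invented for the sketch.)

```python
# Toy illustration of qualitative "coding": tag interview excerpts with
# themes, then tally how often each theme appears across the material.
# All excerpts and code labels below are invented for this sketch.
from collections import Counter

# Each transcript excerpt is paired with the codes a researcher assigned it.
coded_excerpts = [
    ("I always start a literature search on arXiv.", ["preprints", "search"]),
    ("I email colleagues for recommendations first.", ["peer-network"]),
    ("Google leads me to MathSciNet eventually.", ["search", "databases"]),
    ("A collaborator usually points me to the key paper.", ["peer-network"]),
]

# Count occurrences of each code across all excerpts.
theme_counts = Counter(code for _, codes in coded_excerpts for code in codes)

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

Once themes are counted like this, the interesting work begins: asking why certain themes cluster together, and for which kinds of informants.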

With some 62 interviews under our belt, we are beginning to see patterns emerge in the ways that mathematicians behave online. I am not going to reveal our preliminary findings here – I must save that up for when the full results are in – but I am confident that the results will show a number of consistent threads that will help us think through how to better serve our community.

In summary, this experience has been a fascinating one – a new world for me. I was trained as a scientist, and as a scientist I have ideas about what the scientific method is, and what evidence is. I now understand the value of the qualitative approach – a hard thing for a scientist to say. Qualitative research opens a window onto descriptive data and analysis. As our markets change, understanding who constitutes our market, and how users behave, is more important than ever.

Carry on listening!

On Surveys (Medium)

Erika Hall

Feb 23, 2015

Surveys are the most dangerous research tool — misunderstood and misused. They frequently straddle the qualitative and quantitative, and at their worst represent the worst of both.

In tort law the attractive nuisance doctrine refers to a hazardous object likely to attract those who are unable to appreciate the risk posed by the object. In the world of design research, surveys can be just such a nuisance.

Easy Feels True

It is too easy to run a survey. That is why surveys are so dangerous. They are so easy to create and so easy to distribute, and the results are so easy to tally. And our poor human brains are such that information that is easier for us to process and comprehend feels more true. This is our cognitive bias. This ease makes survey results feel true and valid, no matter how false and misleading. And that ease is hard to argue with.

A lot of important decisions are made based on surveys. When faced with a choice, or a group of disparate opinions, running a survey can feel like the most efficient way to find a direction or to settle arguments (and to shirk responsibility for the outcome). Which feature should we build next? We can’t decide ourselves, so let’s run a survey. What should we call our product? We can’t decide ourselves, so let’s run a survey.

Easy Feels Right

The problem posed by this ease is that other ways of finding an answer that seem more difficult get shut out. Talking to real people and analyzing the results? That sounds time consuming and messy and hard. Coming up with a set of questions and blasting it out to thousands of people gets you quantifiable responses with no human contact. Easy!

In my opinion it’s much much harder to write a good survey than to conduct good qualitative user research. Given a decently representative research participant, you could sit down, shut up, turn on the recorder, and get good data just by letting them talk. (The screening process that gets you that participant is a topic for another day.) But if you write bad survey questions, you get bad data at scale with no chance of recovery. This is why I completely sidestepped surveys in writing Just Enough Research.

What makes a survey bad? If the data you get back isn’t actually useful input to the decision you need to make, or if it doesn’t reflect reality, that is a bad survey. This could happen if respondents didn’t give true answers, or if the questions are impossible to answer truthfully, or if the questions don’t map to the information you need, or if you ask leading or confusing questions.

Often asking a question directly is the worst way to get a true and useful answer to that question. Because humans.

Bad Surveys Don’t Smell

A bad survey won’t tell you it’s bad. It’s actually really hard to find out that a bad survey is bad — or to tell whether you have written a good or bad set of questions. Bad code will have bugs. A bad interface design will fail a usability test. It’s possible to tell whether you are having a bad user interview right away. Feedback from a bad survey can only come in the form of a second source of information contradicting your analysis of the survey results.

Most seductively, surveys yield responses that are easy to count and counting things feels so certain and objective and truthful.

Even if you are counting lies.

And once a statistic gets out — such as “75% of users surveyed said that they love videos that autoplay on page load” — that simple “fact” will burrow into the brains of decision-makers and set up shop.

From time to time, people write to me with their questions about research. Usually these questions are more about politics than methodologies. A while back this showed up in my inbox:

“Direct interaction with users is prohibited by my organization, but I have been allowed to conduct a simple survey by email to identify usability issues.”

Tears, tears of sympathy and frustration streamed down my face. This is so emblematic, so typical, so counterproductive. The rest of the question was of course, “What do I do?”

User research and usability are about observed human behavior. The way to identify usability issues is to run a usability test. If you need to maintain a sterile barrier between your staff and your customers, the allowable solution amounts to using surveys as a way to pass notes through a wall, between the designers and the actual users. This doesn’t increase empathy.

Too many organizations treat direct user research like a breach of protocol. I understand that there are very sensitive situations, often involving health data or financial data. But you can do user research and never interact with actual customers. If you actually care about getting real data rather than covering some corporate ass, you can recruit people who are a behavioral match for the target and never reveal your identity.

A survey is a survey. A survey shouldn’t be a fallback for when you can’t do the right type of research.

Sometimes we treat data gathering like a child in a fairy tale who has been sent out to gather mushrooms for dinner. It’s getting late and the mushrooms are far away on the other side of the river. And you don’t want to get your feet wet. But look, there are all these rocks right here. The rocks look kind of like mushrooms. So maybe no one will notice. And then you’re all sitting around the table pretending you’re eating mushroom soup and crunching on rocks.

A lot of people in a lot of conference rooms are pretending that the easiest way to gather data is the most useful. And choking down the results.

Customer Satisfaction Is A Lie

A popular topic for surveys is “satisfaction.” Customer satisfaction has become the most widely used metric in companies’ efforts to measure and manage customer loyalty.

A customer satisfaction score is an abstraction, and an inaccurate one. According to the MIT Sloan Management Review, changes in customers’ satisfaction levels explain less than 1% of the variation in changes in their share of spending in a given category. Now, 1% is statistically significant, but not huge.
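To make “explains less than 1% of the variation” concrete: variance explained is just the squared Pearson correlation (R²) between the two series of changes. Here is a minimal sketch with invented per-customer numbers — the data are synthetic, chosen only to show what a sub-1% R² looks like.

```python
# Sketch: what "explains less than 1% of the variance" means.
# R^2 is the squared Pearson correlation between two series of changes.
# All numbers below are synthetic, invented purely for illustration.
from math import sqrt

def r_squared(xs, ys):
    """Squared Pearson correlation: share of variance in ys 'explained' by xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return (cov / sqrt(vx * vy)) ** 2

# Hypothetical per-customer change in satisfaction score, and change in
# share of category spending. The two move almost independently.
satisfaction_change = [1, -1, 1, -1, 1, -1, 1, -1]
spend_share_change = [1, 1.1, -1, -1, 1, 1.1, -1, -1]

r2 = r_squared(satisfaction_change, spend_share_change)
print(f"R^2 = {r2:.4f}")  # well under 0.01 for these numbers
```

With data like this, a satisfaction score can move around all it likes while telling you almost nothing about where the money goes — which is the Sloan finding in miniature.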

And Bloomberg Businessweek wrote that “Customer-service scores have no relevance to stock market returns…the most-hated companies perform better than their beloved peers.” So much of the evidence indicates this is just not a meaningful business metric, rather a very satisfying one to measure.

And now, a new company has made a business out of helping businesses with websites quantify a fuzzy, possibly meaningless metric.

“My boss is a convert to Foresee. She was apparently very skeptical of it at first, but she’s a very analytical person and was converted by its promise of being able to quantify unquantifiable data — like ‘satisfaction’.”

This is another cry for help I received not too long ago.

The boss in question is “a very analytical person.” This means that she is a person with a bias towards quantitative data. The designer who wrote to me was concerned about the potential of pop-up surveys to wreck the very customer experience they were trying to measure.

There’s a whole industry based on customer satisfaction. And when there is an industry that makes money from the existence of a metric, that makes me skeptical of a metric. Because as a customer, I find this a fairly unsatisfying use of space.

Here is a Foresee customer satisfaction survey (NOT for my correspondent’s employer). These are the questions that sounded good to ask, and that seem to map to best practices.

But this is complete hogwash.

Rate the options available for navigating? What does that mean? What actual business success metric does that map to? Rate the number of clicks — on a ten-point scale? I couldn’t do that. I suspect many people enter the number of clicks they remember rather than a rating.

And accuracy of information? How is a site user not currently operating in god mode supposed to rate how accurate the information is? What does a “7” for information accuracy even mean? None of this speaks to what the website is actually for or how actual humans think or make decisions.

And, most importantly, the sleight of hand here is that these customer satisfaction questions are qualitative questions presented in a quantitative style. This is some customer research alchemy right here. So, you are counting on the uncountable while the folks selling these surveys are counting their money. Enjoy your phlogiston.

I am not advising anyone to run a jerk company with terrible service. I want everyone making products to make great products, and to know which things to measure in order to do that.

I want everyone to see customer loyalty for what it is — habit. And to be more successful creating loyalty, you need to measure the things that build habit.

Approach with Caution

When you are choosing research methods, and are considering surveys, there is one key question you need to answer for yourself:

Will the people I’m surveying be willing and able to provide a truthful answer to my question?

And as I say again and again, and will never tire of repeating, never ask people what they like or don’t like. Liking is a reported mental state and that doesn’t necessarily correspond to any behavior.

Avoid asking people to remember anything further back than a few days. I mean, we’ve all been listening to Serial, right? People are lazy forgetful creatures of habit. If you ask about something that happened too far back in time, you are going to get a low quality answer.

And especially, never ask people to make a prediction of future behavior. They will make that prediction based on wishful thinking or social desirability. And this is the most popular survey question of all, I think:

How likely are you to purchase the thing I am selling in the next 6 months?

No one can answer that. At best you could get 1) Possibly, 2) Not at all.

So, yeah, surveys are great because you can quantify the results.

But you have to ask, what are you quantifying? Is it an actual quantity of something, e.g. how many, how often — or is it a stealth quality like appeal, ease, or appropriateness, trying to pass itself off as something measurable?

In order to make any sort of decisions, and to gather information to inform decisions, the first thing you have to do is define success. You cannot derive that definition from a bunch of numbers.

To write a good survey, you need to be very clear on what you want to know and why a survey is the right way to get that information. And then you have to write very clear questions.

If you are using a survey to ask for qualitative information, be clear about that and know that you’ll be getting thin information with no context. You won’t be able to probe into the all-important “why” behind a response.

If you are treating a survey like a quantitative input, you can only ask questions that the respondents can be relied on to count. You must be honest about the type of data you are able to collect, or don’t bother.

And stay away from those weird 10-point scales. They do not reflect reality.

How to put together a good survey is a topic worthy of a book, or a graduate degree. Right here, I just want to get you to swear you aren’t going to be casual about them if you are going to be basing important decisions on them.

“At its core, all business is about making bets on human behavior.”

— Ben Wiseman, Wall Street Journal

The whole reason to bother going to the trouble of gathering information to inform decisions is that ultimately you want those decisions to lead to some sort of measurable success.

Making bets based on insights from observed human behavior can be far more effective than basing bets on bad surveys. So go forth, be better, and be careful about your data gathering. The most measurable data might not be the most valuable.