Wednesday, January 7, 2015

MORTALITY AND THE CHOICE PROBLEM IN RESEARCH METHODS





Where have I been?
When I last posted an entry to this blog about a year ago, I was feeling poorly, and I got worse over the following months.  In the spring semester, I staggered through the closing weeks of the course I was teaching and barely managed to finish the page proofs of my new book.[1]  After that, I was no longer capable of effective work.  In May 2014, following remarkable incompetence by those reading my X-rays, who twice missed seeing a large tumor, I was diagnosed with late-stage lung cancer.  Now, after many weeks of treatment and almost as many weeks recovering from treatment, my strength has returned to the point that I can resume writing in this blog, perhaps with a better understanding of certain issues and topics.

What does my glimpse of death have to do with research methods?
Nothing focuses the attention on the choice problem like confronting your own mortality; you gain an enhanced appreciation for judging alternatives.  You often have to make decisions, with varying degrees of complexity, about crucial matters, potentially matters of life and death.  There is no assurance that you’ve made the right decision.  Even after you have made the decision and your doctors have acted on it and evaluated the consequences, you still cannot know whether another option might have been better. You can speculate, of course, but you can’t ever be sure.  The choices I had to make, and the principles for making them, are remarkably parallel to issues of methodological decision making. 

Choice is unavoidable & uncertainty is certain
The first principle of methodological choice is unavoidable uncertainty.  The same is true of medical treatment choices.  Patients usually want certainty from their medical advisors.  And those who seek the advice of methodologists usually do too.  Certainty, being able to give unambiguous advice, is often taken as a sign of competence.  But claiming more certainty than is merited can be dishonest and professionally unethical. 

Uncertainty does not equal ignorance or weakness.  Just the opposite is often true.  Unwarranted certainty on the part of advisors may cause you to breathe a sigh of relief, but your confidence is then based more on faith than on knowledge.  Hunch-based medicine hardly seems like a good idea.

The same ideas—necessary choices made in the face of uncertainty—have been the focus of my research and writing for many years, but making decisions about my own health put the theme of decision making while struggling with unavoidable uncertainty in a clearer, though harsher, light.

Uncertainty in planning research
Consider the kinds of questions you might ask yourself when approaching a research project.  Should you pursue your research question with surveys or interviews or some combination of the two?  If you combine them, should you use the interviews to help construct the survey questions, to aid in interpreting the answers to the survey questions, or both?  There is no way to know in advance.  And even after the study is completed, there is no sure way to know in retrospect.  You might be able to say, “OK, this research turned out pretty well,” or “this study would have been more persuasive had I taken a different approach.”  But you can’t really know; “do-overs” are rarely possible.  What you can do when following a research agenda is to think of your work as cyclical.  You make a choice when selecting a research strategy.  You take action using that strategy.  Then you evaluate the results of that action and alter your future choices accordingly, as illustrated below.

                           Choice → Action → Evaluation → (and back to Choice)


Atul Gawande’s take on these issues
In the abstract, such a graphic makes it all seem pretty simple.  But the complications that arise when applying the choice-action-evaluation-choice . . . feedback loop are challenging.  An excellent source for examining the relations in this cycle, in both medical and social research contexts, is the writing of Atul Gawande, most recently his book about end-of-life choices, Being Mortal (2014).  By stressing the social and human contexts of decision making in health care, Gawande humanizes it and highlights the natural links between it and methodological decision making.

The first point is that there is no invariably correct decision.  The right decision depends on your goals.  Do you want to give up some privacy and autonomy by entering a nursing home in order to live longer than you would in a less restrictive environment?  Or do you prefer to maintain your autonomy at all costs, being able, for example, to sleep, eat, and lock your door when you want?  Is that autonomy worth increased risks to your safety and longevity?  Assisted living is an intermediate option, but choosing the right assisted-living facility for you is no simple matter.  And when the end gets nearer, as it inevitably must, do you want hospital or hospice?  Everything depends on what you value, and, therefore, on deciding what you actually value.  Different values will lead to different “correct” choices.  And if you have, in Gawande’s terms, “priorities beyond merely being safe and living longer,” such as privacy and autonomy, you may find yourself in conflict with your loved ones.  Of course, you might like to be safe, to live longer, and also to retain your privacy and autonomy.  But there are usually unavoidable tradeoffs among these goals.  There is no way to maximize them all. 

What to do—and how to do it.
Dying isn’t curable, but many diseases are.  And it is easier to prolong life with good interventions than it ever has been.  Medicine can be highly effective, but practitioners applying the same techniques often have very different levels of success.  Gawande is well known for his earlier works uncovering the reasons for differences between excellent and mediocre practice.  Again, there are strong parallels between medical and research practice.

Gawande’s Checklist Manifesto (2011) is a good example.  His argument is that in any field where the work is complex, the quality of the work improves if you use checklists.  There are two basic reasons to use checklists: to be sure (1) that you don’t forget something important; and (2) that you consider all the options available to you.  Gawande emphasizes the first, but the second is arguably the more important.  In medicine, for example, do you choose chemo, radiation, surgery, or some combination?  Only then, after you have chosen, can you focus on how to implement your choices most effectively.  In social research, do you use interviews, experiments, ethnographic observation, or some combination?  And if a combination, how much of each and to what end?

I first encountered Gawande’s writings in an article in the New Yorker magazine[2] in which he examined the widely varying success of clinical treatments for cystic fibrosis.  All the clinics he studied followed the same procedures.  Indeed, they had to do so to maintain their status as approved clinics.  But their success rates in dealing with this debilitating disease differed greatly.  To find out why some clinics were much more effective than others, Gawande conducted intensive case studies of the most successful ones.  What did they do that set them apart?  Essentially, the people working in them were relentless in implementing the protocol, the same protocol that all the other clinics followed but followed less rigorously.

Excellence in implementation
After you decide what to do, how do you do it?  How do you implement your treatment plan, or your research design?  What makes one physician or researcher superb and another mediocre?  The answer may be remarkably simple: relentlessness in applying the plan, laser-like focus, and fastidious attention to detail.  The same is true, I think, in the practice of social research.  For example, in my work with students conducting ethnographic observations for their doctoral research, one characteristic distinguishing the students who make major contributions to their fields from those who barely get through is unyielding attention to detail and inexhaustible effort: first at observation, then at recording observations, then at coding, re-coding, and re-re-coding those observations.  These steps involve invisible choices, what you do when no one is looking.  It can be hard to realize that recording, initially coding, and then recoding your fieldnotes may require much more time than the observations themselves, perhaps three to five times as much.  And, unlike with more quantitative forms of analysis, there are few routine methods, recipes, algorithms, or step-by-step guidelines to fall back on.

The same kind of variation occurs in experimental research.  Some experiments are good enough to get published but fairly soon pass unnoticed into oblivion.  Others set the standard for their field.  There are many reasons for this, of course.  But even experiments by investigators studying the same phenomena, using methods comparable enough to be summarized in a meta-analysis, vary markedly in outcomes, as measured by effect sizes.  One difference has been called “super-realization.”  Small experiments, especially those coming early in the research on a topic, usually obtain larger effect sizes; this has been widely noticed in both medical and educational research.  There are several explanations for the differences between small experiments and large ones.  One is that small-scale, proof-of-concept experiments are often conducted by enthusiastic pioneers.  Others who follow the paths set by pioneers may be less relentless in their attention to detail or less rigorous in applying the treatment or independent variable.
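To make the comparison concrete, here is a minimal sketch in Python of one common standardized effect size, Cohen's d.  The numbers are hypothetical, chosen only to mimic the pattern described above, in which a small pilot study reports a larger effect than a large replication of the same treatment.

import math

def cohens_d(mean_treat, mean_ctrl, sd_treat, sd_ctrl, n_treat, n_ctrl):
    """Standardized mean difference using a pooled standard deviation."""
    pooled_var = (((n_treat - 1) * sd_treat ** 2 + (n_ctrl - 1) * sd_ctrl ** 2)
                  / (n_treat + n_ctrl - 2))
    return (mean_treat - mean_ctrl) / math.sqrt(pooled_var)

# Hypothetical results, measured on the same outcome scale:
print(cohens_d(54.0, 48.0, 10.0, 10.0, n_treat=15, n_ctrl=15))    # small pilot study: d = 0.60
print(cohens_d(51.0, 49.0, 10.0, 10.0, n_treat=300, n_ctrl=300))  # large replication: d = 0.20

Nothing in the arithmetic explains why the pilot's effect is three times as large; that is where enthusiasm, rigor, and relentless implementation come in.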

Deciding on a method or set of methods is important, but so is deciding how much effort you will exert when you implement them.  Not only do you have to implement the original plan well; sometimes you need to alter the plan, maybe even going back to the beginning on the basis of what you’ve learned along the way.  It’s easier, of course, to gloss over problems and forge ahead so as to meet a deadline.

Responsibility for making choices—and for implementing them
Do you make your own choices, do you follow tradition, or do you let others (who are often following tradition) make them for you?  And once you’ve decided what to do, how energetically do you implement your decisions?

Decision making is hard because there is never any guarantee that you will make, or have made, the right choices.  Uncertainty is inevitable, and if you care about the results, such uncertainty can be very stressful.  But that is the nature of things.  There is no one best method, nor is there any one best method for choosing among methods.

The “decision problem” pervades even pure mathematics, the most abstract of scholarly disciplines, where you could expect human values and foibles to have limited play.  In the early twentieth century, one question (Hilbert’s Entscheidungsproblem, or “decision problem”) was whether there is a definite method that can correctly decide, for any mathematical assertion, whether it is provable.  Alan Turing showed in 1936 that the answer, in a word, was no.  At about the same time, Kurt Gödel showed that any consistent formal system rich enough for arithmetic contains true statements that cannot be proved within it.  If even pure mathematics, the most abstract of fields, contains questions that no fixed method can settle, uncertainty is surely unavoidable in messier fields like medical and social research, where human goals, foibles, and limitations will always intrude.  Still, we cannot stop.  Mathematics did not end with Gödel and Turing.  Nor will social and medical research cease as we confront the fact that there are no unerring ways to choose among the best options.  Decisions are inevitable and inevitably uncertain.




[1] Vogt, Vogt, Gardner, & Haeffele (2014).  Selecting the Right Analyses for Your Data (New York: Guilford Press).

Wednesday, November 6, 2013

QUALITATIVE & QUANTITATIVE METHODS ARE ANALOGOUS TO THE SENSES OF SIGHT & HEARING





Qualitative and quantitative methods of analysis work better together, just as the senses of hearing and sight do.  Of course, a person can get by with just one.  Blind people often develop especially acute hearing and can use Braille for reading.  Deaf people learn to read lips and tend to be particularly adept at interpreting visual cues.  But generally, a person can do better using the two senses together, noticing facial expressions and tones of voice, for example, as well as listening to the words that are spoken.  Just as the senses of hearing and seeing generally work better together, so too do quantitative and qualitative methods of data coding, analysis, and interpretation.

There are more than the two senses of seeing and hearing, of course; touch and smell are obvious additions to the list.  And there are more than two categories of data and analysis.  Graphic/visual data and analyses constitute another category, as do combined/mixed data and methods.  The analogies can only be pushed so far, but the point is clear: one gets a richer, fuller understanding by combining information from all the senses rather than relying on just one.  Likewise, you get a fuller, richer understanding by using all relevant data sources and methods of analysis rather than using only one.

For a researcher to say “I am only going to study quantitative data” or “only qualitative evidence” is akin to saying “I’m intentionally going to plug my ears” or “I’m going to wear blinders.”  This can lead to what psychologists call “learned helplessness.”  Self-inflicted injury might be a more accurate term.

Of course, a researcher might want to isolate one approach for analytic purposes.  For example, I have sometimes looked at video evidence with the sound off, then listened to the sound track while not looking at the video, and then read transcripts describing the actions and words on the video.  But this kind of analytic “taking apart” is usually done with the goal of putting together a better understanding of the whole.


Monday, October 7, 2013

WHAT STATISTICS DO PRACTITIONERS NEED?


Graduate professional programs seldom give practitioners, from M.D.s to Ed.D.s, what they need to understand the quantitative research relevant to their work.  Future practitioners rarely take more than one or two courses in quantitative methods, and given the way these courses are usually taught, this is not sufficient.  But one or two courses, taught in an effective way with the needs of practitioners in mind, could suffice.

What would effective statistics courses for practitioners look like?  It’s easy to specify what they would not be. They would not emphasize explanations of the mathematical foundations of statistical theory.  Nor would courses for practitioners devote much time to how to calculate the statistics they might use in their research, should they ever do any. 

What do they need?  Chiefly, they need to be able to comprehend and critically interpret the research findings in their fields.  That means they require a good understanding of the ways findings are reported by researchers using advanced methods.  The instructors of future practitioners have to be able to explain highly sophisticated techniques in lay terms, and in very abridged ways.  This is a form of teaching that has much in common with translating from one language, the statistician’s, to another, the practitioner’s.  In other words, it is a form of translational research.  The teachers and students need to focus on sufficient understanding of a broad range of concepts rather than learning any statistical method in great depth.  


The Traditional Approach
By contrast, instructors who use a more traditional approach, and insist on a firm grasp of theoretical fundamentals and computational know-how, will not get very far in a course or two.  It will be difficult to go beyond a handful of basics, such as the normal curve, sampling distributions, t-tests, ANOVAs, standard scores, p-values, confidence intervals, and rudimentary correlation and regression.  These are all fine subjects, and it is important for future statisticians to probe them deeply, but if these topics constitute the whole of the armamentarium of future practitioners, those practitioners will not be able to read most of the research in their fields.  And that means they won't be able to engage in evidence-based practice.


The Translational Approach
The main instructional method of the translational approach is working with students to help them understand research articles reporting findings in their fields.  If you want future practitioners to be able to read research with sufficient comprehension that they can apply it to practice, teach them how to read the articles; don’t revel in the fine points of the probability theory behind sampling distributions.  First, help students to decipher research articles in their fields, and then practice with them how to discuss, with critical awareness, the outcomes presented in those articles.  Instructors should supply some of the articles students study, but they should also encourage students to find articles that they would like to learn how to read.

The articles will of course vary considerably by field, and so too will the advanced statistical methods most useful to practitioners.  Regardless of the specifics, the method of instruction will be the same: work with students to enhance their critical understanding of quantitative research in their fields—its limitations as well as its applications to their professional practice—by teaching them how to decipher and evaluate it.


Practitioner-Researchers
What about practitioners who become involved in research?  Some surely will, and they should be encouraged.  But the traditional approach, stressing mathematical foundations of statistics and computational details, is unlikely to be a way to stimulate practitioners’ interest in doing research on topics relevant to their fields of practice. 

Most practitioner-researchers using quantitative methods will probably work with co-authors whose methodological expertise can supplement the practitioners’ substantive knowledge.  Successful practitioner-researchers will usually pay more attention to research design than to analysis.  When thinking about analysis, they will focus more on selecting the right analysis methods and less on the details of how to crunch the numbers.  Learning how to design research and write analysis plans that can successfully address research questions is an attainable goal for many thoughtful practitioners.  In graduate and professional education, it can be fostered by the careful perusal of successful (and not-so-successful) research in students’ fields.


Raising Standards
This approach raises standards.  It does not lower them, as many statisticians might fear.  Investigating the theoretical foundations of statistics is interesting to many of us, but it is largely irrelevant to the education of future practitioners.  Does it truly maintain standards to insist on teaching topics that are irrelevant to students?  Doing computations is comparatively easy (with software assistance).  But understanding and reasoning about evidence so as to try to solve real problems is hard.  And it is very important.
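A small illustration of that point, using Python's scipy library and numbers invented for the example: the computation behind a two-group comparison is a single function call, and none of the interpretive work is in that call.

from scipy import stats

treatment = [54, 61, 58, 49, 67, 55, 62, 60]   # hypothetical outcome scores
control = [48, 52, 50, 47, 59, 51, 53, 49]

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

Whether such a result is clinically or educationally meaningful, whether the design that produced it was sound, and whether it applies to one's own patients or students are the questions practitioners actually need help with.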

  
Incentives for Instructors
Why should statisticians who teach quantitative research methods courses take the approach advocated here?  What’s the incentive to change traditional ways?  One motivation is that every semester you get to read cutting-edge real research with your students, rather than trying to interest them in the same old textbook descriptions of basic statistical concepts.  That makes class preparation much more interesting.

Of greater importance, by teaching subjects more relevant to students’ futures, you will be better fulfilling your responsibilities as an instructor.  If you don’t teach students about quantitative methods in ways relevant to them, you make it hard for them to incorporate research into their professional practice, and you contribute to the growing problem of the separation of research and practice.  As one physician put it to me, “I wish I had had more time to learn statistical methods, but the press of content courses was too great—so I just read the abstracts and hope that’s good enough.”  That’s a sad commentary on his education.  I changed doctors.


Further reading
The following books deal with quite advanced topics in quantitative data analysis and do so assuming little if any statistical knowledge on the part of the reader. 

For biomedical fields, two good books that exemplify the approach argued for in this blog are Motulsky’s Intuitive Biostatistics: A Nonmathematical Guide to Statistical Thinking (2nd edition, 2010) and Bailar & Hoaglin’s Medical Uses of Statistics (3rd edition, 2009).

For the social sciences see Spicer, Making Sense of Multivariate Data Analysis (2005), Vogt, Quantitative Research Methods for Professionals (2007), Vogt et al., When to Use What Research Design (2012), and Vogt et al., Selecting the Right Analyses for Your Data (2014, in production).


More elementary discussions are available in the field of “analytics,” a newer term for statistics, usually as applied to studying big data (often Web data) in order to make business decisions.  A popular example is Keeping Up with the Quants (2013) by Davenport and Kim.

Tuesday, August 20, 2013

EFFECTIVE RESEARCH PROPOSALS




What is an effective research proposal?  It is a proposal that (1) gets accepted and (2) is a useful guide for conducting your study after it is accepted.  The basic steps of an effective proposal are almost always the same whether the proposal is for a grant application, a doctoral dissertation, or an evaluation of a project.  And the criteria for effectiveness are virtually identical whether it is called a plan, a design, or a proposal.  The basic outlines of all three are very similar.

In virtually all cases, it is much better to have a detailed plan, which you have to revise as you go along, than to start a research project with only a vague idea of what you are going to do.  Your time is too valuable to waste in aimless wandering.

There are 7 basic components or steps of a good research proposal.  I have listed those steps in a logical order below.  And this is the order you would probably use to outline your proposal, and to communicate the results of your research.  But in the actual practice of conducting your research, you might need to revisit earlier steps, often more than once.  For example, when something goes wrong in the sampling plan (step 4), one way to get ideas for fixing it is to re-review previous research (step 2) to see how other investigators have dealt with your problem.

1.  A research question. 
2.  A review of previous research. 
3.  A plan for collecting data/evidence.
4.  A sampling and/or recruiting plan.
5.  A research ethics plan. 
6.  A coding and/or measurement plan.
7.  An analysis and interpretation plan.


1.  A research question.  A good research question has to be researchable, meaning you could conceivably answer it with research.  An effective proposal also devotes considerable space to explaining why it is a good research question: that it is researchable and that it is important.

2. A review of previous research.  This helps you avoid reinventing the wheel, or even worse, the flat tire.  A review is also a source of many ideas about the subsequent stages of the proposal: on how to collect data, from whom to collect it, the ethical implications of your data collection plan, and finally approaches to coding and analysis. 

3.  A plan for collecting data/evidence.  There are 6 basic ways to collect data: (1) surveying, (2) interviewing, (3) experimenting, (4) observing in natural settings, (5) collecting archival/secondary data, and (6) combining approaches (1) through (5).  You also need to explain why your choice of a data collection plan is a good one for answering your research question and, implicitly, why one of the alternatives would not be better.

4.  Sampling and/or recruiting plan.  This describes from whom, how, where, and how much evidence you are going to collect.  In other terms:  Who or what are you going to study?  How many of them?  How much data from each of them? How will they be selected?

5.  A plan for conducting research ethically.  This plan tries to anticipate ethical problems and prepares for how to deal with them.  Once you know what you will gather, how, and from whom, then before you go ahead, you need to review your plan to see whether there are any ethical constraints concerning participants’ privacy, consent, and potential harms.  At this stage, your plan includes preparing for IRB review.

6.  Coding.  Coding is assigning labels (words, numbers, or other symbols) to your data so that you can define, index, and sort your evidence.  It is in the coding phase that the quant/qual/mixed distinctions become most prominent.  You may have had this coding decision in mind early on: perhaps you are phobic about numbers, or maybe you find verbal data annoyingly vague.  You may actually start with this as your first divider, but it would not be effective to write your proposal that way, by saying, for example, “I like to interview people and numbers give me the creeps, so I don’t want to do survey research” or “I’m shy and I don’t want to have to interact in face-to-face interviews, so I want to do secondary analysis of data.”
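As an illustration only (the excerpts and code labels below are hypothetical, and the sketch is in Python), coding in its simplest form means attaching labels to pieces of evidence so that they can be indexed and sorted later:

from collections import defaultdict

# Hypothetical excerpts with hypothetical code labels attached.
excerpts = [
    ("Interview 1", "I never felt I had a say in the treatment plan.", ["autonomy"]),
    ("Interview 2", "The staff explained every option to us.", ["information", "autonomy"]),
    ("Fieldnote 3", "The family overruled the patient's stated preference.", ["autonomy", "family conflict"]),
]

# Build an index: each code points to the sources in which it appears.
index = defaultdict(list)
for source, text, codes in excerpts:
    for code in codes:
        index[code].append(source)

for code in sorted(index):
    print(f"{code}: {index[code]}")

Whether the labels are words, numbers, or other symbols, it is this defining, indexing, and sorting that makes the later analysis possible.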


7.  Analysis and interpretation.  This tends to be the skimpiest part of a research proposal.  But if you know what you are going to collect, from whom, and how you will code it, your first 6 steps really do shape (though not completely determine) the analysis options open to you.

Friday, August 16, 2013

CITATION SYSTEMS: WHICH DO YOU PREFER?



There are several distinct conventions authors can use to cite the sources they use in their research.  Individuals often have very strong beliefs about which convention is best.  And professional organizations have issued lengthy guidelines.  Among the best known and most widely used are those of the American Psychological Association, the Modern Language Association, and the University of Chicago Press.  Some research journals have their own systems. 

Is one of these better than the others?  No, they are all just fine.  They are all merely conventions.  Saying that one is better than another would be like claiming that driving on the left side of the road is better than driving on the right.  Both are fine as long as everyone knows and abides by the rules.   

There is only one criterion for excellence in a citation system.  If your reader can easily check your sources for accuracy, the system is good.  If your reader cannot do so, the system is bad.  The specific format does not matter at all as long as it meets this criterion.

But tastes differ, and people have preferences.  As an author of textbooks and reference works in research methods, I wanted to know what my readers prefer.  So I did some market research among potential readers: students in research methods courses in the social sciences and applied disciplines such as nursing, social work, and education.  I prepared two versions of a short paragraph, one citing sources in parentheses in the text of the paragraph and the other citing the sources in footnotes or endnotes.

The results were overwhelming.  In the first group of 47 students surveyed, 42 preferred the endnote system, 5 didn’t care, and not a single student opted for the in-text citation system.  In psychology courses, the in-text citation system did better, probably because the American Psychological Association uses an in-text system, and it is a powerful presence in the fields of psychology and education.  But in no group of respondents did more than 20% ever opt for an in-text system.  Readers who offered an explanation said that in-text citations were “annoying,” that they “got in the way,” and that they “cluttered the text.” 

Convinced by this market research, I have used footnotes or endnotes for citations whenever possible. 


HERE ARE THE SURVEY INSTRUCTIONS FOLLOWED BY THE TWO VERSIONS OF THE PARAGRAPH

I would appreciate your help with the following survey.  I am writing textbooks and reference works for graduate students in research methods, and I would like to learn from potential readers your preferences about the format of the text.

The survey is anonymous.  You are under no obligation to participate.  If you choose not to participate just return these pages blank.

Thank you,
W. Paul Vogt
---------------------------------------------------------------------------------------------

The passages on the next page are identical in content, but they differ in form.  The first version of the passage includes the citations in the text.  The second version provides the citations as endnotes.

WHICH DO YOU PREFER?
If you were consulting a reference book or text book, which of the two passages would you rather read?


_____Version 1 with citations in the text


_____Version 2 with citations as endnotes


_____No preference


If you would be willing to share the reasons for your preferences, please explain them below.  I would appreciate learning about them. 


Version 1—Citations in Text

While internet surveys are becoming more common, many scholars (Baker, Curtice & Sparrow, 2002; Schoen & Fass, 2005; see also Couper, 2000; Dillman, 2000) continue to express skepticism about their value, especially as concerns sampling bias.  On the other hand, several survey experiments comparing Internet surveying to more traditional modes (Krosnick & Chang, 2001; VanBeselaere, 2002; Alvarez, Sherman & VanBeselaere, 2003; Chang & Krosnick, 2003; Sanders, Clarke, Stewart, & Whiteley, 2007) have shown that well-conducted Internet surveys can be as effective as other methods of sampling and surveying.

Version 2—Citations in Endnotes

While internet surveys are becoming more common, many scholars[1] continue to express skepticism about their value, especially as concerns sampling bias.  On the other hand, several survey experiments comparing Internet surveying to more traditional modes[2]  have shown that well-conducted Internet surveys can be as effective as other methods of sampling and surveying.




[1] Baker, Curtice & Sparrow, 2002;  Schoen & Fass, 2005.  See also Couper, 2000;  Dillman, 2000.

[2] Krosnick & Chang, 2001; VanBeselaere, 2002; Alvarez, Sherman & VanBeselaere, 2003; Chang & Krosnick, 2003; Sanders, Clarke, Stewart, & Whiteley, 2007.



IF YOU HAVE A REASON FOR PREFERRING ONE VERSION OR THE OTHER, AND YOU WOULD LIKE TO SHARE IT, PLEASE POST IT BELOW.