Wednesday, January 7, 2015

Mortality and the Choice Problem in Research Methods


Where have I been?
When I last posted an entry to this blog about a year ago, I was feeling poorly, and I got worse over the following months. In the spring semester, I staggered through the closing weeks of the course I was teaching and barely managed to finish the page proofs of my new book.[1] After that, I was no longer capable of effective work. In May 2014, following remarkable incompetence by those reading my X-rays, who twice missed a large tumor, I was diagnosed with late-stage lung cancer. Now, after many weeks of treatment and almost as many weeks recovering from treatment, my strength has returned to the point that I can resume writing in this blog—with perhaps a better understanding of certain issues and topics.

What does my glimpse of death have to do with research methods?
Nothing focuses the attention on the choice problem like confronting your own mortality; you gain an enhanced appreciation for judging alternatives.  You often have to make decisions, with varying degrees of complexity, about crucial matters, potentially matters of life and death.  There is no assurance that you’ve made the right decision.  Even after you have made the decision and your doctors have acted on it and evaluated the consequences, you still cannot know whether another option might have been better. You can speculate, of course, but you can’t ever be sure.  The choices I had to make, and the principles for making them, are remarkably parallel to issues of methodological decision making. 

Choice is unavoidable & uncertainty is certain
The first principle of methodological choice is unavoidable uncertainty.  The same is true of medical treatment choices.  Patients usually want certainty from their medical advisors.  And those who seek the advice of methodologists usually do too.  Certainty, being able to give unambiguous advice, is often taken as a sign of competence.  But claiming more certainty than is merited can be dishonest and professionally unethical. 

Uncertainty does not equal ignorance or weakness. Often, just the opposite is true. Unwarranted certainty on the part of advisors may let you breathe a sigh of relief, but your confidence is then based more on faith than on knowledge. Hunch-based medicine hardly seems like a good idea.

The same ideas—necessary choices made in the face of uncertainty—have been the focus of my research and writing for many years. But making decisions about my own health put that theme, decision making amid unavoidable uncertainty, in a clearer, though harsher, light.

Uncertainty in planning research
Consider the kinds of questions you might ask yourself when approaching a research project. Should you pursue your research question with surveys, with interviews, or with some combination of the two? If you combine them, should you use the interviews to help construct the survey questions, to aid in interpreting the answers to the survey questions, or both? There is no way to know in advance, and often no way to know even in retrospect, after the study is completed. You might be able to say, “OK, this research turned out pretty well,” or “this study would have been more persuasive had I taken a different approach.” But you can’t really know; “do-overs” are rarely possible. What you can do when following a research agenda is to think of your work as cyclical. You make a choice when selecting a research strategy. You take action using that strategy. Then you evaluate the results of that action and alter your future choices accordingly, as illustrated below.

                    Choice → Action → Evaluation → (and back to Choice)


Atul Gawande’s take on these issues
In the abstract, such a graphic makes it all seem pretty simple. But the complications that arise when applying the choice-action-evaluation-choice . . . feedback loop are challenging. An excellent source for examining the relations in this cycle—in both medical and social research contexts—is the work of Atul Gawande, most recently his book about end-of-life choices, Being Mortal (2014). By stressing the social and human contexts of decision making in health care, Gawande humanizes it and highlights the natural links between it and methodological decision making.

The first point is that there is no invariably correct decision.  The right decision depends on your goals.  Do you want to give up some privacy and autonomy by entering a nursing home in order to live longer than you would in a less restrictive environment?  Or do you prefer to maintain your autonomy at all costs, being able, for example, to sleep, eat, and lock your door when you want?  Is that autonomy worth increased risks to your safety and longevity?  Assisted living is an intermediate option, but choosing the right assisted-living facility for you is no simple matter.  And when the end gets nearer, as it inevitably must, do you want hospital or hospice?  Everything depends on what you value, and, therefore, on deciding what you actually value.  Different values will lead to different “correct” choices.  And if you have, in Gawande’s terms, “priorities beyond merely being safe and living longer,” such as privacy and autonomy, you may find yourself in conflict with your loved ones.  Of course, you might like to be safe, to live longer, and also to retain your privacy and autonomy.  But there are usually unavoidable tradeoffs among these goals.  There is no way to maximize them all. 

What to do—and how to do it
Dying isn’t curable, but many diseases are. And it is easier to prolong life with good interventions than it has ever been. Medicine can be highly effective, but practitioners applying the same techniques often have very different levels of success. Gawande is well known for his earlier works uncovering the reasons for the differences between excellent and mediocre practice. Again, there are strong parallels between medical and research practice.

Gawande’s Checklist Manifesto (2011) is a good example. His argument is that in any field where the work is complex, the quality of the work improves if you use checklists. There are two basic reasons to use checklists: to be sure (1) that you don’t forget something important and (2) that you consider all the options available to you. Gawande emphasizes the first, but the second is arguably the more important. In medicine, for example, do you choose chemo, radiation, surgery, or some combination? Only after you have chosen can you focus on how to implement your choice most effectively. In social research, do you use interviews, experiments, ethnographic observation, or some combination—and if a combination, how much of each and to what end?
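
To make the two functions of a checklist concrete, here is a toy sketch in Python. Every step and method name in it is a hypothetical illustration of my own, not something drawn from Gawande’s book.

    # A toy sketch of the two uses of a checklist. All items below are
    # hypothetical illustrations, not taken from Gawande.
    REQUIRED_STEPS = {"ethics approval", "pilot the instrument", "plan the analysis"}
    METHOD_OPTIONS = {"survey", "interview", "experiment", "ethnographic observation"}

    def checklist_review(steps_done, methods_considered):
        """Return (1) required steps not yet done and
        (2) candidate methods not yet weighed."""
        forgotten = REQUIRED_STEPS - steps_done
        unexamined = METHOD_OPTIONS - methods_considered
        return forgotten, unexamined

    forgotten, unexamined = checklist_review(
        steps_done={"ethics approval"},
        methods_considered={"survey", "interview"},
    )
    print("Don't forget:", sorted(forgotten))               # reason (1)
    print("Options not yet weighed:", sorted(unexamined))   # reason (2)

Trivial as the sketch is, it separates the two jobs: the first check guards your memory; the second guards against prematurely narrowing the option space.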

I first encountered Gawande’s writings in an article in the New Yorker[2] in which he examined the widely varying success of clinical treatments for cystic fibrosis. All the clinics he studied followed the same procedures; indeed, they had to do so to maintain their status as approved clinics. But their success rates in dealing with this debilitating disease differed greatly. To find out why some clinics were much more effective than others, Gawande conducted intensive case studies of the most successful ones. What did they do that set them apart? Essentially, the people working in them were relentless in implementing the protocol: the same protocol that all the other clinics followed, but less rigorously.

Excellence in implementation
After you decide what to do, how do you do it? How do you implement your treatment plan—or your research design? What makes one physician or researcher superb and another mediocre? The answer may be remarkably simple: relentlessness in applying the plan, laser-like focus, and fastidious attention to detail. The same is true, I think, in the practice of social research. For example, in my work with students conducting ethnographic observations for their doctoral research, one characteristic distinguishing the students who make major contributions to their fields from those who barely get through is unyielding attention to detail and inexhaustible effort—first at observation, then at recording observations, then at coding, re-coding, and re-re-coding those observations. These steps involve invisible choices: what you do when no one is looking. It can be hard to accept that recording, coding, and then recoding your fieldnotes may require far more time—perhaps three to five times as much—as the observations on which the subsequent work is based. And, unlike with more quantitative forms of analysis, there are few routine methods, recipes, algorithms, or step-by-step guidelines to fall back on.

The same kind of variation occurs in experimental research. Some experiments are good enough to get published but soon pass unnoticed into oblivion. Others set the standard for their field. There are many reasons for this, of course. But even experiments by investigators studying the same phenomena, using methods comparable enough to be summarized in a meta-analysis, vary markedly in their outcomes as measured by effect sizes. One source of the difference has been called “super-realization.” Small experiments, especially those coming early in the research on a topic, usually yield larger effect sizes; this has been widely noticed in both medical and educational research. There are several explanations for the differences between small experiments and large ones. One is that small-scale, proof-of-concept experiments are often conducted by enthusiastic pioneers. Those who follow the paths set by the pioneers may be less relentless in their attention to detail or less rigorous in applying the treatment or independent variable, as the simulation below illustrates.
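
A small simulation can make the super-realization pattern concrete. All the numbers below are invented for illustration; the only assumption encoded is the one just described, namely that a small pioneer study implements the treatment at full fidelity while a larger follow-up implements it less rigorously.

    # A minimal simulation of "super-realization," using made-up numbers.
    # Assumption: a small pioneer study delivers the treatment at full
    # fidelity; a larger follow-up delivers it less rigorously, so the
    # measured effect size (Cohen's d) shrinks.
    import random
    import statistics

    random.seed(2015)

    def run_study(n_per_group, fidelity, true_effect=0.5):
        """Simulate a two-group experiment; fidelity scales how
        completely the treatment is actually implemented."""
        control = [random.gauss(0.0, 1.0) for _ in range(n_per_group)]
        treated = [random.gauss(true_effect * fidelity, 1.0)
                   for _ in range(n_per_group)]
        return treated, control

    def cohens_d(treated, control):
        """Standardized mean difference, using a pooled standard deviation."""
        pooled_sd = ((statistics.stdev(treated) ** 2 +
                      statistics.stdev(control) ** 2) / 2) ** 0.5
        return (statistics.mean(treated) - statistics.mean(control)) / pooled_sd

    pioneer = run_study(n_per_group=25, fidelity=1.0)      # small, relentless
    follow_up = run_study(n_per_group=250, fidelity=0.6)   # large, less rigorous
    print(f"Pioneer study d:   {cohens_d(*pioneer):.2f}")
    print(f"Follow-up study d: {cohens_d(*follow_up):.2f}")

Averaged over many runs, the follow-up’s d settles near 0.3 while the pioneer’s settles near 0.5: the same nominal treatment, applied with different rigor, yields systematically different effect sizes.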

Deciding on a method or set of methods is important, but so is deciding how much effort you will exert when implementing them. Not only do you have to make good decisions; you also have to carry out the original plan well. And sometimes you need to alter the plan, maybe even going back to the beginning, on the basis of what you’ve learned along the way. It’s easier, of course, to gloss over problems and forge ahead so as to meet a deadline.

Responsibility for making choices—and for implementing them
Do you make your own choices, do you follow tradition, or do you let others (who are often following tradition) make them for you? And once you’ve decided what to do, how energetically do you implement your decisions?

Decision making is hard because there is never any guarantee that you will make, or have made, the right choices. Uncertainty is inevitable, and if you care about the results, that uncertainty can be very stressful. But that is the nature of things. There is no one best method, nor is there any one best method for choosing among methods.

The “decision problem” pervades even pure mathematics, the most abstract of scholarly disciplines, where you might expect human values and foibles to have limited play. In the 1930s, one question, Hilbert’s Entscheidungsproblem, was whether there is a definite method that can correctly decide whether any given mathematical assertion is provable. Alan Turing showed that the answer, in a word, was No. At about the same time, Kurt Gödel reached a related conclusion: any consistent formal system rich enough to express arithmetic contains true statements that cannot be proved within the system. If there is uncertainty in the most abstract of fields, pure mathematics, it is surely present in messier fields like medical and social research, where human goals, foibles, and limitations will always intrude. Still, we cannot stop. Mathematics did not end with Gödel and Turing. Nor will social and medical research cease as we confront the fact that there is no unerring way to choose among the options before us. Decisions are inevitable, and inevitably uncertain.
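
For the curious, the core of Turing’s argument can be compressed into a few lines of Python. This is a sketch, not a formal proof: the function halts below stands in for the hypothetical universal decision procedure, and it is a stub precisely because the whole point is that no genuine version of it can exist.

    # A compressed sketch of Turing's diagonal argument. `halts` stands in
    # for the hypothetical universal decision procedure.
    def halts(program, data):
        """Claimed decider: would return True iff program(data) halts."""
        raise NotImplementedError("Turing: no such procedure can exist")

    def paradox(program):
        """Do the opposite of whatever halts() predicts."""
        if halts(program, program):
            while True:   # predicted to halt, so loop forever
                pass
        else:
            return        # predicted to loop, so halt immediately

    # If halts() were real, paradox(paradox) would halt if and only if it
    # does not halt. The contradiction shows that some well-posed questions
    # admit no general, mechanical decision procedure.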




[1] Vogt, Vogt, Gardner, & Haeffele (2014). Selecting the Right Analyses for Your Data (New York: Guilford Press).

[2] Gawande, A. (2004). “The Bell Curve.” The New Yorker, December 6, 2004.

5 comments:

  1. Paul, I'm so thrilled to hear that you are feeling better, regaining your strength and writing again! Clearly you have been thinking deeply about these connections and I've enjoyed reading your insights here.

  2. Thanks Sheila,
    I wrote this a while ago, but hesitated to post it; it seems like self-indulgent existentialism--though there were a few methodological points too.

  3. Visualizing data with graphics can be more precise and revealing than conventional statistics. If you do not use statistical graphics, then you forfeit a deeper understanding of a dataset's structure.

  4. Dear Pimentel,
    Thanks for your comment.
    Data visualization is often very useful, as you say. There remains a choice problem, of course, to wit: what kinds of visualization are most useful for what types of research problem?
    Paul

  5. Dear Readers,
    I am sorry to tell you that Paul Vogt passed away on April 27, 2016.
