Wednesday, November 6, 2013




Qualitative & Quantitative Data Methods
Are Analogous to the Senses of Sight & Hearing


Qualitative and quantitative methods of analysis work better together, just as the senses of hearing and sight do.  Of course, a person can get by with just one.  Blind people often develop especially acute hearing and can use Braille for reading.  Deaf people learn to read lips and tend to be particularly adept at interpreting visual cues.  But generally, a person can do better using the two senses together—for example, to notice facial expressions and tones of voice as well as to listen to the words that are spoken.  Just as the senses of hearing and seeing generally work better together, so too do quantitative and qualitative methods of data coding, analysis, and interpretation.

There are more than the two senses of seeing and hearing, of course; touch and smell are obvious additions to the list.  And there are more than two categories of data and analysis.  Graphic/visual data and analyses constitute another category, as do combined/mixed data and methods.  The analogies can only be pushed so far, but the point is clear:  one gets a richer, fuller understanding by combining information from all the senses rather than relying on just one.  Likewise, you get a fuller, richer understanding by using all data sources and methods of analysis, rather than using only one.

For a researcher to say “I am only going to study quantitative data” or “I will consider only qualitative evidence” is akin to saying “I’m intentionally going to plug my ears” or “I’m going to wear blinders.”  This can lead to what psychologists call “learned helplessness.”  Self-inflicted injury might be a more accurate term.

Of course, a researcher might want to isolate one approach for analytic purposes.  For example, I have sometimes looked at video evidence with the sound off, then listened to the sound track while not looking at the video, and then read transcripts describing the actions and words on the video.  But this kind of analytic “taking apart” is usually done with the goal of putting together a better understanding of the whole.


Monday, October 7, 2013


What Statistics Do Practitioners Need?

Graduate professional programs seldom provide practitioners—from M.D.s to Ed.D.s—with what they need to understand quantitative research relevant to their work.  Future practitioners rarely take more than one or two courses in quantitative methods, and given the way these courses are usually taught, this is not sufficient.  But one or two courses, taught in an effective way, with the needs of practitioners in mind, could suffice.

What would effective statistics courses for practitioners look like?  It’s easy to specify what they would not be. They would not emphasize explanations of the mathematical foundations of statistical theory.  Nor would courses for practitioners devote much time to how to calculate the statistics they might use in their research, should they ever do any. 

What do they need?  Chiefly, they need to be able to comprehend and critically interpret the research findings in their fields.  That means they require a good understanding of the ways findings are reported by researchers using advanced methods.  The instructors of future practitioners have to be able to explain highly sophisticated techniques in lay terms, and in very abridged ways.  This is a form of teaching that has much in common with translating from one language, the statistician’s, to another, the practitioner’s.  In other words, it is a form of translational research.  The teachers and students need to focus on sufficient understanding of a broad range of concepts rather than learning any statistical method in great depth.  


The Traditional Approach
By contrast, instructors who use a more traditional approach, and insist on a firm grasp of theoretical fundamentals and computational know-how, will not get very far in a course or two.  It will be difficult to go beyond a handful of basics, such as the normal curve, sampling distributions, t-tests, ANOVAs, standard scores, p-values, confidence intervals, and rudimentary correlation and regression.  These are all fine subjects, and it is important for future statisticians to probe them deeply, but if these topics constitute the whole of the armamentarium of future practitioners, those practitioners will not be able to read most of the research in their fields.  And that means they won't be able to engage in evidence-based practice.


The Translational Approach
The main instructional method of the translational approach is working with students to help them understand research articles reporting findings in their fields.  If you want future practitioners to be able to read research with sufficient comprehension that they can apply it to practice, teach them how to read the articles—don’t revel in the fine points of the probability theory behind sampling distributions.  First, help students to decipher research articles in their fields, and then practice with them how to discuss, with critical awareness, the outcomes presented in those articles.  Instructors should supply some of the articles students study, but they should also encourage students to find articles that they would like to learn how to read.

The articles will of course vary considerably by field, and so too will the advanced statistical methods most useful to practitioners.  Regardless of the specifics, the method of instruction will be the same: work with students to enhance their critical understanding of quantitative research in their fields—its limitations as well as its applications to their professional practice—by teaching them how to decipher and evaluate it.


Practitioner-Researchers
What about practitioners who become involved in research?  Some surely will, and they should be encouraged.  But the traditional approach, stressing the mathematical foundations of statistics and computational details, is unlikely to stimulate practitioners’ interest in doing research on topics relevant to their fields of practice.

Most practitioner-researchers using quantitative methods will probably work with co-authors whose methodological expertise can supplement the practitioners’ substantive knowledge.  Successful practitioner-researchers will usually pay more attention to research design than to analysis.  When thinking about analysis, they will focus more on selecting the right analytic methods and less on the details of how to crunch the numbers.  Learning how to design research and write analysis plans that can successfully address research questions is an attainable goal for many thoughtful practitioners.  In graduate and professional education, it can be fostered by the careful perusal of successful (and not-so-successful) research in students’ fields.


Raising Standards
This approach raises standards.  It does not lower them, as many statisticians might fear.  Investigating the theoretical foundations of statistics is interesting to many of us, but it is largely irrelevant to the education of future practitioners.  Does it truly maintain standards to insist on teaching topics that are irrelevant to students?  Doing computations is comparatively easy (with software assistance).  But understanding and reasoning about evidence so as to try to solve real problems is hard.  And it is very important.
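
To illustrate how little of the difficulty lies in the computation itself:  with standard software, a two-sample t-test is a single function call.  Here is a minimal sketch in Python using SciPy, with made-up scores; the hard part—judging whether the comparison and its result mean anything for practice—is nowhere in the code.

from scipy import stats

# Made-up scores for two hypothetical groups; the point is only that
# the computation itself takes one line.
group_a = [72, 75, 78, 80, 71, 77]
group_b = [68, 70, 74, 69, 73, 72]

t_statistic, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_statistic:.2f}, p = {p_value:.3f}")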

  
Incentives for Instructors
Why should statisticians who teach quantitative research methods courses take the approach advocated here?  What’s the incentive to change traditional ways?  One motivation is that every semester you get to read cutting-edge real research with your students, rather than trying to interest them in the same old textbook descriptions of basic statistical concepts.  That makes class preparation much more interesting.

Of greater importance, by teaching subjects more relevant to students’ futures, you will be better fulfilling your responsibilities as an instructor.  If you don’t teach students about quantitative methods in ways relevant to them, you make it hard for them to incorporate research into their professional practice, and you contribute to the growing problem of the separation of research and practice.  As one physician put it to me, “I wish I had had more time to learn statistical methods, but the press of content courses was too great—so I just read the abstracts and hope that’s good enough.”  That’s a sad commentary on his education.  I changed doctors.


Further reading
The following books deal with quite advanced topics in quantitative data analysis and do so assuming little if any statistical knowledge on the part of the reader. 

For biomedical fields, two good books that exemplify the approach argued for in this blog are: Motulsky’s Intuitive Biostatistics: A Nonmathematical Guide to Statistical Thinking (2nd edition, 2010) and Bailar & Hoaglin’s Medical Uses of Statistics (3rd edition, 2009).

For the social sciences see Spicer, Making Sense of Multivariate Data Analysis (2005), Vogt, Quantitative Research Methods for Professionals (2007), Vogt et al., When to Use What Research Design (2012), and Vogt et al., Selecting the Right Analyses for Your Data (2014, in production).


More elementary discussions are available in the field of “analytics.”  This is a new term for statistics, usually applied to studying big data (often Web data) in order to make business decisions.  A popular example is Keeping Up with the Quants (2013) by Davenport and Kim.

Tuesday, August 20, 2013



Effective Research Proposals


What is an effective research proposal?  It is a proposal that (1) gets accepted and (2) is a useful guide for conducting your study after it is accepted.  The basic steps of an effective proposal are almost always the same whether the proposal is for a grant application, a doctoral dissertation, or an evaluation of a project.  And the criteria for effectiveness are virtually identical whether it is called a plan, a design, or a proposal.  The basic outline is very similar in all cases.

In virtually all cases, it is much better to have a detailed plan, which you have to revise as you go along, than to start a research project with only a vague idea of what you are going to do.  Your time is too valuable to waste in aimless wandering.

There are 7 basic components or steps of a good research proposal.  I have listed those steps in a logical order below.  And this is the order you would probably use to outline your proposal—and to communicate the results of your research.  But in the actual practice of conducting your research, you might need to revisit earlier steps, often more than once.  For example, when something goes wrong in the sampling plan (step 4), a way to get ideas for fixing it is to re-review previous research (step 2) to see how other investigators have dealt with your problem.

1.  A research question. 
2.  A review of previous research. 
3.  A plan for collecting data/evidence.
4.  A sampling and/or recruiting plan.
5.  A research ethics plan. 
6.  A coding and/or measurement plan.
7.  An analysis and interpretation plan.


1.  A research question.  A good research question has to be researchable, meaning you could conceivably answer it with research.  An effective proposal explains in some detail why yours is a good research question—that is, why it is both researchable and important.

2. A review of previous research.  This helps you avoid reinventing the wheel, or even worse, the flat tire.  A review is also a source of many ideas about the subsequent stages of the proposal: on how to collect data, from whom to collect it, the ethical implications of your data collection plan, and finally approaches to coding and analysis. 

3.  A plan for collecting data/evidence.  There are 6 basic ways to collect data: (1) surveying, (2) interviewing, (3) experimenting, (4) observing in natural settings, (5) collecting archival/secondary data, and (6) combining approaches (1) through (5) in various ways.  You also need to explain why your choice of a data collection plan is a good one for answering your research question and, implicitly, why one of the others would not be better.

4.  A sampling and/or recruiting plan.  This describes from whom, how, where, and how much evidence you are going to collect.  In other terms:  Who or what are you going to study?  How many of them?  How much data from each of them?  How will they be selected?
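
To make the selection part of a sampling plan concrete, here is a minimal sketch in Python of one common method, a simple random sample drawn from a sampling frame.  The frame and the sample size are hypothetical; a real plan would justify both.

import random

# Hypothetical sampling frame: 500 potential participants.
frame = [f"participant_{i:03d}" for i in range(500)]

random.seed(42)  # fixing the seed documents how the selection was made
sample = random.sample(frame, k=50)  # n = 50, selected without replacement

print(sample[:5])  # the first few selected participants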

5.  A plan for conducting research ethically.  This plan tries to anticipate any ethical problems and prepares for how to deal with them.  Once you know what you will gather, how, and from whom, then before you go ahead, you need to review your plan to see if there are any ethical constraints arising from participants’ privacy, consent, and potential harms.  At this stage, your plan includes preparing for IRB review.

6.  Coding.  Coding is assigning labels (words, numbers, or other symbols) to your data so that you can define, index, and sort your evidence.  It is in the coding phase that the distinctions of quant/qual/mixed become most prominent.  You may have had this coding decision in mind early on—perhaps you are phobic about numbers, or maybe you find verbal data annoyingly vague.  You may actually start with this as your first divider, but it would not be effective to write your proposal that way—by saying, for example:  “I like to interview people and numbers give me the creeps, so I don’t want to do survey research” or “I’m shy and I don’t want to have to interact in face-to-face interviews, so I want to do secondary analysis of data.”
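
As a concrete illustration of coding in this sense—assigning labels so that evidence can be defined, indexed, and sorted—here is a minimal sketch in Python.  The interview snippets and code labels are hypothetical.

# Coding: assigning labels to raw evidence so it can be defined,
# indexed, and sorted.  The snippets and labels below are invented
# illustrations, not a real coding scheme.
coded = [
    {"text": "I never have enough time to prepare my lectures.",
     "codes": ["workload"]},
    {"text": "My department chair is very supportive.",
     "codes": ["support"]},
    {"text": "Grading takes up most of my weekends.",
     "codes": ["workload", "assessment"]},
]

# Once coded, the evidence can be indexed and retrieved by label.
by_code = {}
for item in coded:
    for code in item["codes"]:
        by_code.setdefault(code, []).append(item["text"])

print(by_code["workload"])  # all evidence labeled "workload"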


7.  Analysis and interpretation.  This tends to be the skimpiest part of a research proposal.  But if you know what you are going to collect, from whom, and how you will code it, your first 6 steps really do shape (though not completely determine) the analysis options open to you.

Friday, August 16, 2013



Citation Systems
Which is the best?

There are several distinct conventions authors can use to cite the sources they use in their research.  Individuals often have very strong beliefs about which convention is best.  And professional organizations have issued lengthy guidelines.  Among the best known and most widely used are those of the American Psychological Association, the Modern Language Association, and the University of Chicago Press.  Some research journals have their own systems. 

Is one of these better than the others?  No, they are all just fine.  They are all merely conventions.  Saying that one is better than another would be like claiming that driving on the left side of the road is better than driving on the right.  Both are fine as long as everyone knows and abides by the rules.   

There is only one criterion for excellence in a citation system.  If your reader can easily check your sources for accuracy, the system is good.  If your reader cannot do so, the system is bad.  A system’s specific format does not matter at all if it meets this criterion.

But tastes differ, and people have preferences.  As an author of textbooks and reference works in research methods, I wanted to know what my readers prefer.  So I did some market research among potential readers—students in research methods courses in the social sciences and applied disciplines such as nursing, social work, and education.  I prepared two versions of a short paragraph, one citing sources in parentheses in the text of the paragraph and the other citing the sources in footnotes or endnotes.

The results were overwhelming.  In the first group of 47 students surveyed, 42 preferred the endnote system, 5 didn’t care, and not a single student opted for the in-text citation system.  In psychology courses, the in-text citation system did better, probably because the American Psychological Association uses an in-text system, and it is a powerful presence in the fields of psychology and education.  But in no group of respondents did more than 20% ever opt for an in-text system.  Readers who offered an explanation said that in-text citations were “annoying,” that they “got in the way,” and that they “cluttered the text.” 
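
For readers who want the figures behind “overwhelming,” the first group’s counts work out as follows (a quick check in Python, using only the numbers reported above):

# Counts from the first group of 47 students described above.
endnotes, no_preference, in_text = 42, 5, 0
total = endnotes + no_preference + in_text  # 47

print(f"endnotes:      {endnotes / total:.1%}")       # 89.4%
print(f"no preference: {no_preference / total:.1%}")  # 10.6%
print(f"in-text:       {in_text / total:.1%}")        # 0.0%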

Convinced by this market research, I have used footnotes or endnotes for citations whenever possible. 


HERE ARE THE SURVEY INSTRUCTIONS FOLLOWED BY THE TWO VERSIONS OF THE PARAGRAPH

I would appreciate your help with the following survey.  I am writing textbooks and reference works for graduate students in research methods, and I would like to learn from potential readers your preferences about the format of the text.

The survey is anonymous.  You are under no obligation to participate.  If you choose not to participate just return these pages blank.

Thank you,
W. Paul Vogt
---------------------------------------------------------------------------------------------

The passages on the next page are identical in content, but they differ in form.  The first version of the passage includes the citations in the text.  The second version provides the citations as endnotes.

WHICH DO YOU PREFER?
If you were consulting a reference book or text book, which of the two passages would you rather read?


_____Version 1 with citations in the text


_____Version 2 with citations as endnotes


_____No preference


If you would be willing to share the reasons for your preferences, please explain them below.  I would appreciate learning about them. 


Version 1—Citations in Text

While Internet surveys are becoming more common, many scholars (Baker, Curtice & Sparrow, 2002; Schoen & Fass, 2005; see also Couper, 2000; Dillman, 2000) continue to express skepticism about their value, especially as concerns sampling bias.  On the other hand, several survey experiments comparing Internet surveying to more traditional modes (Krosnick & Chang, 2001; VanBeselaere, 2002; Alvarez, Sherman & VanBeselaere, 2003; Chang & Krosnick, 2003; Sanders, Clarke, Stewart, & Whiteley, 2007) have shown that well-conducted Internet surveys can be as effective as other methods of sampling and surveying.

Version 2—Citations in Endnotes

While Internet surveys are becoming more common, many scholars[1] continue to express skepticism about their value, especially as concerns sampling bias.  On the other hand, several survey experiments comparing Internet surveying to more traditional modes[2] have shown that well-conducted Internet surveys can be as effective as other methods of sampling and surveying.




[1] Baker, Curtice & Sparrow, 2002; Schoen & Fass, 2005.  See also Couper, 2000; Dillman, 2000.

[2] Krosnick & Chang, 2001; VanBeselaere, 2002; Alvarez, Sherman & VanBeselaere, 2003; Chang & Krosnick, 2003; Sanders, Clarke, Stewart, & Whiteley, 2007.



IF YOU HAVE A REASON FOR PREFERRING ONE VERSION OR THE OTHER, AND YOU WOULD LIKE TO SHARE IT, PLEASE POST IT BELOW.   

Monday, July 29, 2013



Basics of a research design or proposal

Research designs and proposals tell the reader what you intend to learn, why you think it is a good thing to know, and, most important, how you plan to go about acquiring knowledge about the topic.  Proposals and designs differ mainly in their readers or audiences.  Sources of funding and dissertation committees read proposals, while the main audiences for designs are other researchers.  This is not a firm distinction; it is mostly a matter of emphasis.

Your design/proposal for this course should be about 10 double-spaced pages long (15 pages, or 4000 words, is the absolute upper limit).  The paper will be submitted electronically, as an MS Word file attached to an e-mail.  Comments will be returned to you by e-mail.

A research design/proposal should contain the following elements:

1.      Introduction, in which you explain what you want to study and why that is important.  A key feature of this section will be your research question.  Here you might also say a few words about how you propose to conduct the study. 
  • This should be a page or two in length.

2.      Review of the literature on your subject.  In it, you summarize what we already know about your subject and explain how your study would add to it.  As part of this explanation, your review will also often discuss the methods used in previous research on the subject.
      This review should discuss at least 10 recent and representative research articles on the subject and should explain how they were selected.  Try to find an article on your subject that is a review of the literature or a meta-analysis, and don’t hesitate to use secondary sources in addition to (not instead of) research articles. 
      Your review should be organized by theme, ideas, concepts, conclusions, or types of evidence—anything but by article (“in the first article I read it said . . . in the second article I read it said . .  .”). 
  • This section should be roughly 2 to 4 pages long.

3.      The methods section is, of course, the heart of the matter.  It should be a plan of work for learning what you want to know about your subject.  This should include sufficient details about (1) where you'll get your evidence, (2) how you'll gather it, (3) how much of it you need, and (4) how you'll analyze it once you've gathered it.  "Sufficient" in this case means enough details that you (or someone else) really could use your plan to guide the research activities.
      You can use any of the designs discussed in our texts.  Be sure to use the texts’ discussions when constructing your design.  For example, if the text explains potential problems of a particular design—e.g., with validity (internal or external), with reliability, or with unwanted variance that needs to be controlled—be sure to address these.
  • This section should be approximately 4 to 5 pages long.


Notes:  Your proposal/paper probably will not have much of a conclusion since you won't actually be doing the research; it is a proposed study.  Your design does not have to be confined to something you could do this semester.  It may be best for you to think of it as a design or a proposal for a dissertation; this would be a plan of work that would extend over a few semesters.

Saturday, April 13, 2013




Missing Data:
When people don’t answer survey questions

Elsewhere I’ve called missing data “the silent killer of valid inference.”  It is a silent killer, and a problem that is easy to overlook, because, well . . . it’s missing, not there.

Missing data is a problem in all forms of research—from interviews to experiments—but it is an especially tricky and often overlooked issue in survey research.  Kathy Godfrey recently posted (in an e-mail discussion) a very clear and concise description of the problem and how to handle it.  Her comments are so helpful that, with her permission, I’m re-posting them here.

What you need to do when writing, conducting, and analyzing a survey is make sure to get the maximum amount of information from the people—probably a large number of people—who do not respond the way you had hoped.  Kathy’s “four flavors” of non-response are actually four important variables.  A researcher can learn a lot by coding and analyzing them.
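
As a concrete illustration, here is a minimal sketch in Python of coding the flavors of non-response as distinct categories rather than lumping them all together as “missing.”  The raw responses are hypothetical; the category labels follow Kathy’s list below.

from collections import Counter

# Map raw non-responses to named categories; anything not in the map
# is treated as a substantive answer.  The responses are invented.
NONRESPONSE_CODES = {
    "refused":        "No Answer",
    "not applicable": "Does Not Apply",
    "don't know":     "Don't Know/Don't Remember",
    "no preference":  "No Preference/Don't Care",
}

responses = ["Smith", "don't know", "Jones", "refused",
             "no preference", "Smith", "not applicable"]

# Tally substantive answers and each flavor of non-response separately.
tally = Counter(NONRESPONSE_CODES.get(r, r) for r in responses)
for category, count in tally.items():
    print(f"{category}: {count}")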


Original Message:
Sent: 04-10-2013
From: Katherine Godfrey
Subject: Multiple imputation of "I don't know/ I don't remember"

Depending on the situation, there are at least four "flavors" of non-response, any one of which might be what's behind someone failing to answer a question (and this ignores the people who meant to respond, but simply goofed):

1. No Answer (deliberately not answering, and telling you so)
2. Does Not Apply (the question does not apply to respondent, and thus can't be answered)
3. Don't Know/Don't Remember (would answer if could, but can't)
4. No Preference/Don't Care (this is actually a real answer)

Here's an example to hopefully clarify:
Imagine a pollster asking people on the street, "Who will you vote for in the Senatorial election next week, Smith or Jones?"  The following answers are all possible:

1. "None of your business!" (No Answer)
2. "I don't live in this state." (Does Not Apply)
3. "I haven't decided yet; I'm still sorting through the candidates' stands on the issues." (Don't Know)
4. "They're both the same; I may just toss a coin in the voting booth--or stay home." (No Preference)
Not to mention the people that just walk past the pollster, leaving him to wonder if they're deliberately ignoring him or simply didn't hear him.

The second category (Does Not Apply) can turn up as "structural zeroes" in frequency analysis contexts. The difference between "Don't Know" and "No Preference" is subtle, but I think it's real.  The former says that there is (or was, or will be) an answer, but the respondent can't give it now.  The latter says that there is a known answer, and the answer is not to have a particular feeling/opinion.

The "don't know/don't remember" answer is actually more informative than a missing (non-response) answer, since a non-response could be in any of these non-response categories.  If possible, and if the sample size supports it, it could be used as a third category beyond a yes/no binary.  (For example, perhaps people who say they don't remember ever driving drunk are definitely more likely to have had an auto accident in the last year than those who say "no," but also substantially less likely to have done so than those who say "yes.")

I first learned about these types of non-response from my mother, who worked in survey research.  She was adamant that her surveys should include Don't Know and DNA (does not apply) options for questions, along with NA (no answer), to remove at least some of the reasons for people to feel that they had to leave a question blank because they couldn't or wouldn't answer.  Partial information is better than the dreaded DNR (Did Not Reply), which tells you nothing.

By the same token, I cringe whenever I see computer-administered Likert-scale survey questions that not only have no option for not answering (or indicating non-applicability), but have an even number of response options, thus not even allowing for "no preference."
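
Kathy’s drunk-driving example can be made concrete with a small sketch in Python.  The counts below are invented purely for illustration; the point is that keeping “don’t remember” as a third category preserves information that collapsing it into “missing” would throw away.

# Invented counts: accident history by answer to "Have you ever
# driven drunk?"  Keeping "don't remember" as its own category lets
# its accident rate fall between the "yes" and "no" groups.
accidents_by_answer = {
    "yes":            {"accident": 30, "no accident": 70},
    "don't remember": {"accident": 15, "no accident": 85},
    "no":             {"accident": 5,  "no accident": 95},
}

for answer, counts in accidents_by_answer.items():
    total = counts["accident"] + counts["no accident"]
    print(f"{answer!r}: accident rate {counts['accident'] / total:.0%}")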