Sunday, September 23, 2012

Quantitative Chapter 8


I know, I know, quantitative is boring and everyone hates it. However, I still found this chapter interesting for several reasons.

  1. Many of the concepts and questions that the chapter raises about quantitative research could apply to qualitative research as well. While the qualitative chapter goes on to raise similar questions, concerns, and emphases, if you take the points this chapter makes about surveys and experiments and apply them to things like interviews and observations, they get you to think about qualitative research in a way you might not have before.

  2. The word “experiment” has immediate connotations in my mind, so while reading this chapter I imagined the well-known psychological experiments that come up in class discussions. I started thinking about the moving parts of experiments like Little Albert, the Stanford Prison Experiment, etc. While a lot of these experiments were deeply flawed and, technically, used either mixed methods or qualitative approaches, it was still fun to identify the possible instrumentation, populations, sampling, and other factors within them.

Speaking of sampling, I think that's one area where the subjectivity of quantitative experiments becomes immediately apparent. With a survey, for example, you are automatically drawn to a population sample that is willing to take a survey at all. What segment of the population is willing to stay on the phone when a robotic voice tells them it's a survey? What segment clicks on links asking them to answer questions? Of course, pollsters and others who conduct these surveys would probably argue that as long as the sample is big enough it will be representative of the population at large, and maybe there is evidence for that, but I can't help but feel that a voluntary survey is by design a process that will gravitate toward some populations and away from others.

As an example, in 8.1 the book shows a survey that was given to about 10% of the student population at a small liberal arts college. The researchers wanted to study retention rates, and they used a 116-question questionnaire to figure out why so many freshmen and sophomores were dropping out. My immediate concern would be that anyone who was considering dropping out would not fill out a 116-question survey without some serious incentives. The example goes on to mention that the average ACT score of the sample was higher than that of the population, but it does not appear to address the implications of this, although this is just an excerpt from the paper, so maybe it does elsewhere.

Of course, this is me thinking out loud. I'm sure this argument has been discussed and beaten into submission by much smarter people than me.
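The worry above can actually be sketched numerically. Here is a minimal, purely illustrative simulation (not from the chapter; every name and number is made up) in which students' willingness to answer a long survey depends on the very trait the survey is trying to measure. No matter how many students you survey, the respondent average stays above the population average:

```python
# Illustrative sketch of voluntary-response bias in a retention survey.
# All quantities here are hypothetical assumptions, not from the book.
import random

random.seed(42)

# Hypothetical population of 2,000 students; each has an "engagement"
# score, where low-engagement students are the ones at risk of dropping out.
population = [random.gauss(0.0, 1.0) for _ in range(2000)]

def responds(engagement):
    # Assumption: willingness to complete a 116-question survey is
    # higher for engaged students (30%) than disengaged ones (10%).
    return random.random() < (0.3 if engagement > 0 else 0.1)

sample = [e for e in population if responds(e)]

pop_mean = sum(population) / len(population)
sample_mean = sum(sample) / len(sample)

print(f"population mean engagement: {pop_mean:.2f}")
print(f"respondent mean engagement: {sample_mean:.2f}")
# The respondent mean exceeds the population mean regardless of sample
# size, because non-response is correlated with the trait being studied.
```

The gap between the two means is exactly the kind of thing the higher ACT scores in the 8.1 example hint at: a bigger sample shrinks random error, but it does nothing to fix bias built into who responds.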

3 comments:

  1. I'm actually curious about that as well, Graham. Especially with surveys that aren't conducted in person, it would be interesting to understand what kinds of people are actually willing to take them. I, personally, at least sometimes feel bad when someone is looking for people to take a survey and everyone keeps passing them by; I feel bad saying no. But online, say, there isn't really that feeling. Length, time, situation, etc. all play a role in whether anyone would volunteer to do the survey. I wonder, in essence, if that's something that should be taken into account when discussing the validity of the survey method.

  2. I think you make a good point, Graham. However, not every type of research is done that way. I recall a friend of mine in Chicago who, when attending the University of Chicago, did quantitative psychological research in which she essentially "stalked" people to get their responses. In order to avoid the exact type of bias that you mention (the idea that you're only studying people who want to take surveys), her department chose people randomly out of the phone book and then basically harassed them until they would take the questionnaire. This was in the late 1990s, though, so it was before Facebook. ;)

    It also helped that the school had a huge budget to work with, so they could send undergraduate students after people in the street. Another such method is to require participation, as in my own case: as an undergraduate, I was required in a psychology class to fill out some number of surveys. In order to receive a final grade in the course, all the students in the class had to submit to several surveys from the graduate psychology department. I actually ended up doing more of them than needed, because they were really fun. The topic was whether or not shows like Law and Order and CSI, which focus largely on physical evidence, have corrupted people into being weaker witnesses on the stand in criminal trials.

    Anyways, those are just some of the ways I have personally seen departments try and prevent the type of bias you're talking about. Whether or not those methods are sound, I'm not sure (I can't imagine most of the other undergraduates in a core psychology course were as geeky about Law and Order as I am, for example).

  3. Interesting point! I'm guilty of clicking "cancel" virtually every time I am asked to do an online survey, and I typically only fill out surveys when I am upset with a company.
