I
know, I know, quantitative is boring and everyone hates it. However,
I still found this chapter interesting for several reasons.
- Many of the concepts and questions this chapter raises about quantitative research could apply to qualitative research as well. The qualitative chapter goes on to raise similar questions and concerns, but taking the points this chapter makes about surveys and experiments and applying them to things like interviews and observations gets you thinking about qualitative research in a way you might not have before.
- The word “experiment” has immediate connotations in my mind, so while reading this chapter I imagined the psychological experiments that come up often in class discussions. I started thinking about the moving parts of experiments like Little Albert and the Stanford Prison Experiment. While many of these experiments were deeply flawed and, technically, used either mixed methods or qualitative approaches, it was still fun to identify the possible instrumentation, populations, sampling, and other factors within them.
Speaking of sampling, I think that's one area where the subjectivity of quantitative experiments becomes immediately apparent. For example, with a
survey, you are automatically drawn to a population sample that is
willing to take a survey at all. What segment of the population is
willing to stay on the phone when a robotic voice tells them it's a
survey? What segment of the population clicks on links asking them
to answer questions? Of course, pollsters and other people who
conduct these surveys would probably argue that as long as the sample
is big enough it will be representative of the population at large,
and maybe there is evidence for that, but I can't help but feel that
a voluntary survey is by design a process that will gravitate toward
some populations and away from others. As an example, in 8.1 they show a survey that was given to about 10% of the student population at a small liberal arts college. They wanted to study retention rates, and they used a questionnaire of 116 questions to figure out why so many freshmen and sophomores were dropping out. My immediate concern would be that anyone who was considering dropping out would not fill out a 116-question survey without some serious incentives.
The example goes on to mention that the average ACT score of the
sample was higher than the population, but it does not appear to
address the implications of this, although this is just an excerpt
from the paper so maybe it does elsewhere.
Of
course, this is me thinking out loud. I'm sure this argument has
been discussed and beaten into submission by much smarter people than
me.
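The "a big enough sample fixes it" argument can actually be checked with a quick simulation. Here's a minimal Python sketch (all of the numbers, like the 20% at-risk share and the response rates, are invented purely for illustration) showing that when willingness to respond is correlated with the thing you're measuring, making the sample bigger doesn't make it more representative:

```python
import random

random.seed(42)

# Hypothetical population: 20% of students are "considering dropping out".
POP_SIZE = 100_000
population = [random.random() < 0.20 for _ in range(POP_SIZE)]

def voluntary_sample(pop, n_target):
    """Invite random students until n_target respond; students considering
    dropping out respond far less often (rates are made up for illustration)."""
    sample = []
    while len(sample) < n_target:
        person = random.choice(pop)
        respond_p = 0.05 if person else 0.30  # at-risk students rarely respond
        if random.random() < respond_p:
            sample.append(person)
    return sample

true_rate = sum(population) / POP_SIZE
for n in (100, 1_000, 10_000):
    s = voluntary_sample(population, n)
    print(f"n={n:>6}: sampled at-risk rate {sum(s)/n:.3f} vs true rate {true_rate:.3f}")
```

With these made-up numbers, the sampled rate hovers around 4% no matter how large n gets, while the true rate is 20%: a larger voluntary sample just gives you a more precise estimate of the wrong population.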
I'm actually curious about that as well, Graham. Especially for surveys that aren't conducted in person, it would be interesting to understand what kinds of people are actually willing to take them. I, personally, at least sometimes feel bad when someone is looking for people to take a survey and everyone keeps passing them by. I feel bad saying no. But online, say, there isn't really that feeling. Length, time, situation, etc. would all play a role in whether anyone would volunteer to do the survey. I wonder, in essence, if that's something that should be taken into account when discussing the validity of the survey method.
Interesting point! I'm guilty of clicking "cancel" virtually every time I am asked to do an online survey, and I typically only fill out surveys when I am upset with a company.