
Adventures in Data Collection

I took a lot of shit when I had the handle “DataDiva” on Twitter. A fair amount of it, I brought on myself by not considering the impact of 140 characters on a fellow human or by being clueless about a particular topic. The rest of it came from the first word in my handle. More than once, someone tweeted at me out of the blue about how data was killing public education. I never quite figured out how to respond to those tweets so … I changed my handle. I had picked the name a decade ago for a project because it was funny and I like alliteration, so I was mostly fine with letting it go.

I’ve seen the light that “data” is troublesome. I remain fully team “evidence of student learning”, on the side of documentation for learning, and will wave whatever flag I need to wave to get student work to the table without using that word. Regardless, I’m still about doing data collection right. I’m not talking here about corralling numbers into columns in Excel, which is a noble pursuit unto itself, but about the lovely, messy, noisy stuff that is people’s opinions.

If you want to collect people’s opinions about a topic or issue, you can design a survey. So far, so easy. If your area of interest is people’s voting plans, you ask a straightforward question such as “Which of the following candidates do you plan on voting for?” Anything beyond that forced-choice, neutral question and you get into the art and science of survey design.

Consider these two simple demographic questions.

The one on the right comes from a group of researchers based at a large university looking to collect national evidence about a particular movement in education. Their project likely went through the Institutional Review Board (IRB) at their university. The first page of their survey explains their purpose, goals, and intent. The demographic question comes at the end of the survey, after the important questions have been asked.

The one on the left comes from a grass-roots organization seeking to collect information from members. There is no research statement. There is no information about purpose, goals, or intent. The survey is framed in an email to members, but once the survey is opened, there’s no context, just a question about the participant’s age and then Q2.

The demographic question on the left should raise red flags*. And to be clear, this isn’t about the nature of that survey, the project’s goals, or the organization. Rather, it’s about the tension of trusting the results of a survey when it’s clear the survey lacks the traits of quality found on the right. So, as a reformed DataDiva, I’d gently ask that before providing a response to a survey, you look for hallmarks of quality.

  • Is there a statement of purpose, research, or intent?
  • Are the questions as neutral as possible?
  • Can you ascertain how your response will be used?
  • Do you detect a bias in the survey? Are the desired results telegraphed in the questions?
  • Will they use your data to support their claim or study the issue?
  • If a question strikes you as unfair or biased, or if you have a question of your own, do you know who the author is or whom to contact?

Your opinion has weight. If you’re going to give your voice to someone else to use, there’s no harm in taking a beat to be an informed gifter. On the flip side, it’s helpful to understand why professional survey designers and academic researchers do what they do. In many cases, the respondent is forced into a choice without a chance to explain context. This design feature isn’t a flaw in the survey; rather, it’s a function of the purpose, goals, and data collection needs. Often, research surveys will end with a “can we contact you for more information?” question. If you want to explain more, make sure you provide accurate and current contact information.

* The red flags?

  1. Male/female are sex-related words, not gender.
  2. Transgender isn’t a separate gender. A woman may identify as transgender but she’s still a woman. Note that the survey on the right leaves the choice up to the user.
  3. “Identify” is an extraneous word. In survey design, every word should be there because its inclusion helps the researcher.
  4. It’s the second question of the survey. There are different opinions on whether demographic questions should go at the beginning or the end. What comes first in a survey is incredibly important in terms of getting complete responses from participants. In other words, why did the designers put them first?
