H809 – Electronic surveys vs. pencil and paper surveys (A11.1)
Alan Woodley and Adam Joinson talked in the week 11 podcast about electronic surveys and how they might differ from paper-and-pen surveys. Research indicates that results from e-surveys are often exaggerated, whereby it is not always clear whether the sample is different, whether the medium leads to different types of responses, or whether the interaction between the sample and the medium leads to different results.
Conducting an e-survey means that researchers need to take into consideration issues such as who has internet access, security settings, and the impact of the medium on people's responses. Context issues (i.e. where people are completing the surveys, e.g. a cyber cafe) are also important.
However, online surveys offer much more variety – e.g. the inclusion of photographs, video, and different response formats. They are cheap, fast, reach people around the world, and the data is already digital when it comes back. That all sounds promising. There are also claims that people who complete online surveys are truer volunteers than people who have traditionally been sampled, and that people are more candid in online surveys.
Yet this has changed, because with all the tracking possibilities on the internet, people's IP addresses can easily be traced. Therefore, as Adam puts it – 'if you do complete something online it's confidential rather than anonymous' – and paper-and-pencil responses are now seen as more candid.
Nevertheless, response rates are constantly dropping for all kinds of reasons, mainly because people claim they lack the time to finish surveys and fear that their responses are simply used to generate sales leads. Privacy concerns have increased, which I think stands somewhat in contrast to all the openness displayed on social network sites, blogs, wikis, etc. Germany is conducting a census this year and there is a ferocious debate about it. Yet participation is mandatory – so much for true volunteers – otherwise respondents will be penalized. Opinions are divided, and only some German citizens view completing the survey as their responsibility and as a contribution to the public good.
Yet Adam talks about a big shift in online research: the research process is moving into much more of a public sphere now and becoming more transparent. Adam also points out that it is important that researchers are willing to listen to their respondents and treat them with respect, not exploit them as data sources. He argues that surveys could be used as a communication tool, to start a conversation with the respondents. That, however, requires well-designed surveys that match the experience of the respondents. Adam suggests that 'we can actually try and draw in our potential respondents and get them involved, get them critiquing, get them generating knowledge, get them suggesting questions, all the kinds of things that technology allows us to do, then we might see some kind of shift where actually the ownership of the data goes back to the respondent.'
However, Adam cautions against humanizing the process too much. He claims it is a balancing act between humanizing the process – which researchers want to do because it increases participation rates and reduces dropout – and introducing interviewer bias. Researchers found that the more realistic the avatar (the little 3D representation of an interviewer), the less self-disclosure and the fewer candid responses you get from the respondent.
Alan Woodley concludes that it ‘seems that e-surveys has got a future but it has to be used carefully and we’ve got to learn a new set of lessons with this new technology.’
The discussion in the now module-wide forum (thanks to the course team :-)) revealed differing views, but also consistent ones.
One discussion concerned punitive actions versus amelioration, and how surveys are often used, as one fellow student put it, so that 'senior leaders at the national level use the survey results to scorn the institutions whose learners responded negatively.' Speaking of punitive actions, I wonder how the OU uses survey results, and whether a tutor who receives negative responses will simply not be hired the next time.
Although I am normally willing to take responsibility, to 'contribute to the good' as Adam puts it, and finish the course survey, I always ask myself why it comes so late: my contributions may help the next courses, but they do not really improve anything for me. Some mid-term surveys would therefore be more effective, and would probably increase response rates, if people had the feeling that they could actually change something.
Having been responsible for the computer labs of my last school and having worked closely with the school's computer administrator, I have lost all faith in internet anonymity. I find it particularly tricky knowing that your tutor will grade your final EMA. To be honest, in my case, knowing that my response can be traced back to me does not really encourage absolute honesty, especially if I had something negative to say, because the grading process is just starting at the time we are asked to provide feedback.
However, I would have the same reservation if asked to fill out a paper-and-pen response while sitting in a classroom, knowing that my tutor/teacher could identify me by my handwriting. I therefore think safeguarding anonymity is of great importance for receiving honest responses, and transparency is another important factor that I experienced in my own classes: not only collecting responses, but feeding them back to the classes, letting them know what you found out and what you intend to change. That has led to the best results so far.