H809 – Ardalan et al. 2007
I am now making use of The Wisdom of Crowds (James Surowiecki, 2004) and drawing on We-Think (Charles Leadbeater, 2008) to write a summary of the main issues mentioned with regard to the Ardalan et al. (2007) reading. Charles Leadbeater's (2008) ideas about the power of mass creativity are illustrated pretty well in his images.
Last night I went through all the contributions made in the module-wide forum Activity 11.7: Discussing the papers, and extracted the following statements and findings with regard to Ardalan et al. (2007):
- lacking background information about how the paper-based survey was distributed
- without that background information, it is difficult to interpret a low rating of lecturer performance
- lack of detail regarding the conditions under which the paper-based surveys were completed
- different environments in which students complete their survey – in class or at home, alone or in a group, with a glass of wine
- no control over the environment in web-based surveys, which is likely to influence the responses of at least some of the students
- we don’t know enough about the factors surrounding the study and the specific contextual variables that could have influenced the results
- confounding factors (e.g. age, gender, previous experience with computers and/or web-based surveys) – comparing unadjusted numbers for the two groups might lead to misleading results
- reservations about the method of comparing different years, with 2003 being quite early for students to be competent/familiar with online responses – this might explain the drop in response rate
- it is a fundamental flaw to compare teacher ratings across two different years – so many things influence how teaching differs from year to year, even day to day, such as personal situation, politics, class dynamics, etc.
- comparing 2003 (paper-based) with 2004 (web-based) is not really valid, as the whole method is novel for students and faculty
- the samples were not comparable – response rate, different environments and time available to complete the survey, different media
- participants are self-selected – with implications for the results
- the profile of the participants is likely to differ, so it is problematic to compare the results directly and to generalise about how people complete paper and web-based surveys differently
- it is important to determine the characteristics of the participating students and the type of stimuli that would convince non-respondents to participate
- response rate
- the issue of high response rates to paper surveys is not addressed
- the low response rate for the web-based survey (31%) means the sample is not representative
- offering incentives to achieve higher participation
- ethical issues?
- paying for participation (Bos) – would definitely affect/skew the results
- Ardalan considers offering extra credit as an incentive
- self-reported – but what makes a good teacher definitely differs between students: is a teacher good because students do not get much homework, or because they learn a lot?!
- the design of the questionnaire can contribute to or undermine validity – the number of questions asked is low (8), and they are ambiguous, too vague and omit the students' experience
- no question is asked about teaching methods – nevertheless Ardalan refers to such results (p. 1088)
- coding (quantifying) of the qualitative answers
- the categorisation into e.g. positive, mixed and negative responses seems pretty simplistic
- categories seemed a bit vague and subjective
- qualitative coding was a little crude
- theoretical framework?
- paper aims at a more empirical, positivist audience
- positivist (simplistic?) approach (Sharon) – ignores contextual variables
- Ardalan’s research is based on positivism – quantifying the data as well as the qualitative feedback
- not much of an attempt to ground the study in any theoretical framework – e.g. linking with student-centred learning
- Discussion of results / lacking depth of study / lacking evidence?!
- Hypothesis 3 – measuring the performance of faculty
- Hypotheses 3 and 4 – the methodology used in the study cannot explain or find any causes for such a difference; the study does not delve deep enough into the situation to understand/analyse what is actually happening
- they do not have sufficient evidence to explain why students would rate faculty significantly lower
- why did they skip the autumn survey (the first web-based survey) due to ‘inevitable teething troubles’ – did that result in the low response rate in the spring survey?
- a cross-sectional study provides only a snapshot, but gives no indication of the sequence of events
- paper-based survey – completed and collected in class in a single session
- web-based survey – completed at leisure, at any time or place; the survey was available for at least a whole week
- students did not have the opportunity to take the paper surveys home and return them a few days later – which would have increased the comparability of the two surveys
- some literature had been reviewed, but I wasn’t convinced it was as complete a review as possible
- it is difficult to generalise from this study to other places or times because of the flaws within the study
- lacking reliability, because the test was not repeated
- convincing, well-referenced and interesting
- readable and believable, though…
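One of the points above – that comparing unadjusted numbers for two self-selected groups can mislead – is worth a toy sketch. All numbers below are invented for illustration and are not from the Ardalan et al. data: the idea is simply that if the paper and web cohorts differ on a confounder such as prior computer experience, the unadjusted overall means can even point in the opposite direction to the within-group comparison (Simpson's paradox).

```python
# Toy illustration (numbers invented, NOT from Ardalan et al.) of how
# comparing unadjusted averages of two self-selected groups can mislead
# when a confounder – here, prior computer experience – is distributed
# differently between the paper and web cohorts.

# (number of respondents, mean rating) per experience level
paper = {"experienced": (20, 4.0), "novice": (80, 3.0)}
web = {"experienced": (80, 3.8), "novice": (20, 2.8)}

def overall_mean(groups):
    """Respondent-weighted mean rating across the experience levels."""
    total = sum(n for n, _ in groups.values())
    return sum(n * m for n, m in groups.values()) / total

# Within EACH experience level the web ratings are lower than paper...
for level in ("experienced", "novice"):
    assert web[level][1] < paper[level][1]

# ...yet the unadjusted overall mean makes the web survey look higher,
# simply because its cohort happens to contain more experienced students.
print(f"paper overall: {overall_mean(paper):.1f}")  # paper overall: 3.2
print(f"web overall:   {overall_mean(web):.1f}")    # web overall:   3.6
```

This is exactly why several forum contributors asked for the groups to be adjusted (or at least described) on age, gender and computer experience before the two years' ratings are compared.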
Well, it really looks like we all take a pretty critical view of the study, emphasising its weaknesses rather than its strengths.
I guess the next step would be to suggest solutions, although some suggestions about how matters could be improved have already been made in the forum.