Oodles of old stuff including How-To's, Writers Resources, Directories of all Types, Technology Reviews, Health Information, Marketing Statistics, Affluent Markets, Hospital and Medical Market Data and more

Monday, September 04, 2006

Eh? You Call THAT Research? Junk I've had to read to get through grad school


Instructional technology…especially the use of the Internet…has grown rapidly in recent years, but its impact on student learning and performance has not been conclusively demonstrated. Issues of design, access, and perception influence the impact of technology on student learning and performance. Three primary questions were identified: "is the web site effective in enhancing learning; what features are most effective for learners; and how can we best evaluate learning features contained on web sites?"

The "mixed method" evaluation survey consisted of 20 questions "designed for brevity and efficiency" using Likert scaling, one ranking question, two qualitative questions, and a demographic section. The survey was administered during class to a "convenience sample" of undergraduate Cornell students enrolled in a research methods course. Participation was strictly voluntary. Of 95 surveys distributed, 68 were collected (n=68), with approximately 71% female, 29% male; 82% social science majors; a mean age of 23 years; and approximately 71% junior/senior status.
Likert scale questions were analyzed using a mean and standard deviation for responses. ANOVA between Likert questions and demographic information determined significance on items indicating potential gender differentiation, etcetera. Regression modeling was used to determine predictors of performance and reported enjoyment/other factors measured by the Likert scale.
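For readers unfamiliar with these procedures, here is a minimal sketch of the kind of analysis described above: a mean and standard deviation for Likert responses, plus a hand-rolled one-way ANOVA F statistic for a gender comparison. All the numbers are invented for illustration; this is not the study's data.

```python
from statistics import mean, stdev

# Hypothetical 4-point Likert responses (1 = strongly disagree ... 4 = strongly agree);
# invented for illustration, NOT the study's actual data.
responses = [4, 3, 4, 2, 3, 4, 1, 3, 2, 4]

m = mean(responses)    # sample mean
sd = stdev(responses)  # sample standard deviation (n-1 denominator)
print(f"mean = {m:.2f}, SD = {sd:.2f}")

def anova_f(groups):
    """One-way ANOVA F: between-group mean square over within-group mean square."""
    all_vals = [x for g in groups for x in g]
    grand = mean(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical split of the same question by gender.
female = [4, 3, 4, 3, 3, 4]
male = [2, 3, 2, 4]
print(f"F = {anova_f([female, male]):.2f}")
```

A large F (compared against the F distribution's critical value for the given degrees of freedom) would suggest the groups differ; the study reportedly found no significant gender effect.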
Conclusions of the study indicated "students did perceive that the web site significantly helped them to learn the course material" and "it is apparent that at least one of the benefits of utilizing a mixed method approach to evaluation is the greater sophistication of measurement questions generated".

Problem #1…It's not generalizable.
This study is not generalizable for several reasons.
· By the author's own admission, Cornell students are not representative of most college students due to the school's high admission standards.
· Selection was not random but consisted of a "convenience sample" of students from the author's class. Demographics, variations in skill level, and other factors contribute to the problem.
· There was no control or matched sample group, no information on non-participants, variations in skill level, etcetera.
· Content of the course and web site, and the skills and knowledge to be impacted, were neither described nor measured.

Problem #2…It doesn't know what it is trying to measure
For example,
· The title: Survey Evaluation of Web Site Instructional Technology: Does it Increase Student Learning? (Indicates an overall/general question about the concept of on-line learning efficacy, not specific to one course.)
· The abstract states the purpose of the study is to determine "whether the web site enhanced student perceptions of learning". (Now the question is student perception of the usefulness of a specific web site rather than objective measurement of learning.)
· The introduction states "this research project is a response to the urgent need to conduct research on the efficacy of educational web sites (general need)…and later…. is the web site effective in enhancing learning, what features are most effective, and how can we best evaluate learning features contained on web sites?" (Once again, general questions, not specific to one site.)
· The results state: "whether the web site significantly contributed to student learning in the course". (Specific to one course; does not indicate perception but implies objective criteria.)
· The summary states: "the survey found that students did perceive that the web site significantly helped them to learn the course material". (Specific course; not standardized criteria but rather perception of learning or usefulness.)

Problem #3…The study lacks evidence/proof of validity
· The author created a mixed design survey that has never been tested, was not reportedly based upon established criteria, and lacked a control group or other measure of comparison.
· Although a human subjects review and consent was obtained, the author used students enrolled in his/her own class, creating problems of familiarity and potential coercion…each a threat to internal validity.
· The first three Likert scale questions "asked in alternative ways whether the web site contributed to their learning in the course". The author failed to confirm the interpretation of "significantly enhanced learning", "significantly helped me", and "helped me do well" in measuring contribution to actual learning, student perception of learning, or something else entirely.

Problem #4…It is not reliable and cannot be duplicated
This survey would provide little useful evaluation criteria to be used in another classroom, another course, or another setting. The author did not compare findings against actual results or standards, did not compare against learning objectives, and did not provide the content/skills to be addressed in the course. The sample was not representative of typical college students, and the content of the web site, the coursework, design issues, and other factors were not provided. Basically, this survey provides little to no opportunity for duplication of results or support/rejection of findings.

Problem #5…Findings were inconclusive and failed to support anything
I state the findings failed to support anything because the study seemed to have trouble identifying what it was attempting to measure (as discussed above), and failed to state a null hypothesis or alternative hypothesis. Instead, a "mixed design" was used with a relatively small number of questions based on highly subjective self-reporting with no correlation to actual learning objectives or performance measures. The survey could be considered descriptive if just the mean and SD were provided, but using the ANOVA and regression analysis indicates inferential statistics. (Doesn't it?)
The mean of each response was calculated, but the standard deviation of responses proved to be highly variable for most questions (ranging from SD=.69 to SD=1.18 for a four item response). This high degree of variability on a four point scale indicates far from conclusive evidence of the site enhancing student perception of learning and provides no evidence of the ability of the web to enhance learning in general. An ANOVA test was used to determine potential gender differences on selected questions but failed to find significance, which was reported. Regression analyses were conducted to predict performance based on enjoyment and pace ratings.
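To make the variability point concrete, here are two invented sets of 4-point Likert responses with the identical mean but very different spreads. An SD near the study's reported 1.18 on a four-point scale is consistent with a polarized sample, not consensus. Neither data set is from the study.

```python
from statistics import mean, stdev

# Two hypothetical 4-point Likert samples, both with mean = 3.0;
# invented for illustration, NOT the study's data.
consensus = [3, 3, 3, 3, 3, 3, 2, 4, 3, 3]  # tightly clustered around "agree"
polarized = [1, 2, 4, 4, 4, 2, 4, 4, 3, 2]  # split between extremes

for label, data in [("consensus", consensus), ("polarized", polarized)]:
    print(f"{label}: mean = {mean(data):.2f}, SD = {stdev(data):.2f}")
```

Both samples report the same mean, but the second's SD of roughly 1.15 shows the mean alone hides substantial disagreement, which is exactly why the reported SDs undercut the study's conclusion.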

What I would like to have seen…
· Random assignment or matched groups…some type of control group!
· Comparison of treatment (in this case the web site) versus non-treatment against an objective measurement criteria.
· Stating the null, something like "there will be no difference in scores between those who use web supplement versus those who do not" and the alternative "there will be significant difference…".
· Explicit information regarding design and content issues for standard course and web supplement….perhaps testing impact of different designs on objective criteria.
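The treatment-versus-control comparison I describe above could be as simple as a two-sample t-test on exam scores. Here is a sketch using Welch's t statistic (which tolerates unequal group variances) on entirely invented scores; the group names and numbers are hypothetical.

```python
from statistics import mean, variance
from math import sqrt

# Hypothetical exam scores; invented to illustrate the comparison the study never made.
treatment = [82, 88, 75, 91, 79, 85, 90, 77]  # students who used the web supplement
control   = [74, 80, 71, 85, 69, 78, 83, 72]  # standard course only

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    return (mean(a) - mean(b)) / sqrt(variance(a) / len(a) + variance(b) / len(b))

t = welch_t(treatment, control)
print(f"t = {t:.2f}")
# A |t| exceeding the critical value for the relevant degrees of freedom would
# reject the null of "no difference in scores between groups".
```

With a stated null and an objective outcome measure, this one number would say more about the web site's effect on learning than twenty perception questions.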
Despite the author's claim that the "mixed design" provided "greater sophistication of measurement questions", I felt confused. I recognize sophistication is not a strong personal attribute…especially statistical sophistication…however, I had difficulty determining what type of design the author was using, who the study could be generalized to, and, if it could not be generalized, the point of certain analyses, etcetera. A control group would have enhanced the overall effectiveness of the study, as would objective criteria against which to measure impact. If random selection was not possible, then matching, measuring against previous data, or even correlation may have provided duplication or generalizable results. (At minimum, the site could be made available to students in a corresponding course from another school for a more objective evaluation.) Operationally defining what was being measured, the skills and knowledge to be impacted, and the content of the course/web site would have provided valuable information, as would objective criteria to measure impact.

