This research sheds light on the question of whether changing course content and difficulty affects students’ satisfaction and perceived effectiveness in online courses. The paper presents detailed analyses and findings from indirect assessment techniques. Two courses (groups) are compared: an introductory programming class and a computer literacy class. The paper employs two data sets and an experimental, in-depth analysis procedure to answer the stated research question. The first set uses data collected from students who express their perception of the effectiveness of seven important online course performance indicators; one example of these indicators is the relevance of the course to the students. The second data set draws on a traditional student evaluation instrument to assess students’ satisfaction with the course and its instruction; this latter set uses two measures (course satisfaction and instruction satisfaction). For the majority of the studied performance measures, the results indicate no statistically significant differences between the two groups. However, two performance measures (interactivity and peer support) do show statistically significant differences between the groups. Possible explanations of the obtained results are discussed. Lastly, brief results of direct assessment methods are also presented.
Keywords: Computer literacy, Online programming courses, Evaluating students’ perception, Measuring students’ satisfaction