Editor’s Note: It occurs to this editor (who has extensive teaching experience in classroom and distance learning) that two vibrant areas of concern are 1) open book testing and 2) time. The immediate collection of tests from both classroom and online learners is a key component of assessing learning. The editors would be interested in comparing knowledge retention between the two groups.

Traditional versus Online Content Delivery and Assessment

Margaret D. Anderson and Mark Connell

USA

Abstract

The present study compared several dimensions of a traditional course with those of a parallel course offered online. The elements under consideration included: 1) self-selection versus assignment to courses; 2) student attrition; 3) performance related to entering knowledge; 4) performance variability in the two formats; 5) the effect of open versus closed book tests on scores; 6) performance variability in proctored and unproctored settings; and 7) academic integrity in online assignments. The results indicated no difference in attrition between the two sections. Similarly, there was no difference in entering knowledge between the two sections, and no relationship between entering knowledge and performance on either the chapter tests or the final exam for either section. However, there was a strong relationship between chapter test and final exam performance in both sections. In addition, students taking unproctored open book tests outperformed those taking the proctored, closed book tests, and this advantage remained even when both groups completed the same proctored open book final exam. Results suggest a possible training effect of taking multiple short open book tests. They further indicate that frequent low-stakes assessments may encourage higher levels of academic integrity in online students.

Keywords: on-line tests, on-line academic integrity, proctored tests, open-book tests.

Introduction

Distance education, particularly online courses, is increasing at an exponential rate. The most current report from the Sloan Consortium (Allen & Seaman, 2007) indicates that in the fall of 2006 nearly 20% of all U.S. higher education students, or approximately 3.5 million students, were taking at least one online course, representing a nearly 10% increase from the previous year. In addition, nearly all (83%) of the institutions offering online courses expect their enrollments to increase over the next year. With the increase in these offerings come concerns over the quality of the courses and the integrity of the related assessment instruments. While many of the concerns are those with which academics have traditionally grappled, the new delivery medium has introduced a host of new dilemmas for instructors and academic institutions. The present paper addresses seven interrelated areas of concern.

Self Selection versus Assignment to Courses

Regardless of the delivery medium, researchers have explored the effect on academic performance of allowing students to self-select rather than be assigned to sections of courses. The concerns surrounding this variable are exacerbated when one of the sections is taught in the traditional face-to-face lecture method and the other is offered asynchronously online. Waschull (2001) compared traditional and distance courses and reported that, regardless of the method of placement (self-selection or assignment), attrition rates and performance were similar. Collins and Pascarella (2003) reinforced this finding, concluding that students can learn equally well in traditional or distance classes whether they self-select or are randomly assigned. However, Collins and Pascarella include the caveat that distance students who self-select do perform slightly better than other groups, thus possibly confounding the body of research on students who self-select into distance courses.

Attrition

Of equal concern to educators is the attrition rate of students from courses offered in traditional formats compared to online models, coupled with self-selection or assignment into those courses. Waschull (2001) and Collins and Pascarella (2003) both noted no difference in attrition between on-campus and distance courses, regardless of whether students self-selected or were assigned to the section.

Entering Knowledge and Course Performance

Kruck and Lending (2003) studied a question which is of concern to academics and administrators alike: the ability to predict student performance. They noted the earlier contradictory data, with some researchers demonstrating a relationship between performance and prior related courses (Eskew & Farley, as cited in Kruck & Lending, 2003), while others (Marcal & Roberts, as cited in Kruck & Lending, 2003) found that a prerequisite course was not associated with subsequent performance. Thus, they undertook a study to investigate the ability to predict academic performance in a college level course. Their findings supported the hypothesis that while motivation and overall GPA do predict performance, prior related courses and background knowledge were not significantly related to subsequent performance.

Performance in Traditional and Online Courses

The recent growth in online offerings naturally leads researchers to study students’ performance in these new courses as compared to that in a more traditional course.  Once again, findings from these studies are inconclusive. Students in both conditions of the two Waschull (2001) studies demonstrated similar course and final exam scores. However, Liu (2005) reports that there is a significant difference in learning outcomes between students enrolled in equivalent online and traditional sections of a graduate level course, with the online students outperforming classroom based students on both quizzes and final tests.

Open versus Closed Book Tests

One pedagogical element now frequently at the heart of the difference between traditional and online courses is the use of open or closed book tests and the effect that difference might have on student learning. In comparing open and closed book tests, Brightwell, Daniel, and Stewart (2004) conclude that well developed questions are equally effective at discriminating student abilities in either administration modality. Results from the Agarwal, Karpicke, Kang, Roediger, and McDermott (in press) study indicate that while performance on open-book tests was superior to student performance on closed-book tests, the benefit did not persist on delayed tests. However, students do report liking an open book online test better than an open book in-class test because of the immediate feedback (Liu, 2005), and Agarwal et al. (in press) claim that feedback does enhance long-term retention for either type of test. Rakes (2008) points out that, while open book testing may more closely resemble authentic assessments in the work environment, most students may not be adept at these types of tests. She endeavored to ascertain whether training in taking open book tests would improve student performance on those measures. Her results indicate that the training effect, while significant when training was administered immediately prior to the assessment, was not sustained over time.

Proctored versus Unproctored Tests

An issue related to the one discussed above is the effect proctoring might have on student performance. In the traditional classroom it is a simple matter for the instructor to decide whether or not to proctor an examination. However, this option is more difficult for the online instructor. Lamenting the paucity of research on the effect this dimension has on testing, Wellman (2005) devised a study specifically to address this issue. He administered similar online quizzes in proctored and unproctored settings to assess students’ mastery of material which had previously been presented in an online format. His results revealed superior performance for students in the proctored situation over those in the unproctored setting.

Academic Integrity in Online Assignments

Regardless of the method of assessment, one overriding concern centers on the issue of academic integrity. How does the instructor of the online course assure the authenticity of the individual completing the assignments? Rovai (2000) advocates the development of assessment instruments better suited to a constructivist orientation than traditional tests as a means of dealing with the dilemma. Similarly, Olt (2002) offers strategies for minimizing academic dishonesty in online courses. She points out that the pervasiveness of cheating in the schools is not restricted to online courses. However, she does concede that one of the most difficult issues for the online instructor is to ascertain who is actually taking the assessment and what resources they may take advantage of during the assessment. She advocates using several short assessments during the semester as a possible means of dealing with the first concern, and making all tests open book to address the second.

Method

Participants 

A total of 130 students from the State University of New York at Cortland, a small comprehensive college in upstate New York, participated in the study. Forty-nine students were enrolled in the online section and 81 were enrolled in the traditional section. Students in the online section were 100% psychology majors, 90% female, 80% freshmen, 6% sophomores and 14% juniors. In the traditional section 88% were other than psychology majors (most from professional studies), 52% male, 15% freshmen, 58% sophomores, 15% juniors and 11% seniors.

Materials

The pre-test and final exam consisted of 50 multiple choice objective questions drawn from the textbook’s accompanying test bank. The 13 chapter tests consisted of 15 multiple choice questions also drawn from the questions provided in the companion test bank.

Procedure

The study was conducted in CAP 100, the introduction to computer applications course. Students in the online section were freshmen and transfer psychology majors who were assigned to that section as a part of their program requirements. Students in the traditional section were also completing the course as part of the requirements for their respective programs; however, they self-selected that section. A common pre-test was administered to students in both sections of the course. Online students were unproctored and were instructed not to use any materials, as the test did not count toward their grades but was only to inform instructors of their incoming knowledge. Students in the traditional section were given the same instructions; however, their test was administered in a proctored class setting. Both sections employed the same text book and associated ancillary support materials, including web based support activities, audio PowerPoint lectures, and online testing modules. Students in the online section received instruction via the assigned text book, the audio PowerPoint lectures, and other materials from the companion web site. Students in the traditional section had access to all those materials, and attended weekly lectures on the assigned subjects. Students in the online section completed online, unproctored, untimed multiple choice tests for each of the assigned chapters. They were required to complete the chapters in a specified sequence and by certain benchmarks; however, they could move ahead of the specified dates. Students in the traditional section completed the same untimed online tests; however, they took the tests in a proctored setting on a specified schedule. All students completed the same online comprehensive final exam in a proctored setting with set time limits.

Results

Attrition 

Of the 49 students who enrolled in the online section, 3 (6%) subsequently withdrew, while 16 (20%) of the original 81 enrolled in the traditional section failed to complete the course. A chi-square analysis with continuity correction indicated that this difference did not quite reach significance (p = .061).
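
As an illustration only, the following Python sketch (assuming the scipy library is available) shows how a chi-square test with Yates' continuity correction could be run on the completion counts reported above. It is not the authors' analysis script, but it reproduces a p value of approximately .061.

```python
from scipy.stats import chi2_contingency

# Completion counts reported above: 49 enrolled online (3 withdrew),
# 81 enrolled in the traditional section (16 withdrew).
observed = [
    [49 - 3, 3],    # online:      completed, withdrew
    [81 - 16, 16],  # traditional: completed, withdrew
]

# correction=True applies Yates' continuity correction for the 2x2 table.
chi2, p, dof, expected = chi2_contingency(observed, correction=True)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")  # p is approximately .061
```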

Performance

Scores on the pre-test, 13 chapter tests, and final exam were analyzed using independent samples t-tests to compare performance between students enrolled in the online and traditional sections of the course. Results presented in Table 1 indicate that there was no significant difference in student performance on the pre-test (t = 1.54, p = .125). However, students in the online section did outperform those in the traditional section on both the chapter tests (t = 3.19, p = .002) and the final exam (t = 2.01, p = .046). While the magnitude of the difference in the means for the chapter tests was moderate (eta squared = .073), it was small for the difference in the means for the final exam (eta squared = .032).

Table 1
Test Scores for Online and Traditional Sections

                        Online            Traditional
Test                    M     SD          M     SD          df      t       p
Pre-test                55    15          50    16          103     1.54    .125
Chapter tests           83    12          76     9           80     3.19    .002
Final exam              84     8          80     8          119     2.01    .046

Pearson product-moment correlations were calculated to analyze the relationship between student performance on the pre-test, 13 chapter tests, and final exam. Data presented in Table 2 suggest that there is no relationship between students’ performance on the pre-test and the chapter tests in either the online (r = .008, p = .236) or the traditional (r = .093, p = .438) sections. Similarly, no relationship was detected between performance on the pre-test and the final exam for the online section (r = .117, p = .515) or the traditional section (r = .035, p = .782). However, there was a highly significant relationship (p < .001) between the chapter test and final exam scores for both the online section (r = .501) and the traditional section (r = .489).

Table 2
Relationship between Pre-test, Chapter Tests and Final Exam Scores

                   pre x tests         pre x final         tests x final
Sections           r       p           r       p           r       p
Online             .008    .236        .117    .515        .501    .000
Traditional        .093    .438        .035    .782        .489    .000
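
The following Python sketch shows how a single Pearson product-moment correlation of the kind reported in Table 2 could be computed; the paired arrays are invented placeholders, not the study data.

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder data for illustration: each student's mean chapter test score
# paired with that student's final exam score.
chapter_means = np.array([83, 78, 90, 85, 72, 88, 80, 76, 84, 91], dtype=float)
final_exam = np.array([84, 75, 88, 86, 70, 85, 82, 74, 81, 89], dtype=float)

# Pearson product-moment correlation, the statistic reported in Table 2.
r, p = pearsonr(chapter_means, final_exam)
print(f"r = {r:.3f}, p = {p:.3f}")
```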

Discussion

The rapid growth of online courses offers researchers a challenge and an opportunity to reexamine traditional course design issues as well as the related pedagogical questions that arise from the new delivery medium.

With respect to the effects that self-selection versus assignment to courses might have on student performance, the results from the present study were similar to those of Waschull (2001) and Collins and Pascarella (2003), indicating that despite the differing demographics of the two groups, incoming knowledge of the groups was equivalent. Thus, prior knowledge of the subject matter does not seem to affect students’ preference for course delivery format. Further, the attrition data are similar to those discussed by Waschull and suggest that completion rates for students in the online and traditional sections do not differ significantly. These findings, in combination, are of value to administrators who are frequently required to assign students to courses and hesitate when the sections may employ differing delivery methods.

As with Kruck and Lending’s (2003) studies, students in the present study showed no difference in performance on chapter tests or final exam based on incoming knowledge. While this finding may seem counterintuitive, it should be encouraging to students and instructors alike in that it seems to indicate that students’ performance is more dependent on inherent course factors than on their entering domain knowledge.

Previous studies yield different results concerning the overall performance comparisons between students in online and traditional sections of courses. While Waschull (2001) reports no statistical difference in performance between the groups, Liu (2005) suggests that students in the online section significantly outperformed those in the traditional section on most quizzes and the final exam. The findings from the present study also reveal a significant difference in chapter tests and final exam grades, with the online students once again surpassing those in the traditional section. However, in the present study the performance data may be confounded by the test administration format of the two sections.

In the present study students in the online section completed their online chapter tests in an unproctored and possibly open book format, while the students in the traditional section completed the same online chapter tests in a proctored, closed book setting. Again, previous findings along this dimension are inconclusive, with Brightwell, Daniel, and Stewart (2004) reporting no difference in performance between open and closed book tests, and Agarwal et al. (in press) demonstrating that initial performance on open book tests was superior to that on closed book tests, but that the benefit was not evidenced in delayed testing. Thus, the fact that online students in the present study outperformed those in the traditional section on the chapter tests might be due to the fact that they were able to take the tests open book. With respect to the effect proctoring might have on student test performance, the present study is in disagreement with Wellman (2005), who reported superior performance in a proctored setting. The present unproctored students significantly outperformed the proctored ones on chapter tests.

The performance results for the final exam are a little easier to unravel, as both the online and the traditional groups completed the same online exam in a proctored, open book setting. While the difference in final exam grades was not as large as that on the chapter tests, it was still statistically significant, with the online students outperforming those in the traditional group. It is possible that this difference in scores is a result of training over time. Rakes (2008) contends that most students do not know how to take open book tests and benefit from training administered immediately prior to the test. While no explicit training was offered to students in the online section of the present study, it is possible that there was a practice effect which provided implicit training in taking open book tests and carried over to enhance their final exam performance.

A final issue addressed in the present study, and one which is unique to the online medium, is the ethical consideration of identity security, particularly ensuring that students who are receiving credit for the course are actually completing their own work. Both Rovai (2000) and Olt (2002) suggest that confirming the identity of the test taker is particularly critical if the course involves high-stakes or only summative tests. Olt recommends using short assessments throughout the duration of the course to lower the value of each test and thus hopefully reduce students’ likelihood of cheating. The use of the 13 chapter tests in the present study follows Olt’s prescription. The highly significant correlation between students’ chapter test scores (both those who were proctored and those who were not) and the proctored final exam grades suggests that the same individuals completed both the chapter tests and the final exam.

Overall, results from the present study suggest that neither method of placement nor entering domain knowledge affects course performance. Unproctored, open book online tests yield superior performance compared to similar online tests administered in a closed book, proctored setting. Data further reveal that this performance advantage persists when all students are administered the same online final exam in an open book, proctored format. Finally, it would appear from the present study that the same student who takes the chapter tests also takes the final exam, regardless of the format of the course. While this is in no way conclusive, it does suggest that if the online tests are administered frequently and in an open book format, students are likely to complete their own assignments.

Future Research

Further studies are planned to examine the effect of proctored versus unproctored testing in a more controlled manner. In the future, two sections of the computer applications course will be offered in which all students receive the same instruction online. Students will also receive the same online chapter tests. However, the tests will be administered to one group in a proctored setting while the other group will complete the tests in an unproctored and untimed environment. In addition, possible predictors of performance will be explored, using time on test as an indication of effort and overall GPA as a measure of general academic ability.
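
As a purely hypothetical illustration of the planned predictor analysis, the following Python sketch regresses a final exam score on time on test and overall GPA using ordinary least squares; all values and variable names are invented placeholders, not data from the study.

```python
import numpy as np

# Invented placeholder data: time on test (minutes), overall GPA, and final exam score.
time_on_test = np.array([22, 35, 28, 40, 18, 31, 26, 37], dtype=float)
gpa = np.array([2.8, 3.5, 3.1, 3.7, 2.5, 3.2, 3.0, 3.6])
final_exam = np.array([74, 86, 80, 90, 70, 82, 79, 88], dtype=float)

# Ordinary least-squares regression of exam score on the two candidate predictors.
X = np.column_stack([np.ones_like(gpa), time_on_test, gpa])
coefficients, *_ = np.linalg.lstsq(X, final_exam, rcond=None)
intercept, b_time, b_gpa = coefficients
print(f"intercept = {intercept:.2f}, time on test = {b_time:.3f}, GPA = {b_gpa:.2f}")
```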

References

Agarwal, P. K., Karpicke, J. D., Kang, S. H. K., Roediger, H. L., & McDermott, K. B. (in press). Examining the testing effect with open- and closed-book tests. Applied Cognitive Psychology.

Allen, I. E., & Seaman, J. (2007). Online Nation: Five years of growth in online learning. Needham, MA: The Sloan Consortium. Retrieved September 20, 2008, from http://www.sloan-c.org/publications/survey/pdf/online_nation.pdf

Brightwell, R., Daniel, J., & Stewart, A. (2004). Evaluation: is an open book examination easier? Proceedings of Improving Flexible Learning Outcomes Through Flexible Science Teaching symposium. October 3, 2003. Retrieved July 24, 2008, from http://www.bioscience.heacademy.ac.uk/journal/vol3/Beej-3-3.pdf

Collins, J., & Pascarella, E. T. (2003). Learning on campus and learning at a distance: a randomized instructional experiment. Research in Higher Education, 44(3), 315-326. Retrieved July 20, 2008, from http://www.springerlink.com/content/un74220tx647wx60/

Kruck, S. E., & Lending, D. (2003). Predicting academic performance in an introductory college-level IS course. Information Technology, Learning, and Performance, 21(2), 9-14. Retrieved August 4, 2008, from http://www.osra.org/itlpj/krucklendingfall2003.pdf

Liu, Y. (2005). Effects of online instruction vs. traditional instruction on students’ learning. International Journal of Instructional Technology and Distance Learning, 2(3), 57-66. Retrieved July 15, 2008, from http://www.itdl.org/Journal/Mar_05/article06.htm

Olt, M. R. (2002). Ethics and distance education: Strategies for minimizing academic dishonesty in online assessment. Online Journal of Distance Learning Administration, 5(3). Retrieved July 15, 2008, from http://www.westga.edu/~distance/ojdla/fall53/olt53.html

Rakes, G. C. (2008). Open book testing in online learning environments. Journal of Interactive Online Learning, 7(1). Retrieved September 10, 2008, from http://www.ncolr.org/jiol/issues/PDF/7.1.1.pdf

Rovai, A. P. (2000). Online and traditional assessments: what is the difference?  Internet and Higher Education, 3, 141-151.

Waschull, S. B. (2001). The online delivery of psychology courses: Attrition, performance, and evaluation. Teaching of Psychology, 28(2), 143-147.

Wellman, G. S. (2005). Comparing learning style to performance in on-line teaching: Impact of proctored v. un-proctored testing. Journal of Interactive Online Learning, 4(1). Retrieved July 24, 2008, from http://www.ncolr.org/jiol/issues/PDF/4.1.2.pdf

About the Author

Dr. Margaret D. Anderson received her PhD in Educational Technology from Concordia University in Montreal in 1995. She has been a faculty member in the Psychology Department at SUNY Cortland since 1994; prior to that, she was an adjunct professor at SUNY Plattsburgh. Dr. Anderson focuses on the different approaches college students take to learning and how technology interacts with those learning styles, particularly in a distance environment.

margaret.anderson@cortland.ed
