Editor’s Note: Each study adds to the storehouse of knowledge about distance learning. This data is of value to instructional designers, instructors, administrators, and other researchers. Some studies confirm what we already know – or thought we knew; some findings challenge our previous positions, others provide reinforcement.
 

Dr. Liu provides additional building blocks that support areas of significant difference and areas with no significant difference. The latter is as important as the former, because it gives assurance that little or nothing is lost if we implement learning programs online using appropriate pedagogy and technology. The flip side of the coin is the ability to serve many learners who could not otherwise participate, knowing that the losses are not significant and that significant gains can be achieved.
 

Effects of Online Instruction vs. Traditional Instruction
on Students’ Learning

Yuliang Liu

Abstract

This quasi-experimental study was designed to compare the effects of online vs. traditional instruction on students’ learning in two sections (one online, one traditional) of a graduate course on Research Methods in Education for K-12 school teachers in the summer of 2003. The experimental group consisted of twenty-two graduate students who received online instruction via WebCT; the control group consisted of twenty-one students who received traditional instruction. Participants in both groups completed the same chapter quizzes and final test, as well as essay writings, peer critiques, and group projects, during the 10-week summer semester. Results indicated that the experimental group significantly outperformed the control group on most quizzes and on the final test.

Keywords: online instruction, learning outcomes, significant difference, no significant difference.

Introduction

Distance education has grown rapidly in recent years. In the 2000-2001 academic year, 56% of all 2-year and 4-year institutions in the United States offered distance education courses for various learners, and an additional 12% of all institutions planned to offer distance education courses within the next 3 years (Waits & Lewis, 2003). Currently, online instruction is a primary method of distance education. With online instruction, the student is separated from the teacher and connected through a computer and the Internet. More and more institutions are offering online courses and/or programs to meet various learners’ needs, and online learning and instruction, as an integral part of teaching and learning in higher education, is growing as fast as the technology itself. Traditional classroom instruction, by contrast, is face-to-face instruction, typically conducted in a classroom setting in a lecture/discussion/note-taking mode.

Recent research has indicated that online education has positively influenced many aspects of education, both directly and indirectly (CEO Forum, 2000; Phipps & Merisotis, 1999). Until recently, however, the viability of online learning was not well established. On one hand, Clark (1983, 1994) maintained that media do not influence learning under any conditions. On the other hand, Kozma (1994) argued that educational technologies influence learning by interacting with an individual’s cognitive and social processes in constructing knowledge. These earlier debates are still relevant as newly emerging technologies respond to the earlier criticisms and enable learners to use them more effectively.

According to Phipps and Merisotis (1999) and Russell (1999), there have been two lines of research comparing students’ end-of-semester grades or learning outcomes in online and traditional sections. The first line of research focused on the “significant difference phenomenon” and cited significant increases in learning outcomes for online learners over their traditional counterparts. The most widely cited literature in this line is McCollum’s (1997) report. McCollum cited a sociology professor who divided his statistics class into two groups: one in an online format and one in a face-to-face (FtF) format. According to McCollum, the online students collaborated more and outscored their traditional counterparts by an average of 20 percent.

Later studies also supported the “significant difference phenomenon.” Day, Raven, and Newman (1998) compared the effects of web-based vs. traditional instruction on students’ achievement in undergraduate technical writing in an agricommunication course. They found that online students attained significantly higher achievement scores on the major class project and essay assignments than those in the traditional course. In addition, Day, Raven, and Newman found that online students obtained a higher mean gain in attitudes toward writing.

Nesler, Hanner, Melburg, and McGowan (2001) studied a large sample from 30 institutions and found that nursing students in distance programs had higher scores on professional socialization outcomes than their campus-based counterparts. Al-Jarf and Sado (2002) investigated two groups of freshman students in their first ESL writing course and found that the experimental group (web-based instruction) made more gains in writing, became more efficient, made fewer errors, and communicated more easily and fluently than the traditional classroom control group.

The second line of research supported the “no significant difference phenomenon.” These studies reported no differences in learning outcomes between online and traditional groups. Navarro and Shoemaker (1999) found that about 90% of online learners in a graduate MBA class believed that they learned as much as or more than they would have in a traditional classroom. Schulman and Sims (1999) did not find any significant differences in posttest scores between online and traditional students in an undergraduate course. Jones (1999) compared an all web-based class to a traditional class and also found no significant differences in GPA between online and traditional learners.

More recently, several other studies have found no differences in learning outcomes between online and traditional learners in various courses. Johnson, Aragon, Shaik, and Palma-Rivas (2000) compared a graduate online course with an equivalent course taught in a traditional format on outcome measures such as course grades and students’ self-assessment of their performance in the course. They found no significant differences between the online and traditional student groups, although traditional students had slightly (but not significantly) more positive perceptions of the instructor and overall course quality.

Ryan (2000) compared online and traditional student performance in construction equipment and methods classes and found no significant differences in performance between the two groups. Student evaluations of the course were also similar. Similar results of no significant differences in performance were found by Gagne and Shepherd (2001) in a graduate accounting class and by Johnson (2002) in an introductory biology class.

Review of the above studies indicates that most studies in this area found no significant differences in learning outcomes between online and traditional courses across various subjects, and that fewer studies have been conducted at the graduate level. This exploratory study was designed to investigate whether online instruction affects learners’ learning during a semester-long graduate course in teacher education. Learners’ progress in the online and traditional sections was assessed by chapter quizzes and final grades, as well as essay writings, peer critiques, and group projects. A pre-course assessment was conducted and analyzed to ensure that both sections were equivalent. Based on the above literature review, the major research hypothesis in this study was:

Research hypothesis

There was no significant difference in learners’ learning performance, as measured
by chapter quizzes and final grades, between the online section and the traditional section, at the completion of a semester-long graduate course.

Method

Participants

All students who self-selected to enroll in EDUC501 (Research Methods in Education) in the online and traditional sections during the 10-week summer semester of 2003 were solicited in the first week to participate in this study. EDUC501 is a required core course in education at the master’s level at a midwestern state university in the United States. Students in this course came from different graduate programs in education. Twenty-four students enrolled in the online section, but two of them withdrew within the first two weeks because of time commitments and unexpected family issues. Thus, twenty-two students in the online section were included in the final analysis, along with the twenty-one students enrolled in the traditional section, for a total of 43 participants. Participants in both sections completed consent forms and demographic surveys in the first week. A pretest of course content was administered in both sections. A preliminary analysis of the pretest revealed that the control group scored slightly higher than the experimental group, but no significant difference was detected in pretest performance between the online and traditional sections.

Instruments

Formative and summative assessments of participant learning were conducted in two major domains: knowledge and application. Knowledge assessment focused on individual learning and included seven chapter quizzes and one final test. Application assessment focused on collaborative learning and included a combination of essay writings, peer critiques, and a group research project. The application assessment is consistent with Wade’s (1999) perspective that writing is a unique indicator of students’ learning, including communication between students as well as between student and teacher. Students’ final grades were assigned based on these two major assessments. Both sections had the same quizzes, essay writings, and group research paper each week. Each chapter quiz was administered as an individual open-book test, without peer discussion, in both sections. Each quiz contained 25 objective multiple-choice items on the corresponding chapter, to be completed within 40 minutes. The quizzes in the online section were available only during a specific week and were graded instantly upon completion. Online learners were delighted to have immediate quiz results and feedback; quiz results and feedback in the traditional section were reported back to the class the following week.

Experimental Design

This study used a non-equivalent control group design. In both the experimental group (online via WebCT) and the control group (traditional classroom), the dependent variables of learning performance were pretested and posttested. The independent variable was online vs. traditional instruction in a graduate course. Based on recommendations from the Institute for Higher Education Policy (2000) and Kearsley (2000), a hybrid of instructional techniques was employed in the online section. Specifically, several major features of WebCT were used throughout the semester, such as weekly online writing, peer critiquing, bulletin board discussion, online testing, and e-mail. Constructivist learning theory was the major theoretical foundation for online instruction in this course. Instructional design followed the ADDIE model (Analysis, Design, Development, Implementation, and Evaluation), as described by Dick, Carey, and Carey (2001). For additional information on the design, development, and instructional strategies used in this course, see other recent publications by the author (Liu, 2003a; 2003b).

To reduce learner anxiety and maximize learning, one FtF orientation was conducted in the first week for the online section. The traditional section met once a week for 3 hours and was primarily taught FtF throughout the semester. Both sections were taught simultaneously by the lead investigator in the summer semester of 2003. In order to make both sections as equivalent as possible, the instructional objectives, content, requirements, assignments, and assessments in both sections were the same.

Procedure

The pretest was administered in paper-and-pencil format to both sections in the first week to determine initial learning and performance. Next, participants in the online section were introduced to the online WebCT environment from the second through the final week. Ongoing posttests, including chapter quizzes and final test, were administered online for the online section and administered in paper-and-pencil format for the traditional classroom.

Results and Discussion

Pretests and posttests of learning performance in both sections were coded and analyzed using SPSS 12. Descriptive statistics for all quizzes and tests in the online and traditional sections are presented in Table 1. Participants’ scores on the seven chapter quizzes and the final test in both sections were compared using independent samples t tests; the results are presented in Table 2.
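
The original analyses were run in SPSS 12. As a rough cross-check (not part of the original study), the t values reported in Table 2 can be recomputed from the group summary statistics in Table 1 with any standard statistics package. The short Python sketch below, which assumes the SciPy library is available, reproduces the chapter 1 quiz comparison and the pre-assessment equivalence check.

    # Illustrative cross-check only: the study used SPSS 12, not Python.
    # Recomputes independent-samples t tests from the per-section summary
    # statistics (mean, SD, n) reported in Table 1.
    from scipy import stats

    # Chapter 1 quiz: experimental (online) vs. control (traditional)
    t, p = stats.ttest_ind_from_stats(
        mean1=96.82, std1=5.68, nobs1=22,    # online section
        mean2=83.10, std2=10.18, nobs2=21,   # traditional section
        equal_var=True)                      # pooled-variance t test
    print(f"ch1 quiz: t(41) = {t:.3f}, p = {p:.3f}")        # ~5.491, p < .001

    # Pre-assessment: checks that the two sections started out equivalent
    t, p = stats.ttest_ind_from_stats(
        mean1=41.4545, std1=12.07, nobs1=22,
        mean2=44.9524, std2=8.66, nobs2=21,
        equal_var=True)
    print(f"pre-assessment: t(41) = {t:.3f}, p = {p:.3f}")  # ~-1.087, p = .283

Setting equal_var=False gives the Welch (equal variances not assumed) rows of Table 2.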

Table 1
Descriptive Statistics of All Quizzes and Tests in Online and Traditional Sections

Measure          Group                 N    Mean      SD       SE Mean
ch1 quiz         experimental group    22   96.82      5.68    1.21
                 control group         21   83.10     10.18    2.22
ch2 quiz         experimental group    22   92.73      4.81    1.03
                 control group         21   88.33      8.56    1.87
ch3 quiz         experimental group    22   91.36      7.27    1.55
                 control group         21   86.67      5.99    1.31
ch4 quiz         experimental group    22   90.23      7.48    1.59
                 control group         21   83.10      7.33    1.60
ch5 quiz         experimental group    22   85.23      9.19    1.96
                 control group         21   81.19      8.79    1.92
ch6 quiz         experimental group    22   89.77      8.66    1.85
                 control group         21   86.90      4.60    1.01
ch13 quiz        experimental group    22   90.23      8.38    1.79
                 control group         21   84.76      8.87    1.94
Pre-assessment   experimental group    22   41.4545   12.07    2.57
                 control group         21   44.9524    8.66    1.89
Final test       experimental group    22   87.6364    7.24    1.54
                 control group         21   77.7143    9.68    2.11
Final grade      experimental group    22   4.0000      .00     .00
                 control group         21   3.8095      .40     .09
 
Table 2
Results of t Tests in Various Assessments between the Experimental and Control Groups

                                                t-test for Equality of Means
Measure          Variance assumption            t        df       Sig.        Mean        Std. Error   95% CI       95% CI
                                                                  (2-tailed)  Difference  Difference   Lower        Upper
ch1 quiz         Equal variances assumed        5.491    41       .000        13.72       2.50          8.68        18.77
                 Equal variances not assumed    5.423    31.033   .000        13.72       2.53          8.56        18.88
ch2 quiz         Equal variances assumed        2.087    41       .043         4.39       2.11          .142         8.65
                 Equal variances not assumed    2.061    31.178   .048         4.39       2.13          .05          8.74
ch3 quiz         Equal variances assumed        2.307    41       .026         4.70       2.04          .59          8.81
                 Equal variances not assumed    2.318    40.159   .026         4.70       2.03          .60          8.79
ch4 quiz         Equal variances assumed        3.157    41       .003         7.13       2.26         2.57         11.69
                 Equal variances not assumed    3.159    40.969   .003         7.13       2.26         2.57         11.69
ch5 quiz         Equal variances assumed        1.471    41       .149         4.04       2.75        -1.51          9.58
                 Equal variances not assumed    1.472    41.000   .149         4.04       2.74        -1.50          9.57
ch6 quiz         Equal variances assumed        1.347    41       .185         2.87       2.13        -1.43          7.17
                 Equal variances not assumed    1.365    32.307   .182         2.87       2.10        -1.41          7.15
ch13 quiz        Equal variances assumed        2.078    41       .044         5.47       2.63          .15         10.78
                 Equal variances not assumed    2.075    40.555   .044         5.47       2.63          .14         10.79
Pre-assessment   Equal variances assumed       -1.087    41       .283        -3.4978     3.22        -9.99          3.00
                 Equal variances not assumed   -1.096    38.129   .280        -3.4978     3.19        -9.96          2.96
Final test       Equal variances assumed        3.818    41       .000         9.9221     2.60         4.67         15.17
                 Equal variances not assumed    3.792    37.013   .001         9.9221     2.62         4.62         15.22
Final grade      Equal variances assumed        2.222    41       .032         .1905      .09          .02           .36
                 Equal variances not assumed    2.169    20.000   .042         .1905      .09          .007          .37
Results in Table 2 revealed no significant differences between the online and traditional sections on the chapter 5 quiz (t(41) = 1.47, p = .15) or the chapter 6 quiz (t(41) = 1.35, p = .18). However, significant differences between the two sections were found on the other five quizzes (chapters 1, 2, 3, 4, and 13) and on the final test. Specifically, for the chapter 1 quiz, t(41) = 5.49, p < .001; for the chapter 2 quiz, t(41) = 2.09, p = .04; for the chapter 3 quiz, t(41) = 2.31, p = .03; for the chapter 4 quiz, t(41) = 3.16, p = .003; for the chapter 13 quiz, t(41) = 2.08, p = .04; and for the final test, t(41) = 3.82, p < .001. For learners’ final grades, t(41) = 2.22, p = .03. Thus, overall, the null research hypothesis stated previously was not supported.
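
As a worked illustration (a standard textbook computation, not reproduced from the study’s SPSS output), the pooled-variance t statistic reported for the final test follows directly from the Table 1 summary statistics:

    s_p^2 = \frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}
          = \frac{21(7.24)^2 + 20(9.68)^2}{41} \approx 72.6

    t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{s_p^2\,(1/n_1 + 1/n_2)}}
      = \frac{87.64 - 77.71}{\sqrt{72.6\,(1/22 + 1/21)}} \approx \frac{9.92}{2.60} \approx 3.82

This matches the value t(41) = 3.82 reported in Table 2 for the final test.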

In addition, regarding students’ perceptions of and satisfaction with the course, the same 18-item student evaluation form used by the lead investigator’s department was administered in both sections at the end of the course. The quantitative evaluation results revealed that the averages in the two sections were about the same (approximately 4.5 on a 5-point scale). However, students’ qualitative comments indicated that students in the online section were more motivated than those in the traditional section. For instance, a few students in the traditional section complained about the content and frequency of the chapter quizzes, while those in the online section did not. In addition, students in the online section expressed greater satisfaction with the effectiveness of their learning in this course; a majority of them thought they had learned more in this course than they would have in a traditional section. These qualitative comments were consistent with the research findings described previously.

The results of this study indicate that there is a significant difference in learning outcomes between online and traditional learners. This study did not support the “no significant difference phenomenon” described by Russell (1999). The finding surprised the lead investigator for several reasons. As described previously, every attempt was made to keep the instructional requirements, activities, and content the same in both the online and traditional sections. In addition, in the traditional section, the teacher also used various technologies, such as PowerPoint presentations of the course content in class, and allowed students to access and print the teacher’s chapter notes in Acrobat (.pdf) format from WebCT before class. However, the results are consistent with the line of research described by Russell (1999) as the “significant difference phenomenon.” That is, this study supports that prior line of research and indicates that online instruction can be a viable alternative for higher education, since students can learn at least as well as, and sometimes better than, they would with traditional instruction.

Results of this study are inconsistent with some prior research. This may be attributable to several factors:

First, the samples varied. Most studies in this area used convenience samples; the sample in this study was also a convenience sample, and participants were not randomly selected. Some studies involved undergraduate students, whereas this study involved graduate students.

Second, a variety of subjects were involved in such studies including accounting, nursing, and construction. In this study, a graduate educational research course was involved.

Third, a variety of online instructional strategies were used. Some studies only used online writing assignments while this study used a combination of assessment techniques such as online quizzes/tests, writing, peer critiques, and group projects.

Fourth, a variety of online technologies were used. Some studies used an ordinary course web site, while others used specialized course management and delivery systems such as Blackboard and WebCT. This study primarily used WebCT for online course delivery. Care should be taken in generalizing these results to other environments without further investigation.

Conclusion

This study supports previous research indicating that (a) there is a significant difference in learning outcomes between online and traditional learners and (b) online instruction can be a viable alternative for higher education. The study has significant practical implications for higher education, since many institutions are offering more online courses and programs, and it contributes to the current literature on online instruction and e-learning. If online instruction is found to enhance student learning, more online courses and programs can be proposed; for example, embedded online courses may be used in place of lengthier and more costly traditional courses.

Due to various limitations of the study, care should be taken in generalizing its results to other environments.
__________

*  An earlier version of this paper was presented at the International Congress of Psychology in Beijing, China, in August 2004.

**Acknowledgement: This project was partially sponsored by the Illinois Century Content Development Grant from the Illinois Board of Higher Education from April 2002 to June 2003.

 

References

Al-Jarf, A., & Sado, R. (2002). Effect of online learning on struggling ESL college writers. San Antonio, TX: National Educational Computing Conference Proceedings. (ERIC Document Reproduction Service No. ED 475 920).

CEO Forum (2000). The CEO forum: School technology and readiness report [Online]. Washington, DC: CEO Forum. Available: http://www.ceoforum.org/.

Clark, R.E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53, 445-459.

Clark, R. E. (1994). Media will never influence learning. Educational Technology, Research and Development, 42(2), 21-29.

Day, T., Raven, M. R., & Newman, M. E. (1998). The effects of World Wide Web instruction and traditional instruction and learning styles on achievement and changes in student attitudes in a technical writing in an agricommunication course. Journal of Agricultural Education, 39(4), 65-75.

Dick, W., Carey, L., & Carey, J. O. (2001). The systematic design of instruction (5th Edition). New York: Addison-Wesley Educational Publishers, Inc.

Gagne, M., & Shepherd, M. (2001). Distance learning in accounting. T.H.E. Journal, 29(9), 58-62.

Johnson, M. (2002). Introductory biology online: Assessing outcomes of two student populations. Journal of College Science Teaching, 31(5), 312-317.

Johnson, S. D., Aragon, S. R., Shaik, N., & Palma-Rivas, N. (2000). Comparative analysis of learner satisfaction and learning outcomes in online and face-to-face learning environments. Journal of Interactive Learning Research, 11(1), 29-49.

Jones, E. (1999). A comparison of an all web-based class to a traditional class. Texas, USA. (ERIC Document Reproduction Service No. ED 432 286).

Kearsley, G. (2000). Online education: Learning and teaching in cyberspace. Belmont, CA: Wadsworth.

Kozma, R. B. (1994). Will media influence learning? Reframing the debate. Educational Technology Research and Development, 42, 7-19.

Liu, Y. (2003a). Improving online interactivity and learning: A constructivist approach. Academic Exchange Quarterly, 7(1), 174-178.

Liu, Y. (2003b). Taking educational research online: Developing an online educational research course. Journal of Interactive Instruction Development, 16(1), 12-20.

McCollum, K. (1997). A professor divides his class in two to test value of online instruction. Chronicle of Higher Education, 43, 23.

Navarro, P., & Shoemaker, J. (1999). The power of cyberlearning: An empirical test. Journal of Computing in Higher Education, 11(1), 33.

Nesler, M. S., Hanner, M. B., Melburg, V., & McGowan, S. (2001).  Professional socialization of baccalaureate nursing students: Can students in distance nursing programs become socialized?  Journal of Nursing Education, 40(7), 293-302. 

Phipps, R., & Merisotis, J. (1999). What's the difference? A review of contemporary research on the effectiveness of distance learning in higher education. Washington, DC, USA: The Institute for Higher Education Policy.

Russell, T. L. (1999). The no significant difference phenomenon. Office of Instructional Telecommunications, North Carolina State University, USA.

Ryan, R. C. (2000). Student assessment comparison of lecture and online construction equipment and methods classes. T.H.E. Journal, 27(6), 78-83.

Schulman, A. H., & Sims, R. L. (1999). Learning in an online format vs. an in-class format: An experimental study. T.H.E. Journal, 26(11), 54-56.

The Institute for Higher Education Policy (2000). Quality on the line: Benchmarks for success in Internet-based distance education. Washington, DC, USA.

Wade, W. (1999). Assessment in distance learning: What do students know and how do we know that they know it?  T.H.E. Journal, 27(3), 94-100.

Waits, T., & Lewis L. (2003). Distance education at degree-granting postsecondary institutions: 2000-2001. U.S. Department of Education. Washington, DC, USA: National Center for Education Statistics (NCES Pub 2003-017).
 

About the Author

Dr. Yuliang Liu is assistant professor of instructional technology in the Department of Educational Leadership at Southern Illinois University Edwardsville. His major research interests are distance education, online instruction, and research methodology.

Contact Data:

Yuliang Liu, Ph. D.
Department of Educational Leadership
Southern Illinois University Edwardsville
Edwardsville, Illinois 62026-1125 USA

Phone: (618) 650-3293     Fax: (618) 650-3808     E-mail: yliu@siue.edu
 
