

Editor’s Note: Evaluation can serve a variety of purposes. Evaluation of instruction allows comparison of methods against a benchmark, in this case distance learning compared to traditional instruction. It also provides valuable data for improving the teaching-learning process.
 

Evaluating Distance Education

Comparison of Student Ratings of Instruction
in Distance Education and Traditional Courses
Claudia Flowers, LuAnn Jordan, Robert Algozzine,
Fred Spooner, Ashlee Fisher
 

Abstract

The fundamental concept of distance education is simple enough: students and teachers are separated by distance and sometimes by time. From correspondence and independent study to computer networks and multimedia distribution, learning away from the traditional classroom has evolved to the extent that almost every university or college in the United States participates in it in some way. Research indicates that learning at a distance is effective when measured by student achievement and attitudes. In this study, we added to that literature by evaluating differences in student perceptions of course and instructor effectiveness in distance education and traditional courses.

The type of distance education examined was two-way interactive TV. Three modes of course delivery were studied: (1) distance education off-campus, (2) distance education on-campus, and (3) traditional on-campus. Eight instructors each taught a course using all three methods of delivery. On-campus students in traditional courses perceived the course and the instructor as more effective than did their off-campus peers in distance education courses. The differences between the means were statistically significant and large. The results are discussed with regard to their implications for new and ongoing distance education programs.

Comparison of Student Ratings of Instruction
in Distance Education and Traditional Courses

The term “distance learning” describes any instructional arrangement in which the teacher and learner are geographically separated (Moore & Thompson, 1997). Distance learning, sometimes described as distance education (DE), home study, correspondence study, independent study, or external studies, has been an alternative method for delivering university-level courses for almost 300 years. Correspondence education was invented in the late 19th century to enable learners to receive instruction when they could not attend traditional classes (Moore & Thompson, 1997). Today, the more popular term for this type of learning at a distance is distance education, or “…planned learning that normally occurs in a different place from teaching and as a result requires special techniques of course design, special instructional techniques, special methods of communication by electronic and other technology, as well as special organizational and administrative arrangements” (Moore & Kearsley, 1996, p. 2). From correspondence and independent study to computer networks and multimedia distribution, learning away from the traditional classroom has evolved to the extent that large numbers of universities and colleges in the United States are involved in it in some way. For example, according to data compiled by the National Center on Education Statistics (1997), 79 percent of public four-year institutions and 72 percent of public two-year institutions offered distance education courses; further, more than 1,600 institutions offered a total of about 54,000 online education courses with 1.6 million students enrolled. The widespread availability of high-speed Internet services has brought modern, electronic forms of distance education to new levels of interest and use (Carnevale, 2000).

Keegan (1988) suggests that there are six defining characteristics of learning at a distance. First, there is separation of the teacher and the student (i.e., separation vs. face-to-face contact in the same classroom). Next, there is a component not typically found in most on-campus courses: the influence of an educational organization (e.g., a department or college) in the planning, preparation, or delivery of material (vs. a stand-alone instructor responsible for content generation and delivery of course information). Third, there is the use of technical media. Historically, the medium has been print, but as technology advances, electronic media (computers, TV studio delivery, computer software presentation packages) are added to the list of technical options. The fourth defining characteristic is the provision for two-way communication. This could be via a telephone conference with a single student or with a group of students at a central location at a prescribed time. Another defining characteristic is the possibility of an occasional seminar, an opportunity for students working independently to assemble as a group in the presence of the instructor. The last defining characteristic described by Keegan is participation in the most industrialized form of education; simply put, the industrialized form of education means a division of labor.

Moore and Kearsley (1996) describe the components of a general systems model for distance education. There must be sources of knowledge or skills that will be taught, systematic design of instructional experiences, at least one form of alternative instructional delivery (e.g., print, audio recordings, television, videoconferencing, computer networks), instructors who interact with students to facilitate the learning process, and alternative learning environments (e.g., homes, centers, workplaces). Typically, a team of individuals is involved in the preparation and delivery of course content. Members of the team might include a content expert (e.g., a faculty member in elementary education, for a course offered from that program); graphic illustrators who, for all practical purposes, have no knowledge of the content but bring it to life with related illustrations; and a “TV personality,” an individual trained to work in front of the camera and deliver the content with a TV or radio announcer’s voice.

Although distance education has been seen as promising by some, in the eyes of others it has been something less than the education typically received on a university or college campus: “They are the stepchildren of college courses, good for community relations but not considered part of mainstream higher education” (Turner, 1989). In evaluations of various types of distance education, comfort and convenience were repeatedly cited as positive elements of the distance experience (Moore & Thompson, 1997). Essentially, students in these studies liked the ease of taking distance education courses, but if given the choice to be in the same room with the instructor, most students would choose the personal contact.

Although a comprehensive historical review of technology research in special education (Woodward & Reith, 1997) did not mention distance learning, researchers have examined the effectiveness of distance education. For example, Moore and Thompson (1997) reviewed research on learning outcomes and attitudes for students participating in distance education experiences in higher education. The studies included in their review reflected no significant differences in cognitive factors (amount of learning, academic performance, achievement, and exam and assignment grades) between distance classes and traditional classes. Other factors (e.g., student satisfaction with the course, comfort, convenience, communication with the instructor, interaction and collaboration between students, independence, and perceptions of effectiveness) showed more mixed results. In the majority of the studies where interaction was studied, the distance condition seemed to negatively affect opportunities for interaction between students and with the instructor. In contrast, the distance condition was found to positively affect collaboration and interdependence among students, as well as support for independent learning activities. Earlier, Moore and Kearsley (1996, p. 65) reached the following conclusions with regard to research on the effectiveness of distance education courses:

(1) [T]here is insufficient evidence to support the idea that classroom instruction is the optimum delivery method; (2) instruction at a distance can be as effective in bringing about learning as classroom instruction; (3) the absence of face-to-face contact is not in itself detrimental to the learning process; and (4) what makes any course good or poor is a consequence of how well it is designed, delivered, and conducted, not whether the students are face-to-face or at a distance.

Do students believe distance education is better or worse than traditional classroom instruction? Neither, according to Thomas L. Russell, who tracks studies of distance education methods, since “most studies show no difference in the effectiveness of the two media” (Young, 2000, p. A55). Additional support for the “no difference phenomenon” in higher education was provided by Spooner, Jordan, Algozzine, and Spooner (1999), who compared student ratings in two special education courses in a master’s-level curriculum sequence for students in the area of severe disabilities when each was offered on campus and off campus. Additionally, student ratings were compared when distance classes via two-way interactive TV were taught at local and remote facilities. Student evaluations suggested no differences in overall course means. Organizational ratings were similar for a methods course taught on campus and at a distance, but differed for a curriculum course. When outcome measures for on-campus students vs. off-campus students were examined, no differences were found in the overall ratings. Ratings for course, instructor, and communication were similar across settings and courses. Ratings for organization were similar for a curriculum course taught on campus, but differed for a methods course.

This research was completed to evaluate the effectiveness of a university graduate distance education program in special education (learning disabilities) in terms of students’ evaluations of teaching rather than how much students learned. We empirically compared students’ perceptions of (a) course effectiveness, (b) instructor effectiveness, and (c) overall effectiveness of instruction in distance education (DE) courses at both off- and on-campus locations and in traditional on-campus courses.

Method

A quasi-experimental program evaluation was conducted to examine differences between DE courses, both off- and on-campus, and traditional on-campus courses. The independent variable was mode of course delivery: DE off-campus, DE on-campus, and traditional on-campus. To control for the effects of instructor and course topic, the same instructor taught the same class under all three conditions; that is, each instructor taught the same course under the DE off-campus, DE on-campus, and traditional on-campus conditions. Students self-selected into the type of class they would attend. A questionnaire was administered to students at the end of the course to evaluate their perceptions of course effectiveness, instructor effectiveness, and overall effectiveness of instruction. The instructor was not present when the questionnaires were administered, and all responses were anonymous.

Participants

All participants were graduate students enrolled at a large university in the southeastern United States. Most students were White (89%), female (91%), and employed full-time (83%). All participants were enrolled in required courses as part of a graduate program in special education.

Intervention

This study examined three modes of course delivery: DE off-campus, DE on-campus, and traditional on-campus. All DE courses were delivered using two-way interactive TV that allowed real-time interaction between the instructor and students. The only difference between the DE off-campus and DE on-campus conditions was the setting in which students received the content: the instructor typically taught the class from the on-campus location, and students in the DE off-campus condition viewed the lesson on the two-way interactive TV screen. Students enrolled in DE off-campus classes met in a community college classroom fully equipped with video and audio communication equipment. The traditional on-campus classes were taught with the instructor and students in the same classroom.

Instrumentation

The questionnaire consisted of 23 items grouped into three domains: course effectiveness (items 1-11; e.g., “This course had clearly stated objectives”), instructor effectiveness (items 12-18; e.g., “Instructor was able to simplify difficult materials”), and overall course effectiveness (items 19-23). Each item was answered on a 5-point scale ranging from strongly disagree (1) to strongly agree (5). Domain scores were calculated by averaging all the items within the domain, yielding scores ranging from 1 to 5. Coefficient alpha internal consistency reliability estimates were 0.98 for all 23 items, 0.95 for the course effectiveness scale, 0.95 for the instructor effectiveness scale, and 0.94 for the overall course effectiveness scale.
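
For readers who want to replicate this kind of scoring, a minimal Python sketch follows (our illustration, not part of the original study; the ratings are simulated). It computes domain scores as item means and coefficient alpha from an item-response matrix:

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Coefficient alpha for an (n_respondents, n_items) matrix of ratings."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)        # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Simulated 5-point ratings for 30 respondents; item groupings follow the article:
    # course effectiveness (items 1-11), instructor (12-18), overall (19-23).
    rng = np.random.default_rng(0)
    responses = rng.integers(1, 6, size=(30, 23)).astype(float)
    domains = {"course": slice(0, 11), "instructor": slice(11, 18), "overall": slice(18, 23)}
    domain_scores = {name: responses[:, s].mean(axis=1) for name, s in domains.items()}
    alphas = {name: cronbach_alpha(responses[:, s]) for name, s in domains.items()}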

Results

Eight instructors teaching eight different courses required in a graduate degree program were examined in this study. A total of 261 DE off-campus, 106 DE on-campus, and 176 traditional on-campus students completed and returned the questionnaires. Student results were aggregated to the class level; that is, mean class scores were used in the analyses.

A series of repeated measures ANOVAs was conducted with one within-subjects factor (mode of course delivery) to determine differences among the three modes of instruction. The means, standard deviations, F values, and effect sizes (partial η²) for each domain (course effectiveness, instructor effectiveness, and overall course effectiveness) are reported in Table 1. The means for the DE off-campus courses were lower than those of the on-campus courses in all domains, and the DE on-campus courses had lower means than the traditional on-campus courses. In addition, there was greater variability in scores for the DE off-campus courses.

Table 1

Descriptive Statistics, Repeated Measures ANOVAs, and Effect Sizes for the Three Domains

                                 Distance Education         Traditional
                              Off-Campus     On-Campus      On-Campus
Domain                        M      SD      M      SD      M      SD      F       Partial η²
Course Effectiveness Rating   4.13   .50     4.36   .33     4.56   .15     5.61*   .33
Instructor Rating             4.13   .59     4.47   .30     4.63   .20     4.77*   .40
Overall Course Rating         3.85   .69     4.28   .39     4.43   .29     4.79*   .41

* p < .05
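
The article reports these statistics but not the code behind them. Under the stated design, with the eight instructors as the repeated “subjects” and mode of delivery as the within-subjects factor, the analysis can be sketched as follows (our illustration using statsmodels and made-up class means; the last line converts F and its degrees of freedom into partial η²):

    import numpy as np
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # Simulated class-level ratings mirroring the design: eight instructors, each
    # teaching the same course under all three delivery modes (values are made up).
    rng = np.random.default_rng(1)
    modes = ["DE off-campus", "DE on-campus", "traditional on-campus"]
    data = pd.DataFrame({
        "instructor": np.repeat(np.arange(8), 3),
        "mode": modes * 8,
        "rating": np.clip(rng.normal([4.1, 4.4, 4.6] * 8, 0.3), 1, 5),
    })

    # One within-subjects factor (mode of delivery); instructors serve as "subjects".
    result = AnovaRM(data, depvar="rating", subject="instructor", within=["mode"]).fit()
    table = result.anova_table
    f_val = table.loc["mode", "F Value"]
    df1, df2 = table.loc["mode", "Num DF"], table.loc["mode", "Den DF"]  # 2 and 14 here
    partial_eta_sq = f_val * df1 / (f_val * df1 + df2)  # partial eta-squared from F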

There was a statistically significant difference among the modes of course delivery for all three domains. The mode of course delivery accounted for a large part of the explained variance (partial η² ranging from .33 to .41). Follow-up analyses (dependent t-tests) indicated statistically significant differences between the DE off-campus courses and the traditional on-campus courses for course effectiveness (t = 3.00, p < .05), instructor effectiveness (t = 3.03, p < .05), and overall effectiveness (t = 3.38, p < .05); large effect sizes (Hedges, 1981) were found for (a) course effectiveness (g = 1.16), (b) instructor rating (g = 1.14), and (c) overall course effectiveness (g = 1.10). There were no statistically significant differences between the DE off-campus and DE on-campus courses, and no differences were detected between the DE on-campus and traditional on-campus domain scores.
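
The follow-up comparisons can be sketched the same way. The snippet below (again ours, with invented class means) pairs the eight classes under two modes, runs a dependent t-test with scipy, and computes Hedges’ g; because the article does not specify the standardizer, the pooled-SD form with the Hedges (1981) small-sample correction is assumed:

    import numpy as np
    from scipy.stats import ttest_rel

    def hedges_g(x: np.ndarray, y: np.ndarray) -> float:
        """Standardized mean difference with the Hedges (1981) small-sample correction."""
        n1, n2 = len(x), len(y)
        pooled_sd = np.sqrt(((n1 - 1) * x.var(ddof=1) + (n2 - 1) * y.var(ddof=1))
                            / (n1 + n2 - 2))
        d = (x.mean() - y.mean()) / pooled_sd
        return d * (1 - 3 / (4 * (n1 + n2) - 9))  # bias-correction factor

    # Hypothetical paired class means for the same eight instructors.
    traditional = np.array([4.5, 4.6, 4.4, 4.7, 4.3, 4.8, 4.5, 4.6])
    off_campus = np.array([3.9, 4.2, 4.0, 4.3, 3.6, 4.4, 4.1, 4.5])
    t_stat, p_value = ttest_rel(traditional, off_campus)  # dependent (paired) t-test
    g = hedges_g(traditional, off_campus)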

To better understand the differences among the modes of delivery, responses to each of the 23 items on the course evaluation questionnaire were examined. For the 11 course rating items (see Table 2), there were statistically significant differences for items 3, 4, 5, 8, and 9. Follow-up analyses indicated that the mean differences were between the DE off-campus and the traditional on-campus courses. The magnitude of the differences between the means was large, with effect sizes ranging from .97 to 1.34. There were no differences between the DE off-campus and DE on-campus course means or between the DE on-campus and traditional on-campus course means. For the 7 instructor effectiveness items (Table 3), there were statistically significant differences for all items except item 13. Follow-up analyses indicated that the differences were between the DE off-campus and the traditional on-campus courses. The magnitudes of the differences for all items were large, ranging from .83 to 1.42. For the overall course effectiveness items (Table 4), there were statistically significant differences for all 5 items. Again, follow-up analyses indicated that the differences were between the DE off-campus and the traditional on-campus courses. The differences were large, ranging from .83 to 1.20.

Table 2

Descriptive Statistics, Repeated Measures ANOVAs, and Effect Sizes for Course Ratings

                                                                         Distance Education         Traditional
                                                                      Off-Campus     On-Campus      On-Campus
Item                                                                  M      SD      M      SD      M      SD      F       Partial η²
1. This course had clearly stated objectives.                         4.34   .52     4.54   .24     4.68   .11     2.02    .22
2. The stated goals of this course were consistently pursued.         4.25   .45     4.41   .31     4.60   .15     2.92    .30
3. I always felt challenged and motivated to learn.                   3.90   .60     4.32   .39     4.49   .16     4.37*   .38
4. The class meetings helped me see other points of view.             4.15   .40     4.37   .37     4.56   .27     4.12*   .37
5. This course built understanding of concepts and principles.        4.17   .51     4.45   .32     4.62   .17     4.20*   .38
6. The practical application of subject matter was apparent.          4.13   .62     4.43   .43     4.60   .22     2.46    .26
7. The climate of this class was conducive to learning.               4.12   .59     4.15   .39     4.55   .22     2.71    .28
8. When I had a question/comment I knew it would be respected.        4.20   .59     4.62   .24     4.69   .13     4.48*   .39
9. This course contributes significantly to my professional growth.   4.00   .58     4.27   .44     4.53   .15     3.93*   .36
10. Assignments were of definite instructional value.                 4.08   .52     4.26   .44     4.54   .16     3.03    .30
11. Assigned readings significantly contributed to this course.       4.03   .45     4.20   .47     4.34   .25     1.32    .16

* p < .05

Table 3

Descriptive Statistics, Repeated Measures ANOVAs, and Effect Sizes for Instructor Ratings

                                                                       Distance Education         Traditional
                                                                    Off-Campus     On-Campus      On-Campus
Item                                                                M      SD      M      SD      M      SD      F       Partial η²
12. Instructor displayed clear understanding of course topics.      4.45   .49     4.75   .26     4.76   .19     3.73*   .35
13. Instructor was able to simplify difficult materials.            4.06   .72     4.44   .42     4.59   .29     3.00    .30
14. Instructor seemed well-prepared for class.                      4.33   .58     4.63   .36     4.69   .19     4.57*   .39
15. Instructor stimulated interest in the course.                   4.09   .66     4.46   .38     4.59   .30     5.16*   .42
16. Instructor helped me apply theory to solve problems.            3.95   .56     4.36   .39     4.52   .24     4.77*   .41
17. Instructor evaluated often and provided help when needed.       4.02   .60     4.31   .37     4.65   .18     5.29*   .43
18. Instructor adjusted to fit individual abilities and interests.  4.04   .62     4.36   .32     4.58   .26     4.28*   .38

* p < .05
 

Table 4

Descriptive Statistics, Repeated Measures ANOVAs, and Effect Sizes for Overall Course Ratings

                                                                  Distance Education         Traditional
                                                               Off-Campus     On-Campus      On-Campus
Item                                                           M      SD      M      SD      M      SD      F       Partial η²
19. Instructor had an effective presentation style.            4.06   .66     4.50   .35     4.54   .33     5.07*   .42
20. Instructional methods used in this course were effective.  3.97   .65     4.34   .39     4.52   .28     3.96*   .36
21. Evaluation methods were fair and effective.                4.09   .56     4.50   .26     4.56   .25     4.22*   .38
22. This course is among the best I have ever taken.           3.40   .81     3.79   .64     4.18   .43     4.61*   .40
23. This instructor is among the best teachers I have known.   3.70   .80     4.25   .47     4.36   .30     5.14*   .42

* p < .05

 

Discussion and Conclusions

Comfort and convenience have been repeatedly cited as positive elements of the distance condition. Additionally, students have reported that the more experience they have had with distance education technology and conditions, the more comfortable they have become with the course and mode of interaction (Jones, 1992). Moore and Kearsley (1996) identified the following “variables that determine the effectiveness of distance education courses”:

  • Number of students at learning site (individuals, small groups, large groups)

  • Length of class/course (hours, days, weeks, months)

  • Reasons for student taking class/course (required, personal development, certification)

  • Prior educational background of student (especially experience with self-study or distance education)

  • Nature of instructional strategies used (lecture, discussion/debate, problem-solving activities)

  • Kind of learning involved (concepts, skills, attitudes)

  • Type of pacing (student determined, teacher defined, completion dates)

  • Amount and type of interaction/learner feedback provided

  • Role of tutors/site facilitators (low to high course involvement)

  • Preparation and experience of instructors and administrators (minimal to extensive)

  • Extent of learner support provided (minimal to extensive). (p. 76)

Spooner, Spooner, Algozzine, and Jordan (1998) assert that learning, attending classes, and obtaining information can be enhanced via distance learning.

In this research, on-campus students in a graduate preparation program for teachers of students with learning disabilities perceived their courses and instructors as more effective than did the off-campus DE students. Students in the off-campus sections consistently rated the course and instructor lower than both on-campus groups did. The students in the DE off-campus courses reported (a) less challenge and motivation to learn, (b) lower opinions about the extent to which class meetings helped them see other points of view, (c) lower opinions about the course building understanding of concepts and principles, (d) less feeling of respect, and (e) lower opinions of the contribution of the course to their professional growth. In addition, the DE off-campus students rated the instructor lower in (a) displaying clear understanding of topics, (b) being prepared for class, (c) stimulating interest in the course, (d) applying theory to solve problems, (e) evaluating often and providing help when needed, and (f) adjusting to fit individuals’ abilities and interests.

This research addresses important concerns identified in recent reports questioning the effectiveness of distance education, reports arguing that much of the literature is not as useful as it could be because very little of it involves original research and much of it is based on studies of questionable quality that render many of the findings inconclusive (cf. Blumenstyk & McCollum, 1999; Carnevale, 2000; The Institute for Higher Education Policy, 1999). Further, the outcomes are different from the “no significant difference phenomenon” observed in many other studies of attitudes (Young, 2000, p. A55). Of course, there are a number of reasons why these program courses were viewed less favorably, and each should be considered in future efforts to evaluate distance education programs. First, class sizes differed on and off campus, and the characteristics of students enrolled in different sections of the same course might have influenced the outcomes. While this is difficult to control, it should be considered when comparing courses taught using different methods. Vagaries of method are another possible explanation for the findings: organization, instructional strategies, and other methodological differences may have affected a distance education course differently than an on-campus course. Similarly, placement of the course within the program (e.g., beginning vs. end) and its content (e.g., introductory vs. advanced, theory vs. methods) may create conditions to consider in evaluating instruction provided on and off campus. The novelty of taking courses at a distance should also be considered when evaluating programs (i.e., outcomes for earlier courses may be very different from those for courses taken later). Finally, the complex interaction of learner characteristics and learning style with instructional method and content should not be underestimated:

The primary assumption, which is flawed, is that the instructional effectiveness of each medium studied is constant across all content and all students. You’re lumping all the students together, and you’re ignoring their qualities and attributes as well as the qualities and attributes of the content. So by treating students, content, and instructional content as homogeneous, we are ignoring some very important variables that we know for a fact do impact learning. (Barbara B. Lockee, in an interview with Dan Carnevale, February 21, 2001)

Faculty members and administrators at many universities and colleges remain skeptical about the quality and effectiveness of online research and teaching (Kiernan, 2000). Their skepticism, as well as other factors (e.g., the time required for preparing and delivering distance education courses), can discourage young faculty members from embracing distance education. Institutions of higher education that base instructors’ performance reviews on student evaluations should be aware that teaching DE courses may present important issues to overcome. What can be done to address the potential hazards? Spooner, Algozzine, Flowers, Gretes, and Jordan (1998) suggest seven strategies that can be used to facilitate faculty/student interaction at a distance, so that students at the remote sites believe they are connected to their peers and the instructor in the studio classroom on campus. These techniques include: (a) establishing a weekly agenda that goes beyond the syllabus, (b) facilitating weekly student sharing to encourage class participation, (c) establishing off-line small-group discussions with reporting, (d) calling on sites and individuals at remote sites for questions, (e) encouraging across-site questioning by students, (f) traveling to remote sites for broadcast (each site once per semester), and (g) playing off the local audience.

Other variables are likely to affect the instructor’s ability to reach students at remote sites. One is the overall size of the class: the instructor will likely have to work harder at making all students feel included when the collective enrollment approaches 50, as opposed to a smaller number of students. A second important variable, and one that could potentially affect evaluation outcomes, is the number of times the instructor has delivered a course at a distance; the more practice the instructor has “on the air,” the more effective he or she is likely to be at reaching students at remote sites. The type of presentation equipment the instructor uses to deliver content (e.g., “on the fly” whiteboard writing, prepared overhead material, or material developed with electronic presentation software and appropriate illustrative images) could also affect student evaluations of instruction. Regardless of the approach taken to address potential problems and difficulties when teaching at a distance, there is a clear need for additional research evaluating implementations of improvement strategies and their effects in distance education courses.

Although the intended purpose of this research was to evaluate a distance education program, the results support the position that technology (or method) is only one factor contributing to opinions about the quality of a course (cf. Carnevale, 2001). For example, although learning tasks and instructors were the same for the courses evaluated in this study, learner characteristics (e.g., motivation, experience) were potentially very different and almost certainly contributed to the outcomes. Similarly, the results point to the value of a few good practices as supporting the art of good teaching. In 1996, the American Association for Higher Education (AAHE) proposed the following “Seven Principles for Good Practice in Undergraduate Education” to assist those using new communication and information technologies to improve teaching and learning processes (The Institute for Higher Education Policy, 1999, p. 32):

  • encourage contact between students and faculty;

  • develop reciprocity and cooperation among students;

  • use active learning techniques;

  • give prompt feedback;

  • emphasize time-on-task;

  • communicate high expectations; and

  • respect diverse talents and ways of learning.

The principles have been included in a variety of publications on best practice and represent potential explanations for differences that result when distance education courses are compared to traditional on-campus courses (Carnevale, 2001; Chickering & Ehrmann, 1996). They also form the foundation for factors to be considered in future research focused on improving ways to teach students in higher education using distance as well as traditional methods.
 

References

Blumenstyk, G., & McCollum, K. (1999, April 16). Two reports question utility and accessibility in distance education. The Chronicle of Higher Education, p. A31.

Carnevale, D. (2001, February 21). Logging in with Barbara B. Lockee: What matters in judging distance teaching? Not how much it’s like a classroom course. The Chronicle of Higher Education. [Internet Archive: http://chronicle.com]

Carnevale, D. (2000, January 7). Survey finds 72% rise in number of distance education programs. The Chronicle of Higher Education, p. A57.

Chickering, A. W., & Ehrmann, S. C. (1996). Implementing the seven principles. AAHE Bulletin, 49(2), 2-4.

Hedges, L.V. (1981). Distributional theory for Glass’s estimator of effect size and related estimators. Journal of Educational Statistics, 6, 107-128.

Jones, T. (1992). IITS students’ evaluation questionnaire for fall semester of 1991: A summary and report. (ERIC Document Reproduction Service No. ED 311 890)

Keegan, D. (1988). Problems in defining the field of distance education. American Journal of Distance Education, 2, 4-11.

Kiernan, V. (2000, April 28). Rewards remain dim for professors who pursue digital scholarship. The Chronicle of Higher Education, p. A45.

Moore, M. G., & Kearsley, G. (1996). Distance education: A systems view. Belmont, CA: Wadsworth.

Moore, M. G., & Thompson, M. M. (1997). The effects of distance learning (rev. ed.). (ACSDE Research Monograph No. 15). University Park, PA: The Pennsylvania State University, American Center for the Study of Distance Education.

National Center on Education Statistics. (1997). Statistical analysis report: Distance education in higher education institutions. Washington, DC: Author [Report NCES 98-062].

Spooner, F., Algozzine, B., Flowers, C., Gretes, J. A., & Jordan, L. (1998, March). Facilitating communication in distance education classes. Electronic poster presented at the fifteenth annual meeting of the International Conference on Technology and Education, Santa Fe, NM.

Spooner, F., Jordan, L., Algozzine, B., & Spooner, M. (1999). Student ratings of instruction in distance learning and on-campus classes. Journal of Educational Research, 92, 132-140.

Spooner, F., Spooner, M., Algozzine, B., & Jordan, L. (1998). Distance learning: Promises, practices, and potential pitfalls. Teacher Education and Special Education, 21, 121-131.

The Institute for Higher Education Policy. (1999). What’s the difference: A review of contemporary research on the effectiveness of distance learning in higher education. Washington, DC: Author.

Turner, J. A. (2000, September 27). 'Distance learning' courses get high marks from students and enrollments are rising. The Chronicle of Higher Education. [Internet Archive: http://chronicle.com]

Woodward, J., & Reith, H. (1997). A historical review of technology research in special education. Review of Educational Research, 67, 503-536.

Young, J. R. (2000, February 18). Distance and classroom education seen as equally effective. The Chronicle of Higher Education, p. A55.
 

About the Authors

LuAnn Jordan (Ph.D., University of Florida) is an Assistant Professor in the Department of Counseling and Special Education at the University of North Carolina at Charlotte. Her current research interests include learning disabilities, attention deficit disorders, and improving distance education programs.
Email: lujordan@email.uncc.edu.

Claudia Flowers (Ph.D., Georgia State University) is an Associate Professor in the Department of Educational Leadership at the University of North Carolina at Charlotte. Her current research interests include assessment issues, alternative assessment, applied statistics, and technology in education.
Email: cpflower@email.uncc.edu.

Bob Algozzine (Ph.D., Penn State University) is a Professor in the Department of Educational Leadership and Co-Director of the Behavior and Reading Improvement Center at the University of North Carolina at Charlotte. His current research interests include school-wide discipline, effective teaching, block scheduling, self-determination, alternative assessment, and improving distance education programs.
Email: balgozzine@carolina.rr.com

Fred Spooner (Ph.D., University of Florida) is a Professor in the Department of Counseling, Special Education, and Child Development and Principal Investigator on a Personnel Preparation Project involving distance delivery technologies at the University of North Carolina at Charlotte. His research interests include instructional procedures for students with severe disabilities, alternate assessment, and improving distance education programs.

Ashlee Fisher (M.A., University of North Carolina at Charlotte) is a Mental Health Therapist at Expeditions Day treatment program. Her responsibilities include providing mental health services that target emotional and behavioral problems with adolescents and their families through individual, group, and family therapy.
 
