Editor’s Note: Research answers two kinds of questions. What is different? And is it statistically significant? Having found a significant difference, we then ask the question, why? From a program or instructional design point of view, what do we change to improve learning, performance, or outcomes?

An Analysis of Factors Impacting Student Satisfaction
and Retention in On-Site and Hybrid Courses

Avi Carmel, Stuart S. Gold
United States

Abstract

This research project examined the relationship between several specific factors and the levels of satisfaction and retention achieved by students attending university courses delivered in either the traditional On-Ground or the Hybrid (partly online and partly on-site) modality. The research project incorporated data from 110 courses and 164 students. Results indicate that there is a statistically significant relationship between the levels of student satisfaction and student retention and the quality of the university support organizations, irrespective of the modality of course delivery. Contrary to the belief prevalent at the university being studied, the impact of individual professors on student satisfaction was shown to be comparatively neutral.

Keywords: distance education, hybrid courses, online education, student retention, student satisfaction.

Purpose

The purpose of this study was to analyze the impact of several key factors upon student satisfaction and retention and to determine if the modality of course delivery served as a differentiator.

Modalities:

On-Ground: the traditional format in which students and faculty meet face-to-face in the same room each week and engage in interactive instruction, and in which students also meet weekly in smaller learning teams outside the university classroom to work on group projects.

Hybrid: a combination of on-campus and online instruction structured for students who require flexible schedules. A student who enrolls in a hybrid course attends the first four hours and the last four hours of the course in traditional in-class sessions; the remaining class sessions and assignments are conducted online.

Methodology

Test Design

The study used a Non-Equivalent Group Design (NEGD), also known as a quasi-experimental design. “A quasi-experimental design is one that looks a bit like an experimental design but lacks the key ingredient -- random assignment. In the NEGD, we most often use intact groups that we think are similar as treatment and control groups. In education, we might pick two comparable classrooms or schools” (Trochim, 2006). While an attempt is made to assure that the two groups are as similar as possible, the researcher cannot control the assignment to the groups on a random basis. This makes the NEGD inherently subject to internal validity threats, which need to be addressed.

The primary threat is the threat of selection, which impacts internal validity and may create a selection bias in the study. This bias creates a risk that a factor other than the ones being analyzed may have led to the observed result. There are a number of selection bias threats in a multiple-group study; the most relevant here are selection-instrumentation and selection-history.

The key to addressing these validity issues is to assure that groups are as equivalent as they can be made given the nature of the environment, and that methodology is applied in a consistent manner. Validity issues in this study were mitigated as follows:

  1. Random selection of survey participants (students) was made in each group so that there was no bias as to the prior history or accomplishments of the students.

  2. All students were dedicated members of a given group or course modality for the duration of their program, so there was no issue of a student being a member of both sample groups.

  3. The sampling of a large number of students, spread across a variety of classes in each modality and the use of a single survey instrument for all students mitigated internal validity issues.

The sample in this study consisted of undergraduate and graduate students enrolled at two campuses of a regionally accredited university. The analysis explored the relationship between specific key factors, the level of satisfaction, and retention. It also determined the differences, if any, between hybrid course students and traditional on-ground course students regarding their level of satisfaction and retention. The sample data were collected by randomly asking students to fill out a self-administered survey questionnaire pertaining to their school status. The respondents were informed of the purpose of the research and were asked not to discuss their responses with other participants after completing the questionnaire, to avoid the risk of biased information. All non-specific survey questions were written so that the answers would fit into a standard five-level Likert scale for analysis, with a high score positive and a low score negative; a sketch of how such responses can be converted into numeric scores is shown below.
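As a minimal sketch (assuming hypothetical file and column names, since the study's actual instrument and data layout are not published), Likert responses of this kind can be mapped to 1-5 scores and averaged per respondent:

```python
import pandas as pd

# Hypothetical wording and column names -- assumptions for illustration only.
LIKERT = {
    "Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
    "Agree": 4, "Strongly agree": 5,
}

responses = pd.read_csv("survey_responses.csv")            # hypothetical data file

# Assume satisfaction items share a "sat_" prefix; map each Likert answer to its
# 1-5 score and average the items per respondent.
sat_items = [c for c in responses.columns if c.startswith("sat_")]
responses["satisfaction"] = responses[sat_items].replace(LIKERT).mean(axis=1)

# Mean satisfaction per delivery modality (cf. Table 1).
print(responses.groupby("modality")["satisfaction"].mean())
```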

Satisfaction

To understand how satisfied the students in this sample were, their individual satisfaction level needed to be measured. The following questions were asked in the satisfaction portion of the administered survey questionnaire:

  • Are the class sizes adequate?

  • Are student academic advisors, financial aid advisors or other student services staff helpful and courteous?

  • Are your professors knowledgeable about the class subjects?

  • Are you generally satisfied with the quality of the teaching provided in the classroom?

  • Is the university Website easily navigated?

  • How would you rate your overall educational experience at this university?

  • Would you recommend that other students pursue their education at this university?

  • Do you consider the university to be helpful in networking with peers that will assist in your future professional endeavors?

Retention

Review of Literature

In regard to student satisfaction, choice seems to play an important role in how the individual student determines whether they are satisfied. Whenever students were given the opportunity to select a course delivery modality, satisfaction levels were found to be higher. A “self-starter” student who values flexibility and convenience may choose an online format, while less self-directed students who value guidance and the supportive environment of a traditional classroom may choose the ground-based format (Yatrakis, 2002). Haythornthwaite (2000) found that a ground-based “boot camp” preceding online courses can help build a sense of community among distance learning students and enhance their satisfaction and learning outcomes. This is supported by Doran (2001), who found that small-group collaborative activity improved outcomes in online courses. Similarly, when online students collaborated via chat rooms, bulletin boards, conference calls, and the like, they showed significantly higher levels of satisfaction than groups that were allowed to work individually (Yatrakis, 2002).

According to Thompson, Falloon, and Simmons (2001), no national statistics existed (at that time) that showed the number of students who completed distance education programs and courses. However, “Anecdotal evidence and studies by individual institutions suggest that course-completion and program retention rates are generally lower for distance-education courses than in their face-to-face counterparts” (Carr, 2000, p. A39).

There is an abundance of literature on student retention for online courses, but as Yatrakis and Simon (2002) state, its main theme is comparisons between online and on-ground formats. The research on the effect of on-ground and the hybrid or mixed method of on-ground and online studies is limited because of its relatively new implementation.

The majority of researchers have found major differences in retention between the online and on-ground formats. Some have cited student concerns about instructional quality in online courses (Bloom, 1998; Terry, 2000), while others consider virtual courses an “inferior technology,” particularly in the teaching of complex material (Farrington, 1999; Brown & Liedholm, 2002). Arguably, the end results of online education may be similar to or even better than those of traditional on-ground formats, as demonstrated by overall course grades. One study attempted to remove instructor bias by blind-scoring tests in a graduate-level online versus traditional course environment. The results indicated that the “...average score for the online class was 5 points (5%) higher than for the on campus class” (Fallah & Ubell, 2000). This is further supported by Gold (2004), who states that there is overwhelming evidence that instruction delivered using online technology is equivalent to conventional instruction when student achievement is used as the outcome measure.

According to eLearn Magazine (n.d.), “keeping students enrolled in online courses can be a struggle. Online retention depends on factors such as how much support is provided and how the course is offered,” says Steve Ehrmann, director of the Flashlight Program, the American Association for Higher Education’s e-learning arm. Some colleges offer local proof that online retention lags behind brick-and-mortar retention. For example, Washington Online, Washington State’s online division for community colleges, claims a retention rate of 70% for online students versus 85% for the state’s on-ground community college students. As more schools investigate the discrepancy, they are finding ways to combat it through such strategies as better student advising, increased group work, and stronger academic and technical support (eLearn Magazine, n.d.). Studies have shown that the more time students spend interacting with classmates, the higher the satisfaction level. It has also been observed that dissatisfaction does not automatically translate into withdrawal from the program (Yatrakis, 2002).

Data from the University of Central Florida (UCF) show that student retention in hybrid courses is better than retention in online courses and equivalent to that of on-ground courses (Dziuban et al., 2001). According to Robertson (2003), the College of the Mainland proposal states that hybrid classes have the potential to improve retention in both online and on-ground courses.

Participants

In order to study these factors, responses from students were collected and tabulated from a sample of 164 students who chose to enroll in courses that were available either utilizing on-ground or hybrid formats. The sample consisted of 95 female students (58%) and 69 male students (42%) with the following ethnicity breakdown: 65 African American students (40%), five Asian students (3%), 30 Caucasian/White students (18%), 61 Hispanic/Latino students (37%) and three Native Hawaiian/Pacific Islander students (2%). Out of 164 students, 95 were attending on-ground classes and 69 students were attending hybrid courses. All students within the two groups answered the questions required to measure satisfaction and retention.

One-factor analysis of variance (ANOVA) and two-factor ANOVA without replication were used to determine whether the on-ground and the hybrid groups differed significantly in their responses to the questions measuring the dependent variables of satisfaction and retention. A brief sketch of the two procedures is shown below.
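As a minimal sketch of these two procedures (the file and column names are assumptions; this illustrates the general approach rather than the authors' exact computation):

```python
import pandas as pd
from scipy.stats import f_oneway
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format table: one row per (group, factor) score.
scores = pd.read_csv("aggregated_scores.csv")   # columns assumed: group, factor, score

# One-factor ANOVA: do mean scores differ across the satisfaction factors?
samples = [g["score"].values for _, g in scores.groupby("factor")]
print(f_oneway(*samples))

# Two-factor ANOVA without replication: the two modality groups act as a blocking
# variable in an additive model (no interaction term is possible without replication).
model = ols("score ~ C(group) + C(factor)", data=scores).fit()
print(sm.stats.anova_lm(model, typ=2))
```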

Results

Table 1
Mean Satisfaction and Retention Scores of Hybrid and On-Ground Student Groups

Groups         Satisfaction   Retention   Row Means
Hybrid         3.91           3.67        3.74
On-Ground      3.75           3.79        3.77
Column Means   3.83           3.73

From Table 1, we find essentially no difference in satisfaction and retention between the two groups. A test of hypothesis using a one-way ANOVA was then conducted, considering only the four factors impacting satisfaction. Under this condition, the variation was either due to the treatments or random. The null and alternate hypotheses compared the mean levels of satisfaction among the following four factors: classes, professors, staff, and university website:

Ho: µ1= µ2= µ3 =µ4

Ha: Not all the mean satisfaction levels by category were the same.

The mean satisfaction scores by category for the Hybrid and On-Ground student groups are shown in Table 2.

Table 2
Mean Satisfaction Results of Hybrid and On-Ground Student Groups by Category

Groups      Classes   Professors   Staff   University Website
Hybrid      3.99      4.05         3.46    4.14
On-Ground   3.93      4.07         3.38    3.92

Table 3
Calculations for a One-Way ANOVA by Category

ANOVA: Single Factor

SUMMARY
Category             Count   Sum    Average   Variance
Classes              2       7.92   3.96
Professors           2       8.12   4.06
Staff                2       6.84   3.42
University Website   2       8.06   4.03

ANOVA
Source of Variation      SS        df   MS         F          P-value   F crit
Levels of Satisfaction   0.54455   3    0.181517   24.69615   0.00      6.591382
Error                    0.02940   4    0.00
Total                    0.57395   7

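For readers who wish to check Table 3, the F statistic can be reproduced directly from the category means in Table 2; this is a sketch using scipy, an assumed tool rather than the authors' software:

```python
from scipy.stats import f_oneway

# Mean satisfaction scores from Table 2: one (Hybrid, On-Ground) pair per factor.
classes    = [3.99, 3.93]
professors = [4.05, 4.07]
staff      = [3.46, 3.38]
website    = [4.14, 3.92]

f, p = f_oneway(classes, professors, staff, website)
print(f, p)    # F = 24.70, p < 0.01 (Table 3 reports the P-value as 0.00 after rounding)

# Dropping the Staff factor reproduces Table 5: the remaining factors no longer differ.
f2, p2 = f_oneway(classes, professors, website)
print(f2, p2)  # F = 0.60, p = 0.60
```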
It can be concluded that there was a statistically significant difference in the level of satisfaction among the four factors. Additionally, the null hypothesis was tested without taking into consideration the results from the “Staff” factor (Table 4). A one-factor ANOVA was used to test the null hypothesis.

Ho: µ1=µ2=µ4

Ha: Not all the mean satisfaction levels among the categories were the same.

Table 4
Mean Satisfaction Levels of Hybrid and On-Ground Student Groups by Category, Excluding Administrative Staff

Groups      Classes   Professors   University Website
Hybrid      3.99      4.05         4.14
On-Ground   3.93      4.07         3.92

Table 5
Calculations for a One-Way ANOVA by Category (Excluding Staff)

ANOVA: Single Factor

SUMMARY
Groups               Count   Sum    Average   Variance
Classes              2       7.92   3.96      0.00
Professors           2       8.12   4.06      0.00
University Website   2       8.06   4.03      0.0242

ANOVA
Source of Variation      SS            df   MS     F            P-value    F crit
Levels of Satisfaction   0.010533333   2    0.00   0.60305344   0.602367   9.552094
Error                    0.0262        3    0.00
Total                    0.036733333   5

We can conclude that there was no difference in the mean level of satisfaction among the three factors included in this analysis. Taken together with the previous analysis, this provides strong evidence that students were markedly less satisfied with the performance of the administrative staff than with the other factors.

To this point, the variation due to the treatments (levels of satisfaction) had been analyzed and all remaining variation was treated as random. The analysis had not yet introduced a blocking factor so that the ratings of each of the two student groups could be tested alongside the satisfaction categories. In the next analysis, the two student groups were set as the blocking variable; removing the effect of the student groups from the sum of squares error (SSE) changes the F ratio for the level-of-satisfaction variable, as shown in the decomposition below.
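The effect of blocking can be seen directly in the sum-of-squares decomposition for the blocked design, using the values that appear in Table 6:

SS Total = SS Student Groups + SS Levels of Satisfaction + SS Error
0.57395 = 0.01445 + 0.54455 + 0.01495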

The same format was used in the two-way ANOVA table as in the one-way case, except that there was an additional row for the blocking variable. Table 6 shows the results using a two-way ANOVA. In this case, the focus was on the difference in levels of satisfaction by the four factors for the two delivery modalities. The two sets of hypotheses were:

  1. Ho: The mean satisfaction levels by category were the same (µ1 = µ2 = µ3 = µ4).

Ha: The mean satisfaction levels by category were not the same.

  2. Ho: The mean levels of satisfaction by category of the Hybrid and On-Ground student groups were the same (µ1 = µ2).

Ha: The mean levels of satisfaction by category of the Hybrid and On-Ground student groups were not the same.

Table 6
Two-Factor ANOVA results between level of satisfaction by category and the two groups of students

Groups      Classes   Professors   Staff   University Website
Hybrid      3.99      4.05         3.46    4.14
On-Ground   3.93      4.07         3.38    3.92

ANOVA: Two-Factor Without Replication

SUMMARY
                     Count   Sum     Average   Variance
Hybrid               4       15.64   3.91      0.0938
On-Ground            4       15.3    3.825     0.09270
Classes              2       7.92    3.96      0.00
Professors           2       8.12    4.06      0.00
Staff                2       6.84    3.42      0.00
University Website   2       8.06    4.03      0.02420

ANOVA
Source of Variation                  SS        df   MS           F            P-value   F crit
Student Groups                       0.01445   1    0.01445      2.89966555   0.18715   10.12796
Levels of Satisfaction by Category   0.54455   3    0.18151667   24.69615     0.00      9.276619
Error                                0.01495   3    0.00
Total                                0.57395   7

It was concluded that there was a difference in the satisfaction level by factor, but not a significant difference in level of satisfaction between the two modalities of course delivery.

A two-factor ANOVA was conducted to evaluate the differences between the student groups and the mean levels of satisfaction using only three factors (excluding Staff). The results are shown in Table 7.

Table 7
Two-Factor ANOVA results between the student groups and the level of satisfaction by category (excluding Staff)

ANOVA
Source of Variation          SS            df   MS           F            P-value    F crit
Student Groups               0.011266667   1    0.01126667   1.50892857   0.344237   18.51282
Three Satisfaction Factors   0.010533333   2    0.00         0.70535714   0.586387   19.0
Error                        0.014933333   2    0.00
Total                        0.036733333   5

The two sets of hypotheses were:

  1. Ho: The mean satisfaction levels by category were the same (µ1 = µ2 = µ3).

Ha: The mean satisfaction levels by category were not the same.

  2. Ho: The mean levels of satisfaction by category of the Hybrid and On-Ground student groups were the same (µ1 = µ2).

Ha: The mean levels of satisfaction by category of the Hybrid and On-Ground student groups were not the same.

Therefore, it can be concluded that there was no difference in the level of satisfaction among the three factors based upon modality of course delivery.

A test of hypothesis was conducted to determine whether the means of the two student groups and the means of the three levels of retention differed. Table 8 shows the mean scores of the three levels of retention for the two student groups. The null and alternate hypotheses were:

  1. Ho: The mean retention levels were the same (µ1 = µ2 = µ3).

Ha: The mean retention levels were not the same.

  2. Ho: The mean retention levels of the Hybrid and On-Ground student groups were the same (µ1 = µ2).

Ha: The mean retention levels of the Hybrid and On-Ground student groups were not the same.

Table 8
Levels of retention scores by the two groups of students

Groups      Level One       Level Two        Level Three
            (1-3 courses    (4-10 courses    (11 or more
            completed)      completed)       courses completed)
Hybrid      3.78            3.74             3.54
On-Ground   3.77            3.90             3.77

A two-factor ANOVA was calculated using a .05 significance level (Table 9). Based upon the sample results, there is no significant difference in the three levels of retention between the two groups of students. The p-values indicate that the null hypotheses for the two groups of students and for the three levels of retention cannot be rejected.

Table 9
Two-Way ANOVA results between the two modalities of course delivery and the three levels of retention

ANOVA: Two-Factor Without Replication

SUMMARY
              Count   Sum     Average      Variance
Hybrid        3       11.06   3.68666667   0.01653333
On-Ground     3       11.44   3.81333333   0.00
Level One     2       7.55    3.775        5E-05
Level Two     2       7.64    3.82         0.0128
Level Three   2       7.31    3.655        0.02645

ANOVA
Source of Variation   SS            df   MS           F            P-value    F crit
Group of Students     0.024066667   1    0.02406667   3.15973742   0.217451   18.51282
Levels of Retention   0.0291        2    0.01455      1.91028446   0.343609   19.0
Error                 0.015233333   2    0.00
Total                 0.0684        5
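Table 9 can be reproduced from the cell means in Table 8 with a standard randomized-block (two-factor, no replication) ANOVA; this sketch uses statsmodels, an assumed tool rather than the authors' software:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Cell means from Table 8: one observation per (group, retention level) cell.
data = pd.DataFrame({
    "group": ["Hybrid"] * 3 + ["On-Ground"] * 3,
    "level": ["One", "Two", "Three"] * 2,
    "score": [3.78, 3.74, 3.54, 3.77, 3.90, 3.77],
})

# Additive model: with no replication there is no interaction term to estimate.
model = ols("score ~ C(group) + C(level)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
# Groups:           F = 3.16, p = 0.22   (Table 9: 3.1597, 0.2175)
# Retention levels: F = 1.91, p = 0.34   (Table 9: 1.9103, 0.3436)
```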

Conclusions

The results of this study suggest that students who choose to enroll in courses in an on-ground format have the same overall rates of satisfaction and enrollment retention as students who enroll in hybrid courses. This finding is consistent with earlier studies and suggests that students who enroll in these course modalities by choice may possess attributes likely to make learning a satisfactory and constructive experience.

The university experience has generally met students’ expectations. With respect to facilities, the university website, and class size, the survey shows that the physical environment was adequate. The student academic counselors, student services, and general administrative/support staff were found to be an area of concern.

Faculty were shown to be a factor that generally met student expectations in a satisfactory manner. Follow-up discussions with a number of the survey respondents to explore this point further indicated that students tend to view the relationship with a faculty member as a passing situation that may either please or displease them. Since they will be moving on to a new course with a different professor within a few weeks, it was the ongoing aspects of the university environment that mattered more to overall student satisfaction and retention. Further work is needed to expand on these findings.

References

BC College. (2002). Understanding student satisfaction. Retrieved May 1, 2005 from http://admin.selkirk.bc.ca/research/documents/issue_satisfaction%5B1%5D.pdf.

Bloom, D. F. (1998). Digital diploma mills, part III: The bloom is off the rose. Monograph, November. Retrieved May 1, 2005 from http://www.vpaa.uillinois.edu/tid/resources/noble.html.

Brown, B. W., & Liedholm, C. (2002). Can web courses replicate the classroom in principles of microeconomics? American Economic Review, May.

Carr, S. (2001). Is anyone making money on distance education? Chronicle of Higher Education. Retrieved May 7, 2005, from http://chronicle.com/free/v47/i23/23a0410101.htm.

Doran, C. L. (2001). The effective use of learning groups in online education. New Horizons in Adult Education, Summer.

eLearn Magazine (n.d.). What makes students stay? Retrieved April 11, 2006 from http://www.elearnmag.org/subpage.cfm?section=articles&article=22-1.

Fallah, M. H., & Ubell, R. (2000). Blind scores in a graduate test: Conventional compared with web-based outcomes. ALN Magazine, 4(2).

Farrington, G. C. (1999). The new technologies and the future of residential undergraduate education.

Gold, S. S. (2004). An analysis of the relationship between software facilitated communication and student outcomes in online education. Dissertation Abstracts International. (UMI No. 3118091)

Haythornthwaite, C., Kazmer, M. M., & Robins, J. (2000). Community development among distance learners: Temporal and technological dimensions. Journal of Computer-Mediated Communication, September.

Robertson, J. (2003). COM to implement new hybrid courses. A Student Publication at College of the Mainland. Retrieved on May 1, 2005 from
http://64.233.179.104/search?q=cache:uh9kbZ-5dHsJ:www.com.edu/intercom/nov2003/
news.cfm%3Fnewsid%3D28+retention+hybrid+classes&hl=en&ie=UTF-8.

Terry, N. (2000). MBA student perceptions about the effectiveness of Internet instruction. Business Education Forum, April.

Thompson, J. M., Falloon, J., & Simmons, B. (2001). Student retention. California Community Colleges. Retrieved May 1, 2005 from http://www.calbusinessed.org/besac/sr.html.

Yatrakis, P. G., & Simon, H. K. (2002). The effect of self-selection on student satisfaction and performance in online classes. Retrieved April 22, 2005 from http://www.huizenga.nova.edu/about/ResearchReports/HS05-22-02EffectofSelfSelectioninOnlineStud.pdf.

 

About the Authors

Stuart S. Gold, Ph.D., is an experienced professor who teaches online and face-to-face classes. As an adjunct professor he has taught Management and MIS courses in the United States and the Caribbean during the past several years. Dr. Gold holds a B.A. in Physics and Mathematics from Northeastern Illinois University, an M.B.A. from Loyola University of Chicago, and a Ph.D. from Northcentral University.

Email: sgold1@bellsouth.net

Avi Carmel, Ph.D., is a published author with more than 15 years of experience in traditional and online instructional delivery. He is a Professor of Research Methods, Statistics, and Conflict Management at Thomas Edison State College.

Dr. Carmel holds a Bachelor of Business Administration from Temple University, a Master of Business Administration from the University of Phoenix, a Juris Doctor from the University of Florida, and a Master of Science and a Doctor of Philosophy from the Graduate School of Humanities and Social Studies at Nova Southeastern University.

Email: acarmel@bellsouth.net

 
