Staying the Course: A Study in Online Student Satisfaction and Retention



Michael Herbert, Ph.D.
Chair, Criminal Justice Department
Bemidji State University
1500 Birchmont Drive NE
Bemidji, MN 56601-2699

 

Introduction

With the exponential growth of online courses in higher education, retention is an area of great concern. Online student retention has been identified as one of the greatest weaknesses in online education (Carr, 2000; O'Brien, 2002). Studies show that the failed retention rate for online college and university undergraduates ranges from 20 to 50%, and online course administrators believe the failed retention rate for online courses to be 10 to 20% higher than in traditional classroom environments (Frankola, 2001; Diaz, 2002). The number of college students participating in online courses continues to increase dramatically despite the greater likelihood of non-completion. Between 1995 and 1998 the number of institutions offering online courses essentially tripled, and in the 1999-2000 academic year alone the number of students who took at least one online course increased by 57 percent (National Center for Education Statistics, 2002).

Research in student retention has been conducted for decades but formerly dealt strictly with the traditional postsecondary setting, one in which students typically entered college immediately after high school and attended classes on campus (Bean, 2003). The development of the personal computer, the Internet, and various other technologies has allowed a much broader and more diverse population to participate in postsecondary education, creating a new category of learner. This new type of learner, preferring distance education, was quite different from the traditional on-campus student. Researchers soon found that because this new group of students had characteristics not found in traditional student populations, new ways of addressing the problem of retention were needed (Halsne & Gatta, 2002). With increasing demands on colleges and universities to become more accountable to student needs and to provide the educational resources that will serve the largest percentage of the population, institutions are trying to find ways to meet these needs under ever-increasing budget constraints (Ramsden, 1998). To appeal to a larger student base, institutions have used current online technologies to provide courses to students who would not otherwise be served. Unfortunately, the online learning experience has not been a positive one for a substantial portion of participating students. Thus, a key issue for postsecondary institutions is finding ways to improve student retention in online courses.

Bean's (1980) model of student retention suggests a causal relationship between organizational determinants and student satisfaction that ultimately leads to commitment or withdrawal. Brown and Kayser (1982) developed a model that focused on the interaction between students and the institution, in terms of both student satisfaction with the institutional environment and the student's psychological need for reinforcement when performing well.

The purpose of this study was to determine the variables significant for retention in online courses, as measured by questions listed on an online course survey at a small Midwestern university. “Participation and persistence result from the interaction of a variety of student characteristics, circumstances, and the educational environment” (Kerka, 1989, p. 1). The existing research and literature suggest that there are student variables that can be measured to predict the degree to which a student will complete an online course. Research has identified demographic and institutional variables that are significant in student retention (Kemp, 2002; Parker, 1999; Wojciechowski & Palmer, 2005).

This article deals specifically with the institutional variables relating to online instruction, analyzing the mean levels of importance and satisfaction reported for these variables.

Literature Review

The remarkable growth in the number of United States households with personal computers and Internet access has enabled countless learners to access higher education courses that were previously unobtainable. According to the U.S. Census Bureau, the percentage of households with personal computers rose from 8.2% in 1984 to 56.3% in September 2001 (U.S. Census Bureau, 2001). The percentage of U.S. homes with access to the Internet has also grown tremendously: according to the U.S. Department of Commerce, 18% of households had Internet access in 1997, and by 2000 that figure had grown to over 41% (U.S. Department of Commerce, 2001).

A review of the literature has shown that the variables most commonly cited as being important to retention can be grouped into three main categories. These categories as identified by Berge and Huang (2004) are:

  1. Personal variables. These include demographics such as age, gender, and marital status, as well as variables such as academic skills and abilities, motivation, commitment, and locus of control (Rotter, 1966; Parker, 1999; Kember, 1995).

  2. Institutional variables. This category includes academic, bureaucratic, and institutional social variables (Willis, 1994; Alexander, McKenzie, & Geissinger, 1998).

  3. Circumstantial variables. These include socio-economic variables, academic interactions, social interactions, and life situation.

A critical issue in retention in online courses is related to a student's sense of belonging (Braxton et al., 1997). The group dynamics of online learning are an important factor in creating a safe and comfortable learning environment. Students in an online course should feel comfortable communicating and expressing themselves, and it is important for retention that online students feel connected with the course, its instructor, and fellow classmates. “Affiliation is a key to the development of a learning community” (Palloff & Pratt, 2001, p. 47). Frankola (2001) claimed that adult learners drop out of online courses due to lack of time, lack of management oversight, lack of motivation, problems with technology, lack of student support, individual learning preferences, poorly designed courses, and substandard or inexperienced instructors. The Frontline Group (2001), an online learning provider, offers five reasons why adult learners drop out of online learning programs: poor design, failure to understand the new medium, lack of consideration for a variety of learning styles, lack of support systems, and ignoring the self-selecting content needs of learners.

Institutions have employed various surveys that measure the above-mentioned student variables in order to develop methods for increasing retention (Dynarski, 1999; Parker, 1999; Kemp, 2002; Astin, 1991). How institutions choose to measure the variables that contribute to retention can be critical. The variables consistently cited as causes for dropout from traditional on-campus courses include pre-entry attributes such as gender, high school GPA, race, and socioeconomic status (Peltier, Laden, & Matranga, 1999), while online learners more often cite student engagement, motivation, and environment as the cause of failure to complete courses (Iverson, 1995; Kember, 1990c; Moore, 1990a). Environmental and situational variables include computer skills and ability to use the Internet (Van Patten & Chen, 2002), reading ability and time management skills (Miller, Rainer, & Corley, 2003; Osborn, 2001; Rovai, 2003), and “such things as time constraints or family support” (Scalese, 2001, p. 17).

Methodology

The purpose of this study was to determine the variables significant for retention in online courses, as measured by questions listed on an online course survey. Utilizing the Noel-Levitz Priorities Survey for Online Learners™ (PSOL), the study asked which of the surveyed institutional predictor variables (e.g., satisfaction with technical assistance, library services, faculty responsiveness, quality of online instruction) are most influential in predicting whether a student retains an online course. The PSOL was sent to every student who enrolled in an online course at a medium-sized Midwestern state university, primarily undergraduates but also a small number of graduate students. This study utilized the survey data collected during the fall semester of the 2005-2006 academic year. Completed online surveys were submitted electronically to Noel-Levitz, which compiled the data and sent this author printed copies of the raw data. Follow-up printed copies of the survey were mailed to those students who did not complete their online course(s), in case they had not seen the opportunity to take the survey online at the end of the semester, with instructions to mail the completed surveys to this author.

Formal data collection began in December 2005 and ran through January 2006. Four weeks were allowed for the return of the mailed surveys, and the data from these were added to the online survey data to create one master data set for analysis using the Statistical Package for the Social Sciences (SPSS).

Limitations

The online survey data were collected from students enrolled in a Midwestern state university with a full-time on-campus population of approximately 4,100 students. The findings may therefore have limited application beyond similar colleges and universities. The survey data do not discriminate by type of course, having been sent to every student who took any online course offered during the data collection period; the model may thus be of limited use in determining retention by college major, or by whether courses taken were major courses or liberal education requirements. This was not a longitudinal study. The two additional questions added to the survey during the semester in which it was administered (fall 2005) appeared only on that semester's survey.

Data Analysis

The Priorities Survey for Online Learners™ (Noel-Levitz, 2006) was sent twice to every student who took an online course during the fall semester of 2005. The first distribution of the PSOL was sent in electronic format via the Internet, to be completed online and returned electronically to the Noel-Levitz Corporation. The survey was sent a second time, in paper format with a stamped return envelope, to those students who had not responded to the initial survey; this second request yielded an additional 47 responses. In total, 122 surveys were received, representing 25.1% of the students who took an online course in fall 2005. Respondents were asked whether or not they completed the online course during the semester in which they were enrolled. Ninety-one respondents (74.6%) reported that they had successfully completed the course during the same semester as enrollment; thirty-one respondents (25.4%) reported that they had not. The respondents who reported not completing the course within the same semester comprised 40.1% of all students who had not successfully completed their online course that fall semester.

The institutional survey questions related to the importance and satisfaction levels of the following variables:

•  Faculty responsiveness to student needs

•  Quality of online instruction

•  Faculty feedback to students in a timely manner

•  Institutional response to questions in a timely manner

•  The frequency of student and instructor interaction

•  The availability of adequate financial aid

•  The importance of student-to-student collaborations

The most important institutional variable selected by students was faculty being responsive to student needs, with a mean score of 6.61 on a scale of 0 (not important at all) to 7 (very important). The least important institutional variable was student-to-student collaborations, with a reported mean value of 4.91 (see Table 1).

Table 1

Mean Values of the Importance of Institutional Variables, Ranked by Level of Importance

Item | N | Minimum | Maximum | Mean | Std. Deviation
Faculty are responsive to student needs. | 121 | 5.00 | 7.00 | 6.61 | .609
The quality of online instruction is excellent. | 120 | 3.00 | 7.00 | 6.52 | .732
Faculty provide timely feedback about student progress. | 120 | 4.00 | 7.00 | 6.44 | .742
This institution responds quickly when I request information. | 121 | 0.00 | 6.33 | 6.33 | .987
The frequency of student and instructor interactions is adequate. | 120 | 3.00 | 6.02 | 6.02 | .982
Adequate financial aid is available. | 120 | 0.00 | 5.20 | 5.20 | 2.488
Student-to-student collaborations are valuable to me. | 121 | 0.00 | 4.91 | 4.91 | 1.90

When the same variables are rated by respondents based on their level of satisfaction at the conclusion of the online course, the highest mean value was given to faculty responsiveness (5.68) and the lowest to the availability of adequate financial aid (4.02; see Table 2). An interesting observation is that none of the variables listed in Table 2 had satisfaction mean values that met or exceeded the corresponding mean values of importance shown in Table 1 (see Table 3).

Table 2

Mean Values of Selected Variables, Ranked in Descending Order by Level of Satisfaction

Item | N | Minimum | Maximum | Mean | Std. Deviation
Faculty are responsive to student needs. | 122 | 1.00 | 7.00 | 5.68 | 1.26
This institution responds quickly when I request information. | 122 | 0.00 | 7.00 | 5.51 | 1.52
Faculty provide timely feedback about student progress. | 122 | 1.00 | 7.00 | 5.36 | 1.50
The quality of online instruction is excellent. | 122 | 2.00 | 7.00 | 5.32 | 1.39
The frequency of student and instructor interactions is adequate. | 121 | 1.00 | 7.00 | 5.19 | 1.44
Student-to-student collaborations are valuable to me. | 122 | 0.00 | 7.00 | 4.82 | 1.79
Adequate financial aid is available. | 120 | 0.00 | 7.00 | 4.02 | 2.38

None of the seven satisfaction variables showed statistically significant mean differences between the successful completers of the online course and those who did not successfully complete it.

Table 3

Means for Satisfaction and Importance

Item | N | Mean (Importance) | Mean (Satisfaction)
Faculty are responsive to student needs. | 121 | 6.61 | 5.68
The quality of online instruction is excellent. | 120 | 6.52 | 5.32
Faculty provide timely feedback about student progress. | 120 | 6.44 | 5.36
This institution responds quickly when I request information. | 121 | 6.33 | 5.51
The frequency of student and instructor interactions is adequate. | 120 | 6.02 | 5.19
Adequate financial aid is available. | 120 | 5.20 | 4.02
Student-to-student collaborations are valuable to me. | 121 | 4.91 | 4.82

The final demographic question on the survey asked those respondents who did not complete the course within the same semester why they did not. There were four possible responses: time commitments, personal problems, instructor-related problems, and other. The most frequently cited reason was time commitments (61.3%), followed by personal problems (16.1%; see Table 4).

Table 4

Reasons for Non-completion of Course

Reason | Frequency (N) | Percent
Time Commitments | 20 | 60.61
Personal Problems | 5 | 15.15
Instructor-related Problems | 4 | 12.12
Other | 4 | 12.12
Total | 33 | 100.0

A chi-square goodness-of-fit test was calculated comparing the frequency of occurrence of each reason students listed for not successfully completing their online course. The test hypothesized that each reason for non-completion would occur an equal number of times. A significant deviation from the hypothesized distribution was found (χ²(3) = 22.39, p < .05), indicating that the observed frequencies differ from the expected values (see Table 5).

Table 5

Response Rates for Reasons for Non-completion

Reason | Observed N | Expected N | Percentage
Time Commitments | 20 | 8.3 | 60.60
Personal Problems | 5 | 8.3 | 15.15
Instructor-related Problems | 4 | 8.3 | 12.12
Other | 4 | 8.3 | 12.12
Total | 33 | 33.0 | 100.0

 

 

If you did not complete the course during the semester, list the reason why.

Chi-Square | 22.394
df | 3
Asymp. Sig. | .000

Significant at the .05 level (p < .05).
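As a check on the reported statistic, the goodness-of-fit value can be recomputed directly from the observed counts in Table 5. The following sketch does so in plain Python; it is an illustration added here, not part of the original SPSS analysis:

```python
# Chi-square goodness-of-fit for the non-completion reasons in Table 5.
# Null hypothesis: each of the four reasons occurs equally often.
observed = {
    "Time commitments": 20,
    "Personal problems": 5,
    "Instructor-related problems": 4,
    "Other": 4,
}

n = sum(observed.values())      # 33 total non-completion responses
k = len(observed)               # 4 response categories
expected = n / k                # 8.25 expected per category under H0

chi_sq = sum((obs - expected) ** 2 / expected for obs in observed.values())
df = k - 1

print(f"chi-square({df}) = {chi_sq:.3f}")  # chi-square(3) = 22.394
```

The result matches the SPSS output (22.394) and is well above the critical value of 7.81 for df = 3 at the .05 level, confirming the significant finding.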

An independent samples t test was conducted on the overall level of satisfaction to determine whether a significant difference existed between the means of students who successfully completed the online course and those who did not. The difference between the means was statistically significant (t = 2.244, df = 122, p < .05, one-tailed test), showing that students who successfully completed the course were more satisfied with their experience than were those who did not (see Table 6).

Table 6

Overall Satisfaction with Online Experience by Completers and Non-completers

Completed Course | Mean | N | Std. Deviation
Yes | 5.54 | 91 | 1.21
No | 4.90 | 31 | 1.75
Total | 5.37 | 122 | 1.38
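Because Table 6 reports only summary statistics, the t value can be approximated from the group means, standard deviations, and sample sizes using the pooled-variance formula. This sketch (pure Python, assuming equal variances) recovers the reported statistic to within the rounding of the two-decimal table values:

```python
import math

# Summary statistics from Table 6 (overall satisfaction, 0-7 scale)
m1, s1, n1 = 5.54, 1.21, 91   # completers
m2, s2, n2 = 4.90, 1.75, 31   # non-completers

# Pooled variance and standard error of the difference in means
sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
se = math.sqrt(sp2 * (1 / n1 + 1 / n2))

t = (m1 - m2) / se
df = n1 + n2 - 2

print(f"t({df}) = {t:.2f}")  # t(120) = 2.25, consistent with the reported 2.244
```

Note that the pooled-variance degrees of freedom are n1 + n2 - 2 = 120; the df of 122 reported in the text appears to reflect the total sample size.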

From these analyses, we can see that students are more likely to retain their online course if they are more satisfied with the experience. Another important observation is that for none of the institutional variables did the mean level of satisfaction match the mean level of importance. Lastly, the primary reason students did not retain their online course was time commitments.

Discussion

Respondents were asked whether they completed their online course during the semester in which they were enrolled. Twenty-five percent of respondents reported that they did not successfully complete the course, comprising 42% of all students who did not complete the course. This indicates that even though students may not have completed their online course, they nevertheless felt it worthwhile to reply to the survey in an effort to make their views known. In that same light, the results of surveys returned by the non-completers carry far greater implications than input received from successful completers. Institutions do need to know what they are doing satisfactorily, but it is more important to know what areas need improvement or further program assessment in order to address matters that contribute to non-completion.

While this article looks specifically at the institutional variables relating to retention by examining levels of importance and satisfaction, it must be remembered that demographic variables may also have an impact on retention. In essence, the demographic variables were consistent with those found in similar previous research.

One of two capstone questions asked respondents whether their online course experience had met their expectations. The mean value reported by completers was 4.64 as opposed to 4.06 for those students who did not complete their online course. A t test run on this variable showed a statistically significant difference in mean scores between the completers and non-completers. Those students who did not complete their online course had a significantly lower level of expectations met by their course experience. With a decrease in meeting course expectations comes a corresponding decrease in engagement and motivation necessary to complete an online course.

Conclusion

Measuring student perception can be used as one way to identify the variables that are most important to students. The respondents' most highly ranked variable in terms of importance was faculty being responsive to student needs, with a mean score of 6.61 out of a possible 7, placing it at the “important” level. While there is limited information available on exactly what the national trend shows for this factor in comparison with the other listed variables, past research does show that even when a course is designed for distance delivery, students still place a strong sense of importance on the responsiveness of course faculty (Carr, 2000; Frankola, 2001). This study's findings support research indicating that regardless of the course delivery system, students still have an expectation of faculty interaction and support. A lack of feeling connected to faculty has been shown in past research to be a significant variable in the student's sense of potential for completion (O'Brien & Renner, 2002).

Based on the results of this study it can be concluded that those students surveyed who successfully completed their online course had expectations consistent with their course experience. While neither completers nor non-completers ranked their overall experience exceptionally high, the data showed that almost without exception, successful completers were more satisfied with all aspects of the online course. There were several statistically significant findings in terms of differences in mean level of satisfaction between the two groups; however, discriminant analysis did not yield a significant model that would predict retention. The overall mean level of satisfaction for both groups indicates that the online course experience is somewhat deficient as neither group ranked their experiences as satisfactory. While it would be expected that non-completers would have an overall lower rating of the course experience, the relatively low rating by both groups indicated that more must be done to improve the online course experience.

Assessing student satisfaction can be valuable in terms of program and course improvement. Research has shown that students who are more satisfied with their institutions are more likely to graduate (Scalese, 1999; Carr, 2000). It can be inferred that the same would hold true for individual courses as well. Longitudinal satisfaction studies can show program/course trends from which appropriate revisions can be made.

The data analysis also showed that there were many variables in which there were no significant differences between completers and non-completers of online courses. This would indicate that, as shown in many of the theoretical models, a holistic view of student demographic and institutional variables must be examined in determining the overall “weather” of the online experience as opposed to the single variable, or daily forecast. The data also showed that regardless of the course delivery system, students still expect and find it important to have interaction with the course instructor. As with any course, immediacy and feedback are critical to student success.


References

Alexander, S., & Geissinger, H. (1998). An evaluation of information technology projects

Astin, R. (1999). A Study of Employment and Distance Education Students at a Community College. Community College Research, 12(2), 41-49.

Bean, J. (2003). College Student Retention. The Gale Group. Retrieved October 5, 2005, from http://encyclopedias.families.com/college-student-retention-401-407-eoed

Berge, Z., & Huang, Y. (2004). A Model for Sustainable Student Retention: A Holistic Perspective on the Student Dropout Problem with Special Attention to e-Learning. Distance Online Symposium, The American Center for the Study of Distance Education, 13(5).

Braxton, J., Shaw Sullivan, A. V., & Johnson, Jr., R. M. (1997). Appraising Tinto's theory of college student departure. In J. C. Smart (Ed.), Higher education: Handbook of theory and research, Vol. 12. New York: Agathon Press.

Brown, J., & Kayser, T. (1982). The transition of special needs learners into post secondary vocational education. St. Paul: University of Minnesota, Research and Development Center for Vocational Education. (ERIC Document Reproduction Service No. ED 217298)

Carr, S. (2000, February 11). As distance education comes of age, the challenge is keeping the students. Chronicle of Higher Education, p. A39. Retrieved August 11, 2005, from http://chronicle.com/weekly/v46/i23/23a00101.htm

Diaz, D. (2002). Online drop rates revisited. Retrieved August 11, 2005, from University of North Carolina, The Technology Source Archives Web site: http://ts.mivu.org/default.asp?show=article&id=981

Dynarski, S. (1999). Does Aid Matter? Measuring the Effect of Student Aid on College Attendance and Completion. Retrieved August 15, 2005, from Kennedy School of Government and NBER Web site: http://nber.org/~confer/99/lssi99/dynarski.pdf

Frankola, K. (2001). Why online learners drop out. Workforce, 80(10), 53-59.

Halsne, A., & Gatta, L. (2002). Online vs. traditionally-delivered instruction: A descriptive study of learner characteristics in a community college setting. Online Journal of Distance Learning Administration, 5(1), Spring 2002.

Iverson, K. (1995). The Telecourse Success Prediction Inventory. Chicago, IL: Loyola University.

Kember, D. (1995). Open learning courses for adults: A model of student progress. Englewood Cliffs, NJ: Educational Technology Publications.

Kemp, W. (2002). Persistence of adult learners in distance education. The American Journal of Distance Education, 16(2), 65-81.

Kerka, S. (1989). Retaining Adult Students in Higher Education. Columbus, OH: ERIC Clearinghouse on Adult, Career, and Vocational Education, ERIC Digest No. 88. (ERIC Document Reproduction Service No. ED 308401)

Miller, M., Rainer, R., & Corley, J. (2003). Predictors of engagement and participation in an online course. Online Journal of Distance Learning Administration, 6(1). Retrieved February 1, 2006, from State University of West Georgia, Distance Education Center Web site: http://www.westga.edu/%7Edistance/ojdla/spring61/miller61.htm

Moore, M. (1990a). Contemporary Issues in American Distance Education. Oxford: Pergamon Press.

National Center for Education Statistics. (1999-2000). National Postsecondary Student Aid Study. Washington, D.C. Retrieved February 15, 2005, from the U.S. Department of Education Web site: http://nces.ed.gov/programs/digest/d99/

Noel-Levitz, Inc. (2006). Priorities Survey for Online Learners™. Retrieved December 15, 2004, from the Noel-Levitz Web site: www.noellevitz.com

O'Brien, B., & Renner, A. (2002, June). Online student retention: Can it be done? Paper presented at the ED-MEDIA 2002 World Conference on Educational Multimedia, Hypermedia & Telecommunications, Denver, CO.

Osborn, V. (2001). Identifying at-risk students in videoconferencing and web-based distance education. The American Journal of Distance Education, 8(1), 47-63.

Palloff, R., & Pratt, K. (2001). Lessons from the cyberspace classroom. San Francisco: Jossey-Bass.

Parker, A. (1999). A study of variables that predict dropout from distance education. International Journal of Educational Technology, 1(2).

Peltier, G., Laden, R., & Matranga, M. (1999). Student persistence in college: A review of research. Journal of College Student Retention, 1, 357-376.

Ramsden, P. (1998). Learning to lead in higher education. New York: Routledge.

Rotter, J. (1966). Generalized expectancies for internal versus external control of reinforcements. Psychological Monographs, 80 (Whole No. 609).

Rovai, A. (2003). In search of higher persistence rates in distance education online programs. Internet and Higher Education, 6(1), 1-16.

Scalese, E. (2001). What can a college distance education program do to increase persistence and decrease attrition? Journal of Instruction Delivery Systems, 15(3), 17.

U.S. Census Bureau. (2001). Computer use up sharply; one in five Americans uses Internet, Census Bureau says. Washington, D.C.: U.S. Census Bureau Public Information Office.

U.S. Department of Commerce. (2001). Home Computers and Internet Use in the United States: August 2000. Publication P23-207, issued September 2001.

Van Patten, J., & Chen, G. (2002). The Internet culture, student learning and student retention. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA, April 1-5.

Willis, B. (1994). Enhancing faculty effectiveness in distance education. In B. Willis (Ed.), Distance education: Strategies and tools (pp. 277-288). Englewood Cliffs, NJ: Educational Technology Publications, Inc.

Wojciechowski, A., & Palmer, L. (2005). Individual student characteristics: Can any be predictors of success in online classes? Online Journal of Distance Learning Administration, 8(2), 13.


Online Journal of Distance Learning Administration, Volume IX, Number IV, Winter 2006
University of West Georgia, Distance Education Center