Online Learning: Outcomes and Satisfaction among Underprepared Students in an Upper-Level Psychology Course


Dr. Colleen McDonough
Neumann University
mcdonouc@neumann.edu


Dr. Ramona Palmerio Roberts
Neumann University
robertsr@neumann.edu


Jessamy Hummel
Neumann University
jmsh2014@gmail.com

 

Abstract

Online learning is on the rise, but research on outcomes and student satisfaction has produced conflicting results, and systematic, targeted research on underprepared college students is generally lacking. This study compared three sections (traditional, online, and 50% hybrid) of the same upper-level psychology course, taught with identical materials by the same instructor. Although exam scores were marginally higher in the traditional course, final grades and written assignments did not differ across sections, nor did student satisfaction. Student engagement predicted outcomes online. Taken together, these results suggest that outcomes and satisfaction are equivalent in online, hybrid, and traditional courses, and that a student's own diligence and drive might better predict success in online learning.

 

In the past decade, the educational system has witnessed tremendous growth in online learning. An increasing number of courses at American colleges and universities are offered in completely online or hybrid (part online) format, and in fact, students can earn nearly every undergraduate and graduate degree, somewhere in the country, without ever setting foot on a college campus. At the university where the authors of this study teach, the percentage of courses offered in either online or hybrid format has doubled since 2008 (from 3.59% to 7.68%), and the percentage of students taking at least one online or hybrid course in a semester has tripled in the same time frame (from 10.29% to 30.54%). A recent report issued by the Babson Survey Research Group and Quahog Research Group, LLC (Allen & Seaman, 2013) suggests that, while the growth rate in online enrollment has slowed slightly over the past two years, as many as 32% of college students are enrolled in online courses. In the United States in 2011, nearly 7 million postsecondary students took at least one online course (Allen & Seaman, 2013). Even in the K-12 sector, online charter schools continue to emerge and market themselves, often quite successfully, to a population that enjoys the convenience online education has to offer. The real question for consumers and educators is whether the quality of online learning is comparable to that offered in a traditional, face-to-face classroom setting. Additionally, given a recent report by The College Board indicating that nearly 43% of SAT test takers are not college-ready (The College Board, 2013), it is important to know how underprepared students perform in online learning platforms, particularly in more challenging, upper-level courses.

           
Research on online course outcomes, which has focused primarily on exam scores and final grades, has produced conflicting results. Comparing online to traditional (in-class, face-to-face) courses, equivalent exam performance has been reported by many researchers (e.g., Elvers, Polzella, & Graetz, 2003; Hemmati & Omrani, 2013; Hollister & Berenson, 2009; Jensen, 2011; McGready & Brookmeyer, 2013; Stowell & Bennett, 2010; Summers, Waigandt, & Whittaker, 2005). In fact, the phenomenon of "no significant difference" has been well documented by Russell (2013). Mosalanejad and colleagues found that among first-year nursing students, although there was no difference on a practical (applied) exam, online students outperformed traditional students on a theoretical exam (Mosalanejad, Shahsavari, Sobhanian, & Dastpak, 2012). Their findings suggest that while questions tapping rote memorization may be "easier" online (perhaps due to the ability to look up answers), deep learning does not vary with the delivery method.

 

Some studies, however, have produced evidence of differences between online and traditional testing results, typically favoring courses with at least some traditional component. Waschull (2001) found a trend toward higher final exam scores in traditional versus online students. Ashby, Sadera, and McNary (2011) found the highest exam scores in the traditional class, followed by online, then hybrid. Terry (2007) reported that exam scores in both traditional and hybrid courses were higher than in online courses, and Fillion, Limayem, Laferriere, and Mantha (2009) likewise reported that hybrid students outperformed online ones. In contrast, Lim, Kim, Chen, and Ryder (2008) reported higher exam scores in both online and hybrid courses compared to traditional. Taking the findings on exam scores as a whole, the picture is muddy, with research demonstrating nearly every possible pattern of results.

 

Final grades are another academic outcome that has received attention, albeit less so, in the online learning research. Here the results are somewhat clearer. While some studies have reported no significant differences in final grades (Akyol & Garrison, 2011; Kirtman, 2009), the research demonstrating group differences favors the traditional setting. Students taking traditional courses were more likely to pass (Jaggars, Edgecombe, & Stacey, 2013; Waschull, 2001) and complete courses (Ashby, Sadera, & McNary, 2011; Terry, 2007; although see Waschull, 2001) than students taking hybrid or online versions of the same course.

 

Three factors could possibly confound these results. First, perhaps the results can simply be attributed to the different demographics of the online students, as the students typically self-selected into the traditional, hybrid, or online section. Online learning offers a flexibility that allows nontraditional degree seekers to attend college courses. It is therefore not surprising that in higher education, students who choose to take courses online tend to be older than the traditional college student, employed full-time, and raising children at home. They are also more likely to be white, from a higher socioeconomic status, and English-speaking (Ashby, Sadera, & McNary, 2011; Jaggars, Edgecombe, & Stacey, 2013). Edmonds (2006) found that traditional students received higher exam scores than online students after controlling for SAT scores and high school GPA, but the other demographic variables have been largely unstudied. Within individual studies, some researchers have reported no significant differences between their online and traditional samples (e.g., Waschull, 2001), but this may be attributable to small sample sizes. More research on the interplay of demographics is needed.

Second, and of great concern to educators and colleges, is the possibility of cheating online. Hollister and Berenson (2009) conducted a thorough analysis to ascertain whether online students' test scores could be attributed to cheating, and found no evidence that they could. Further, the studies reviewed here do not show that online students overwhelmingly outperform traditional students on exams; on the contrary, most of the research finds that exam scores are either equivalent or higher among traditional students. These results imply that educators need not be overly concerned about online cheating, though it remains a live issue, particularly among online-learning critics.

Third, the format of an online course typically requires the student to be disciplined and self-motivated. Failure to access the online course regularly, coupled with procrastination, can easily result in poor outcomes. Elvers, Polzella, and Graetz (2003) found that in an online course (but not a traditional one), procrastination led to lower exam scores. Similarly, DeNeui and Dodge (2006) found a small but significant correlation between the amount of Blackboard Vista usage (an online course delivery system) and students' final grades. This issue is germane to the topic of underprepared students, who may not have developed the study habits needed to succeed in a self-paced course. Taken together, these three factors may account for some of the contradictions in the research findings.


Another important outcome to consider is students' level of satisfaction with the course. Some aspects of online learning may be perceived as extremely advantageous to students. For example, students who are afraid to raise their hands in front of a room full of their peers may be much more comfortable voicing their opinions on a web-based discussion board. In contrast, online lectures often fail to maintain student attention the way classroom-based lectures do, and some students are partial to the personal interaction afforded by traditional classes. The importance of student satisfaction is not to be underestimated: in a climate of intense market competition, colleges and universities need to stay on top of student attrition, and faculty members are similarly concerned with their course evaluations for the purposes of promotion and tenure.

           
As with academic course outcomes, research on satisfaction has produced conflicting results. While some studies have reported increased satisfaction in hybrid and online courses (Hemmati & Omrani, 2013; Lim et al., 2008), others have demonstrated the opposite pattern (Summers, Waigandt, & Whittaker, 2005; Terry, 2007). Gecer and Dag (2012), Kirtman (2009), and Yudko, Hirokawa, and Chi (2008) found that online and hybrid courses received positive ratings overall, and Beqiri, Chase, and Bishka (2010) found that online courses were most preferred by males, graduate students, married students, and commuters. However, Waschull (2001) found no difference in satisfaction between traditional and online courses. The satisfaction findings, unclear as they are, may also be attributable to extraneous factors. For example, Arbaugh (2010) reported that instructor teaching presence and response time significantly improved student satisfaction in an online course.

 

Targeted research on underprepared students is generally lacking. Jaggars (2011) reported that underprepared students typically do poorly in online coursework for four reasons: 1) the technical difficulties associated with navigating online content, 2) social distance from classmates and the instructor, 3) the lack of student supports online, and 4) the lack of structure in online platforms. However, Kim and Lee (2011) suggest that the self-paced nature of the online environment may be beneficial to these same students.


The Current Research

           
Previous research on online learning outcomes has been limited by several important factors. First and most importantly, previous results are conflicting, and additional research is needed to understand the effects, if any, of course delivery method. Second, many previous studies have been plagued by small sample sizes; Akyol and Garrison (2011), Summers, Waigandt, and Whittaker (2005), Waschull (2001), and Wise et al. (2004), for example, had sample sizes ranging from 20 to 41. Third, some studies have introduced significant differences into the design of the online versus traditional courses used in their research (e.g., Waschull, 2001). Fourth, few studies have compared online, hybrid, and traditional versions of the same course. Finally, with few exceptions (e.g., Edmonds, 2006; Jaggars, Edgecombe, & Stacey, 2013), the research has been conducted at large colleges and universities with small proportions of developmental students.

           
The current research addresses each of these limitations by comparing course outcomes in three sections (traditional, hybrid with 50% of the content online, and online) of the same course – Psychology 330 (Psychopathology), an upper-level elective that is popular among psychology and nursing majors. Each section was taught by the same instructor with identical course materials.

Student learning outcomes (multiple-choice exam scores, written case study scores, and final grade), along with satisfaction, were compared across the three versions of the course. This research is archival in nature, in that the study was conceived after the courses were completed.


Method

           
The purpose of this study was to investigate whether there were differences in course outcomes and satisfaction in the same course delivered online, hybrid, and traditionally. Archival data were extracted from three different sections of the same course (Psychology 330, Psychopathology) taught by the same instructor, using identical course materials. One section was offered completely online, one was offered in a hybrid format (with 50% of the course delivered online), and the last was offered in a traditional, face-to-face format. All courses were offered at the same university during a typical 14-week semester. Students self-selected into the courses: 32 students took the traditional course, 26 took the hybrid course, and 23 took the online course, for a total sample size of 81. There were no significant differences in the demographics (gender, race, age) of the students across the three versions of the course (all ps > .05). The sample was drawn from a small, Catholic liberal arts university with a high proportion of working students (50.42% employed part time, 22.34% employed full time), nearly two-thirds of whom entered college at the remedial level. The student population is diverse (52% Caucasian) and drawn heavily from surrounding underserved communities; many are first-generation college students, and the vast majority (95%) receive financial aid.

Three separate course outcomes were included in the analysis: the average of four multiple-choice, non-cumulative exam grades, the average of two applied written case studies, and the final grade in the course. The exam questions and time limit (60 minutes) were identical across the three sections; however, the delivery method varied: the exams for both the online and the hybrid course were delivered online, while the traditional class had closed-book, in-class exams. Student satisfaction was taken from the end-of-semester student evaluations, which included 17 Likert-scale questions (on a 1-7 scale, with higher ratings indicating higher satisfaction on all items). Students were also able to leave open-ended comments on the evaluation. Finally, student engagement was computed as the number of hours each student spent online in the online and hybrid sections of the course.
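To make the measures concrete, the sketch below (Python with pandas) shows one way the outcome and engagement variables described above could be computed from a per-student records file. The file name and all column names are hypothetical placeholders for illustration, not the authors' actual variables.

```python
import pandas as pd

# Hypothetical archival file: one row per student, with a section label, four
# exam scores, two case study scores, final grade, and hours logged online.
df = pd.read_csv("psych330_sections.csv")

# Outcome 1: average of the four non-cumulative multiple-choice exams
df["exam_avg"] = df[["exam1", "exam2", "exam3", "exam4"]].mean(axis=1)

# Outcome 2: average of the two applied written case studies
df["case_avg"] = df[["case1", "case2"]].mean(axis=1)

# Outcome 3: final grade is taken directly from the gradebook column

# Engagement: hours logged online, defined only for the online/hybrid sections
engagement = df.loc[df["section"].isin(["online", "hybrid"]), "hours_online"]
```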


Results

 

Course Outcomes

 

One-way ANOVAs were conducted with class type (three levels: online, hybrid, traditional) as the IV and exam grade average, case study average, and final grade as the DVs. The results demonstrated a nearly significant effect of class type on exam grade, with the traditional students outperforming the others (traditional M = 86.18%; hybrid M = 78.88%; online M = 78.30%; F(2, 78) = 3.07, p = .052). Planned comparisons revealed higher exam scores among the traditional students compared to the hybrid ones (F(1, 56) = 4.69, p = .03) but no differences when comparing traditional to online (F(1, 53) = 3.45, p = .07) or online to hybrid (F(1, 47) = .03, p = .87). There were no differences in case study average (F(2, 79) = 0.86, p = .43) or final grade (F(2, 79) = 0.21, p = .81) across the three groups. Results of the ANOVAs are shown in Table 1.
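As a minimal, self-contained illustration of this analysis pattern (not the authors' code or data), the sketch below runs the omnibus one-way ANOVA and one planned comparison in Python with SciPy, using fabricated scores generated at the reported group sizes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
trad = rng.normal(86, 10, 32)  # fabricated exam averages, n = 32
hyb = rng.normal(79, 10, 26)   # n = 26
onl = rng.normal(78, 10, 23)   # n = 23

# Omnibus test across the three class types: F(2, 78) with these group sizes
F, p = stats.f_oneway(trad, hyb, onl)

# Planned comparison, traditional vs. hybrid: F(1, 56) with these group sizes
F_th, p_th = stats.f_oneway(trad, hyb)

print(f"omnibus: F = {F:.2f}, p = {p:.3f}")
print(f"traditional vs. hybrid: F = {F_th:.2f}, p = {p_th:.3f}")
```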

 

To determine whether class type was related to passing or failing the class, final grade was recoded into a categorical variable with two levels, A-C (passing) and D-F (failing), and a 2 × 3 chi-square test of independence was conducted between final grade and class type. The chi-square was not significant (χ²(2) = 1.75, p = .41). Data are shown in Table 2.
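The recode-and-test step can be sketched as follows, using the cell counts from Table 2 with SciPy's chi-square test of independence; this is an illustration of the procedure, not the authors' original code.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: pass (A-C) and fail (D-F); columns: traditional, hybrid, online
counts = np.array([[30, 22, 18],
                   [3, 4, 5]])

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi-square({dof}) = {chi2:.2f}, p = {p:.2f}")
```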


Student Satisfaction

           
Means for the 17 items on the end of semester student evaluation were entered into an ANOVA with class type as the IV. There were no differences overall or on any of the 17 items (All ps > .05). Note that there was very little variability in scores overall – means were all above 6.4 on a 1-7 scale.


The Impact of Student Engagement

           
Finally, course outcomes (final grade, exam score average, and case study average) were correlated with the number of hours each student logged onto the course website in the online and hybrid sections of the course. All tests failed to reach significance, indicating that overall, student engagement was unrelated to course outcomes. However, additional analysis revealed a different pattern of results.

 

In the hybrid course, the number of hours logged online ranged from 27.70 to 155.45, with a mean of 78.88 (SD = 38.24). In the online course, however, the number of hours logged online was more variable, ranging from 10.66 to 505.41, with a mean of 127.81 (SD = 110.34). Two online students had particularly high usage (354.25 and 505.41 hours), more than two standard deviations above the mean. When they were eliminated from the analysis, the results changed for the online course. Hours spent online was still unrelated to exam grade (r(19) = .34, p > .05), but it was nearly significantly related to final grade (r(19) = .42, p = .06). The correlation between hours spent online and case study average was significant (r(19) = .47, p < .05): the more hours spent online, the higher the case study average. See Table 3 for the significance test results on student engagement.
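This engagement analysis follows a simple pattern: correlate hours online with each outcome, then re-run the test after trimming observations more than two standard deviations above the mean. A sketch of that pattern with fabricated data (not the study's) appears below.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
hours = rng.gamma(2.0, 60.0, 23)                     # fabricated hours online, n = 23
case_avg = 70 + 0.05 * hours + rng.normal(0, 8, 23)  # fabricated case study averages

# Full-sample correlation between engagement and the outcome
r_all, p_all = pearsonr(hours, case_avg)

# Drop unusually heavy users (> 2 SD above the mean) and re-test
keep = hours <= hours.mean() + 2 * hours.std()
r_trim, p_trim = pearsonr(hours[keep], case_avg[keep])

print(f"full sample: r = {r_all:.2f}, p = {p_all:.3f}")
print(f"trimmed:     r = {r_trim:.2f}, p = {p_trim:.3f}")
```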


Discussion

           
This study attempted to resolve some lingering questions in the national debate surrounding the efficacy of online learning, focusing on a population high in underprepared, developmental students. From the student's perspective, outcomes are critical – anecdotally, students ask us all the time: Are online courses hard? Am I going to do well in an online or hybrid course? This may be particularly concerning for the almost half of intended college students who are not college-ready (The College Board, 2013) and who may not have acquired the academic and study skills necessary to succeed in an online platform. From the perspective of the instructor and the institution, student satisfaction is critical – faculty are concerned about their end-of-semester evaluations for promotion and tenure, and institutions of higher learning want to ensure that tuition dollars continue to flow through their doors. The results of this research suggest that in terms of outcomes and satisfaction, online and hybrid courses are comparable to traditional courses, at least when the key course materials (lectures, exams, and central assignments) are kept the same.

           
Course outcomes were nearly identical in the three sections of Psychopathology (online, 50% hybrid, and traditional) used for this analysis. There were no significant differences in final grades or case study averages across the sections. The exam average did reveal a very nearly significant (p = .052) difference across the three sections, with the highest scores among the traditional students. However, that difference washed out when it came to final grades, the score that really matters to students. Importantly, this finding cannot be attributed to grade inflation; rather, the traditional students' final grades were nearly five percentage points lower than their exam average. Similar to the findings of Akyol and Garrison (2011) and Kirtman (2009), there was also no difference in course failure rate across the three sections. These results suggest that for course outcomes, there are no meaningful differences among online, hybrid, and traditional courses.

           
In terms of student satisfaction, here again the results suggest no impact of course type. The end-of-semester satisfaction survey revealed no significant differences on any of the 17 Likert-scale items across the three sections of the course. The open-ended comments were also overwhelmingly positive in all three sections, with not a single criticism of online learning written in by any student. Note also that most students did leave a comment in the open-ended section, mitigating any concern about response bias. These findings demonstrate that, at least when the instructor and course materials are the same, student satisfaction does not change when a traditional course is delivered hybrid or online. One potential limitation here is that the professor who taught the courses is very well liked by, and available to, students, with near-ceiling scores on all end-of-semester satisfaction surveys. Barbatis (2010) reports that interaction with faculty is linked to improved outcomes in underprepared students, a finding that may have influenced our results, since the instructor of this course is very accommodating of student requests to meet and discuss the course. In addition, the course used for this research is a very interesting and popular one with the ability to engage students' interest. Therefore, although the results demonstrate no effect of course delivery method on satisfaction, we cannot speak to whether similar findings would emerge with less popular instructors or courses, and future research will need to address this issue. These results do, however, hold promise for remedial students, as the course was a challenging upper-level elective.

           
Finally, this research examined whether student engagement was related to course outcomes. Correlating the hours logged onto the course website with final grades, exam average, and case study average revealed no significant correlations in the sample overall. These results contrast with the findings of DeNeui and Dodge (2006) and may be the result of high variability in our sample, particularly in the online course. When the two highest online users were removed from the analysis, hours spent online was significantly correlated with case study average and marginally with final grade. These findings lend some evidence to the idea that students' cognitive presence may predict their course outcomes, as in Akyol and Garrison (2008) and Ramos and Yudko (2008). Further, since the case studies tapped critical thinking skills, these results also suggest a relationship between amount of effort and depth of processing, although the direction of the effect is unclear and warrants future research.

           
In conclusion, this research suggests that outcomes and student satisfaction do not differ in any meaningful way across traditional, online, and hybrid college courses. These findings underscore the quality and value of the online learning platform for institutions of higher learning, educators, parents, students, and the general public – not only is student performance similar to that in traditional courses, but students enjoy the experience as well. Given that online learning seems to be here to stay, it is reassuring to know that its effectiveness is similar to that of traditional courses at the undergraduate level. Student engagement and involvement in an online course may play a role in outcomes, however, so students who tend to procrastinate or who lack intrinsic motivation might be better suited to traditional courses.

 

Acknowledgements: We wish to thank Marcia Finch and Carol Micun for compiling the university-wide enrollment and employment statistics and Lori Kaczenski for providing us with data on our developmental students.


References

Akyol, Z., & Garrison, D.R. (2011). Understanding cognitive presence in an online and blended community of inquiry: Assessing outcomes and processes for deep approaches to learning. British Journal of Educational Technology, 42(2), 233-250.


Allen, I.E., & Seaman, J. (2013). Changing course: Ten years of tracking online education in the United States. Babson Survey Research Group and Quahog Research Group, LLC. Retrieved from http://www.onlinelearningsurvey.com/reports/changingcourse.pdf


Arbaugh, J.B. (2010). Sage, guide, both, or even more? An examination of instructor activity in online MBA courses. Computers & Education, 55(3), 1234-1244.


Ashby, J., Sadera, W.A., & McNary, S.W. (2011). Comparing student success between developmental math courses offered online, blended, and face-to-face. Journal of Interactive Online Learning, 10(3), 128-140.


Barbatis, P. (2010). Underprepared, ethnically diverse community college students: Factors contributing to persistence. Journal of Developmental Education, 33(3), 14-24.


Beqiri, M.S., Chase, N.M., & Bishka, A. (2010). Online course delivery: An empirical investigation of factors affecting student satisfaction. Journal of Education for Business, 85(2), 95-100.


DeNeui, D.L., & Dodge, T.L. (2006). Asynchronous learning networks and student outcomes: The utility of online learning components in hybrid courses. Journal of Instructional Psychology, 33(4), 256-259.


Edmonds, C.L. (2006). The inequivalence of an online and classroom based general psychology course. Journal of Instructional Psychology, 33(1), 15-19.


Elvers, G.C., Polzella, D.J., & Graetz, K. (2003). Procrastination in online courses: Performance and attitudinal differences. Teaching of Psychology, 30(2), 159-162.


Fillion, G., Limayem, M., Laferriere, T., & Mantha, R. (2009). Integrating information and communication technologies into higher education: Investigating onsite and online students' points of view. Open Learning, 24(3), 223-240.


Gecer, A., & Dag, F. (2012). A blended learning experience. Educational Sciences: Theory and Practice, 12(1), 438-442.


Hemmati, N., & Omrani, S. (2013). A comparison of internet-based learning and traditional classroom lecture to learn CPR for continuing medical education. Turkish Online Journal of Distance Education, 14(1), 256-265.


Hollister, K.K., & Berenson, M.L. (2009). Proctored versus unproctored online exams: Studying the impact of exam environment on student performance. Decision Sciences Journal of Innovative Education, 7(1), 271-294.


Jaggars, S.S. (2011). Online learning: Does it help low-income and underprepared students? (CCRC Working Paper No. 26). Assessment of Evidence Series. Community College Research Center, Columbia University.


Jaggars, S.S., Edgecombe, N., & Stacey, G.W. (2013). What we know about online course outcomes. Community College Research Center, Teachers College, Columbia University, April 2013, 1-8.


Jensen, S.A. (2011). In-class versus online video lectures: Similar learning outcomes, but a preference for in-class. Teaching of Psychology, 38(4), 298-302.


Kim, J., & Lee, W. (2011). Assistance and possibilities: Analysis of learning-related factors affecting learning satisfaction of underprivileged students. Computers & Education, 57, 2395-2405.


Kirtman, L. (2009). Online versus in-class courses: An examination of differences in learning outcomes. Issues in Teacher Education, 18(2), 103-116.


Lim, J., Kim, M., Chen, S.S., & Ryder, C.E. (2008). An empirical investigation of student achievement and satisfaction in different learning environments. Journal of Instructional Psychology, 35(2), 113-119.


McGready, J., & Brookmeyer, R. (2013). Evaluation of student outcomes in online vs. campus biostatistics education in a graduate school of public health. Preventive Medicine: An International Journal Devoted to Practice and Theory, 56(2), 142-144.


Mosalanejad, L., Shahsavari, S., Sobhanian, S., & Dastpak, M. (2012). The effect of virtual versus traditional learning in achieving competency-based skills. Turkish Online Journal of Distance Education, 13(2), 69-75.


National Association for College Admission Counseling. (2014). College data. Retrieved from http://www.collegedata.com/cs/data/college/college_pg02_tmpl.jhtml?schoolId=628


Ramos, C., & Yudko, E. (2008). "Hits" (not "discussion posts") predict student success in online courses: A double cross-validation study. Computers & Education, 50(4), 1174-1182.


Russell, T.L. (2013). The no significant difference phenomenon. Retrieved from http://www.nosignificantdifference.org.


Stowell, J.R., & Bennett, D. (2010). Effects of online testing on student exam performance and test anxiety. Journal of Educational Computing Research, 42(2), 161-171.


Summers, J.J., Waigandt, A., & Whittaker, T.A. (2005). A comparison of student achievement and satisfaction in an online versus a traditional face-to-face statistics class. Innovative Higher Education, 29(3), 233-250.


Terry, N. (2007). Assessing instruction modes for Master of Business Administration (MBA) courses. Journal of Education for Business, 82(4), 220-225.


The College Board. (2013). The 2013 SAT report on college & career readiness. Retrieved from http://research.collegeboard.org/programs/sat/data/cb-seniors-2013.


Waschull, S.B. (2001). The online delivery of psychology courses: Attrition, performance, and evaluation. Teaching of Psychology, 28(2), 143-147.


Wise, A., Chang, J., Duffy, T., & Del Valle, R. (2004). The effects of teacher social presence on student satisfaction, engagement, and learning. Journal of Educational Computing Research, 31(3), 247-271.


Yudko, E., Hirokawa, R., & Chi, R. (2008). Attitudes, beliefs, and attendance in a hybrid course. Computers & Education, 50(4), 1217-1227.


Table 1

Means and Significance Levels for the One-way ANOVAs (IV: Class Type)

DV                    Traditional   Hybrid   Online   Significance (p)
Exam Average          86.18         78.88    78.30    .05
Case Study Average    77.80         83.30    79.17    .43
Final Grade           81.41         82.16    79.70    .81


Table 2

Frequencies for Chi-Square Test of Independence: Final Grade by Class Type

              Traditional   Hybrid   Online
Pass (A-C)    30            22       18
Fail (D-F)    3             4        5

Note: χ²(2) = 1.75, n.s.

Table 3

Correlations: Course Outcomes and Student Engagement

                                     Online                 Hybrid                 Online (top 2 users removed)
Hours Online × Final Grade           r(21) = .18, p > .05   r(24) = .16, p > .05   r(19) = .42, p = .06
Hours Online × Exam Grade Average    r(21) = .06, p > .05   r(24) = .19, p > .05   r(19) = .34, p > .05
Hours Online × Case Study Average    r(21) = .28, p > .05   r(24) = .13, p > .05   r(19) = .47, p < .05


Online Journal of Distance Learning Administration, Volume XVII, Number III, Fall 2014
University of West Georgia, Distance Education Center