An Investigation of the Relationship Between Grades and Learning Modes in an Introductory Research Methods Course


Donna Roberts
Embry-Riddle Aeronautical University
donna.roberts@erau.edu


John C. Griffith
Embry-Riddle Aeronautical University
John.Griffith@erau.edu

Emily Faulconer
Embry-Riddle Aeronautical University
Emily.Faulconer@erau.edu

Beverly L. Wood
Embry-Riddle Aeronautical University
Beverly.Wood@erau.edu

Soumyadip Acharyya
Embry-Riddle Aeronautical University
acharyys@erau.edu

Abstract

Education researchers have conducted studies on the relationship of learning mode to student performance, but few have evaluated pass rates, grade distributions, and student withdrawal rates in an introductory research methods course. In this study, researchers examined 2,097 student grades from the 2015-2016 academic year to determine whether such a relationship existed. Learning mode was significantly related to failure rate, grade distribution, and withdrawal rate. Synchronous video home students had a significantly higher failure rate than traditional In-Person or online students. Online students' grade distributions were significantly different from those of In-Person classroom, synchronous video home, and synchronous video classroom students; online students tended to earn more "A"s and fewer "B"s and "D"s. Synchronous video home students also had a significantly higher withdrawal rate than synchronous video classroom students. Recommendations for further research include investigating variables that may affect student performance, such as faculty experience with course content and technology, and how students select learning modes when taking classes. Future research should continue to employ outcome-based studies to measure the impact of learning mode on student performance, which remains a key issue for both students and institutions.

Introduction

While the online delivery of courses has become ubiquitous, often outnumbering traditional delivery forms at many colleges and universities, researchers and educators alike continue to question the relative effectiveness of modalities with respect to student performance. Comparing student outcomes and satisfaction between delivery modes has proven challenging and has yielded mixed, equivocal results, making broad generalizations difficult. On a micro level, however, the individual results of various studies can be used to make course and program adjustments that better serve both students and faculty.

Literature Review

Over the past decade, increasing attention has been devoted to teaching research methods at the undergraduate level. Although faculty typically maintain that students need to be familiar with the research process, both to conduct their own inquiries and to interpret the studies of other researchers, students typically lack enthusiasm for these introductory courses. Combined with the perceived difficulty of the subject matter, this poses a challenge for student engagement, and thus, student success (Lewis, 2014; Peachy & Baller, 2015).

Even while overall enrollments in college courses have declined in recent years, enrollment in online courses continues to climb (Lederman, 2013; Seaman, Allen & Seaman, 2018). The literature abounds with studies comparing differences in student perception and outcomes between traditional in-person and online delivery; however, there are no clear conclusions. Some meta-analyses demonstrate significant differences while other reviews do not reveal significant or conclusive findings. Bernard et al. (2004), Cavanaugh et al. (2004), Jahng et al. (2007), Lundberg et al. (2008), Nguyen (2015), Russell (2001), and Zhao et al. (2005) found no significant differences in student performance between online and traditional classroom instruction. M. Allen et al. (2002) and Xu and Jaggars (2013) found that traditional students performed better. Sitzmann et al. (2006), Shachar and Neumann (2003), and Williams (2006) found that online students performed better. Furthermore, while several comparative studies exist evaluating modalities in research courses at the graduate level (Campbell et al., 2008; Girod & Wojcikiewicz, 2009; Holmes & Reid, 2017; Lim, Dannels & Watkins, 2008; Ni, 2013; Petracchi & Patchner, 2001; Stocks & Freddolino, 1999), few specifically address an undergraduate research methods course. Campbell et al. (2008) argue that the online environment provides unique opportunities for data collection related to student performance and various related metrics.

Generally speaking, online delivery, including both synchronous and asynchronous forms, is now considered a viable alternative to the traditional classroom. Technological advances have led to superior equipment and delivery platforms, reducing technological barriers. Advocates of online learning argue that the delivery platform provides an effective means of eliminating barriers of time and place, while providing increased convenience, flexibility, currency of material, customized learning, and focused feedback when compared to a traditional face-to-face experience (Hackbarth, 1996; Harasim, 1990; Kiser, 1999; Matthews, 1999; Ni, 2013; Swan et al., 2000). In contrast, opponents, or rather skeptics, point to issues of isolation (Brown, 1996), increased confusion and frustration with both the material itself and the mechanics of its presentation (Hara & Kling, 2000), a subsequent decrease in motivation, engagement, and learning effectiveness in the online environment (R. Maki, W. Maki, Patterson, & Whittaker, 2000), and increased drop rates (Njenga & Fourie, 2010).

Despite numerous studies addressing the issue, researchers cite the inherent difficulty of measuring outcomes specifically related to the online delivery modality. Brown and Wack (1999) note the various problems in applying an experimental design to educational research generally, while specifically highlighting difficulties related to comparing online versus traditional instruction. Phipps and Merisotis (1999) note particular problems throughout this comparative literature, including no control for extraneous variables (and therefore no demonstrable illustration of cause and effect), lack of randomization in sample selection, weak validity and reliability of measuring instruments, and lack of control for reactive effects. Beyond student engagement, various studies have also emphasized the importance of instructor engagement, as well as competence in both content and online pedagogy, as key factors in students' adaptation to the online environment and their ultimate successful performance (Garrison & Arbaugh, 2007; Holzweiss et al., 2014).

Various individual, situational and contextual complexities factor into teaching efficacy, student engagement and student satisfaction in all learning environments – whether online, in-person or hybrid (Holmes, 2017; Lyke & Frank, 2012; Summers et al., 2005). In short, student performance is a multi-faceted phenomenon, with various measures of outcome (e.g., grades, withdrawal, knowledge enhancement not measured by grades, personal satisfaction, etc.) all dependent upon the unique interaction of both the inherent variables of interest (of which modality of delivery is only one) and potentially numerous situational variables. Further research is necessary to determine the relative impact of these various factors, and thus, the “right formula” for student success, achievement and satisfaction.

The purpose of this study was to explore student performance in multiple modes of instructional delivery of an introductory research methods course. Embry-Riddle Aeronautical University's Worldwide Campus offers the opportunity to minimize some confounding factors by delivering an undergraduate research course in several modes: in-person, through synchronous video (EagleVision or EV), either in classrooms or from home, and fully asynchronous online. The analysis distinguishes between EV Classroom and EV Home because they share characteristics with in-person classrooms and asynchronous online courses, respectively. Regardless of the delivery mode, instructors use the same set of learning outcomes, textbook, and mandatory assignments. The learning management system's template for the online course is also used to create the courses in the other delivery modes and is rarely altered by in-person or EV instructors.

This study compares withdrawal rates, failure rates, and grade distributions among the four delivery modes. The research hypotheses are:

Ha1. Student failure rates in classroom, online, and video synchronous learning modes are not equivalent.
Ha2. Grade distributions in classroom, online, and video synchronous learning modes are not equivalent.
Ha3. Student withdrawal rates are not equally distributed between the four learning modes.

Methods

The university campus used in this research was a private, not-for-profit institution serving a student population of 15,022 enrolled in the fall of 2015. Undergraduate students made up approximately 72% (10,807) of this total. Approximately 28% of the undergraduate student body attended full time (identified as 12 semester hours in the July through October 2015 terms). The average campus undergraduate student was 33 years old, 51% of undergraduates were affiliated with the military, and women comprised 11% of the student population. A majority of students are working adults. All undergraduate programs list RSCH 202, the course examined in this study, as a requirement for graduation. Although demographic data were not analyzed at the course level, researchers believe that course demographics reflect those of the campus as a whole because the research course is mandatory (ERAU, 2018).

Aggregate data containing 2,097 student grades were mined from the Campus Dashboard for the period August 2015 to July 2016. The three hypotheses were tested using Chi-Square (α=.05) at the appropriate degrees of freedom. Effect size was calculated on results using the Cramer's V statistic, and Fisher's Exact Tests were used when a Chi-Square test resulted in a low cell count warning (Gay, Mills, & Airasian, 2006). The Bonferroni Correction was applied in the post hoc pairwise testing of each hypothesis (Gould & Ryan, 2013). The hypotheses concerning failure rates and grade distribution (n=2,040) used a subset of the entire data file; the hypothesis concerning withdrawal rates (n=2,097) included data on the 57 students who withdrew from the course. All data were aggregated with no individual identification of students to assure student confidentiality. As such, this study was exempt from review by the Institutional Review Board.
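With four learning modes, the post hoc testing involves every pairwise combination of modes, and the corrected significance level reported throughout this study follows directly from the standard Bonferroni formula:

\[
\alpha_{\text{corrected}} = \frac{\alpha}{\binom{4}{2}} = \frac{.05}{6} \approx .00833
\]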

A series of tests was applied to the data to evaluate each hypothesis. The first two tests evaluated whether failure rates and learning mode were related. The first statistical test compared the number of students who passed versus the number who failed by learning mode; results are displayed in Table 2. The second set of pairwise statistical tests, shown in Table 3, used a Bonferroni-corrected .00833 level of significance, as illustrated in the sketch below.
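As a minimal sketch of one such pairwise test, the following Python code runs a 2x2 Chi-Square comparison with SciPy; the cell counts are hypothetical placeholders, not the study's actual data (which appear in Tables 1-3):

    from scipy.stats import chi2_contingency

    # Hypothetical 2x2 table: rows are two learning modes (e.g., EV Home,
    # In-Person Classroom); columns are outcomes (passed, failed).
    observed = [[295, 50],
                [430, 22]]

    chi2, p, dof, expected = chi2_contingency(observed)

    # Bonferroni-corrected threshold for six pairwise comparisons
    alpha_corrected = 0.05 / 6  # approximately .00833
    print(f"chi2 = {chi2:.4f}, p = {p:.5f}, significant: {p < alpha_corrected}")

Note that the study itself used Statdisk and StatCrunch (Triola, 2013; West, 2016); the SciPy version above is offered only to illustrate the procedure.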

The third and fourth tests evaluated grade distribution equivalency between the four learning modes. The third test compared all the modes to determine whether learning mode and grade distribution were related; results are displayed in Table 5. The fourth series of pairwise tests used the same Bonferroni-corrected .00833 level of significance; significant findings are shown in Table 6.

The fifth test evaluated the third hypothesis of the study to determine whether withdrawals and learning mode were associated; the data are displayed in Table 8. The final group of pairwise tests was run with a Bonferroni-corrected .00833 level of significance, and results are shown in Table 9 (Triola, 2013; West, 2016).


Results

Data comparing overall pass and failure rates between learning modes follow in Table 1.



Overall, 90.78% of all students who took RSCH 202 passed the course. Students who took an In-Person Classroom course passed at a rate of 95.14%. Students who took EV Home passed at a rate of 85.51%, the lowest of the four learning modes. The Chi-Square analysis of these data is shown in Table 2.



The Chi-Square result indicated a statistically significant relationship between learning mode and failure rates, although the Cramer's V effect size value was low (.08517). There is evidence to support the idea that student grades and learning mode are related. In particular, students who attended RSCH 202 In-Person or Online passed at a higher rate than students who took the course via EV Home or EV Classroom. Each learning mode was then compared against the other learning modes in a series of two-by-two Chi-Square comparisons (α=.00833) shown in Table 3.
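For reference, Cramer's V for an r x c contingency table is a standard function of the Chi-Square statistic and the sample size:

\[
V = \sqrt{\frac{\chi^2}{n\,(\min(r,c)-1)}}
\]

For the 4 x 2 pass/fail table here, min(r, c) - 1 = 1, so V reduces to the square root of the Chi-Square statistic divided by n.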



The Bonferroni-corrected alpha of .00833 was used to determine whether each pairwise comparison was statistically significant; this was done to avoid a Type I Error when evaluating the hypothesis. The EV Home failure rate was significantly higher than the In-Person Classroom rate (.00037) and the Online rate (.0032). The EV Classroom failure rate was higher than the In-Person Classroom rate (.0163) and the Online rate (.0364), but not to a statistically significant degree. The In-Person Classroom and Online modes had the lowest failure rates and were not statistically different from each other. EV Home and EV Classroom had the highest failure rates but were not significantly different from each other (Triola, 2013).

The second hypothesis stated that grade distributions were not equivalent between the four learning modes. Descriptive statistics are shown in Table 4.

Overall, 50.54% of RSCH 202 students earned an "A", 24.22% earned a "B", 12% earned a "C", almost 4% earned a "D", and a little over 9% failed the course. The distribution of "A"s differs between the four learning modes, from a high of 54.77% for Online students to a low of 39.45% for students who took the course via EV Classroom. Students who took the course In-Person earned the highest proportions of "B"s and "D"s and had the lowest percentage of "F" grades. EV Classroom students earned the most "C"s and EV Home students earned the most "F"s. Online students earned more "A"s and fewer "D"s than all other modes examined. Additionally, the failure rate for Online students was lower than for EV Home and EV Classroom. A Cramer's V test for association yielded a value of .08917; this effect size is low, meaning other factors also influenced these results. A Chi-Square analysis of the data is shown in Table 5.



The Chi-Square result indicates that there is a relationship between learning mode and student grades. Students who took In-Person and Online courses tended to earn more "A"s, In-Person and EV students tended to earn more "B"s, and students who took EV courses tended to earn more "F"s. Each learning mode was then evaluated against the other learning modes in a series of mode-versus-mode (α=.00833) comparisons shown in Table 6 (Triola, 2013).
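A minimal sketch of this omnibus grade-distribution test, again in Python with SciPy and with hypothetical placeholder counts rather than the actual Table 4 data, shows how the Chi-Square statistic and Cramer's V would both be computed:

    import numpy as np
    from scipy.stats import chi2_contingency

    # Hypothetical 4x5 table: rows are learning modes (In-Person, EV Classroom,
    # EV Home, Online); columns are grade counts (A, B, C, D, F).
    grades = np.array([
        [200, 120,  45, 20, 12],
        [ 43,  30,  18,  4, 14],
        [150,  80,  45, 10, 50],
        [640, 265, 137, 33, 95],
    ])

    chi2, p, dof, expected = chi2_contingency(grades)
    n = grades.sum()
    r, c = grades.shape
    cramers_v = np.sqrt(chi2 / (n * (min(r, c) - 1)))
    print(f"chi2 = {chi2:.3f}, p = {p:.4f}, Cramer's V = {cramers_v:.5f}")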

The Bonferroni-corrected alpha (.00833) was used for the pairwise comparisons. The Online learning mode was significantly different from the other three learning modes: the grade distribution for Online was significantly different from EV Home (.0011), EV Classroom (.001), and In-Person Classroom (.0051). The remaining mode comparisons yielded nonsignificant results, meaning there was not enough evidence to reject the idea of similar grade distributions.

The third hypothesis stated that student withdrawal rates were not equally distributed between the four learning modes.  Table 7 shows that the overall withdrawal rate was 2.72%. The breakdown by learning mode follows.


Students who took EV Home courses withdrew at a rate of 4.89%, the highest of the four learning modes. Online had the second highest percentage (2.99%), followed by EV Classroom and In-Person Classroom courses (both .69%). A Chi-Square analysis (Table 8) was conducted to determine if these differences were statistically significant.



The Chi-Square result indicated a statistically significant relationship between learning mode and withdrawal rates, although the Cramer's V effect size was small (.07314). Pairwise comparisons of the learning modes were then conducted to determine where the significant differences existed. Results are shown in Table 9.



a Fisher’s Exact Test values are shown for Chi-Square results which indicated a low cell count warning (LCW).
*p < .00833

In pairwise comparisons (α=.00833), the EV Home withdrawal rate was significantly higher than the EV Classroom rate (.0025). EV Home students withdrew at a higher rate than In-Person Classroom students, but not to a statistically significant degree (.0260). The Online student withdrawal rate was also higher than the EV Classroom rate (.0243), but the difference was not statistically significant after the Bonferroni correction. The Online withdrawal rate and the EV Home rate were not statistically different from each other (Triola, 2013).
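Where a 2x2 comparison produced a low cell count warning, Fisher's Exact Test was substituted for Chi-Square (note a, Table 9). A minimal sketch of that substitution in Python with SciPy, again using hypothetical placeholder counts:

    from scipy.stats import fisher_exact

    # Hypothetical 2x2 table: rows are two learning modes (e.g., EV Home,
    # EV Classroom); columns are outcomes (withdrew, completed).
    observed = [[18, 350],
                [ 2, 288]]

    odds_ratio, p = fisher_exact(observed, alternative="two-sided")
    print(f"odds ratio = {odds_ratio:.3f}, p = {p:.5f}")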

Analysis

All alternative hypotheses were supported by the statistical analysis in this study. Student failure rates were related to the learning mode students chose when taking RSCH 202. Grade distribution and student withdrawal rates were also related to learning mode to a statistically significant degree. Students who took RSCH 202 Online or In-Person Classroom passed their classes at a higher rate than students who took RSCH 202 in the EV Home learning mode.

The grade distribution for students who took RSCH 202 Online was significantly different from the three other learning modes. Online students received more "A"s and fewer "D"s than all other modes examined. Students who took In-Person Classroom offerings earned the highest percentages of "B"s and "D"s and the lowest percentage of "F"s, though these differences were not statistically significant.

The last hypothesis examined yielded some curious results when compared to the first two. Student withdrawal rates were lower for students who took In-Person Classroom and EV Classroom courses than for the other two modes of learning, and the EV Home withdrawal rate was significantly higher than the EV Classroom rate. This could possibly be related to visible peer support in the Classroom or EV Classroom environment.

In each of these classroom-based learning modes, students are surrounded by peers. It is interesting to note that In-Person Classroom and EV Classroom students both withdrew at a .69% rate, much lower than EV Home (4.89%) or Online (2.99%) students, who attend class on their own without the direct presence of peers.

Limitations

Gender, age, and other initial differences between the groups were not assessed; these characteristics were assumed to be equally distributed within the groups studied. Additionally, the students' average age (33), gender mix (11% female), and background (51% affiliated with the military) differ from those at traditional universities. The campus studied offers course starts every month (with five major terms starting in August, October, January, March and May), and the course term length is 9 weeks. All these factors may affect the generalizability of these findings.

Conclusions

Results of this study indicate relationships between learning mode and pass rates, grade distributions, and student withdrawal rates. The study design was a conservative retrospective analysis using the Bonferroni correction on post hoc pairwise testing results to avoid Type I Errors. An argument can be made that learning mode has an impact on student performance. That said, the results also showed small effect sizes as measured by the Cramer's V statistic, indicating that other variables influenced the study results as well. Similar to the ideas expressed by Holmes (2017), Lyke and Frank (2012), and Summers et al. (2005), we can conclude that learning mode is a factor in student performance, but it is not the only factor; this is why the topic needs continued study in harder-to-define areas of learning such as faculty experience and student self-selection of delivery mode.

Recommendations for Future Research


Future researchers should continue outcome-based studies such as this one, which measure student outcomes across course delivery modes. The hope is that significant findings will become rarer as tools and training are developed to make course delivery more consistent across learning platforms.

In this study, low Cramer's V results (effect size) imply that there are other factors behind the relationship of learning mode and student performance, and future research is warranted to better understand those factors. The psychology of student learning is an area that needs further exploration. Student learning styles and course selection processes (how students choose a delivery mode when taking classes) are important aspects to consider when examining student performance. The influence of visible peer support should also be examined as it relates to student performance and persistence.

Future researchers should also evaluate factors such as age and gender to determine their impact on learning mode selection by students.   


References

Allen, M., Bourhis, J., Burrell, N., & Mabry, E. (2002). Comparing student satisfaction with distance education to traditional classrooms in higher education: A meta-analysis. American Journal of Distance Education, 16(2), 83-97. doi:10.1207/S15389286AJDE1602_3

Bernard, R. M., Abrami, P. C., Lou, Y., Borokhovski, E., Wade, A., Wozney, L., et al. (2004). How does distance education compare with classroom instruction? A meta-analysis of the empirical literature. Review of Educational Research, 74(3), 379-439. Retrieved from http://www.jstor.org/stable/3516028

Brown, G., & Wack, M. (1999). The difference frenzy and matching buckshot with buckshot. The Technology Source. Retrieved from http://technologysource.org/?view=article&id=3200

Brown, K. M. (1996). The role of internal and external factors in the discontinuation of off-campus students. Distance Education, 17, 14–71.

Campbell, M., Gibson, W., Hall, H., Richards, D., & Callery, P. (2008). Online vs. face-to-face discussion in a web-based research methods course for postgraduate nursing students: A quasi-experimental study. International Journal of Nursing Studies, 45(5), 750-759. doi:10.1016/j.ijnurstu.2006.12.011

Cavanaugh, C., Gillan, K. J., Kromrey, J., Hess, M., & Blomeyer, R. (2004). The effects of distance education on K-12 student outcomes: A meta-analysis. Learning Point Associates/North Central Regional Educational Laboratory. Retrieved from https://eric.ed.gov/?id=ED489533

Embry-Riddle Aeronautical University. (2018). Worldwide campus undergraduate and graduate demographics (2015-16). Retrieved from https://ir.erau.edu/Factbook/Enrollment/

Garrison, D. R., & Arbaugh, J. B. (2007). Researching the community of inquiry framework: Review, issues, and future directions. Internet and Higher Education, 10, 157-172. doi:10.1016/j.iheduc.2007.04.001

Gay, L. R., Mills, G. E., & Airasian, P. W. (2006). Educational Research: Competencies for analysis and applications. (8th ed.). Upper Saddle River, New Jersey:  Pearson Education, Inc.

Girod, M., & Wojcikiewicz, S. (2009). Comparing distance vs. campus-based delivery of research methods courses. Educational Research Quarterly, 33(2).

Gould, R., & Ryan, C. (2013). Introductory statistics: Exploring the world through data. Upper Saddle River, New Jersey: Pearson Education Inc.

Hackbarth, S. (1996). The educational technology handbook: A comprehensive guide. Englewood Cliffs, NJ: Educational Technology Publications.

Hara, N., & Kling, R. (2000). Students’ distress with a web-based distance education course: An ethnographic study of participants’ experiences. Information, Communication, and Society, 3, 557–579.

Harasim, L. M. (1990). Online education: Perspectives on a new environment. New York: Praeger.

Holmes, C. M., & Kozlowski, K. A. (2014). “Tech support”: Implementing professional development to assist higher education faculty to teach with technology. Journal of Continuing Education and Professional Development, 2(1), 9-20. doi:10.7726/jcepd.2015.1002

Holmes, C. M., & Reid, C. (2017). A comparison study of on-campus and online outcomes for a research methods course. Journal of Counselor Preparation and Supervision, 9(2). doi:10.7729/92.1182

Holzweiss, P. C., Joyner, S. A., Fuller, M. B., Henderson, S., & Young, R. (2014). Online graduate students’ perceptions of best learning experiences. Distance Education, 35(3), 311-323. doi:10.1080/01587919.2015.955262

Jahng, N., Krug, D., & Zhang, Z. (2007). Student achievement in online distance education compared to face-to-face education. European Journal of Open, Distance, and E-Learning, 10(1). Retrieved from http://www.eurodl.org/materials/contrib/2007/Jahng_Krug_Zhang.htm

Kiser, K. (1999). 10 things we know so far about online training. Training, 36, 66–74.

Lederman, D. (2013, May 17). Enrollment decline picks up speed. Inside Higher Ed. Retrieved from https://www.insidehighered.com/news/2013/05/17/data-show-increasing-pace-college-enrollment-declines#sthash.jkDsJhdv.dpbs

Lewis K. (2014). Strategies and techniques for teaching a research methods course for Health Science undergraduate students. Proceedings of the 142nd APHA Annual Meeting and Exposition 2014. New Orleans, LA.

Lim, J. H., Dannels, S. A., & Watkins, R. (2008). Qualitative investigation of doctoral students' learning experiences in online research methods courses. Quarterly Review of Distance Education, 9(3).

Lundberg, J., Castillo-Merino, D., & Dahmani, M. (2008). Do online students perform better than face-to-face students? Reflections and a short review of some empirical findings. In D. Castillo-Merino & M. Sjoberg (Eds.) (1st ed.). Editorial UOC. Retrieved from http://www.uoc.edu/rusc/5/1/dt/eng/lundberg_castillo_dahmani.pdf

Lyke, J., & Frank, M. (2012). Comparison of student learning outcomes in online and traditional classroom environments in a Psychology course. Journal of Instructional Psychology, 39(4), 245-250.

Matthews, D. (1999). The origins of distance education and its use in the United States. T.H.E. Journal, 27(2), 54–67.

Nguyen, T. (2015). The effectiveness of online learning: Beyond no significant difference and future horizons. Journal of Online Learning and Teaching, 11(2), 309-319. Retrieved from http://jolt.merlot.org/Vol11no2/Nguyen_0615.pdf

Ni, A. Y. (2013, Spring). Comparing the effectiveness of classroom and online learning: Teaching research methods. Journal of Public Affairs Education, 19(2), 199-215. Retrieved from http://www.jstor.org/stable/23608947

Njenga, J. K., & Fourie, L. C. H. (2010). The myths about e-learning in higher education. British Journal of Educational Technology, 41(2), 199-212. doi:10.1111/j.1467-8535.2008.00910.x

Peachy, A. A., & Baller, S. L. (2015). Ideas and approaches for teaching undergraduate research methods in the health sciences. International Journal of Teaching and Learning in Higher Education, 27(3), 434-442.

Petracchi, H. E., & Patchner, M. E. (2001). A comparison of live instruction and interactive televised teaching: A 2-year assessment of teaching an MSW research methods course. Research on Social Work Practice, 11(1), 108-117. doi:10.1177/104973150101100107

Phipps, R. A., & Merisotis, J. P. (1999). What's the difference: A review of contemporary research on the effectiveness of distance learning in higher education. Washington, DC: The Institute for Higher Education Policy. Retrieved from http://www.ihep.org/Publications/publications-detail.cfm?id=88

Russell, T. (2001). The no significant difference phenomenon: As reported in 355 research reports, summaries, and papers (5th ed.). North Carolina State University: IDECC.

Seaman, J. E., Allen, I. E., & Seaman, J. (2018). Grade increase: Tracking distance education in the United States. Babson Park, MA: Babson Survey Research Group.

Shachar, M., & Neumann, Y. (2003). Differences between traditional and distance education academic performances: A meta-analytic approach. The International Review of Research in Open and Distributed Learning, 4(2). Retrieved from http://www.irrodl.org/index.php/irrodl/article/viewArticle/153/234

Sitzmann, T., Kraiger, K., Steward, D., & Wisher, R. (2006). The comparative effectiveness of web-based and classroom instruction: A meta-analysis. Personnel Psychology, 59(3), 623-664. doi:10.1111/j.1744-6570.2006.00049.x

Stocks, J. T., & Freddolino, P. P. (1999). Evaluation of a world wide web-based graduate social work research methods course. Computers in Human Services, 15(2-3), 51-69. doi:10.1300/J407v15n02_05

Swan, K., Shea, P., Frederickson, E., Pickett, A., Pelz, W., & Maher, G. (2000). Building knowledge-building communities: Consistency, contact, and communication in the virtual classroom. Journal of Educational Computing Research, 23(4), 389-413.

Triola, M. (2013). Statdisk (12.0.2). Pearson Education Inc. Retrieved from http://www.statdisk.org/

West, W. (2016). StatCrunch: Data analysis on the web. Pearson Education Inc. Retrieved from https://www.statcrunch.com/

Williams, S. L. (2006). The effectiveness of distance education in allied health science programs: A meta-analysis of outcomes. American Journal of Distance Education, 20(3), 127-141. doi:10.1207/s15389286ajde2003_2

Xu, D., & Jaggars, S. S. (2013). The impact of online learning on students' course outcomes: Evidence from a large community and technical college system. Economics of Education Review, 37, 46-57. doi:10.1016/j.econedurev.2013.08.001

Zhao, Y., Lei, J., Yan, B., Lai, C., & Tan, H. S. (2005). What makes the difference? A practical analysis of research on the effectiveness of distance education. Teachers College Record, 107(8), 1836-1884.

 

Online Journal of Distance Learning Administration, Volume XXII, Number 1, Spring 2019
University of West Georgia, Distance Education Center