Online vs. Face-to-Face Course Evaluations: Considerations for Administrators and Faculty


Michael P. Marzano
Lindenwood University
MMarzano@lindenwood.edu

Robert Allen
Lindenwood University
RAllen@lindenwood.edu

Abstract

The purpose of this study was to determine whether students evaluate courses differently, and perhaps more critically, when delivered online vs. face-to-face (F2F). Course evaluations are associated with the instructor who taught the course, and they remain a significant vehicle by which many administrators assess faculty performance. This analysis attempted to control for variation in instructors and courses by comparing student course evaluations where the same instructor taught the same course in both modalities. Moreover, the study attempted to identify the factors contributing to the course rating. The results of this study confirm that courses taught by the same instructor, using the same course content, were rated lower when delivered in the online modality. These lower ratings for online courses have implications for faculty and administrators. Areas potentially affected include: 1) a drop in the faculty member’s assessed performance; 2) difficulty recruiting full-time or tenure-track faculty to teach online courses; 3) potentially unproductive attempts to compensate for deficiencies or ‘student dislikes’ within the Learning Management System; and 4) potential morale issues, with faculty experiencing less job satisfaction due to lower online course ratings.

Introduction

In the early 2000s, fewer than 50% of US universities considered online education to be strategically important – by 2013, that percentage had climbed to nearly 70% (Allen & Seaman, 2013).  Moreover, by 2013, 32% of all students had taken at least one online course (Allen & Seaman, 2013).  Though online education has become mainstream, students have been found to evaluate online courses differently than face-to-face (F2F) courses.  Research has suggested that students are more critical of their online classes and instructors (Rovai, Ponton, Derrick, & Davis, 2006).  Other studies show no clear difference between online and F2F courses regarding student ratings or assessments of effectiveness.

One of the primary methods for gathering student feedback, especially regarding satisfaction with a course, is the ‘end of course’ evaluation.  These instruments commonly seek feedback on the course, the instructor, and student-oriented factors.  Course evaluations provide input to administrators regarding faculty performance evaluation and can be examined by accreditation bodies.  With this responsibility comes the challenge of objectively and fairly assessing courses in both online and F2F modalities.  The focus of this study was to compare the course evaluations of the same instructor, teaching the same course, in both online and on-ground modalities, to determine whether the degree of student satisfaction differed and, if so, why.


Literature Review

Course Evaluation (CE)

Many course evaluation (CE) instruments seek feedback on multiple dimensions that impact student satisfaction, e.g., course design, instructor, and textbook.  Because higher education continues to rely on such instruments, administrators may need to take into account the more negative ratings of online courses (Rovai et al., 2006).  Course evaluations have become a surrogate for student satisfaction with a course/instructor.  These CEs have largely focused on traditional, on-ground, face-to-face classroom courses, and results have been used for formative, summative, and informational purposes (Rovai et al., 2006).  Research on online CEs has indicated that students may be more extreme, and perhaps more negative, in rating online courses as compared to on-ground courses (Rovai et al., 2006).  Given the potential difference in ratings due to a course’s delivery method, it is important for stakeholders to understand that CE scores may vary depending upon whether the course was taught face-to-face or online.

There are indications that students may have reduced inhibitions when completing course evaluations (Rovai et al., 2006).  Research provides evidence that faculty teaching the same course in a traditional classroom setting may receive higher course ratings than when the course is taught at a distance or online (Fetzer, 2000).  Other researchers have found that while students like the convenience of online courses, the online learning format was not as satisfying as a traditional classroom (Ponzurick, France, & Logar, 2000).  Rovai et al. (2006) found significant course evaluation differences between traditional face-to-face and online students.  Other researchers have concluded that students react to a course based on the content, the instructor, the climate, and themselves, with modality being less a predictor of success or withdrawal; they did not suggest that student ‘satisfaction’ was equivalent across modalities (Dziuban & Moskal, 2011).

Administrator Evaluation of Faculty

Course evaluation surveys continue to be widely used by university administrators to assess course success and faculty performance, to decide pay and promotion, and to prescribe professional development actions (Villanueva & Stewart, 2013).  While CEs often provide feedback on course content and design, as well as on the instructor, some universities standardize course content and design and then evaluate the facilitation and instructional skills of faculty members rather than their ability as instructional designers (Mandernach, Donnelli, Dailey, & Schulte, 2005).  Other methods of faculty assessment are also in play, including observation of classroom behavior, by visiting the physical classroom or by observing instructor/student interaction in the virtual classroom, followed by summative assessment and actions (Tobin, 2004).  More recent studies call for longer-term faculty assessments to include alumni feedback, i.e., career success (Kane, Shaw, Pang, Salley, & Snider, 2016).

Instructor (as a dimension of CE)

Research has indicated that face-to-face students want an instructor of good character and content knowledge, whereas online students suggested that course organization and materials were important (Kelly, Ponton, & Rovai, 2007).  It is the student’s perception of the differences between on-ground and online faculty roles that may result in a difference in how students evaluate faculty on course evaluations.  Research indicates two key instructor factors that influence CEs: 1) instructor commitment/affective relationship; and 2) the role the instructor plays in the class.  Instructors need an attitude that embodies commitment and investment in students (Brinthaupt, Fisher, Gardner, Raffo, & Woodard, 2011).  The Hanover Research Council’s study (2009) found that teaching, in action, involves matters of ‘presence,’ student involvement, professor interaction, and adapting to technology advancements.  Instructor-student interactivity becomes the “heart and soul of effective asynchronous learning,” according to Pelz (2004).
       
Keengwe and Kidd’s (2010) study of online pedagogy found that the online instructor’s role contains four categories: pedagogical, social, managerial, and technical.  The pedagogical role revolves around educational facilitation.  The social role involves creating a friendly social environment that facilitates online learning.  The managerial role includes agenda setting, pacing, objective setting, rulemaking, and decision-making.  The technical role depends on instructors first becoming comfortable with the technology being used and then being able to transfer that level of comfort to their learners (Keengwe & Kidd, 2010).

In a recent study geared towards understanding faculty’s perceptions of the importance of ‘teaching clarity,’ researchers identified three key instructor behaviors as most important: clearly explaining course goals, teaching in an organized fashion, and using examples (Ribera, BrckaLorenz, Cole, & Laird, 2012).  Research by Puzziferro and Shelton (2008) indicates that clear goals and expectations, multiple active-learning opportunities, and a course assessment strategy are critical in an online course.  The inclusion of a course syllabus and instructions for each segment of the course are paramount to providing ‘clear goals and expectations.’

Research debate has raised the question of whether media/technology impacts student learning, or whether the learning outcome is more attributable to the combination of media/technology and method.  This debate centered on articles written by Richard Clark in 1984, 1985, and 1994, and by Robert Kozma in 1991 and 1994 (Shaffer, 2008).  Clark concluded that media did not impact learning (Shaffer, 2008).  Kozma researched the combination of media and method (1994) and later concluded that media does impact learning (Shaffer, 2008).  There is a constant need for professors to receive quality professional development in order to implement the most up-to-date technology available (McBrien, Jones, & Cheng, 2009).

How teachers assess students is a critical dimension of learning.  Bain (2004) made the point that “outstanding teachers used assessment to help students learn, not just to rate and rank their efforts.”  The Hanover Research Council’s 2009 report listed several assessment practices as important for online teaching, including: enabling learners to assess progress and identify areas for review; using multiple strategies, including self-tests, journals, and writing assignments; and clearly articulating assessment criteria.  Eskey and Schulte (2012) found that instructors should utilize rubrics and provide individualized, constructive feedback to enhance student learning.


Course (as a dimension of CE)

Research has indicated that course design has a significant impact on CEs.  The planning and development phase of a course shapes the student’s perception of the course provisions, and the design of both F2F and online courses requires attention to those provisions.  Bailey and Hendricks (2014) indicate that students think a higher level of technical proficiency is needed to be a successful online student, which suggests that the online environment must be ‘user friendly’ in order to reduce student anxiety.  Technology should support the quality of instruction and course design (Johnson & Aragon, 2003).  Together, these findings indicate that technological training, the course design process, and quality of instruction are critical for positive online course evaluations (DiPietro, Ferdig, Black, & Preston, 2008).

“Instructors should try to create a natural critical learning environment,” says Bain (2004, p. 99).  For online courses, faculty need to address the design of discussion forums so that students become involved with each other and the instructor in the teaching and learning of the course.  Course navigation must be easy for students to understand and to follow.  The course’s technology components must be available to the students, and the students must understand how to use the required technology.  Textbooks, publisher-supplied resources, and course resources must be easy for students to access and use (Puzziferro & Shelton, 2008).  Barnett-Queen, Blair, and Merrick (2005) found that online discussion posts created a student-centered, critical-thinking environment, which was not necessarily the case in face-to-face classrooms.  The student-led discussion posts provided an environment similar to constructivism.

Further support for the clarity elements are embodied in the popular Quality Matters and Blackboard Exemplary Course design specification sets.  These rubrics provide course designers with a wide and deep range of considerations, and when properly implemented, can lead to respective certifications. 

The choice and use of a textbook are also important aspects of a ‘course,’ relative to student evaluation of courses.  Researchers have studied student preferences for print books vs. e-books, as well as textbook characteristics.  Textbook characteristics found to be important to students included relatable examples, self-assessments, good explanations, one concept at a time, and interesting content (Martins, 2014).

Studies involving the ‘pace’ of a course and its impact on student satisfaction or similar dimensions are limited.  In his study of how instructor behaviors impact student satisfaction and learning, Arbaugh recommends that faculty “run courses using compressed schedules, thereby reducing the likelihood that a course merely drags on” (Arbaugh, 2000, p. 51).  Constructivists argue that self-paced courses provide for better learning (Eom, Wen, & Ashill, 2006).

Student (as a dimension of CE)

Research indicates that a student’s expected grade impacts the student’s rating of the teacher (Nowell, 2007).  In a similar study, researchers found that students’ expectations of a good grade (A) led them to higher levels of satisfaction with a given course (Kupczynski, Mundy, & Jones, 2011).

To engage students in learning, courses should require active student participation.  Students must feel that they can seek assistance from the instructor and that the instructor will be responsive to their needs.  Puzziferro and Shelton (2009) posit that cooperative, collaborative, and social aspects are important for high-quality online courses.  Students who experience some level of autonomy in the classroom, via autonomy-supportive faculty, are more likely to engage (Reeve & Jang, 2006).  This indicates that faculty must provide online students with autonomy but be ready to guide students when they seek the instructor’s assistance.

Students take online courses largely for the convenience.  This aspect has to do with the perceived flexibility of coursework and/or the leniency of the instructor regarding a student missing a particular class.  In a study of virtual classroom characteristics and student satisfaction, Arbaugh (2000) used the scale item “Taking this class via the internet allowed me to take a class that I would otherwise need to miss.”  Other researchers, who have taken the ‘student as a customer’ perspective, discussed the use of video recordings of missed classes to enable better alignment of the student with peers and instructor – a virtue of an asynchronously designed online course (McCollough & Gremler, 1999).

In a recent study, Young and Duncan (2014) concluded that online courses were rated lower than F2F courses for 11 course/instructor pairs, though they did not reach that conclusion when comparing the full populations of their F2F and online students on ‘overall evaluation,’ i.e., satisfaction.  For future research, they suggested a more carefully controlled research design comparing the two delivery modes.
   
This research addresses some of the control limitations of the Young and Duncan (2014) study, in that the course content, course sequence, textbook, and learning outcomes were the same in both modalities.  Instructors in this study received the same training in andragogy, online pedagogy, and technology.  Furthermore, this research controlled for the student, i.e., the students were self-selecting graduate students in the same program.  Additionally, this research attempts to identify and measure the variables contributing to ‘satisfaction.’

Research Question 1
Is there a difference in the Rate Class mean, between the online and face-to-face modalities, of the overall population?
H0: No difference exists in Rate Class mean, between online and face-to-face modalities, of the overall population.

Research Question 2
If there is a difference in the Rate Class mean, between the online and face-to-face modalities, of the overall population, then what are the contributing independent variables?
H0: Rate Class is not affected by Instructor, Provisions, Clarity, GPA & Expected Grade, Participation, Pace, nor Missed Class, in the overall ‘online’ population.
H0: Rate Class is not affected by Instructor, Provisions, Clarity, GPA & Expected Grade, Participation, Pace, nor Missed Class, in the overall ‘face-to-face’ population.

Research Question 3
Is there a difference in the Rate Class mean, between the online and face-to-face modalities, for the same course/same instructor?
H0: No difference exists in Rate Class mean, between online and face-to-face modalities, for the same course/same instructor.

Research Question 4
If there is a difference in the Rate Class mean, between the online and face-to-face modalities, for the same course/same instructor, then what are the contributing independent variables to Rate Class, at the course/instructor level?
H0: Rate Class is not affected by Instructor, Provisions, Clarity, GPA & Expected Grade, Participation, Pace, nor Missed Class, at the course/instructor level, in the online modality.
H0: Rate Class is not affected by Instructor, Provisions, Clarity, GPA & Expected Grade, Participation, Pace, nor Missed Class, at the course/instructor level, in the face-to-face modality.

Methods

This study employed a quantitative, non-experimental research design, which assessed approximately 3,500 student course evaluations involving both online and face-to-face MBA classes at a mid-sized, private, mid-western university, where the online delivery was facilitated in the Blackboard LMS in asynchronous mode.  Twenty-one course/instructor combinations (anonymity preserved) of online and face-to-face student course evaluations were obtained over a four-year period to assess student satisfaction.  The faculty in this study were the primary on-ground course instructors, who had also developed and designed the online courses which they taught.  Instructors received the same training in andragogy, online pedagogy, and technology.  The on-ground curriculum was identical to the online curriculum: textbook, topic sequence, assessments, and other assignments were identical in both modalities.  Online and F2F courses were laid out with weekly assignments that supported the week’s learning objectives.  The school mandated that course navigation be organized in the same manner, permitting students to understand the course organization.

The survey instrument, i.e., the student course evaluation, used the surrogate variable Rate Class, an overall course evaluation rating, to assess the student’s satisfaction with the course and instructor.  Rate Class served as the dependent variable in surveys (CEs) conducted in both online and face-to-face modalities and contains four rating points for the class: Excellent, Above Average, Average, and Below Average.  Seven independent variables (Instructor, Provisions, Clarity, GPA & Expected Grade, Participation, Pace, and Missed Classes) were assessed regarding their predictive contribution to Rate Class.  The results of this study have importance to faculty and administrators due to the significance of student course evaluations to faculty performance appraisals, faculty career opportunities, and job satisfaction.

Means Comparison 
The Rate Class variable, for each population, i.e., online and face-to-face, was compared using a t-Test.  This test was executed to determine if there was a statistically significant difference in how each population rated the courses.
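
To make the comparison concrete, the sketch below shows how such a test could be run.  This is illustrative only, not the authors’ code; the data file and column names (“modality,” “rate_class”) are hypothetical.

```python
# Illustrative sketch: independent-samples t-test on Rate Class by modality.
# The file and column names are hypothetical, not from the study.
import pandas as pd
from scipy import stats

evals = pd.read_csv("course_evaluations.csv")
online = evals.loc[evals["modality"] == "online", "rate_class"]
f2f = evals.loc[evals["modality"] == "f2f", "rate_class"]

# Check that within-group variances do not differ significantly,
# mirroring the check reported in the Results section.
_, levene_p = stats.levene(online, f2f)

t_stat, p_value = stats.ttest_ind(online, f2f, equal_var=(levene_p > 0.05))
print(f"Online mean={online.mean():.3f}, F2F mean={f2f.mean():.3f}, "
      f"t={t_stat:.3f}, p={p_value:.4f}")
```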

Factor Analysis
Beyond the assessment for differences in Rate Class mean, a factor analysis was executed to reduce the number of independent variables involved in the student course evaluation.  This factor analysis was performed without respect to specific modality, to enable a factor-by-factor comparison across modalities.  Seven predictor variables (Instructor, Provisions, Clarity, GPA & Expected Grade, Participation, Pace, and Missed Class) were tested for their effects on Rate Class.  Factor analysis also reduced the concern of multicollinearity.
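
A minimal sketch of such a reduction, pooling both modalities, might look like the following; the survey item columns are hypothetical, since the paper does not publish its item-level instrument.

```python
# Illustrative sketch: exploratory factor reduction of CE survey items.
# Item column names (item_*) are hypothetical.
import pandas as pd
from sklearn.decomposition import FactorAnalysis

evals = pd.read_csv("course_evaluations.csv")
item_cols = [c for c in evals.columns if c.startswith("item_")]

fa = FactorAnalysis(n_components=7, random_state=0)
factor_scores = fa.fit_transform(evals[item_cols])  # one score per factor

# Inspect loadings to label the factors (Instructor, Provisions, Clarity, ...)
loadings = pd.DataFrame(fa.components_.T, index=item_cols)
print(loadings.round(2))
```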

Equalization of Sample Sizes, at the Course/Instructor level
In some cases, over the time-span of the data collection, there were fewer offerings of the online classes, resulting in sample size differences between modalities, for a given course/instructor.  Therefore, randomization was used to reduce the larger samples, effectively equalizing the sample sizes, for a given course/instructor.
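
In code, the equalization could be expressed as below; again, this is a sketch under assumed column names, not the study’s actual procedure.

```python
# Illustrative sketch: within each course/instructor pair, randomly
# downsample the larger modality group to match the smaller one.
import pandas as pd

evals = pd.read_csv("course_evaluations.csv")  # hypothetical file

def equalize(pair_df: pd.DataFrame, seed: int = 42) -> pd.DataFrame:
    """Downsample each modality group to the smaller group's size."""
    n = pair_df["modality"].value_counts().min()
    return (pair_df.groupby("modality", group_keys=False)
                   .apply(lambda g: g.sample(n=n, random_state=seed)))

balanced = evals.groupby("course_instructor", group_keys=False).apply(equalize)
```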

Multiple Regression
Multiple regression analyses were used to test the influence of the predictors on the dependent variable Rate Class, for the overall online and face-to-face student evaluations (see Table 1 in the Results section).  Additionally, multiple regression analyses were used to test the influence of the predictors on Rate Class, for each modality, at the course/instructor level (see Table 2 in the Results section).
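
A regression of this form could be fit as follows; the predictor column names are hypothetical stand-ins for the paper’s seven factors.

```python
# Illustrative sketch: OLS regression of Rate Class on the seven predictors
# for the pooled online evaluations; the same call is repeated for the F2F
# pool and for each course/instructor subset. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

evals = pd.read_csv("course_evaluations.csv")  # hypothetical file
predictors = ["instructor", "provisions", "clarity", "gpa_expected_grade",
              "participation", "pace", "missed_class"]
formula = "rate_class ~ " + " + ".join(predictors)

online_fit = smf.ols(formula, data=evals[evals["modality"] == "online"]).fit()
print(online_fit.summary())  # betas, R-squared, adjusted R-squared, F, df
```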

Results

Rate Class Mean: Online vs. Face-to-Face Populations - Overall

Using an independent t test, it was found that the Rate Class means differed, in a statistically significant way, between the online student course evaluations and the face-to-face student course evaluations.  The independent t test was used after first confirming that the variance within the two groups (online and face-to-face) was not significantly different.  Overall, the online Rate Class mean was 3.898 versus the face-to-face Rate Class mean of 4.146, p < .001 (two-tailed, 95% confidence level).
The null hypothesis was rejected:

H0:  No difference existed in Rate Class mean, between online and face-to-face modalities, of the overall population.
The alternate hypothesis was accepted:
H1: A difference existed in the Rate Class mean, between online and face-to-face modalities, of the overall population.

Multiple Regression of Rate Class with Contributing Factors/Variables for Overall Online Population
The null hypothesis was rejected because there was a significant relationship between at least one of the independent variables and Rate Class for online courses.  Table 1 presents a summary of the results.
H0: Rate Class is not affected by Instructor, Provisions, Clarity of Stated Objectives, Student GPA and Expected Grade, Student Participation, Pace, nor Missed Classes, in the overall ‘online’ population. 
The alternative hypothesis was accepted because at least one of the independent variables was useful in explaining/predicting Rate Class, expressed as:
H1: At least one βi is ≠ 0.

Multiple Regression of Rate Class with Contributing Factors/Variables for Overall Face-to-Face Population
The null hypothesis was rejected because there is a significant relationship between at least one of the independent variables and Rate Class for face-to-face courses.   Table 1 presents a summary of the results.
H0: Rate Class is not affected by Instructor, Provisions, Clarity of Stated Objectives, Student GPA and Expected Grade, Student Participation, Pace, nor Missed Classes, in the overall ‘face-to-face’ population.
The alternative hypothesis was accepted because at least one of the independent variables was useful in explaining/predicting Rate Class, expressed as:
H1: At least one βi is ≠ 0.

Table 1

Variables contributing to Rate Class – Overall populations, both modalities (Betas)

  Variable                   Online      F2F
  Instructor                 0.312*      0.239*
  Provisions                 0.208*      0.285*
  Clarity                    0.151*      0.040
  GPA/Grade Expectation      0.059*      0.040*
  Participation              0.137*      0.163*
  Pace                       0.137*      0.163*
  Missed Class               0.074*      0.001

  Rate Class Mean            3.898       4.146
  Sample Size                605         2903
  R²                         0.572       0.505
  Adjusted R²                0.567       0.504
  F                          114.186     422.583
  df                         7           7

*p < .05

Note.
Instructor is positively correlated with Rate Class in both online and F2F modes.
Provisions is positively correlated with Rate Class in both online and F2F modes.
Clarity - no commonality across modes.
GPA/Grade Expectation is positively correlated with Rate Class in both online and F2F modes.
Participation is positively correlated with Rate Class in both online and F2F modes.
Pace is positively correlated with Rate Class in both online and F2F modes.
Missed Class - no commonality across modes.

Rate Class Mean Comparison Between Online and Face-to-Face Populations – Course/Instructor

Since the Rate Class mean was significantly different between the online and face-to-face course evaluations within the overall population, the analysis proceeded to compare the Rate Class means between the online and face-to-face student course evaluations for the same course/instructor combinations.  Four course/instructor combinations had Rate Class means that differed, in a statistically significant way, between the online and the face-to-face modalities.

OL 30 B vs. F2F 30 B: 3.375 vs. 4.188, p = .030 (two-tailed, 95% confidence level)
OL 30 R vs. F2F 30 R: 2.909 vs. 4.545, p = .003 (two-tailed, 95% confidence level)
OL 85 Q vs. F2F 85 Q: 3.793 vs. 4.345, p = .039 (two-tailed, 95% confidence level)
OL 90 E vs. F2F 90 E: 3.611 vs. 4.574, p < .001 (two-tailed, 95% confidence level)

The null hypothesis was rejected: 
H0: No difference existed in Rate Class mean, between the online and face-to-face modalities, for the same course/instructor.
The alternate hypothesis was accepted:
H1: A difference existed in the Rate Class mean, between online and face-to-face modalities, at the course/instructor level.

Multiple Regression of Rate Class with Contributing Factors/Variables for Course/Instructor, in Both Modalities
Regarding the multiple regression of Rate Class, with the contributing variables, for each course/instructor, in both modalities, the results follow and are presented in Table 2.

In ten of the 21 sample sets, the F test was significant, indicating the models contained some variables contributing to the dependent variable Rate Class.  Adjusted R² for the online samples ranged from 0.413 to 0.742, indicating a good model fit (much of the variance in the dependent variable is explained by the combination of independent variables).  For the face-to-face mode, the adjusted R² ranged from 0.201 to 0.709, with six of the samples exceeding 0.600, likewise indicating good fit.
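
For reference, adjusted R² follows from R², the sample size n, and the number of predictors k (seven here) via the standard formula adjusted R² = 1 - (1 - R²)(n - 1)/(n - k - 1); as a check against Table 1, the pooled online sample gives 1 - (1 - 0.572)(604/597) ≈ 0.567, matching the reported value.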

Commonalities found (statistically significant in more than one sample set):

Across both modalities:
Instructor was positively correlated with Rate Class in both delivery modes.
Provisions was positively correlated with Rate Class in both delivery modes.
Pace was positively correlated with Rate Class in both delivery modes.

Within the online modality:
Participation was positively correlated with Rate Class in online mode.

Table 2

Variables contributing to Rate Class: Course/Instructor    (Betas)



Discussion and Conclusion
The purpose of this study was to investigate:

  1. Whether students evaluate courses differently when they are taught online versus face-to-face.
  2. The contributing factors influencing course ratings for both online and face-to-face modalities.
  3. Whether, while controlling for course and instructor, students evaluate courses differently when they are taught online versus face-to-face.
  4. The variables contributing to Rate Class, in both online and face-to-face modalities, for the same course/instructor.

Generally, it was found that students do evaluate courses differently in online versus face-to-face courses.  The ratings of the online courses were lower than the ratings of the F2F courses, for the same instructor, in multiple instances.  Regarding the assessment of variables contributing to course ratings, only the Participation variable stood out as statistically significant specifically for the online courses.
Given the results of this study, the following considerations are offered for administrators and faculty.

  1. The results indicate that online courses may receive lower course ratings.  This could have administrative ramifications for policies governing student evaluation of online courses and instructors.  Administrators using the same course evaluation scale may need to adjust their faculty performance assessments depending upon delivery mode, or different instruments may be needed to measure courses in the two modalities.
  2. Administrators may need to expend extra effort in training and coaching faculty who are deployed to teach online courses.  Through their course ratings, students have evaluated online courses lower regarding overall satisfaction.  Faculty need to be coached about the factors that contribute to course ratings in each modality.
  3. Faculty need to be aware of the potential for lower course ratings due to modality.  Knowing this potential would permit a faculty member to be prepared for lower scores, to be better informed about which modality they may prefer, and to hone the specific skills indicated to be important for that modality.
  4. Course designers need to develop activities that bolster student participation in the design and facilitation of online courses.  Faculty need to be cognizant of online students’ expectations for faculty/student participation and feedback, and must make a conscious effort to ‘be in’ the online classroom in order to meet those expectations.

Limitations of the Study

Interpretations concerning causality should not be made.  The purpose of this study was to determine whether students evaluated courses differently when taught online versus face-to-face and to examine antecedents contributing to the ratings.  The study did not attempt to identify any causal order of factors that cause students to rate online and face-to-face courses differently.

The study was limited to a private university in Missouri.  The institution’s characteristics could restrict generalization of the study’s results and findings.  Factors that influence course ratings may vary by individual institution.  Additional research, with different types of institutions, could provide a basis to generalize the relationships found in this study.

There may be other variables which are significant to course ratings that were not measured by this study.  Such variables may include: a) student expectations for online or traditional courses; b) student preferences of delivery methods; c) the expectations and academic rigor of the instructor;  d) personal factors that affect the student which are unrelated to the course; or e) instructor attitude, affective, and voice.

This study had three critical limitations. First, no interpretations concerning causal relationships were made. Second, the results were limited by the cross-section of students and faculty represented in the study. Third, additional variables may have had a significant impact on course ratings. These limitations may provide fertile ground for future research on course evaluations across a variety of delivery modalities. Another variant for future study could involve different instructors teaching the same course in both modalities; this might highlight instructor characteristics, whether inherent or learned, that contribute to success in a particular delivery modality.

 



References

Allen, I. E., & Seaman, J. (2013). Changing course: Ten years of tracking online education in the United States. Newburyport, MA: Sloan Consortium.

Arbaugh, J. B. (2000). Virtual classroom characteristics and student satisfaction with internet based MBA courses. Journal of Management Education, 24(1), 32-54. doi: 10.1177/105256290002400104

Bain, K. (2004). What the best college teachers do. Cambridge MA: Harvard University Press

Bailey, S., & Hendricks, S. (2014). What really matters? Technological proficiency in an online course. Online Journal of Distance Learning Administration, 17(2).

Barnett-Queen, T., Blair, R., & Merrick, M. (2005). Web-based education in the human services: models, methods, and best practices. Journal of Technology in Human Services, 23(1/2, 3/4).

Brinthaupt, T., Fisher, L., Gardner, J., Raffo, D., & Woodard, J. (2011). What the best online teachers should do. Journal of Online Learning and Teaching, 7(4).

DiPietro, M., Ferdig, R. E., Black, E. W., & Preston, M. (2008). Best practices in teaching K-12 online: Lessons learned from Michigan virtual school teachers. Journal of Interactive Online Learning, 7(1), 10-35.

Dziuban, C., & Moskal, P. (2011). A course is a course is a course: Factor invariance in student evaluation of online, blended, and face-to-face learning environments. Internet and Higher Education, 14, 236-241.

Eom, S. B., Wen, H. J., & Ashill, N. (2006). The determinants of students' perceived learning outcomes and satisfaction in university online education: An empirical investigation. Decision Sciences Journal of Innovative Education, 4, 215-235. doi:10.1111/j.1540-4609.2006.00114

Eskey, M., & Schulte, M. (2012). Comparing attitudes of online instructors and online college students: Quantitative results for training, evaluation, and administration. Online Journal of Distance Learning Administration, 15(5).

Fetzer, S. J. (2000). A pilot study to investigate the impact of interactive television on student evaluation of faculty effectiveness. Journal of Nursing Education, 39(2), 91-93.

Hanover Research Council. (2009). Best practices in online teaching strategies. Washington DC: Hanover Research Council.

Johnson, S., & Aragon, S. (2003). An instructional strategy framework for online learning environments. New Direction Adult Continuing Education, 100. doi:10.1002/ace.117

Kane, R.T., Shaw, M., Pang, S., Salley, W., & Snider J. B. (2016). Faculty professional development and student satisfaction in online higher education. Online Journal of Distance Learning Administration, 19(2).

Keengwe, J., & Kidd, T. (2010). Towards best practices in online learning and teaching in higher education. Journal of Online Learning and Teaching, 6(2).

Kelly, H. F., Ponton, M. K., & Rovai, A. P. (2007). A comparison of student evaluation of teaching between online and face-to-face courses. The Internet and Higher Education, 10, 89-101. doi: 10.1016/j.iheduc.2007.02.001

Kupczynski, L., Mundy, M., & Jones, D. (2011). A study of factors affecting online student success at the graduate level. Journal of Instructional Pedagogies. Retrieved February 16, 2016, from http://www.aabri.com

Mandernach, B. J., Donnelli, E., Dailey, A., & Schulte, M. (2005). A faculty evaluation model for online instructors: Mentoring and evaluation in the online classroom. Online Journal of Distance Learning Administration, 8(3), 1-10.

Martins, N. (2014). Determining textbook learning enhancement as perceived by students and lecturers. Procedia - Social and Behavioral Sciences, 112, 57-63. doi:10.1016/j.sbspro.2014.01.1139

McBrien, J. L., Jones, P., & Cheng, R. (2009). Virtual spaces: Employing a synchronous online classroom to facilitate student engagement in online learning. International Review of Research in Open and Distance Learning, 10(3), 1-17.

McCollough, M. A., & Gremler, D. D. (1999), Guaranteeing student satisfaction: An exercise in treating students as customers. Journal of Marketing Education, 21, 118-130. doi: 10.1177/0273475399212005

Nowell, C. (2007). The impact of relative grade expectations on student evaluation of teaching. Economics Network. Retrieved March 3, 2016, from http://economicsnetwork.ac.uk

Nowell, C., Gale, L. R., & Handley, B. (2010). Assessing faculty performance using student evaluations of teaching in an uncontrolled setting. Assessment & Evaluation in Higher Education, 35(4), 463-475. doi:10.1080/02602930902862875

Ponzurick, T. G., France, K., & Logar, C. M. (2000). Delivering graduate marketing education: An analysis of face-to-face versus distance education. Journal of Marketing Education, 22(3), 180-187. doi: 10.1177/0273475300223002

Pelz, W. (2004). (My) three principles of effective online pedagogy. Journal of Asynchronous Learning Networks, 8(3).

Puzziferro, M., & Shelton, K. (2008).  A model for developing high quality online courses: Integrating a systems approach with learning theory. Journal of Asynchronous Learning Networks, 12(3-4).

Puzziferro, M., & Shelton, K. (2009). Supporting online faculty – revisiting the seven principles (a few years later). Online Journal of Distance Learning Administration, 12(3).

Reeve, J., & Jang, H. (2006). What teachers say and do to support students’ autonomy during a learning activity. Journal of Educational Psychology, 98(1), 209–218. doi: http://dx.doi.org/10.1037/0022-0663.98.1.209

Ribera, T., BrckaLorenz, A., Cole, E. R., & Laird, T. F. N. (2012). Examining the importance of teaching clarity: Findings from the faculty survey of student engagement. Proceedings from Annual Meeting of the American Educational Research Association, Vancouver, British Columbia, Canada.

Rovai, A. P., Ponton, M. K., Derrick, M. G., & Davis, J. M. (2006). Student evaluation of teaching in the virtual and traditional classrooms: A comparative analysis. The Internet and Higher Education, 9, 23-35. doi:10.1016/j.iheduc.2005.11.002

Shaffer, S. C. (2008). Media and learning – Clark vs. Kozma debate [Web log comment]. Retrieved from http://sites.psu.edu/shafferpsy/2008/09/30/media-and-learning-clark-vs-kozma-debate

Tobin, T. (2004). Best practices for administrative evaluation of online faculty. Online Journal of Distance Learning Administration, 7(2).

Villanueva, C., & Stewart, S. (2013). Evaluating student-faculty evaluations. Raleigh, NC: The John William Pope Center for Higher Education Policy.

Young, S., & Duncan, H. E. (2014). Online and face-to-face teaching: How do student ratings differ? Journal of Online Learning and Teaching, 10(1), 70-79.

 

Appendix A


Table A1

Table A2


Online Journal of Distance Learning Administration, Volume XIX, Number 4, Winter 2016
University of West Georgia, Distance Education Center