A Comparison of Instructor Evaluations for Online Courses



Karyn W. Tunks, Ph.D.
University of South Alabama
ktunks@usouthal.edu

Mary F. Hibberts, M.Ed.
University of South Alabama
mfk701@jagmail.southalabama.edu


Abstract

Online learning is growing at a rapid rate across the United States (Durrington, Berryhill, & Swafford, 2006; Tabatabaei, Schrottner, & Reichgelt, 2006). However, course evaluation systems have not kept pace with these changes and are often inadequate for evaluating the unique expectations and demands faced by online instructors. Typically, online instructors are evaluated using instruments designed for face-to-face classroom instruction (Mandernach, Donnelli, Dailey, & Schulte, 2005). As a result, important indicators of effective teaching in an online format are not evaluated by students. Key competencies for online instruction include instructor response rate and availability, frequency and quality of instructor presence, community building, assessment, and overall management of the course (Luck, 2001; Firch & Montambeau, 2000). Evaluation designed specifically for online instructors is beneficial because it informs instructors of how their specific behaviors are viewed by students, provides data to administrators for faculty evaluation purposes (Tobin, 2004), and supports the planning and provision of professional development opportunities (Mandernach, Donnelli, Dailey, & Schulte, 2005).

This study compares two faculty evaluations. The traditional evaluation required by the university in all classes, regardless of the type of delivery (face-to-face, hybrid, online), is compared to an evaluation (the Online Instructor Evaluation) designed specifically to evaluate online teaching competencies. Participants identified the Online Instructor Evaluation as providing the more useful and relevant feedback for evaluating online instructors.

A Comparison of Instructor Evaluations for Online Courses

Online learning is growing at a rapid rate across the U.S. (Dunlap, Sobel, & Sands, 2007; Durrington, Berryhill, & Swafford, 2006; Stoltenkamp, Kies, & Njenga, 2007; Tabatabaei, Schrottner, & Reichgelt, 2006). Singh and Pan (2004) identified over 54,000 online courses offered at the postsecondary level, and by 2008, 83% of higher education institutions offered some form of distance learning (Allen & Seaman, 2008). Online course delivery continues to expand in the effort to satisfy growing demand (Gallien & Oomen-Early, 2008; Kim, Bonk, & Zeng, 2005).

Along with the introduction and rapid growth of online instruction came concern for equivalency between the two delivery formats. Educators and researchers wanted to know if online instructors were able to present information and evaluate student learning as effectively as in a traditional classroom. Comparisons with traditional classroom learning have been made in areas such as critical thinking (Bullen, 1998; Mandernach, Forrest, Babutzke, & Manker, 2009) and quality of interactions between instructors and students (Polhemus, Shih, Richardson, & Swan, 2000; Rourke, Anderson, Garrison, & Archer, 2001; Ward & Newlands, 1998). Results showed that online courses can offer distinct benefits. For example, Paskey (2001) found that among graduate students, online discussion groups generated more involvement in learning than voluntary class discussions in a face-to-face environment because all students were required to participate. Over time, there has been a major shift in the focus of research: as the online format gained momentum, so did a pedagogical foundation specific to best practices in online classrooms.

Today, online instructional practices are being reviewed and scrutinized to determine the most effective methods for delivering technology-based instruction. Studies have examined the opinions of university faculty and administrators (Mandernach, Donnelli, Dailey, & Schulte, 2005) as well as student satisfaction and perceived learning (Swan, 2001). Graham, Cagiltay, Lim, Craner, and Duffy (2001) identified a set of principles for effective teaching in online classrooms: student-faculty contact, cooperation among students, active learning, prompt feedback, and communication of high expectations. Other key competencies that have been identified are instructor response rate and availability, frequency and quality of presence in the online classroom, facilitation of discussions, quality of instructor-created supplemental materials, and overall management of the course (Mandernach, Donnelli, Dailey, & Schulte, 2005). While some of these practices are also useful in the face-to-face classroom, they are considered paramount to academic success in an online learning format.

As the need for and understanding of online instruction continues to grow, it is imperative that systems be in place for evaluating instructors who teach online courses (Mandernach, Donnelli, Dailey, & Schulte, 2005). These evaluation systems should assess teaching behaviors specific to online teaching and pinpoint areas that require remediation in order to promote best practices in online learning. Since there are key competencies specific to online teaching, evaluation tools must be designed to include them; generic course evaluation tools built for traditional classroom instruction are unlikely to cover them all.

Using an evaluation tool designed specifically for online instruction is also important to the faculty review process. Typically, administrators who are directly responsible for evaluating faculty have experience teaching in a traditional classroom. This provides a perspective from which to evaluate others. However, it cannot be assumed that those in a position of conducting faculty teaching evaluations have experience in or fully understand online pedagogy (Tobin, 2004). A thorough understanding of the similarities and differences between effective instruction in a face-to-face classroom versus an online classroom is needed, and tools designed specifically for evaluating online teaching are crucial. The information derived from evaluation tools can be used to conduct annual evaluations of online teaching as well as to identify areas of remediation.

Purpose of the Study

The purpose of this study was to compare two different evaluations to determine which, according to students, provides the more useful information for evaluating instructors of online classes. The study was conducted in the College of Education at a university located in the Southern United States. The first instrument used in the comparison was the traditional instructor evaluation (TIE), which is required in all classes regardless of the platform for delivering instruction (i.e., face-to-face, hybrid, online). The second instrument was the Online Instructor Evaluation (OIE), which was designed specifically to evaluate key competencies of online instruction. These competencies were grouped into three areas: Climate and Community, Discussion Interaction and Facilitation, and Grading/Assessment. Data were collected to determine which of the two surveys provides the more useful and relevant feedback for evaluating instructors of online classes.

Method
           
A survey was conducted which compared the traditional instructor evaluation (TIE) with the more specific Online Instructor Evaluation (OIE) designed for this study. The TIE is the instrument required by the university to evaluate teaching in all classes. Even though the instrument was revised in 2011, the same evaluation is used for all classes regardless of the instructional delivery (face-to-face, hybrid, online). The revised evaluation still reflects primarily traditional classroom teaching behaviors, such as “The instructor has clear policies (e.g., grading, attendance, office hours, and assignments)” and “The instructor was well prepared for class meetings.” Only one item is specific to online instruction: “The instructor was well prepared for online class sessions and activities.”
           
The Online Instructor Evaluation (OIE) is specifically designed to evaluate competencies associated with online instruction. It is based on the Online Instructor Evaluation System (OIES) developed by Mandernach, Donnelli, Dailey, and Schulte (2005), who created the OIES after a comprehensive review of the literature on best practices in online pedagogy. The purpose was to inform online instructors of accepted standards in online teaching, hold them accountable for best practices through evaluation, and improve teaching through professional development. Guidelines for good practice in technology-supported classes were emphasized, including reciprocal student-faculty contact, inclusion of active learning strategies, prompt feedback, promotion of student time-on-task, clear communication of high expectations, and respect for diversity in student talents and ways of learning (Chickering & Ehrmann, 1996).
           
After obtaining permission from the authors, the original instrument was revised slightly for the purpose of this study. The original tool developed by Mandernach, Donnelli, Dailey, and Schulte (2005) was used not only to evaluate online teaching but also to target specific areas in need of remediation. In our study, the tool was adapted and used to elicit information from students about online instructor behaviors for the purpose of comparison with the traditional instructor evaluation already in use. Like the OIES, our instrument was designed to focus exclusively on instructor competencies, not on the content of the classes.

Participants
           
The sample comprised students enrolled in online classes in a College of Education. Of the 110 students enrolled in the classes, 64 participated in the survey, resulting in a 58% response rate. Since a 30% response rate is considered average for an online survey (Hamilton, 2003), the 58% rate is considered above average.

Data Collection and Analysis

At the end of each semester, students are required to complete the traditional instructor evaluation. In online classes, students must complete the evaluation before they can gain access to the course. An announcement was posted within the online classes inviting students to participate in a “supplemental evaluation” (the OIE). After completing both evaluations, students completed a brief questionnaire comparing the two. They were asked to rate the ability of each instrument to evaluate key competencies associated with effective online teaching behaviors: timeliness of the instructor’s responses (i.e., posting grades, replying to emails), quality and frequency of the instructor’s interaction in discussions, feedback provided in the comments feature of the grade book, and overall quality of online communication and interactions. Paired t tests were used to compare mean ratings for responses regarding the TIE and OIE evaluation forms.
           
Students were asked a final question in which they identified which survey enabled them to most accurately evaluate instructor performance in the online class. The choices were “traditional instructor survey,” “online instructor survey,” or “no difference/no opinion.”
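To make the analysis concrete, the following minimal sketch shows how a paired t test, a Bonferroni adjustment, and a paired-samples effect size could be computed in Python with SciPy. The ratings below are hypothetical placeholders, not study data; the article does not report which software or which variant of Cohen's d was used, so the sketch adopts the common difference-score formulation.

```python
# Minimal sketch of the paired-samples analysis described above.
# Ratings are hypothetical; the study analyzed 64 paired student
# ratings per item on a 1-5 scale.
import numpy as np
from scipy import stats

tie = np.array([4, 5, 4, 3, 4, 5, 4, 4])  # hypothetical TIE ratings
oie = np.array([5, 5, 5, 4, 5, 5, 4, 5])  # hypothetical OIE ratings

# Paired t test comparing mean ratings on the two forms
t_stat, p_value = stats.ttest_rel(oie, tie)

# One common paired-samples effect size: mean difference divided
# by the standard deviation of the difference scores
diff = oie - tie
cohens_d = diff.mean() / diff.std(ddof=1)

# Bonferroni adjustment for the four comparisons in this study
alpha_adjusted = 0.05 / 4  # = .0125

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
print("significant at adjusted alpha:", p_value < alpha_adjusted)
```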

Results


Evaluation ratings of the ability to assess instructor characteristics and behaviors in an online course (i.e., timeliness of the instructor’s responses, quality and frequency of the instructor’s interaction in discussions, feedback provided in the comments feature of the grade book, and overall quality of online communication and interactions) using the two instruments, the traditional instructor evaluation (TIE) and the Online Instructor Evaluation (OIE), were compared through four paired t tests. All four mean comparisons between the TIE and OIE ratings were statistically significant (see Table 1). Overall, students rated the OIE as the evaluation tool that better enabled them to evaluate instructor characteristics in the online courses (p ≤ .001).

Table 1.

Paired t Test Results Showing Significant Differences on Mean Ratings for the Ability to Assess Instructor Behaviors with the TIE and OIE Questionnaires

Variable                                                     TIE    OIE      t   df      p   Cohen's d
Ability to assess instructor response time                  4.22   4.75   3.60   63   .001         .57
Ability to assess quality and frequency of
  instructor participation                                  4.14   4.77   3.99   63  <.001         .61
Ability to assess feedback provided in grade book           3.77   4.69   4.62   63  <.001         .75
Ability to assess overall quality of online
  communication and interactions                            4.17   4.80   3.80   63  <.001         .67

Note. Ratings were based on a five-point Likert-type scale of the degree to which students agreed that the TIE and OIE enabled them to evaluate the specific instructor characteristics (1 = Not at all, 2 = Very little, 3 = Undecided/Don’t know, 4 = Somewhat, 5 = To a great degree). All significance tests were two-tailed with a Bonferroni-adjusted alpha of .0125.
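For clarity, the Bonferroni-adjusted alpha reported in the table note follows from dividing the familywise significance level (assumed here to be the conventional .05, which the article does not state explicitly) by the number of paired comparisons:

\[
\alpha_{\text{adjusted}} = \frac{\alpha}{k} = \frac{.05}{4} = .0125
\]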

Students were asked a final question in which they identified which questionnaire enabled them to most accurately evaluate instructor performance in the online class. The majority of students (54.7%) responded that the OIE best enabled them to assess instructor performance, a finding consistent with the statistical results.

Discussion
           
It is widely acknowledged that good teaching requires time, effort, commitment, knowledge, presence, and ingenuity (Weimer, 2010). Yet quality instruction at the university level is often taken for granted. Although evaluation of instructors is required at the end of the semester, evaluation tools are typically designed for face-to-face classes, making it difficult for students to evaluate teaching behaviors in an online setting. Simply adding items that relate to online instruction to a traditional survey is not adequate for assessing the wide range of teaching behaviors unique to online instruction.
           
Results of this study indicate the need for a separate instructor evaluation designed specifically for online classes. Policy makers and administrators should give strong consideration to adopting an alternative evaluation system for online instruction, and online teaching faculty should encourage this effort. In the meantime, online faculty can develop their own supplemental evaluations to assess their teaching behaviors. Supplemental evaluations can be used in addition to the surveys required by the university; they should focus on behaviors relevant to quality online instruction, and the results should be used to improve instruction where needed.


References


Allen, I. E., & Seaman, J. (2008). Staying the course: Online education in the United States, 2008. Needham, MA: Sloan Consortium.

Bullen, M. (1998). Participation and critical thinking in online university distance education. Journal of Distance Education, 13(2), 1-32.

Chickering, A., & Ehrmann, S. (1996, October). Implementing the seven principles: Technology as lever. American Association for Higher Education Bulletin, 49(2), 3-6.

Dunlap, J. C., Sobel, D. M., & Sands, D. (2007). Supporting students’ cognitive processing in online courses: Designing for deep and meaningful student-to-content interactions. TechTrends, 51(4), 20-31.

Durrington, V. A., Berryhill, A., & Swafford, J. (2006). Strategies for enhancing student interactivity in an online environment. College Teaching, 54(1), 190-193.

Gallien, T., & Oomen-Early, J. (2008). Personalized versus collective instructor feedback in the online courseroom: Does type of feedback affect student satisfaction, academic performance and perceived connectedness with the instructor? International Journal on E-Learning, 7(3), 463-476.

Graham, C., Cagiltay, K., Lim, B., Craner, J., & Duffy, T. M. (2001). Seven principles of effective teaching: A practical lens for evaluating online courses. Retrieved May 14, 2012, from http://technologysource.org/?view=article&id=274

Hamilton, M. B. (2003). Online survey response rates and times: Background and guidance for industry. Tercent, Inc.

Kim, K., Bonk, C., & Zeng, T. (2005, June). Surveying the future of workplace e-learning: The rise of blending, interactivity, and authentic learning. E-Learn Magazine. Retrieved May 20, 2012, from http://www.elearnmag.org/subpage.cfm?section=research&article=5-1

Luck, A. (2001, January/February). Developing courses for online delivery: One strategy. The Technology Source. Retrieved from http://technologysource.org/article/developing_courses_for_online_delivery/

Mandernach, B. J., Donnelli, E., Dailey, A., & Schulte, M. (2005). A faculty evaluation model for online instructors: Mentoring and evaluation in the online classroom. Online Journal of Distance Learning Administration, 8(3), 1-30.

Mandernach, Forrest, Babutzke, & Manker (2009). The role of instructor interactivity in promoting critical thinking in online and face-to-face classrooms. Journal of Online Learning and Teaching, 5(1), 49-62.

Paskey, J. (2001, April 26). A survey compares two Canadian MBA programs, one online and one traditional. Chronicle of Higher Education.

Polhemus, L., Shih, L., Richardson, J., & Swan, K. (2000, November). Building an affective learning community: Social presence and learning engagement. Paper presented at the World Conference on the WWW and the Internet (WebNet), San Antonio, TX.

Reeves, T. (1997). Evaluating what really matters in computer-based education. Retrieved from http://bengal.missouri.edu/~sfc42/reeves.html

Rourke, L., Anderson, T., Garrison, D., & Archer, W. (2001). Methodological issues in the content analysis of computer conference transcripts. International Journal of Artificial Intelligence in Education, 12(1), 8-22. Retrieved from http://iaied.org/pub/951/file/951_paper.pdf

Singh, P., & Pan, W. (2004). Online education: Lessons for administrators and instructors. College Student Journal, 38(2), 302-308.

Stoltenkamp, J., Kies, C., & Njenga, J. (2007). Institutionalising the eLearning division at the University of the Western Cape (UWC): Lessons learnt. International Journal of Education and Development using ICT, 3(4).

Swan, K. (2001). Virtual interactivity: Design factors affecting student satisfaction and perceived learning in asynchronous online courses. Distance Education, 22(2), 306-331.

Tabatabaei, M., Schrottner, B., & Reichgelt, H. (2006). Target populations for online education. International Journal on E-Learning, 5(3), 401-415.

Tobin, T. (2004). Best practices for administrative evaluation of online faculty. Online Journal of Distance Learning Administration, 7(2).

Ward, M., & Newlands, D. (1998). Use of the web in undergraduate teaching. Computers and Education, 31(2), 171-184.

Weiss, R., Knowlton, D., & Speck, B. (Eds.). (2004). Principles of effective teaching in the online classroom. San Francisco: Jossey-Bass.


Online Journal of Distance Learning Administration, Volume XVI, Number II, Summer 2013
University of West Georgia, Distance Education Center