Current Practices of Online Instructor Evaluation in Higher Education


Jonathan E. Thomas
Brigham Young University
jonathanethomas2002@gmail.com

Charles R. Graham
Brigham Young University
charles.graham@byu.edu

Anthony A. Piña
Sullivan University
APina@sullivan.edu

Abstract

As enrollment of students in online courses has steadily increased over the last few decades, very little attention has been given to online instructor evaluation. This is an area of online education that needs additional research to better ascertain the current state of online instructor evaluation as well as to discover ways to improve its effectiveness. The purpose of this study is to identify how institutions evaluate online instructors and why. Findings indicated that the post-secondary institutions studied utilized many types of evaluation, including student evaluations, administrative evaluations, peer evaluations, and self-evaluations, and that they also employed metrics as part of their evaluation processes. Recommendations for the use of triangulation, course observation rubrics, formative evaluations, and metrics as part of an online instructor evaluation system are provided.

Introduction

The rapid growth of online learning requires careful measures to ensure that courses are designed and facilitated according to quality standards.  Evaluation is a critical component in attaining these standards, and it is also critical to ensure that evaluations provide accurate information (Rothman, Romeo, Brennan, & Mitchell, 2011; Tobin, Mandernach, & Taylor, 2015).  Through evaluations of online courses and the instructors who teach them, important information can be conveyed to instructors, instructional designers, and administrators to improve course quality and meet learning objectives.  These evaluations inform administrative decisions such as tenure and promotion (Academic Senate for California Community Colleges, 2013; Darling, 2012; Donovan, 2006; Roberts, Irani, Telg, & Lundy, 2005; Stanišić Stojić, Dobrijević, Stanišić, & Stanić, 2014) as well as professional development (Academic Senate for California Community Colleges, 2013; Dana & Havens, 2010; DeCosta, Bergquist, & Holbeck, 2015; Mandernach, Donnelli, Dailey, & Schulte, 2005; Palloff & Pratt, 2008).

Unfortunately, the systematic evaluation of online courses and instructors is surprisingly underdeveloped, considering the rapid growth of online education (Thomas & Graham, 2017; Berk, 2013; Rothman, Romeo, Brennan, & Mitchell, 2011).  Berk (2013) admitted that “evaluation of these online courses and the faculty who teach them lags far behind” course production, especially “in terms of available measures, quality of measures, and delivery systems” (p. 141).  Some institutions still do not perform any evaluation of online instructors, while others perform evaluations that do not measure the unique aspects of online instructor performance (Piña & Bohn, 2014).  Many instructor evaluations also tend to focus more on the instructional decisions reflected in course design (Drouin, 2012) than on the specific behaviors of the instructor.  This is problematic because not all instructors are responsible for the design of the courses they teach; in those cases, teaching should be evaluated separately from course design (Piña & Bohn, 2014; Schnitzer & Crosby, 2003; Schulte, 2009).  More research is needed to inform better practices for evaluating online instructors separately from course design.

The purpose of this study is to inform improved evaluation practices of online instructors by examining current practices of instructor evaluation at post-secondary institutions.

Literature Review

The current landscape of online instructor evaluation includes many of the measures recommended by Berk (2005) as ways to evaluate teaching effectiveness, including student, administrative, peer, and self-evaluations.  In a survey sent to attendees of the Distance Learning Administration Conference and to members of the Association for Educational Communications and Technology (AECT), Piña and Bohn (2014) found that the most commonly used method for measuring online instructor effectiveness was student evaluations (89%), followed by supervisor evaluations (47%), peer evaluations (32%), and other (3%).  That 92% of institutions perform some kind of evaluation of their online instructors is a positive sign of the maturing nature of online learning.  This is significant, given that the previous decade saw little attention given to evaluating online courses and their instructors while online courses and enrollments increased at a feverish pace (Bangert, 2004; Compora, 2003).

Traditional and Online Course Evaluations

In their review of research on student evaluations, Benton and Cashin (2012) affirmed that traditional and online courses are similar enough that there is no need to develop a new instrument.  Their conclusion was based on an earlier work by Benton, Webster, Gross, and Pallett (2010), who tested a student evaluation instrument, focused on students’ views of learning objectives and on whether instructors used a variety of teaching methods, in both online and traditional courses.  They found minimal differences between the results of students in the two modalities.  Dziuban and Moskal (2011) found that traditional face-to-face student evaluation instruments can also measure online instructor effectiveness.  Moskal, Dziuban, and Hartman (2013) added that effective teaching, including providing feedback, answering questions, and so on, is the same regardless of modality.

Berk (2013) agreed that a student evaluation instrument used in a traditional course could also be used in an online course if it will be used to inform summative personnel decisions.  Perhaps this is because traditional face-to-face student evaluations tend to address instructor effectiveness in a very broad way, independent of modality. 

Berk also suggested that peer evaluations of online teaching ought to differ from those performed in face-to-face courses and recommended that a new instrument be developed for peer or self-evaluation that is specific to online instructors.  Many feel that colleagues are better equipped than students to evaluate teaching effectiveness (Darling, 2012), and this may be even more true in online courses, where additional competencies are necessary to be an effective instructor.

Evaluations Emphasize Course Design

In response to the question, “Do you use a rubric to measure online instructor quality?” Piña and Bohn (2014) found that the sample was almost evenly divided among those that use a rubric developed by Quality Matters (33.6%), those that use a rubric the institution developed on its own (32.9%), and those that do not use any rubric at all (32.9%).  Almost one third of those that developed their own rubric admitted that they based it on the Quality Matters rubric, which focuses on course design and not teaching.  This is consistent with Drouin’s (2012) assertion that post-secondary institutions typically use some kind of general course rubric “as checklists” for peer evaluations (p. 61).

These general course rubrics, along with the student evaluations developed specifically for online courses, focus heavily on course design (Tobin et al., 2015).  The student evaluations developed by Stewart, Hong, and Strudler (2004), Roberts et al. (2005), Bangert (2008), and Rothman et al. (2011) devote 70%, 75%, 69%, and 88%, respectively, of the items on their instruments to elements of course design (Thomas & Graham, 2017).  This heavy emphasis on course design is not ill-placed.  Course design is made up of a series of instructional decisions often made by the instructor who teaches the course, and in these instances it is reasonable to evaluate an instructor on the instructional decisions that make up the course design.  However, not all online instructors are responsible for the design of the courses they teach.

The Master Course Model

A prominent model utilized in online education, but strangely absent from the research, is the master course model (Cheski & Muller, 2010; Hill, 2012).  In this model, a team is responsible for course design.  This team may include one or more instructional designers and one or more subject matter experts who may or may not be faculty members.  When the course design is complete, the course is duplicated into as many sections as are necessary.  Instructors, usually subject matter experts themselves, are then assigned to facilitate a course that they did not design.  Consequently, evaluating these instructors with instruments that focus heavily on course design would be inappropriate.  In these circumstances, it is important to ensure that the evaluative instruments effectively evaluate the online instructor separately from the course design.

Methodology

The purpose of this research study is to explore evaluation practices of online instructors at a variety of post-secondary institutions.  It addresses the following research question: How do post-secondary institutions evaluate online instructors and why?

To accomplish this purpose and answer this research question, this study investigated multiple institutional cases.  We utilized purposive sampling to identify all institutions in the United States in each of the following three categories: 4-year for-profit, 4-year private not-for-profit, and 4-year public.  Using data collected by the National Center for Education Statistics’ Integrated Postsecondary Education Data System (IPEDS) and reported in Online Report Card: Tracking Online Education in the United States (Allen, Seaman, Poulin, & Straut, 2016), we identified all institutions that reside in the U.S., are four-year degree-granting institutions offering baccalaureate degrees and above, and have more than 10,000 enrolled distance education students (some of whom are first-time, full-time undergraduate students).  All data are from the 2015 calendar year.  By doing this, we identified 15 for-profit, 9 private not-for-profit, and 24 public universities.

From these 48 institutions, we sought representation of at least two institutions from each category for inclusion in the study.  This is consistent with previously published research that identified three categories and sampled two institutions from each (Graham, Woodfield, & Harrison, 2012). We feel that this gave us the variety of perspectives we needed within each category.

We utilized a network of professionals in online learning to identify individuals at these institutions whom we could contact as potential interview subjects.  The final sample included 2 for-profit institutions, 5 private institutions, and 3 public institutions.  The sample, along with the interviewees and their generalized titles, is included in Table 1.

Data Collection

This study relied primarily on interviews.  The interviews helped to identify instruments used in student, peer, self, or other kinds of instructor evaluation.  These instruments provided important data to help answer the research question.  We also collected any other documents that informed historical changes in evaluations, previous instruments, and other forms.

To establish credibility and trustworthiness, we performed member checks both during data collection via email as well as after the analysis so that interviewees could confirm our conclusions and ensure that our analyses were accurate.  Additionally, we engaged other colleagues with a strong research background to employ peer debriefing.   They were invited to review the methodology and conclusions of the study to also help ensure the study’s accuracy.

The various sources of data collection also supported triangulation.  By collecting data from interviews, artifacts, and relevant literature, the study’s conclusions could be supported through multiple data points.  Additionally, we sought to attain data redundancy in order to ensure that the data were adequate to support meaningful analysis and conclusions.



Data Analysis

This is exploratory research.  The transcribed interviews were systematically coded and compared in order to perform a thematic analysis.  In coding and analyzing these interview transcripts, we utilized the method of thematic network analysis to identify relevant themes (Attride-Stirling, 2001).  This began by open-coding the interview transcripts into basic themes described in the words of the interviewees.  Basic themes were then grouped into organizing themes that provided a more abstract categorization.  Organizing themes were finally grouped into global themes that sought to "[encapsulate] the principal metaphors in the text as a whole" (p. 388).

We recognize that as researchers, our work can be influenced by our own biases and experiences.  One of the authors has practical experience with performing online instructor evaluations and has preconceived notions about what constitutes effective evaluation.  This study does not seek to evaluate evaluation practices, only to describe them.  This approach is an effort to moderate researcher bias.

Findings

As a result of this research, we found that evaluation of online instructors at the sampled post-secondary institutions shows signs of improvement compared to recent findings in the literature (Thomas & Graham, 2017; Piña & Bohn, 2014).  The sampled institutions utilized many different types of evaluation in assessing the effectiveness of online instructors, including student, administrative, peer, and self-evaluations, as well as metrics that measure different aspects of teaching effectiveness.

How do institutions evaluate online instructors and why?

The institutions that participated in this research employ a wide variety of evaluation types.  Rather than depending on only a few types of evaluation, these institutions seek to incorporate many, as indicated in Table 2.  All institutions in this sample utilize student evaluations to evaluate online instructors, and only one institution does not use an administrative evaluation of its online instructors.  Of the 10 institutions in this sample, four utilize all five of the identified types of evaluation, although, by its own admission, the public institution among them focuses primarily on course design in its evaluations.

One discovery of this research was the growing trend of post-secondary institutions utilizing metrics to inform online instructor evaluation.  Of the 10 institutions sampled, five use metrics as part of their evaluation of online instructors.  Some use metrics more than others, but all those that use them have found them to be a helpful resource in performing online instructor evaluation.

We describe each of these types of evaluation below, as well as the institutions’ reasons for using them.

Student Evaluation. Student evaluations are the only form of evaluation that every institution in this sample uses, and all reported that they occur at the end of every course. They allow students to have some influence on their instructional experience. Six of the ten institutions in this sample reported using student evaluation instruments in online courses that are identical to the instruments their institution uses in traditional courses. Those that did mention differences acknowledged that there are agreed-upon similarities regardless of modality. It is also interesting to note that several institutions are utilizing formative mid-course student evaluations as part of their process. A formative student evaluation may provide valuable information that allows an instructor to make improvements during the course, thereby providing a better experience for students.

Every institution reports the results back to the instructor.  Instructors are encouraged to use the information provided to make any necessary adjustments. This information is also communicated back to their supervisors or department chairs. On occasion, administrators may share the results of a student evaluation with a faculty support center, to provide additional help for a struggling instructor.

Many institutions acknowledged that student bias is a clear problem with student evaluations; however, these evaluations also capture a critical perspective on online instructor behaviors that might otherwise be missed. Recognizing the importance of the student perspective, administrators have made sure that student evaluation questions address things that students are capable of observing and evaluating, such as instructor communication, feedback, and responsiveness.

Administrative Evaluation.  Among the institutions sampled, administrative evaluations are performed at least once a year to determine how instructors are performing.  Administrators may draw on a variety of other evaluations conducted previously, including student and peer evaluations or evaluations performed by a separate institutional entity, such as an online support department.  Although administrative evaluation is performed at least annually in a summative way, many institutions also perform formative evaluations.

Institutions face a variety of challenges in performing administrative evaluations, including the time and logistics of evaluating all instructors and the lack of a common standard among institutions for evaluating online instructors.  In addition to the challenge of limited resources, many also face faculty and departmental resistance to regular evaluation of online instructors.

Institutions perform administrative evaluations to reveal areas in which instructors can improve teaching.  Administrators can then tailor training to meet personalized needs.  Additionally, these evaluations identify teachers who excel and can be rewarded with promotion or tenure-like benefits.  This also helps administrators to schedule the best instructors as often as possible.

Peer Evaluation.  Of the types of online instructor evaluation described in this study, peer evaluation shows the greatest variability in how institutions approach it. This is certainly the case with how often the institutions in the sample perform these evaluations: some review instructors only during the first year of teaching, while others do so annually. Still others perform reviews only upon departmental request, while others conduct them during each teaching term.

There is some variation in whom each institution selects to perform the peer evaluation. These may include close associates of the instructor, other faculty who teach the same course, or a departmental supervisor. In all cases, these are individuals who can provide valuable feedback to an instructor because of their own experience and/or training.  This evaluation typically involves the peer “visiting” an online course and observing the teaching activities of the instructor.

Six institutions in this sample utilize a rubric to facilitate the peer evaluation. These rubrics address very similar things, including the kind of feedback instructors are giving and the timeliness of grading.  Many also address the regular posting of useful, course-specific announcements by the instructor and regular, positive interactions with students that encourage participation and dialogue through email or discussion boards.  Each rubric also varies somewhat, reflecting institutional goals and other items that a peer evaluator may consider during the course visit.

Institutions identified several challenges in performing peer evaluations.  Orchestrating a process for effective peer-to-peer evaluation may require more resources to organize and implement than are currently at an institution’s disposal.  The time and resources necessary to perform peer evaluations require careful consideration as to whether the benefits will be worth the cost.

There is also some concern about the subjective nature of these evaluations, which may be a result of a vague rubric, an evaluator’s inattention to detail, or even the evaluator’s reluctance to provide any critique of a colleague’s performance.  These “love letters,” as one institution called them, do little to improve teaching or identify high-quality instructors.  Without a clear standard to measure by, peer evaluations may continue to vary based on who is evaluating.

Several institutions identified three major reasons why they perform peer evaluations.  First, they help to identify effective teaching practices among their faculty.  These instructors are then encouraged to share these practices with their peers.  Second, peer evaluation provides a safe environment for feedback because the results are often not reported to administrators.  When instructors are evaluated, they tend to feel exposed and vulnerable.  When the evaluation is performed by someone that an instructor knows and feels comfortable with, it can help to lower any defensiveness that may otherwise result in an unwillingness to take feedback. The final reason administrators use peer evaluation is to provide an avenue whereby instructors can pursue and demonstrate professional development to make their case for institutional benefits like promotion or tenure.

Self-Evaluation.  The most common approach to self-evaluation among the sample is an unstructured format: instructors are invited to write whatever they would like about their goals, personal improvement plans, or how they feel they have demonstrated excellence.  Some institutions have instructors fill out the same form or rubric that is used during a peer or administrative evaluation, which then informs the subsequent peer or administrative evaluation.  This allows instructors to spot areas of weakness in anticipation of evaluation by others.  It can also allow instructors to make a case for why their assessment of their own performance is more accurate than that of a peer or administrator when there are inconsistencies among reviewers.  Among the institutions that utilize self-evaluation, the main purpose was to provide a reference point for other types of evaluations, primarily the peer evaluation.

Metrics Evaluation.  Half of the institutions in this sample utilize metrics to evaluate their instructors, though they are at varying levels of development and use.  Some have developed programs that automatically retrieve and aggregate data.  Aggregated data can be retrieved from the learning management system and student information system and used to populate dashboards for administrators and instructors.  Others retrieve various types of data to analyze and discover useful statistical patterns that inform faculty, administrators, or other faculty support departments about effective teaching behaviors and student indicators of teaching effectiveness.
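To illustrate the kind of automated aggregation described above, the following minimal sketch computes one such metric, average grading turnaround, from a hypothetical flat export of graded submissions.  The file name and column names (instructor, submitted_at, graded_at) are assumptions made for the example only; the actual fields and available data depend on the institution's learning management system.

```python
# Illustrative sketch only: aggregate a hypothetical LMS export of graded
# submissions into a per-instructor metric (average grading turnaround).
# The file name and column names are assumptions, not fields of any real LMS.
import csv
from collections import defaultdict
from datetime import datetime


def average_grading_turnaround(path):
    """Return average days between submission and grading, per instructor."""
    totals = defaultdict(float)   # instructor -> summed turnaround in days
    counts = defaultdict(int)     # instructor -> number of graded submissions
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            submitted = datetime.fromisoformat(row["submitted_at"])
            graded = datetime.fromisoformat(row["graded_at"])
            turnaround_days = (graded - submitted).total_seconds() / 86400
            totals[row["instructor"]] += turnaround_days
            counts[row["instructor"]] += 1
    return {name: totals[name] / counts[name] for name in totals}


if __name__ == "__main__":
    # "lms_submissions.csv" is a hypothetical export with columns:
    # instructor, submitted_at, graded_at (ISO 8601 timestamps).
    results = average_grading_turnaround("lms_submissions.csv")
    for instructor, days in sorted(results.items()):
        print(f"{instructor}: average grading turnaround {days:.1f} days")
```

Figures like these could then feed the dashboards the institutions described, alongside indicators drawn from the student information system.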

Some examples of student indicators of teaching effectiveness include student engagement, satisfaction, and success.  Metrics can help identify instructors whose students regularly do more than what is expected of them for a good grade, which suggests that students care about and are engaged in what they are learning.  Student satisfaction is another indicator of teaching effectiveness that metrics can help faculty and administrators to see more clearly.  Some institutions use metrics to evaluate instructor effectiveness through their students’ success rates.  Success is defined in a variety of ways, but often includes student retention and success in subsequent courses.

The main challenge that institutions have faced in incorporating metrics into their evaluations is the resistance of faculty members.  Some faculty worry about how automated metrics may infringe on their academic freedom when their own pedagogical approach differs from institutional philosophies.  They may feel forced to comply with institutional policies rather than risk being labeled as ineffective instructors.  Other faculty have expressed concerns about being held accountable for the success or satisfaction of their students.  This is a difficult obstacle to the effective use of metrics as part of an online instructor evaluation process.

A unique reason institutions use metrics is to identify, quickly and efficiently, instructors who are not meeting baseline standards.  Metrics do not address the quality of instruction, only the lack thereof.  They offer a more efficient and precise way to monitor instructor behavior as well as student engagement, satisfaction, and even success.  Although there are obstacles to the effective use of metrics, they provide a possible solution to the challenge of an unwieldy and large evaluation system.

Discussion

In comparing and analyzing the online instructor evaluation processes at 10 different institutions, there are several implications for online instructor evaluation.  These include the importance of triangulation in providing a clear representation of instructor teaching effectiveness, employing course observations using a rubric, incorporating formative evaluations into the process, and capitalizing on the use of metrics.

Triangulation
A pattern we discovered in this research was that all institutions relied on multiple sources of data and types of evaluation in their process.  We conclude from this that triangulation (i.e., using a variety of sources) is an important aspect of an effective evaluation process.  By utilizing a variety of approaches to evaluation, different types of useful data can be acquired to better inform faculty and administrators of effective teaching.  It is important to include the insights of students, skilled peers, and the instructors themselves to provide a more complete picture of the instructor’s efforts to be effective in the virtual classroom.

Course Observations Using Rubrics
The majority of the institutions in this study performed online course observations as part of either an administrative or a peer evaluation.  This allowed them to observe specific online teaching behaviors and more accurately assess online teaching effectiveness.  We recommend that such observations be a part of the evaluation systems of online programs.  They allow administrators to make more accurate evaluations of instructors by focusing on teaching behaviors and not only on course design, which is particularly important when instructors did not design the course they are teaching.  Course observations also create opportunities to give feedback that improves or commends effective teaching.  All institutions that utilized this type of evaluation found this information to be among the most useful in determining instructor effectiveness.

In every case where institutions used course observations as part of an evaluation, observers used an observation rubric.  These institutions explained that the use of a rubric helps establish standards of instructor behavior and gives observers clear direction about what to look for in their evaluation.  They regularly revise their rubrics, seeking greater clarity and simplicity in order to avoid inconsistency among observers.  Therefore, we suggest that institutions develop and use rubrics to guide observers as they perform course visits.  These rubrics may take time to revise and improve, but they help provide clear standards of performance and contribute to more trustworthy evaluations of teaching behaviors.

Formative Evaluation
All institutions in this study supported the use of formative evaluations, but not all performed them institution-wide.  In most cases where the institution did not have an established formative evaluation process, means were available for faculty to perform their own formative evaluations, either with students or with a peer.  These are valuable evaluations that ought to be part of evaluation systems institution-wide.  Based on our findings, we recommend that the results of formative evaluations be shared only with the instructor.  Observations, whether performed by a peer or an administrator, can be an important way to provide formative feedback.  Peer observations, in particular, provide a safe environment for instructors to seek and receive feedback, especially when they know that the results will not be communicated to supervisors.  Formative student evaluations during a course can also provide valuable feedback to instructors.  This mid-course feedback allows instructors to make immediate adjustments to their teaching to better serve students; instructors are not as responsive to end-of-course student evaluations as they could be to mid-course student evaluations.

Although formative evaluations can have positive effects on teaching, performing them too often can negatively affect instructor morale.  Instructors generally do not like being evaluated, and some faculty may assume that increased evaluation signals mistrust of their ability as an instructor or that an administrator suspects there is a problem.  The right balance of formative evaluations will vary by institution; what works for one institution may not necessarily work for another.  Institutions should be prepared to implement a plan and revise it accordingly.

Metrics
Institutions that are effectively using metrics have found that they help monitor instructor behavior in an efficient way without the need for large numbers of online course observers.  As a result of these findings, we conclude that institutions should make efforts to incorporate metrics into their online instructor evaluations.  Automated metrics can meet the needs of an institution by allowing regular monitoring of behavior without the intrusiveness of peer or administrative observations, and they perform an important role in making an evaluation system efficient and scalable.  By implementing an automated system that produces metrics to populate a dashboard, administrators can regularly keep a pulse on faculty and ensure that they are meeting baseline standards.  The use of metrics can then allow course observations to focus more on quality rather than simply on a baseline standard of performance.  Other metrics can also monitor student behaviors that provide additional insight into the quality of an instructor.  This kind of data may be difficult to use without establishing a system not only to retrieve the data, but also to apply statistical analyses to it.  This will help translate the data into clear indicators of effective instructional behaviors.
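As an illustration of the baseline-standard monitoring described above, the minimal sketch below flags instructors whose aggregated metrics fall short of assumed thresholds.  The metric names and threshold values are hypothetical choices made for the example, not standards reported by the institutions in this study.

```python
# Illustrative sketch only: flag instructors who fall short of assumed baseline
# standards. Metric names and thresholds are hypothetical, not institutional policy.
BASELINES = {
    "grading_turnaround_days": 7.0,      # grade within a week (lower is better)
    "announcements_per_week": 1.0,       # at least one announcement per week
    "discussion_replies_per_week": 3.0,  # minimum instructor presence in forums
}
LOWER_IS_BETTER = {"grading_turnaround_days"}


def flag_below_baseline(metrics):
    """Return {metric: (observed, baseline)} for every baseline the instructor misses."""
    flags = {}
    for metric, baseline in BASELINES.items():
        observed = metrics.get(metric)
        if observed is None:
            continue  # metric not available for this instructor
        missed = observed > baseline if metric in LOWER_IS_BETTER else observed < baseline
        if missed:
            flags[metric] = (observed, baseline)
    return flags


if __name__ == "__main__":
    sample = {
        "grading_turnaround_days": 9.5,
        "announcements_per_week": 0.5,
        "discussion_replies_per_week": 4.0,
    }
    print(flag_below_baseline(sample))
    # -> {'grading_turnaround_days': (9.5, 7.0), 'announcements_per_week': (0.5, 1.0)}
```

A dashboard built on checks like this would only indicate whether baseline expectations are being met; as the institutions noted, questions of teaching quality still require observation and other forms of evaluation.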

Future Research
Additional research could focus on the specifics of the online course observations used to evaluate instructor performance.  In particular, what information is included on the course observation rubrics that institutions use to guide evaluators?  What teaching behaviors are similar across rubrics, and which ones differ?  It would also be helpful to know what institutions based their decisions on as they developed their instruments.  Were they largely based on research, or were there items added based on the institutions’ own observations and experience?

Other useful research could focus on more specific details of how institutions utilize metrics to help evaluate instructors.  Which metrics are meaningful to collect and use in the regular process of evaluation to inform effective instructor practices?  Case study examples of how institutions developed their use of metrics, including their use of the LMS (whether developed by a third party or by the institution itself), could also help inform best practices of online instructor evaluation.

A final suggestion for future research is to establish a consensus among post-secondary institutions regarding online teaching competencies.  This could facilitate the development of rubrics for observations of online teaching.  Relatedly, research could also explore the rubrics currently used by institutions to evaluate online teaching.  These rubrics could be compared to identify criteria being used across institutions and to better inform online teaching competencies.

Conclusion

This study suggests that online instructor evaluation at post-secondary institutions has improved considerably in recent years.  It is apparent that online programs are aware of the importance of online instructor evaluation as well as its challenges.  Many institutions have been grappling with these challenges for years and have found important solutions to difficulties with which others are still struggling.  Additional research can help share these solutions and thereby continue to improve practices of online instructor evaluation.



References

Academic Senate for California Community Colleges (2013). Sound principles for faculty evaluation. Retrieved from https://asccc.org/sites/default/files/publications/Principles-Faculty-Evaluation2013_0.pdf.

Allen, I. E., Seaman, J., Poulin, R., & Straut, T. (2016). Online report card: Tracking online education in the United States. Babson Park, MA: Babson Survey Research Group and Pearson.

Attride-Stirling, J. (2001). Thematic networks: an analytic tool for qualitative research. Qualitative Research, 1(3), 385–405. http://doi.org/10.1177/146879410100100307

Bangert, A. W. (2004). The seven principles of good practice: A framework for evaluating on-line teaching. Internet and Higher Education, 7(3), 217–232. http://doi.org/10.1016/j.iheduc.2004.06.003

Bangert, A. W. (2008). The development and validation of the student evaluation of online teaching effectiveness. Computers in the Schools, 25(1/2), 25–47. http://doi.org/10.1080/07380560802157717

Benton, S. L., Webster, R., Gross, A. B., & Pallett, W. H. (2010). IDEA technical report no. 15: An analysis of IDEA student ratings of instruction in traditional versus online courses, 2002–2008 data.

Benton, S. L., & Cashin, W. E. (2012). Student ratings of teaching: A summary of research and literature. The IDEA Center, (IDEA Paper #50), 1–22.

Berk, R. A. (2005). Survey of 12 strategies to measure teaching effectiveness. International Journal of Teaching and Learning in Higher Education, 17(1), 48–62. Retrieved from http://www.isetl.org/ijtlhe/

Berk, R. A. (2013). Face-to-face versus online course evaluations: A “consumer’s guide” to seven strategies. Journal of Online Learning and Teaching, 9(1), 140–148.

Cheski, N. C., & Muller, P. S. (2010, August). Aliens, adversaries, or advocates? Working with the experts (SMEs). Proceedings from the Conference on Distance Teaching & Learning. Madison, WI: University of Wisconsin Extension.

Compora, D. (2003). Current trends in distance education: An administrative model. Online Journal of Distance Learning Administration.

Dana, H., & Havens, B. (2010). An innovative approach to faculty coaching. Contemporary Issues in Education …, 3(11), 29–34.

Darling, D. D. (2012). Administrative evaluation of online faculty in community colleges. Fargo, ND: North Dakota State University.

DeCosta, M., Bergquist, E., & Holbeck, R. (2015). A desire for growth: Online full-time faculty’s perceptions of evaluation processes. Journal of Educators Online, 12(2), 73–102.

Donovan, J. (2006). Constructive student feedback: Online vs. traditional course evaluations. Journal of Interactive Online Learning, 5(3), 283–296.

Drouin, M. (2012). What's the story on evaluations of online teaching? In M. E. Kite (Ed.), Effective evaluation of teaching: A guide for faculty and administrators (pp. 60-70). Washington, DC: Society for the Teaching of Psychology. Retrieved from http://www.teachpsych.org/Resources/Documents/ebooks/evals2012.pdf

Dziuban, C., & Moskal, P. (2011). A course is a course is a course: Factor invariance in student evaluation of online, blended and face-to-face learning environments. Internet and Higher Education, 14(4), 236–241. http://doi.org/10.1016/j.iheduc.2011.05.003

Graham, C. R., Woodfield, W., & Harrison, J. B. (2012). A framework for institutional adoption and implementation of blended learning in higher education. Internet and Higher Education. http://doi.org/10.1016/j.iheduc.2012.09.003

Hill, P. (2012). Online educational delivery models: A descriptive view. EDUCAUSE Review, 47(6), 84–86. Retrieved from https://er.educause.edu/articles/2012/11/online-educational-delivery-models--a-descriptive-view

Mandernach, B. J., Donnelli, E., Dailey, A., & Schulte, M. (2005). A faculty evaluation model for online instructors: Mentoring and evaluation in the online classroom. Online Journal of Distance Learning Administration, 8(3), 1–28. Retrieved from http://www.westga.edu/~distance/ojdla/fall83/mandernach83.htm.

Moskal, P., Dziuban, C., & Hartman, J. (2013). Blended learning: A dangerous idea? Internet and Higher Education, 18, 15–23. http://doi.org/10.1016/j.iheduc.2012.12.001

Palloff, R. M., & Pratt, K. (2008). Effective course, faculty, and program evaluation. Paper presented at the 24th Annual Conference on Distance Teaching & Learning, University of Wisconsin, Madison, WI.

Piña, A. A., & Bohn, L. (2014). Assessing online faculty: More than student surveys and design rubrics. The Quarterly Review of Distance Education, 15(3), 25–34.

Roberts, G., Irani, T. G., Telg, R. W., & Lundy, L. K. (2005). The development of an instrument to evaluate distance education courses using student attitudes. American Journal of Distance Education, 19(1), 51–64. https://www.tandfonline.com/doi/abs/10.1207/s15389286ajde1901_5 

Rothman, T., Romeo, L., Brennan, M., & Mitchell, D. (2011). Criteria for assessing student satisfaction with online courses. International Journal for e-Learning Security, 1(June), 27–32. Retrieved from http://infonomics-society.org/wp-content/uploads/ijels/published-papers/volume-1-2011/Criteria-for-Assessing-Student-Satisfaction-with-Online-Courses.pdf

Schnitzer, M., & Crosby, L. S. (2003). Recruitment and development of online adjunct instructors. Online Journal of Distance Learning Administration. Retrieved from http://www.westga.edu/~distance/ojdla/summer62/crosby_schnitzer62.html

Schulte, M. (2009). Efficient evaluation of online course facilitation: The “quick check” policy measure. Journal of Continuing Higher Education, 57(2), 110–116. http://doi.org/10.1080/07377360902995685

Schulte, M., Dennis, K., Eskey, M., Taylor, C., & Zeng, H. (2012). Creating a sustainable online instructor observation system: A case study highlighting flaws when blending mentoring and evaluation. International Review of Research in Open and Distance Learning, 13(3), 83–96.

Stanišić Stojić, S. M., Dobrijević, G., Stanišić, N., & Stanić, N. (2014). Characteristics and activities of teachers on distance learning programs that affect their ratings. International Review of Research in Open and Distance Learning, 15(4), 248–262.

Stewart, I., Hong, E., & Strudler, N. (2004). Development and validation of an instrument for student evaluation of the quality of web-based instruction. American Journal of Distance Education, 18(3), 131–150. https://www.tandfonline.com/doi/pdf/10.1207/s15389286ajde1803_2 

Thomas, J. E., & Graham, C. R. (2017). Common practices for evaluating post-secondary online instructors. Online Journal of Distance Learning Administration, 20(4), n4.

Tobin, T. J., Mandernach, B. J., & Taylor, A. H. (2015). Evaluating online faculty: Implementing best practices. San Francisco: Jossey-Bass.

Weschke, B., & Canipe, S. (2010). The faculty evaluation process: the first step in fostering professional development in an online university. Journal of College Teaching & Learning, 7(1), 45–57.


Online Journal of Distance Learning Administration, Volume XXI, Number 2, Summer 2018
University of West Georgia, Distance Education Center