Don’t Tell the Faculty: Administrators’ Secrets to Evaluating Online Teaching


Thomas J. Tobin
Northeastern Illinois University
t-tobin@neiu.edu


Abstract

Administrators at many colleges and universities have had online courses at their institutions for many years now. One of the hidden challenges of online courses is that they tend to be observed and evaluated far less frequently than their face-to-face counterparts. This is partly because many of us administrators never taught online courses ourselves when we were teaching. This article provides six "secrets" to performing meaningful observations and evaluations of online teaching, including how to use data analytics, avoid biases, and produce useful results even if observers have never taught online themselves.

Secret Zero: One Secret to Rule them All

Before we can talk about the secrets to evaluating online teaching—or even why there are secrets to evaluating online teaching—we must confront a generational factor that, in the coming decades, will gradually disappear. Call this “Secret Zero.” For the most part, we who administer distance-learning programs did not ourselves come up in educational systems that used distance-learning methods to teach us. In plain English, we are largely the product of traditional-classroom, “chalk and talk” teaching.

In terms of our own teaching experiences, only a few of us taught early forms of distance-education courses—via postal correspondence, broadcast television, or videocassettes—and those few became the pioneers of Internet-based distance education in the late 1990s. But for the rest of us, our experience teaching online is minimal or nonexistent. Now think about our poor department chairs, who are even less likely than we are to have taught online.

The traditional model for face-to-face course observation is one in which an administrator, typically a department chair, visits the classroom of an instructor in order to perform a summative evaluation that will count toward promotion and tenure, or toward re-hiring an adjunct instructor to teach in future terms.

Although some department chairs, deans, and other administrators have taught online courses themselves (and thus have a feel for the challenges and flow of online teaching), many more conducted their teaching careers exclusively in the face-to-face classroom. Administrators who moved away from teaching roles in the early 2000s, especially, are unlikely to have developed or taught courses in any mode other than face-to-face (McCarthy & Samors, 2009).

Now, you might be thinking that online teaching has been established as a course-delivery option for long enough that most administrators will have had at least some first-hand experience with online teaching practices. Even when administrators have taught online courses, though, the more likely situation is that they have not addressed the challenge of “what is good online teaching” until forced to do so by the requirement to evaluate a colleague’s online teaching.

Also, even when department chairs have taught online themselves, there is often a gap between their own teaching practices and the institutional processes in place for evaluating those practices. Further, institutions will find varying levels of administrative familiarity with online teaching methods from department to department.

So, what if you aren’t part of the population of administrators described above? Are you one of the leading-edge online-program administrators who has taught online courses yourself, and you have done some evaluation of others’ online teaching, as well?

Stop reading this and skip to the next article in the issue.

No, no, just kidding. Please do keep reading.

The chances are still very good that you work with senior leaders at your institution who have never taught online courses themselves, but who are experts in their fields and who have long experience evaluating many different aspects of your university’s offerings.

That long experience with evaluation methods in general is both a blessing and a curse for higher-education program administrators. To illustrate how evaluation of online teaching is the “odd one out” in the toolbox of approaches that most of us use, let’s look at the areas where we are already well prepared to evaluate our online offerings. In general, we’re responsible for assessing the need for various programs of study, bringing those programs into being, and then evaluating how well our offerings meet the needs of the communities that we serve.

For our face-to-face programs, we have good models to follow for all three of these phases of the program life cycle, and, perhaps more importantly, nearly all of us administrators have studied, taught, and developed courses and programs within those models. We have a shared vocabulary and a common set of experiences that help us to be able to articulate how we want to measure the effectiveness of the curriculum, course design, and teaching that make up our offerings.

Now, think about your institution’s online courses. Chances are, you have a robust process in place for creating curriculum for online offerings: you assess how many people in your geographic service area are working adults with family responsibilities, and gauge whether online programs could help those who are not yet your students to enroll and succeed. You have also likely looked at potential learners outside your geographic area, and assessed whether other institutions’ online programs serve those people yet, so that you can offer online programs that fill a niche or compete on convenience or price point.

You probably also have a good system for figuring out whether the design and content of your online courses are up to snuff. Whether we use a home-grown evaluation instrument or a national model like the QOCI instrument (Illinois Online Network and the Board of Trustees of the University of Illinois, 1998) or Quality Matters (MarylandOnline, 2014), we all have many choices for how we evaluate the design of our online courses.

So, what is the challenge with administrators evaluating online teaching practices? It has to do with one confusion and a few biases that we might not be aware we have.

Secret 1: Data, Data Everywhere, Nor Any Useful Link

There are many purposes for evaluating online teaching that are largely apolitical: we evaluate our online teaching practices so that we can improve our teaching methods, retain students, and best support students in accomplishing their educational goals. Student, self, and peer evaluations—especially informal ones—fall into this category. However, we’re not talking here about these formative reasons to evaluate teaching, since administrators are rarely called upon to provide input in these ways.

In the specific situation of administrators and their proxies observing and evaluating online teaching, evaluations are typically performed in order to determine whether the instructor is re-hired for the following semester or whether the instructor progresses through the promotion-and-tenure process.

Because the primary purpose of administrative review is so narrowly conceived, many institutions have already created or adopted administrator-observation instruments that are separate from peer- and student-evaluation instruments, and which are specific to online courses. For example, Columbus State Community College (2009) includes a “Faculty Online Observation Report” form as a separate instrument in its Faculty Promotion and Tenure Handbook. The form instructs administrators to indicate “yes,” “no,” or “not applicable” on observed elements of an online course. Interestingly, the directions themselves hint at some of the challenges of administrative observation of online courses (we’ll expand on these in a few minutes).

Such directions would not make sense for an observation of a face-to-face course; they are linked to the online format of the course. Administrative evaluators for face-to-face courses seldom need guidance, for example, about determining with whom it is appropriate to conduct the review session, differentiating between teaching behaviors and course materials, or defining the length of the observation period.

The existence of separate administrator-observation instruments—however open-ended—is an opportunity for opening the conversation about what behaviors constitute good teaching practices, what evidence of those behaviors can be observed, and how those behaviors can be quantified and evaluated (rather than merely noted as existing or not).

Secret 2: “I Love the Smell of Chalk Dust in the Morning”

Before we can create an instrument and a process for evaluating online teaching behaviors toward retention and promotion, we must confront several myths about the observable qualities of good teaching in general. The administrative-observation instruments developed for face-to-face teaching typically share some common observational biases, which are invisible until we start thinking about shifting the modality of teaching from face-to-face to online.

Bias: Good teaching is embodied. At many institutions, part of the promotion-and-tenure process is a visit by the department chair to observe the teaching practices of a candidate faculty member. For example, at a mid-sized public university, the adoption of online courses recently posed a challenge for “observing” online teaching. In early 2013, a faculty member was getting his portfolio ready for the retention, tenure, and promotion process toward becoming full professor, and he had asked his department chair to observe his online course.

The department chair stopped by the Center for Teaching and Learning with a “quick question” for the technology coordinator: “We just need to know one thing. Our observation form has an item on it: ‘Instructor demonstrates enthusiasm.’ How can instructors demonstrate enthusiasm in an online course? After all, the students can’t see the professor or hear his voice.”

The department chair was skeptical that instructor enthusiasm could be observed at all in the faculty member’s online course. The bias inherent in his question is that body language and voice inflection are integral to effective teaching. While it is true that varied voice inflection and open body language help to keep face-to-face learners engaged (Betts, 2009), such indicators are not the only means of demonstrating instructor involvement with class participants.

For online courses that incorporate video of the instructor, another aspect of this “presence” bias is revealed. Evaluators may wish to observe online video content in the same way they would observe a face-to-face lecture. Evaluators with an embodied-teaching bias may be both swayed by professional-style production values in longer lecture-capture-style videos and disappointed by brief “bare bones” videos of instructors discussing course concepts. Flashy presentation skills can mask a lack of instructor subject knowledge even in a face-to-face environment, and chunking of video content is an established best practice for course-related multimedia regardless of the course-offering modality.

By expanding beyond the bias, we can see that the communication between the instructor and the learners is the key measurement here, especially with regard to its frequency, nature, and quality. Administrators can think of all of the signals that face-to-face instructors send to their students, and they can look for similar kinds of signals in online courses, such as the frequency of instructor posts to discussion boards and the regularity of follow-up communication with learners about posted video content.


Secret 3: I Know It When I See It

Bias: Good teaching is intuitive. The department chair in our example is lucky. At least his department has an instrument from which to begin the conversation about observing online teaching. In many cases, the evaluation of face-to-face teaching is based on the subjective feelings of the administrative observer. Even where there are score sheets, rubrics, or other observation instruments, the questions asked sometimes do not lend themselves to quantifiable responses.

Using “I know it when I see it” as an observation criterion exposes a bias toward the observer’s own learning preferences. Administrators who themselves learned best in lecture courses tend to rate lecturers as more competent teachers than instructors who favor other teaching practices. This bias exists in face-to-face observations, and it persists even when departments use specific instruments as guides to the observation.

The impact of the bias is magnified when observing online courses: the department chair’s concern that “the students can’t see the professor or hear his voice” is also a coded way of saying that the evaluator can’t see the professor or hear his voice, either. Especially when administrative evaluators’ experiences have been primarily as classroom-based instructors, they lose some of their ability to use an “I know it when I see it” gestalt to judge instructor quality when the course modality moves from the classroom to an online environment.

To expand beyond this bias, administrators can shift their thinking away from charismatic traits (e.g., ability to hold students’ attention, strong classroom “presence,” and student eagerness to be involved in the class) and toward the support-behavior analogues to those charismatic behaviors (e.g., providing multiple ways for students to consume course content, reaching out to every student with a personal communication at least once per unit, and supporting student achievement by recognizing effort, milestones, and accomplishments).

Secret 4: I’d Like a Table for 31 at 10:50 am on Tuesdays and Thursdays

Bias: Good teaching happens in real time. Questions often raised by administrators unfamiliar with online teaching include “how does one hold class online,” “does everybody log in to a live video feed, or something,” and “where do the students go to actually have a conversation with the instructor?” There is a strong bias toward synchronicity as a hallmark of effective teaching. While online teaching can happen synchronously (e.g., via Skype or Adobe Connect real-time class meetings), one advantage of online learning is its any-time, any-place nature.

While it is true that a real-time conversation provides instructors and students with the opportunity to explore issues together and have immediate feedback within the conversation, it’s not the case that every course member can be involved in a synchronous class meeting at the same level. In many face-to-face classrooms, it is only the instructor and a small core of students—between five and ten students, regardless of class size—who are engaged in the class discussion at any given time (Weaver & Qi, 2005). Many students can and do remain silent throughout the entire face-to-face class period.

Administrative evaluators can move beyond the real-time communication bias by focusing on opportunities for students’ participation in, and their direction of, the learning experience, as well as the instructor’s ability to engage students both through the course content and through ad-hoc interactions with students throughout the course. In fact, this ability to engage directly, one-on-one with learners asynchronously is a teaching behavior unique to online teaching (we’ll identify a few more later on). For example, online discussion forums offer all students the chance to reflect on the ideas and statements of others and offer instructors the opportunity to facilitate student learning in a dynamic environment. Administrators should look for evidence of teaching practices that invite learners and instructors to share and shape the conversation through discussions, collaborative group work, and reflection.

Secret 5: In Cyberspace, No One Can See You Sweat

Bias: Good teaching appears effortless. If you have taught for many years, you can probably remember the very first time you ever taught. It was likely a nervous time for most of us, preceded by a lot of preparation. Often, we entered the classroom with a legal pad filled with information and notes, or with a PowerPoint presentation bristling with notations and resource links—reminders of the things about which we did not want to forget to talk with the class. Over time, as we taught the same kinds of courses again and again, that legal pad got put aside in favor of an index card with a few key phrases or bullet points to remember. Some of us have now retired the memory aids altogether and rely on our experience and memory to facilitate each class session.

Theatricality, or the appearance of effortlessness, is the most common mental shortcut that administrators use to stand in for “effectiveness” in face-to-face teaching. Administrator-observers are often biased toward the faculty member whose ability to “wing it” from memory seems to indicate mastery of the subject and comfort with the processes of sharing it with learners. In online teaching, however, instructors are brought back to the legal-pad stage of their teaching: much of what instructors typically speak and perform in face-to-face class sessions ends up as documentation in the online environment—and is thus not observed as being an online teaching practice.

Further complicating this bias is the situation that in online courses, the person who designed the course outline, lecture content, assessments, videos, and initial discussion prompts may not be the person who is teaching the course. To the biased eye, this suggests that all that is needed to teach online is a warm body, one who can occasionally answer student questions, grade the tests and quizzes, and report on student achievement at the end of the course.

In order to work against the sage-on-the-stage bias, administrative evaluators should avoid confusing information delivery with teaching behaviors. Observers should define ahead of time what behaviors are to be evaluated as online teaching practices. One of the most common forms of face-to-face information sharing, even today, is lecturing. In an online environment, the lecture content (whether text, audio, or video) is more a source of information delivery, akin to the textbook readings in a face-to-face course: it’s a piece of media to be consumed by the learners in their own time, rather than an interaction to be shared with the class together. While it is important that media elements in online courses be expertly created, it is the delivery of the online course—the “teaching”—that is key to administrative reviews conducted for staffing and promotion decisions. We will come back to this distinction between content media and interactive experiences later on.

Secret 6: “Oh, Man, I Think the Clock is Slow!”

Especially when administrative observation of teaching occurs for the purpose of determining whether to re-hire or promote an instructor, the overarching goal is to make the observation process as standardized as possible: to observe each instructor under conditions as similar as possible to those used to observe his or her peers and to evaluate instructors using a common set of criteria. Hence, it is tempting to want to create a comparative table of equivalences between face-to-face and online course delivery. If one observes 90 minutes in a face-to-face course, where (and to what extent) should one look in an online course environment to see the same amount of teaching happening?

This would be a much shorter article if such a goal were possible to achieve. Part of the confusion about observing face-to-face and online versions of the same course has to do with the visibility of the content and behaviors that fall within (and outside of) the scope of what can be seen by the observer. For example, in a face-to-face class, the administrative observer typically does not come to the instructor’s office hours to observe one-on-one interactions with students, nor does the observer review a sample of the instructor’s e-mail communication with students. The observer does not typically ask to see the instructor’s notes for the class period. The observer can get copies of assignments or in-class worksheets only if the instructor shares them with the observer—and only then so the observer can follow along with the activities taking place in the classroom. Furthermore, the observer does not usually request a copy of the syllabus before the observation takes place or see samples of student assignments that are handed in during the class period being observed.

In an online course, however, the observer has access to all of these elements, and often more. He or she can see the course syllabus, the lecture content and multimedia for every unit of the course, students’ interactions with the instructor in the threaded discussion forums, and even student submissions for assignments and instructor feedback on these, as well as the grade book that the instructor is keeping for the course. In fact, pretty much the only element of the educational transactions for the entire online course that remains invisible to an administrative observer is the flow of e-mail between students and instructors (and even e-mail messages can be discoverable from within an LMS environment, in many cases).

Because of these differences in visibility and access between face-to-face and online courses, it is helpful to re-examine some general online-course-review strategies through the lens of what actions administrators can take that other reviewers cannot, especially actions taken before and after the observation window itself.

Because such actions take place outside of the observation itself, administrative observers are in a unique position to integrate the observation of online teaching practices into an overall program of feedback to the instructor.

Now that you know the secrets, when you and your administrative colleagues are the observers, you can follow a few key steps.

Step 1: Define Behaviors

Instead of looking for modality-specific behaviors or affective qualities of the instructor (such as “speaks clearly” or “maintains the interest of students”), administrative observers can find modality-neutral, measurable criteria for evaluation by focusing on the effects of instructor behavior: for example, “the instructor communicates in a way that students respond to throughout the range of observation.” By observing the behaviors of the instructor in terms of what those behaviors elicit from the learners, administrative evaluators can make a yes-no determination, and further assign a measurable value to the behavior. In their seminal article, “Seven Principles for Good Practice in Undergraduate Education,” Chickering and Gamson (1987) analyzed a wealth of research on good teaching in colleges and universities. They distilled seven core principles of effective teaching practice that are themselves modality-independent:

1. Encourage student-faculty contact.
2. Develop reciprocity and cooperation among students.
3. Use active learning techniques.
4. Give prompt feedback.
5. Emphasize time-on-task.
6. Communicate high expectations.
7. Respect diverse talents and ways of learning.

By seeking instructor behaviors that help to meet each of these core areas, administrative observers can tailor their observations to the tools and methods being used, regardless of the course-offering modality. For online courses, especially, focusing on Chickering and Gamson’s principles allows administrators who may not have taught online themselves to look for evidence of effective teaching interactions throughout the online environment: everything that is not an interaction can be seen as a piece of media.

By categorizing elements of online courses as either media or interactions, administrative observers can make more fine-grained determinations about which parts of the online course are actually examples of teaching behaviors. The following chart illustrates one way to match teaching principles against commonly-observed teaching behaviors in online courses.

Teaching Principle (Chickering & Gamson): Common Online Teaching Behaviors

1. Encourage student-faculty contact.
- Set aside regularly scheduled times for online “office hours” or implement a maximum turn-around time for responses to communications.
- Facilitate regular course discussions.
- Post course announcements or news updates on a regular basis.

2. Develop reciprocity and cooperation among students.
- Assign group or dyad projects.
- Require discussion responses to peers.
- Offer encouragement in public ways (e.g., on the course discussion forum); offer criticism privately (in grade-tool feedback seen only by individual students).

3. Use active learning techniques.
- Ask students to summarize and propose next steps.
- Assign “butts out of seats” tasks that take online learners away from the keyboard (e.g., interview experts near students’ homes) and ask students to report back to the class.
- Have students create and post study guides.

4. Give prompt feedback.
- Respond to each student at least once in each graded threaded discussion topic or, for very large courses, at least once per course unit.
- Keep to turn-around time expectations for instructor responses to graded work.
- Give students encouragement, reflection, and correction feedback.

5. Emphasize time-on-task.
- Give students estimates of how long assignments will take.
- Communicate progress of the whole class toward week/unit goals.
- Provide individual-progress milestones for graded work.

6. Communicate high expectations.
- Give preview, status, and review communications.
- Provide samples of good practice on assignments and discussions.
- Spotlight students who do good work or improve their efforts (e.g., post an “everyone look at Kevin’s response” message in discussions, or ask improved students to lead group study sessions).

7. Respect diverse talents and ways of learning.
- Provide multiple ways for students to respond to assignments (e.g., write an essay, record an audio response, create a video).
- Allow students to respond to discussions using a variety of media.
- Present learning material in a manner that allows for a range of possible learning paths.
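One way to operationalize such a chart is to treat each principle as a checklist of observable behaviors and record, per principle, how many of them the observer actually saw. The sketch below is a minimal illustration in Python; the behavior wordings and the fraction-based scoring are placeholders for whatever instrument a campus agrees upon, not a validated rubric.

# A minimal sketch: the chart above encoded as a checklist scorer.
# Principle names follow Chickering & Gamson (1987); the listed
# behaviors and the scoring scheme are illustrative only.

RUBRIC = {
    "Encourage student-faculty contact": [
        "Holds regularly scheduled online office hours",
        "Facilitates regular course discussions",
        "Posts course announcements on a regular basis",
    ],
    "Give prompt feedback": [
        "Responds at least once per graded discussion topic",
        "Meets stated turn-around times for graded work",
    ],
    # ...remaining principles elided for brevity
}

def score_observation(observed):
    """Return, per principle, the fraction of listed behaviors observed."""
    return {
        principle: sum(b in observed.get(principle, set()) for b in behaviors)
        / len(behaviors)
        for principle, behaviors in RUBRIC.items()
    }

# Example: the observer checked off two of the three contact behaviors.
print(score_observation({
    "Encourage student-faculty contact": {
        "Holds regularly scheduled online office hours",
        "Posts course announcements on a regular basis",
    },
}))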

Step 2: Agree on the Scope of the Observation

There is no hard and fast equivalent in an online course to the 60- or 90-minute period typical of face-to-face observations. Because face-to-face courses are fixed in time and place, those parameters are the “givens” of the observation. The givens for online courses are not time or physical location (both of which are variable), but the online environment itself. In order to assist administrators who are observing online courses, agreement should be reached on five key factors:

Definition of Teaching Practices. In an online course, there are many analogues to face-to-face teaching practices that may not be considered “teaching” for the online course. For example, in a face-to-face class, lecturing is a key teaching practice. However, video clips or lecture notes in an online course are part of the course media, and are not themselves direct evidence of teaching behaviors—especially if the person who developed the lecture notes or videos is not the person facilitating the class.

As mentioned earlier in the discussion of the “effortlessness bias,” one strategy for making clear what counts as a teaching practice in an online course is to examine those elements that lead directly to interaction among the students, the instructor, and the course content. Items that present information but do not then directly ask the learner to respond may be considered as parts of the course design. Course content items may be either design elements or teaching practices, depending on their structure and use.

For example, a set of lecture notes that is presented as a single web page, and which presents information—in the manner of a textbook or article—is part of the course design, and would not be considered in an administrative observation of the online course. Likewise, videos, audio podcasts, and the like are also as part of an online course’s materials, and do not “count” as observable teaching behaviors.

However, if an instructor responds to student questions in an online-course discussion by posting a mini-lecture or video to explain a concept, that certainly “counts” as an observed teaching behavior, because the content is created or shared as a result of interaction between the learner and the instructor. The overall criterion to apply is one of “information presentation” versus “interaction.” As a final caveat, items that were created by a person other than the course instructor should never be counted toward administrative observation of online courses.

Consistent instructor presence in an online course is one of the most important components of online teaching practice, helping students feel less isolated and more supported in their learning. In fact, instructor presence supports each of Chickering and Gamson’s seven principles. In online instruction, where another course or even institution is just a click away, instructor presence goes a long way toward student retention, academic success, and building a sense of community. Piña and Bohn (2014) identify specific behaviors unique to the online environment that administrators perceive as effective indicators of teaching quality:

Our desire was to identify a set of criteria that would yield objective data easily examined by supervisors and peers during an online course observation and serve as a balance to the more subjective data gathered from student surveys. This study focused upon quantitative measures of instructor actions and behaviors that could be readily observed in the online course and/or collected using the reporting tools of the learning management system.
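To make “readily observed and/or collected” concrete, the sketch below computes one plausible presence measure, the median time the instructor takes to reply to a student post in graded discussions, from a hypothetical forum export. The column names (thread_id, author_role, created_at) are assumptions for illustration; every LMS reporting tool exposes this data differently.

# A sketch of one quantitative presence measure, assuming a CSV export of
# forum posts with hypothetical columns: thread_id, author_role, created_at.
import csv
from datetime import datetime
from statistics import median

def median_reply_hours(csv_path):
    """Median hours between a student post and the next instructor reply
    in the same discussion thread."""
    threads = {}
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            when = datetime.fromisoformat(row["created_at"])
            threads.setdefault(row["thread_id"], []).append((when, row["author_role"]))
    gaps = []
    for posts in threads.values():
        posts.sort()                      # chronological order
        pending = None                    # earliest still-unanswered student post
        for when, role in posts:
            if role == "student" and pending is None:
                pending = when
            elif role == "instructor" and pending is not None:
                gaps.append((when - pending).total_seconds() / 3600)
                pending = None
    return median(gaps) if gaps else float("nan")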

Thus, online teaching behaviors can be tied to interaction and communication between the instructor and the learners. This leads to the second area needing agreement: communication between evaluators and instructors.

Communication between Observer and Observed. For face-to-face classes, the usual communication that takes place prior to the observation is brief: the evaluator lets the instructor know that he or she will be observed on a given day and time. Perhaps the observer asks for a copy of the course syllabus or for any handouts that will be provided to the students. There is typically little communication between the observer and the instructor during the actual observation.

For online courses, similar needs arise: the observer must still notify the instructor that observation will take place. Instead of requesting copies of documents (which may already be accessible within the online course environment), the observer must establish whether the instructor is also the author of the course content. Likewise, the instructor may communicate ahead of time to the observer about where the observer may wish to focus attention or about anything unique regarding the context of the instruction, especially if there are interactive elements in the online course environment that are in different places than, or go beyond, the usual places where interaction occurs.

A further difference for online courses is that communication between the evaluator and the instructor, in the form of clarifying and directional questions, is often beneficial during the observation period. For example, the administrative observer may want to see supplemental content that is released to students only after they accomplish various course tasks (and which the observer is unable to unlock). This brings up the next area where agreement is needed: the extent of the observation.

Which Elements are “In Bounds.” Agreement on which elements of the online course represent teaching practices is often the most contentious discussion on a campus, since many elements may be considered part of the course design or teaching practices, depending on their structure and function, as seen in the example of lecture content above. However, it is possible to create a core agreement that identifies which elements of online courses count as observable teaching practices and which belong to the course design.

A secondary concern about the scope of what administrative observers may use for evaluation has to do with the boundaries of the course-delivery environment. Many instructors, whether teaching face-to-face or online, perform teaching actions outside of formal instruction. For instance, instructors in both face-to-face and online classes may meet with students for office-hour consultations and engage in student consultations via e-mail and telephone calls. In the face-to-face environment, such contact, although it definitely meets the definition of “teaching,” is not counted toward administrative observation because it is not readily visible and measurable to the observer.

However, in the online environment, these behaviors may or may not be visible, depending on the technical setup used at the institution. In institutions where the course delivery environment includes text-based “chat” and synchronous-environment features, faculty office hours may be recorded and stored in logs accessible to the instructor and/or students in the course. More commonly, many instructors have a “Q&A” or “water cooler” topic in their online discussion forums that is intended for general questions about the course—but such discussion topics are almost never a required element of the course design.

One way to resolve the question of where observers may look is to think about the boundaries present in both face-to-face and online class observations. In a face-to-face class, the boundary is the classroom itself. Interactions that take place outside of the physical location of the classroom, including office-hour consultations, phone calls, and e-mail messages, are not counted toward the observer’s evaluation. An easily defined boundary for online courses is to exclude those same types of outside-of-formal-instruction communications from the observation and evaluation process.

Consider that a discussion of where to draw the boundary lines will result in different combinations of interactions being “in play” at different institutions, and may even result in some interactions coming into consideration when a change is made to the institution’s course delivery environment that adds new features. For example, if an institution adds a synchronous-online-classroom software feature to its course environment, then logged recordings of the use of that feature would come into the scope of what is possible to be observed.

One final word is necessary about the level of access granted to the observer. In most online course environments, observers can be granted student- or instructor-level access to the environment. The best practice is to allow administrative observers student-level access to online courses, unless there is a compelling reason for access to an instructor-only area of the course. Agreement on this point, and a process for making the request to see instructor-access parts of a course, are best made in advance of the observation. Such agreement helps to keep the focus of the observation on the interactions accessible to students.

Duration of the Observation. For face-to-face courses, the temporal boundary of an administrative observation is well defined, usually one class-meeting period. The observer spends 50 to 90 minutes watching the class unfold in real time. The challenge for observing online classes is that the observer’s time spent examining the online environment does not correlate directly to the amount of time spent observing a face-to-face class covering the same scope of ideas and content.

The best practices for defining the time period for observation are to allow the evaluator access to the online course environment over a set period of days, and to communicate time-spent expectations up front. For example, Penn State University advises evaluators to conduct their reviews toward the end of the semester, so that there will be a rich and complete set of interactions to evaluate. If observations take place too early in the course, there may not yet be many teaching behaviors in evidence. At Penn State, administrative evaluators are also told that the observation instrument was designed to take approximately two hours to complete (Taylor, 2010).

Communicating time-spent expectations helps observers to know how much attention and detail is required for completing a thorough observation, allows observers to focus on the must-observe areas of the course environment, and offers an opportunity for evaluators to examine other areas of the course environment to determine whether they fall into the “leads to interaction” category.

Assistance Available to the Observer. In the face-to-face classroom, there is little concern about the observer’s required technical skills. He or she arrives at the classroom, and takes notes about the class. For online courses, however, administrative observers may not be skilled at navigating the online course environment, or may need technical help in observing various elements in the online course. Agreement about the availability, extent, and role of technical staff is needed prior to the observation.

If administrative observers of online courses require technical staff who will “drive” the observation by navigating the online course environment, first determine from what area(s) of the institution the technical assistants should come. For example, retention, tenure, and promotion observations may be facilitated by staff members from the teaching-and-learning center. Center staff would thus have to draw a “bright line,” answering only process-related questions when assisting administrative observers and leaving the domain of “what to observe” squarely in the hands of the administrative observers.

Further, the role of the technical assistant should be defined. The continuum of assistance can range widely, from merely logging the observer in to the course environment all the way to fully “driving” the navigation of the course on the observer’s behalf.

A final concern about assistance for the administrative observer is to make clear that any assistance offered is facilitative in nature and not evaluative. For instance, a technical assistant may show an evaluator the discussion forums in a given online course and may mention that the instructor appears to be responding to students at the rate of about one message per ten student messages. The assistant should not, however, provide evaluative or comparative advice during the observation, such as saying that a good benchmark for instructor postings is to post between ten and twenty percent of the total number of messages in online discussions.
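After the observation, though, the benchmark arithmetic itself is straightforward for the evaluator to apply. A minimal sketch, assuming the observer has tallied (or exported) the author of every post in the course discussions; the role labels are illustrative, and the ten-to-twenty-percent figure is the rule of thumb quoted above, not a universal standard:

# A sketch of the posting-rate arithmetic described above.

def instructor_share(roles):
    """Percentage of all discussion posts authored by the instructor."""
    return 100.0 * roles.count("instructor") / len(roles) if roles else 0.0

# Example: 10 instructor posts among 110 total messages is about 9.1%,
# roughly the one-message-per-ten-student-messages rate mentioned above.
roles = ["instructor"] * 10 + ["student"] * 100
print(instructor_share(roles))  # 9.0909...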

This can be challenging for assistants who are, outside of the observation setting, resources for the institution on precisely these kinds of issues. In institutions where teaching-center staff members train administrators in the process of observing online courses, it is a good practice to source the pool of technical assistants from another campus unit, such as the information-technology area, to avoid potential conflicts regarding who is providing the evaluative response in an observation.

On Avoiding Quantity Bias. There is one factor in administrative evaluation of online teaching that is not typically encountered in observation of face-to-face classes, and which deserves separate consideration: quantity bias. An observer, particularly one who has not taught online himself or herself, can be tempted to equate several unrelated factors with the quality of the online course experience for students. These factors include the amount of content in the online course environment, the amount of multimedia used in the course, and the number of communications from the instructor (Tobin, 2004).

The primary means of avoiding quantity bias is to focus exclusively on the interaction among the students and instructor. Items that get students to take actions, as opposed to items that are to be consumed passively by reading or watching them, are those that can be evaluated for administrative observations. It is safest to evaluate only the “spontaneous” aspects of the course and not the “canned” materials at all.

In comparing two online instructors, one might assume that each instructor has created the entire collection of materials for his or her course. However, Instructor A might not have authored the content of the course and may not have had any control over the presentation or order of the materials, either. Likewise, Instructor B might have inherited the structural aspects of the course from someone else who authored the content. By focusing on just the interactions between students and instructor, as well as on the instructor’s facilitation of student-to-student interactions, evaluators can get a true sense of how well online courses are being taught. This points to one final take-away lesson.

Especially in online courses, it can be tempting to equate greater quantities of interaction with better course experiences. Be sure to take into account the number of students in the course when evaluating the number of instances of interaction seen in the online course environment, as well.
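A quick way to guard against this form of quantity bias is to normalize interaction counts by enrollment before comparing instructors. A sketch, with invented numbers for illustration only:

# Normalizing interaction counts by class size; figures are illustrative.

def interactions_per_student(interactions, enrollment):
    """Average number of observed instructor interactions per learner."""
    return interactions / enrollment if enrollment else 0.0

# Instructor A: 90 interactions in a 60-student section -> 1.5 per student.
# Instructor B: 45 interactions in a 15-student section -> 3.0 per student.
# The smaller raw count actually reflects more contact per learner.
print(interactions_per_student(90, 60))  # 1.5
print(interactions_per_student(45, 15))  # 3.0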

It’s No Secret: We Can Do This!

Remember the department chair at the public university, the one who wanted to know how to gauge his online instructor’s enthusiasm? When the technology coordinator finally met with the department chair to help prepare him to observe the faculty member’s online course, the coordinator came with a proposed plan to help narrow down the work that they would need to do before, during, and after the observation. The plan included specific requests that the department chair would be able to make of the faculty member prior to the observation, such as getting a copy of the course syllabus and any supplemental materials.

The coordinator also suggested that the chair observe a single unit of material—one that had already been completed—so he could get a good feel for the overall experience of being a student in the course. Finally, the coordinator set up time to give the department chair a general introduction to the university’s online learning environment, so he would know what to expect and be better able to decide where to look within the environment to observe the various criteria in the department’s observation rubric. The department chair left the consultation feeling better prepared, and confident that the teaching center’s staff would be there to support the observations.

It’s heartening to know that the core evaluation skills that we administrators use in other areas of our institutions’ offerings and programs will serve us well when we’re evaluating online teaching, too. We have only to keep an eye on our unconscious biases and think critically about what we want to measure and where we should look to find reliable and consistent indicators of teaching quality.

Acknowledgement

A fuller version of this article appears as a chapter in the book Evaluating Online Teaching: Implementing Best Practices (Wiley, 2015). The author is grateful for permission from the press to publish this edited version in the DLA conference proceedings.


References

Betts, K. (2009). Lost in translation: Importance of effective communication in online education. Online Journal of Distance Learning Administration 12(2). https://www.westga.edu/~distance/ojdla/summer122/betts122.html.

Chickering, A. and Gamson, Z. (1987). Seven principles for good practice in undergraduate education. The Wingspread Journal (Special insert, n.p., June). Racine, WI: Johnson Foundation.

Columbus State Community College. (2009). Appendix F: Faculty online observation report. Faculty Promotion and Tenure Handbook. 53-59. http://www.cscc.edu/about/faculty-staff/PDF/Faculty Promotion and Tenure Handbook.pdf.

Illinois Online Network and the Board of Trustees of the University of Illinois (1998). QOCI rubric: A tool to assist in the design, redesign, and/or evaluation of online courses. http://www.ion.uillinois.edu/initiatives/qoci/docs/QOCIRubric.rtf.

MarylandOnline. (2014). Quality Matters rubric standards fifth edition, 2014, with assigned point values. http://www.QMprogram.org.

McCarthy, S. and Samors, R. (2009). Online Learning as a Strategic Asset. 2 vols. APLU-Sloan National Commission on Online Learning. http://sloanconsortium.org/publications/survey/APLU_Reports.

Piña, A. and Bohn, L. (2014). Assessing online faculty: More than student surveys and design rubrics. Quarterly Review of Distance Education 15(4).

Southern Association of Colleges and Schools Commission on Colleges (SACS COC). (2012). Credit hours: Policy statement. http://www.sacscoc.org/subchg/policy/CreditHours.pdf.

Taylor, A. H. (2010). A peer review guide for online courses at Penn State. http://facdev.e-education.psu.edu/sites/default/files/PeerReview_OnlineCourses_PSU_Guide_Form_28Sept2010.pdf.

Tobin, T. J. (2004). Best practices for administrative evaluation of online faculty. Online Journal of Distance Learning Administration 7(2). https://www.westga.edu/~distance/ojdla/summer72/tobin72.html.

Weaver, R. R. and Qi, J. (2005). Classroom organization and participation: College students’ perceptions. Journal of Higher Education 76(5), 570-601.

 


Online Journal of Distance Learning Administration, Volume XVIII, Number 3, Fall 2015
University of West Georgia, Distance Education Center