A New Spin on Quality: Broadening Online Course Reviews Through Coaching and Slow Thinking


Lisa McNeal
College of Coastal Georgia
lmcneal@ccga.edu

Jennifer Gray
College of Coastal Georgia
jgray@ccga.edu

Abstract

Many faculty struggle with designing and teaching online courses. Classic standardized course review models, such as the one developed by Quality Matters (QM), are valuable tools, yet they often enforce rigid standards and lead to time-intensive course reviews. This paper offers a new solution as two faculty ask, “What would happen if an online course review were more like a writing coaching session?” We explore how combining standards from QM with techniques from writing centers could transform a rigid online course review process into an engaging coaching session. Inspired by key writing center principles and the research literature on course reviews, we propose an alternative, feedback-focused process for online course reviews that encourages small yet crucial shifts in our thinking about the process, and we recommend broad and flexible options that honor the person over the product.


Introduction

According to the New Models of Learning Task Force, higher education is in trouble (Crowdsourcing, 2019). They write: “Higher education is in the midst of turbulent change. An academic culture steeped in reflection and teaching is being disrupted and reconstructed into a globally connected ecosystem of networked 24 x 7 x 365 learning. Roles and paradigms held dear and true are challenged. The rate of change, and unpredictable and unrelenting emergence of new models …makes planning and preparing for the future even more conflicted, confusing – and critical” (Crowdsourcing, 2019). Like many faculty, we are weary from reading articles about the crisis in higher education. We are tired of being told by administrators to standardize our courses, treat students as customers, and bow to the twin gods of efficiency and effectiveness. So, how do we plan and prepare for a conflicted and confusing future? Furthermore, how do we promote slowness, reflection, and creative thinking among our students and colleagues?

In this paper, we advocate for resisting standardization and corporatization in online courses. Rubrics and online course reviews can be useful; however, they become problematic when applied rigidly and without reflection. Rather than focusing on the rubric, or the final product, we shift the focus to the online course review process: the interaction between the professor and the reviewer. Drawing inspiration from the Slow Food movement, writing workshops, and writing center philosophies, we recommend and illustrate flexible options that honor the person over the product. Readers will find examples of each option that can be used at their home institutions.

Like Aguilar (2018), who writes about the critical need for cultivating emotional resilience in education, we seek small yet crucial shifts in our thinking about the course review process. We are not abandoning rubrics or other worthy tools but avoiding a blind overreliance on them. We wish to broaden the conversation around course reviews and remind readers of the origins of the Quality Matters rubric, which is rooted in collegiality and emphasizes continuous course improvement. Significantly, we can do our small part to counteract the higher education crisis by interacting warmly with each other, slowing down to reflect, and remembering Braidotti (2006), who advocates for interconnectedness and an ethics of care and reminds us that “We are in this together” (p. 119).

Literature Review

One concrete way to counteract the higher education crisis involves the Slow Food movement, whose philosophies can be applied across many disciplines beyond food. These philosophies guided the development of our course review process. The movement started in 1989 as a reaction to fast food culture (Slow Food USA, 2018). According to its manifesto, Slow Food is necessary because “[w]e are enslaved by speed and have all succumbed to the same insidious virus: Fast Life, which disrupts our habits, pervades the privacy of our homes and forces us to eat Fast Foods.” Slow Food responds to the negativity around fast food by encouraging dialogue and conversation during the creation and consumption process, direct hands-on involvement in the creation of a food item, and uniqueness in the creation process. These elements stand in stark contrast to standardized, unhealthy products that can be robotically replicated for customers (read: students). Fittingly, the snail is the movement’s mascot.

The Slow Food movement is not restricted to food. Other fields, such as business and education, have applied its philosophies. For example, Pfeffer (2018) focuses on the corporate workplace and explains how unhealthy choices driven by fiscal uncertainty and hectic schedules impact worker performance, job satisfaction, and quality of life. Honore (2004) describes life in general as an “exercise in hurry” (p. 3). People cram as much as they can into the fewest possible minutes and reward themselves when they gain control over time. Not keeping up with time demands can be disastrous, as the fast can “eat the slow” (p. 4). Honore calls for us all to accept a slower pace; human survival is about being the “fittest” and not necessarily the “fastest” (p. 4).

Moving into education, professors Berg and Seeber (2017) illustrate how Slow Food concepts can combat the time pressures faced in higher education. Their text, The Slow Professor: Challenging the Culture of Speed in the Academy, shows how higher education demands a continuous, standardized output of satisfied customers, crafted by solitary faculty factory workers. Berg and Seeber (2017) warn that the culture of speed removes the space needed for the deep and creative thinking necessary for growth and reflection. Within the first pages of their text is a manifesto, modeled after the Slow Food movement’s manifesto, which calls for educators to cultivate “emotional and intellectual resilience” by taking time “for reflection and dialogue” with others (pp. ix-x). Because faculty exist in a state of “time poverty,” the “major obstacle to creative and original thinking...is the stress of having too much to do” (p. 28).

The Slow Food movement’s philosophies can combat the troubles of time poverty, as more time can result in creative endeavors based on conversation, casual discussions, collaboration, and reflection. Slowing our pace down can be a way to move through the time and standardization constraints facing higher education and the world at large. We directly pull from the Slow Food movement’s ideas of conversation, collaboration, hands-on involvement, and uniqueness, and we apply these elements to our course review options.

Collaboration, in particular, is writ large in this project: one author is the Director of eLearning with a background in instructional design and educational leadership, and the other is an Associate Professor of English who directs the campus writing center. While these two fields sound different, there are points of similarity in how we can review and speak back to faculty regarding their online courses. For example, writing workshop and writing center methods can bring new options to online course reviews.

The traditional writing workshop began at the University of Iowa in its internationally famous creative writing program. According to the program’s website, it has used this method since the 1930s and is the oldest creative writing program in the United States. The workshop system creates space for writers to share their work with others for feedback. Writing often makes clear sense to the writer; the challenge comes when the writing meets the reader. Writing workshops bring a group of readers to a writer. Participating in workshops helps writers learn to give up control over their writing as it makes its way into the world. Ultimately, the writing must be able to stand alone and make sense to readers.

The general process for a writing workshop starts with submitting work to readers in advance. For example, a student (the writer) provides a copy of her current draft to the teacher and the class. After allowing plenty of time for review, the class and teacher each write a letter of feedback to the writer prior to any discussion. Then the entire workshop group meets to discuss with the writer their experience of reading the work. This group could meet for hours or across multiple days. The writer stays silent and takes notes while the feedback is delivered, because it is a waste of time to defend the writing: the reader has whatever reaction the reader has, and the writer must sit with that understanding of the effect of the work. We grow as writers when we hear and understand how our writing is received by readers. The end of a workshop can be reserved for direct questions from the writer. Afterward, the writer takes in the feedback and privately decides what to use and what to abandon.

Writing center approaches use a similar focus on feedback to writers; however, the writing center has more of a teaching focus. Writing center scholar North (1984) summarized the work of the writing center as improving the writer, not just the paper. In this view, feedback is about more than the paper in hand: writing tutors help students improve larger writing processes and skills, such as invention strategies, revision activities, or the development of a paper’s focus. When a writer becomes more skilled, the papers will follow. Improvement happens over time; growth is not instantaneous. Many writing centers do not use the term “tutor” for their workers. Instead, they often use “coaches” or “consultants,” which combats the idea of a writing center being only for deficient writers. Coaches provide an outside perspective that is valuable and irreplaceable regardless of a writer’s ability.

Outside perspective can be provided in an unstructured manner, based on what a reader notices, or it can be shared via specific guidelines through the use of a rubric. Simply put, a rubric is “a format for expressing criteria and standards” (Walvoord, 2010). While rubrics are commonly used by faculty to assess student work, they may also be used to assess programs or courses. In this paper, we describe how different types of rubrics can be used with professors to improve their online courses. Therefore, in the following section, we review the history of online course reviews and rubrics.

A Brief History of Online Course Reviews and Rubrics


Reviewing courses and observing professors in their classrooms are common practices in higher education. Many institutions have evaluation procedures in place to ensure that full-time instructional faculty are providing quality instruction in their face-to-face classes. For example, at the College of Coastal Georgia, department chairs conduct classroom observations each semester and review the results with the full-time teaching faculty in their department. In addition to classroom observation, the evaluation process includes a review of student course evaluations for all courses taught and a comprehensive self-evaluation (Faculty Handbook, n.d.). However, when online courses began, there were no procedures to measure course quality or evaluate faculty.

The number of online courses and fully online programs skyrocketed in the 2000s. According to Allen and Seaman (2013), in 2002 more than 1.6 million students were enrolled in at least one online course in the United States. Expecting online enrollments to increase, administrators, faculty, students, and stakeholders began to ask, “How do we measure and guarantee the quality of an [online] course?” (A Grassroots Beginning, n.d.). The faculty at MarylandOnline (MOL), a consortium of Maryland community colleges, colleges, and universities, began looking for ways to answer this question by measuring course quality. In 2003, the consortium received a grant from the U.S. Department of Education, which led to the development of the Quality Matters (QM) rubric for course design standards. According to the QM website, the review process would “train and empower faculty to evaluate courses against these standards, provide guidance for improving the quality of courses, and certify the quality of online and blended college courses across institutions” (A Grassroots Beginning, n.d.). Online courses must achieve a score of 85 out of 100 to be considered successful.

The grant ended in 2006, but QM continued to grow at the state and national levels. Today QM operates as a non-profit organization with more than 60,000 members. According to its website, the mission of QM is to “promote and improve the quality of online education and student learning nationally and internationally” (A Grassroots Beginning, n.d.). QM is a subscription-based service. Participating institutions pay from $1,555 to $5,775, depending on the subscription type. Members have access to ongoing training, the full rubric with the general and specific standards and their assigned point values, and other resources. In addition, users benefit from the name recognition of QM, which is “a research-based national benchmark in online quality assurance” (Ensuring Online Course Quality, n.d.). Importantly, QM was designed to be a collegial process focused on continual improvement. While reviews and rubrics are two cornerstones of the QM process, “fostering a culture of continuous improvement” was a crucial ingredient (A Grassroots Beginning, n.d.).

While QM pioneers were developing their online course rubric, a similar movement was occurring on the west coast, spearheaded by California State University, Chico (CSU Chico). Like their colleagues at MarylandOnline, the faculty at CSU Chico asked, “What should a quality online course look like?” In 2003, they developed the Rubric for Online Instruction (ROI) as a framework to answer that question. According to their website, this rubric “represents a developmental process for online course design and delivery, and provides a means for an instructor to self-assess course(s) based on University expectations. Furthermore, the rubric provides a means for supporting and recognizing a faculty member's effort in developing expertise in online instruction as part of our commitment to high quality learning environments” (Rubric for Online Instruction, n.d.). CSU Chico continued to refine the rubric and expand options for course reviews, certifications, and peer reviews. The home-grown rubric is now called the Quality Learning and Teaching (QLT) instrument. With 9 sections and 54 objectives, this instrument is more comprehensive and detailed than the 6 sections of the original ROI rubric (Exemplary Online Instruction, n.d.). Unlike QM, the ROI and QLT instruments do not require a subscription or fee. They are available online under a Creative Commons Attribution 3.0 license, allowing anyone to share, adapt, redistribute, or build upon the material (Attribution 3.0 United States, n.d.). Currently, CSU Chico faculty have the option of using the QLT or QM rubric for formal course reviews.

Akin to CSU Chico, other institutions have developed their own online course review rubrics, such as Wake Technical Community College in Raleigh, North Carolina. According to Berry, Consol, and Hatcher (2019), many faculty at Wake Tech were rushed to put classes online. Faculty received little direction and lacked established best practices to follow. Additionally, they were using evaluation tools designed for seated classes. In 2014, Wake Tech launched the eLearning Preparedness Initiative across the College (EPIC). EPIC, their Quality Enhancement Plan, had the following goal: “Reduce online learning barriers and support student learning, persistence, and success in online courses” (eLearning Preparedness Across the College: A Quality Enhancement Plan, n.d.). Similar to QM and CSU Chico, Wake Tech’s approach was rooted in the desire to ensure online course quality. However, unlike QM and CSU Chico, Wake Tech took a two-pronged approach focused on both student and faculty preparedness. New online students are required to take an online course with three modules covering Expectations, LMS Skills, and Computer Skills. Students must successfully complete all three modules before registering for online classes.

To ensure faculty were prepared to teach online, the Wake Tech team developed online standards for course design and delivery and created a mandatory certification program for online instructors. As with the projects at QM and CSU Chico, the development of the standards and rubric was team-based and grounded in research on best practices in online teaching and learning. The resulting professional development materials were built on four standards: navigation and design, communication and collaboration, assessment, and accessibility. Wake Tech created four publications to help faculty implement the EPIC eLearning Standards.

First, the eLearning Quality Standards Rubric is a 17-page document containing the four standards, annotations explaining how to use the rubric, and a chart for indicating whether each standard was met (eLearning Quality Standards Rubric, n.d.). Second, the Online Course Checklist is a simple listing of the four eLearning standards. Third, the Top Ten List is a one-page list of ten strategies for student success. Finally, the Course Construction Playbook is a 30-page booklet containing key facts as well as the checklist. This booklet was created to serve as a summary document for faculty who had completed the certification course but needed a refresher (Fussell, Popp, & Jordan, 2019).

According to Berry, Consol, and Hatcher (2019), the rubric and checklist are used in three ways. First, the checklist is used to collect data in support of the QEP. Second, the rubric and checklist can be used by departments to review courses or faculty under their supervision. Third, faculty can use the checklist and rubric to update or develop their own courses.

Unlike the QM and CSU Chico rubrics, which focus on reviewing the course, the EPIC model certifies the instructor. Berry explained, “This process certifies the instructor. Once we’ve taught the professor that will extend the reach to many courses” (R. Berry, personal communication, June 26, 2019). Additionally, she continued, “This makes the process scalable and manageable. I teach six different courses. The eLearning team doesn’t have to review my six courses.” Instead, the EPIC model focuses on developing the professor, qualifying them to teach many courses. This mindset harkens back to the key writing center principle: develop the writer, not just the paper (North, 1984). If the writer improves, the paper will too. Similarly, if the online instructor improves, the courses will too.

Applying the Rubrics: A Tour of Three Rubrics

In the first part of this paper, we introduced the principles from the Slow Food movement and writing coaching that guided our thinking about the online course review process. In the sections that follow, we describe how we applied those principles using three rubrics, which we dubbed the Rigid Rubric, the Moderate Rubric, and the Non-Rubric Rubric.

For some faculty, the concept of an online course review is a mysterious process that occurs behind closed doors. To begin to unravel this mystery, we took one course we did not teach, a math course (Math 101), and reviewed it using our three selected rubrics. We stripped the course of identifying information, such as the professor’s name, and to further ensure privacy, we selected an online course that was designed and taught by someone who no longer works at our college. Each author examined the online course separately and thoroughly. For this article, we refer to the professor by the pseudonym Dr. Ima Professor.

After completing each review, we discussed how we reviewed the course and what we learned from the process. Summaries of each review follow.

Rigid Rubric

One author reviewed Math 101 using the Quality Matters Higher Education Rubric, Sixth Edition. She opened the course on the learning management system and kept a printed copy of the rubric on her desk. As she navigated the course, she looked for evidence of whether Dr. Ima Professor’s course met each standard. She made her way slowly down the list to determine whether the course met the general and specific review standards. If a standard was met, she assigned the full point value. If the standard was partially met, she awarded partial credit. If the standard was not met, no points were awarded. For example, standard 1.6 states that the computer skills and digital information literacy skills expected of learners are clearly stated. This information was not in the syllabus, so the professor earned zero points for this standard. The reviewer then tallied the points. According to the QM review process, Dr. Ima Professor’s course scored 56 out of 100 possible points. See Figure 1 for a visual representation of this review.

Figure 1. QM Rubric (aka the Rigid Rubric).
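For readers unfamiliar with how a point-based review adds up, the tally can be expressed as a short script. The sketch below is ours, not QM’s instrument: the standard labels and point values are invented for illustration, and the script simply mirrors the full/partial/zero-credit procedure described above along with the 85-point passing threshold.

```python
# A minimal sketch of a point-based course review tally.
# Standard labels and point values are illustrative only; the real
# QM rubric defines its own standards and point assignments.

PASSING_SCORE = 85  # QM treats 85 out of 100 as a successful review

rubric = {
    "1.6 Required computer/digital literacy skills are stated": 3,
    "5.1 Learning activities align with learning objectives": 3,
    "8.1 Course navigation supports ease of use": 3,
}

# Credit mirrors the procedure above: full, partial, or zero points.
credit = {"met": 1.0, "partial": 0.5, "not met": 0.0}

def score_course(evidence):
    """Tally points for each standard based on the reviewer's evidence."""
    return sum(points * credit[evidence.get(standard, "not met")]
               for standard, points in rubric.items())

evidence = {
    "1.6 Required computer/digital literacy skills are stated": "not met",
    "5.1 Learning activities align with learning objectives": "partial",
    "8.1 Course navigation supports ease of use": "met",
}

total = score_course(evidence)
print(total, "passes" if total >= PASSING_SCORE else "does not pass")
```

Whatever the exact point values, the design choice is the same: the output of a rigid review is a single number compared against a threshold.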

Although we used the QM rubric as our Rigid Rubric, we classify the CSU Chico rubrics as rigid as well because they use pre-defined standards and the process is course centered rather than faculty centered. In other words, the course is reviewed, not the professor.

The EPIC rubric, on the other hand, aligns most closely with the Moderate Rubric described below. While standards are used and checklists are provided, the spirit of Wake Tech’s model is to improve the professor rather than to review the course. The Wake Tech model is faculty focused rather than course focused.

Moderate Rubric

The same author reviewed Math 101 using a version of the QM rubric that she calls the Moderate Rubric. Like the Rigid Rubric, the Moderate Rubric is based on QM’s pre-defined standards. However, it includes only the eight general standards and a few of the essential standards. Unlike the QM rubric, the Moderate Rubric is not a fixed document with point values. It is a flexible Google document that includes descriptions of each standard as well as space for a narrative explaining what should be modified or what the professor did well. No numerical score is provided; even though the standards are listed, the feedback takes the form of a narrative.

For example, standard 5.1 is about the alignment between the learning activities and the learning objectives. Rather than assign a score, the reviewer wrote, “The learning activities are appropriate, but there is not a lot of variety. There are only homework and activities. Let’s get together and brainstorm some ways to make this course more engaging for the students.”

In addition, there is a space at the bottom for the reviewer to provide additional feedback, constructive criticism, and encouragement. For example, the reviewer wrote, “I would be happy to work with you to address these concerns. Again, you have made a good start. I look forward to working with you to make some tweaks to make this course more engaging.” See Figure 2 for the Moderate Rubric with the reviewer’s feedback.

Figure 2. Moderate Rubric.
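The shift from the Rigid Rubric to the Moderate Rubric is easiest to see as a change in what gets recorded: a narrative per standard rather than a number. The sketch below is hypothetical (the field names are ours; the sample text is quoted from the review above), but it captures the structure of the flexible document.

```python
# A sketch of narrative-first feedback: each standard carries a
# written comment instead of a point value. Field names are ours;
# the sample text is quoted from the review discussed above.

from dataclasses import dataclass

@dataclass
class StandardFeedback:
    standard: str   # the QM general standard under discussion
    narrative: str  # what works, what to modify, and next steps

review = [
    StandardFeedback(
        standard="5.1 Alignment of learning activities and objectives",
        narrative=("The learning activities are appropriate, but there is "
                   "not a lot of variety. Let's get together and brainstorm "
                   "some ways to make this course more engaging."),
    ),
]

closing = ("I would be happy to work with you to address these concerns. "
           "Again, you have made a good start.")

for item in review:
    print(f"{item.standard}\n  {item.narrative}\n")
print(closing)
```

Note that nothing here is compared against a threshold; the output is prose addressed to a colleague, not a score.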

Non-Rubric Rubric

Two years ago, one of the authors asked the other in a casual hallway conversation if she had ever used a letter format or a writing workshop format to respond to online courses. The answer was no, and this entire project was born. The Non-Rubric Rubric merges the writing workshop method and the writing center theories discussed earlier. One of the authors had trouble relying heavily on numbers; for example, she wondered how a 2 might differ from a 2.2 or a 2.5, and she worried that different raters could have different opinions about what a number represented. She worried about reducing the review to just a number.

Instead, she removed the numerical aspect by using a letter format to respond directly to the faculty member after reviewing the online class. The letter contains the same types of information as the rubrics, such as discussions of content, clarity, and missing information; however, the feedback is not numerical. Instead, the letter synthesizes the main strengths and weaknesses found within the online course to provide faculty with rich details about their materials. The letter begins with gratitude toward the faculty member for sharing the course and cultivating this feedback experience. It then provides an overview of the main content points and launches into a detailed discussion of each. For example, where there is a lack of detail, the letter names the lacking point, explains why it is problematic, and provides examples for repair. The letter is organized with headers and bullet points, so the material is easy to read. Most importantly, the letter can emphasize what the faculty member has done well and how to transfer that positivity elsewhere in the class. The letter ends with gratitude and an invitation to continue talking.

Much like the writing workshop model, this letter builds on drafts and revision. The faculty member is encouraged to make adjustments and continue the conversation with the reviewer. Contact information is provided, so there is no mystery about who is reviewing the online course.

Figure 3. Non-Rubric Rubric

After completing the three reviews, we cut and pasted the text from each into a wordle. This text included criteria points (e.g., the course website included a personalized greeting) as well as written responses filled in by reviewers (e.g., the newsfeed provided a personalized welcome message for students). According to Wordle (2014), a wordle is a visual map representing the frequency of concepts in a “word cloud.” The more a word is used, the larger it appears in the word cloud. This tool helps people quickly find patterns within texts, as larger words stand out. Users can change the shape, font, size, and colors, which can highlight different themes present in the text.
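The mechanics behind a word cloud are simple frequency counting. The sketch below, using only the Python standard library and invented sample text, shows the counting step; a tool like Wordle then maps each count to a font size and handles layout and color.

```python
# A minimal sketch of the frequency counting behind a word cloud.
# The sample text is invented; Wordle adds layout, fonts, and color.

import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of", "to", "is", "for"}

def word_frequencies(text):
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS)

text = ("The course website included a personalized greeting. "
        "The newsfeed provided a personalized welcome message for students.")
freqs = word_frequencies(text)

# The more often a word appears, the larger it is drawn.
biggest = max(freqs.values())
font_sizes = {word: round(12 + 36 * count / biggest)
              for word, count in freqs.items()}
print(freqs.most_common(3))
```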

Our three word clouds are pasted below:

Figure 4. Rigid Rubric Wordle.

Figure 5. Moderate Rubric Wordle.

Figure 6. Non-Rubric Rubric Wordle.

Results and Discussion

In addition to using the wordles to study the rubrics, we used three more wordles to capture our initial reactions to the rubrics. We created a list of adjectives describing each rubric (e.g., rigid, wordy, personal) and pasted each list into a wordle to see what themes emerged. These three word clouds were then used in conjunction with the other wordles to develop descriptive themes for each rubric, reflecting commonalities and differences, which are discussed below.

There were three major commonalities within the three rubrics: students/learners, course or content, and clarity. Most elements were present in all three rubrics, but the elements were presented in different ways.

Commonalities. The first theme showed that all three wordles discussed the people in the courses: the students or the learners. The Rigid Rubric referred to students in the class as “learners,” while the Moderate Rubric and Non-Rubric Rubric used the term “students.” All three rubrics focused on the student/learner, looking at points regarding access to information, clarity, and transparency. For example, the Non-Rubric Rubric has a section that discusses “access to the information.” The terms “learner” and “student” figured prominently in the word clouds, showing a high emphasis on these elements along with heavy use in the language of the rubrics.

Second, all three rubrics focused on the content of the course. There were some differences in word choice (e.g., “the course” vs. “the course content”), but the focus is on the material and content presented to users. For example, the rubrics assessed whether the content was detailed enough for students to follow instructions or understand shared material.

Finally, all three rubrics were highly focused on clarity. They examined clarity of instruction, clarity of information, and clarity of processes, such as how to log in to supplemental materials. For example, the Rigid Rubric asks whether communication expectations are clearly communicated, and the Non-Rubric Rubric highlights a clarity failure in terms of grades as well as “read me first” tabs.

Differences. There were also three major differences among the rubrics: power, intention, and point of view.

The first difference concerned power. The Rigid Rubric sets up a clear power dynamic between reviewer and faculty member. Reviewers are in a position of authority and power. The reviewer has the specific intention of determining whether the online class earned a high enough score on the rubric, which can cast the reviewer in a surveillance role. The faculty member either passes or fails, which could result in negative evaluations and/or loss of pay. For example, some faculty are paid to design and teach an online class; if the class does not meet the standards, the result could be a loss of income or a decision not to rehire or retain the faculty member. The reviewer might have been hired to complete the review, so in many cases the reviewer could be an outsider with no local context. In some instances, the review might occur because of student complaints, with an outside person conducting the formal review using the Rigid Rubric.

The Non-Rubric Rubric and the Moderate Rubric are less aggressive in pitting the reviewer against the faculty member. These two rubrics are less focused on numbers and a pass/fail response. Instead, they focus on the positives and negatives within the course, such as whether the course works to build connections between students and faculty. There are many levels of performance, instead of pass or fail. In addition, the Moderate Rubric and the Non-Rubric Rubric provide the faculty member with more verbal commentary about the work; there is space in both for more words than numbers. Instead of only assigning a number, the reviewer provides written feedback about each category. For example, instead of receiving a “2,” faculty members could receive specific information about how to craft stronger welcome messages to improve clarity.

The second difference concerns intention. After examining the wordles and rubrics closely, we found differences in intention. The Non-Rubric Rubric uses language and strategies that cultivate a relationship between the reviewer and the faculty member, as well as suggestions for improving relationships between the faculty member and the students. There is a personalized greeting, expressions of gratitude by the reviewer, and a clear announcement of the reason for the review. In contrast, the Rigid and Moderate Rubrics use language that is hyperfocused on assessment or surveillance; with the Rigid Rubric in particular, reviewers provide a pass/fail score that reduces feedback to a number. Money and performance evaluations may be affected if the number is not right, and the Rigid Rubric could be used as a tool to retain or reject faculty. Since there is no personalization or relationship built into the Rigid Rubric, the assessment can feel depersonalized, and reviewers can go “on autopilot” (Wilson, 2006, p. 39).

While all three rubrics certainly work to improve instruction in the online course, they go about it differently. Upon examination of the wordles, the largest, and thus most frequently used, word in each is easy to identify. For the Rigid and Moderate Rubrics, words like course, standards, competencies, and objectives are large. For the Non-Rubric Rubric, the term “students” is enormous. It is the most frequently used term because the Non-Rubric Rubric focuses on how the course was experienced. Since students are the ones experiencing the course, the focus is on how the material would be received by a student. This point harkens back to the writing workshop’s approach to audience. The writer might have many intentions, but the document has to stand on its own for readers. Likewise, the faculty member needs to focus on the course experience delivered to students; the class has to be able to stand on its own. Here, the review can shift from surveilling the isolated course to serving the students and the faculty member. This approach is long-term: the stronger the faculty member gets, the better the courses will be in the future. The intention is not punitive or based on surveillance; instead, the intention of the Non-Rubric Rubric is to build on what is already present and strong.

Finally, there was a distinct difference in point of view among the three rubrics. The Rigid Rubric and most of the Moderate Rubric rely on the third person, which portrays a more objective presentation of the material. For example, an evaluation criterion might say, “Learners are introduced to the purpose…” or “The instructor provided a course greeting.” There is no reference back to the reviewer; instead, focus is placed on the faculty member and/or the online course. Word choices are almost sterilized. Contrast this viewpoint with the Non-Rubric Rubric, which is dominated by the first and second person, such as “I appreciate the opportunity” or “You do a good job.” First- and second-person points of view create an intimacy between the writer and the reader. When the reviewer uses “I,” readers get insight into what the writer is thinking or feeling. When the reviewer uses “you,” the writer creates a relationship with the reader, as if a conversation were occurring. First- and second-person points of view highlight the reviewer’s experience with the course. Sometimes this perspective may be desired, and sometimes it is extraneous. A key consideration is the relationship reviewers wish to create with the faculty member. If objectivity and distance are desired, the Rigid Rubric and the Moderate Rubric are good choices. If subjectivity and relationship building are desired, the Non-Rubric Rubric can be a good option.

Recommendations

We offer several recommendations for faculty, instructional designers, administrators, and others involved in the course review process. First, if you are using a rubric similar to our Moderate Rubric, think about how you could revise it. Could you add more written feedback? Could you change the tone of the narrative so it is more focused on improvement? Carefully examine the language, tone, and point of view, and see if you could personalize the feedback by using the first and second person. Also consider letting faculty choose how they want their course reviewed as well as which rubric to use: some faculty may prefer a simple, objective score, others the narrative format of the Non-Rubric Rubric, and still others may lean toward the Moderate Rubric. Finally, look at how you are teaching people to use the rubric, and provide them with examples of a completed Moderate Rubric with narratives that are well written, gracious, and focused on improvement.

Second, change how you deliver the rubric and feedback. We recognize that some schools are locked into a certain type of rubric, such as Wake Tech, whose rubric is tied to the goals of its Quality Enhancement Plan. Even if you are locked into one form of feedback, there are still ways to change its delivery, such as a video conference instead of an email, or a face-to-face meeting instead of interoffice mail. Think about how you could make the interaction meaningful for both the reviewer and the professor. Consider where the conversation will occur, because of the power dynamics associated with offices and meeting rooms: rather than meeting in someone’s office or a sterile conference room, suggest meeting over a cup of coffee in the bookstore.

Another option involves the electronic conversation centered on the review. All reviewers, no matter what type of rubric they use, can write an email full of gratitude that emphasizes the purpose of the course review process. It takes only a few minutes to create a personalized environment and set a positive tone, and a few kind words can go a long way in creating and sustaining a positive working environment that honors the person over the product.

Our final suggestion puts a new spin on quality: reviewers and writing faculty can partner to conduct a writing workshop on an online class. For example, a partial-day faculty development event could apply the writing workshop method described earlier to an entire online class. The workshop begins with a faculty member volunteering to submit an entire online class for review. Workshop attendees slowly review the online class one week in advance and craft a letter to the volunteer faculty member highlighting the positives and negatives they experienced as a learner in the class. At the workshop, the volunteer pulls up the online class and prepares to listen well to the reactions and discussions from attendees. The entire workshop focuses on strong and weak practices, why they are or are not working, and how the receiver of the course, the student, might experience it. The volunteer takes notes and answers questions. Much like the writing workshop method, the attendees have the reaction they have; the course must be able to stand on its own and be understood by a variety of users. In this manner, the writing workshop serves as a model not at the paper level but at the level of the entire course.

Conclusion

In an era of increasing accountability and assessment initiatives, how courses are evaluated can be a way through pressures from upper administration and initiative fatigue. Higher education has become a corporatized entity, with external checks and balances, surveillance, and economics-based choices instead of pedagogy-based choices. Courses are becoming more standardized so that content can be regulated and delivered by automatons. In the face of these movements, providing multiple evaluation options can be a way through the murky, top-down, one-size-fits-all approach. Our work provides reviewers and instructors with more choices. These options can be selected and developed based on the needs of individual users, which cannot always be predicted by people outside the reviewing experience. People who are actually involved in the process ought to be able to contribute to how they receive feedback. We respond to the corporatization of higher education by broadening our evaluation and feedback measures when possible, instead of shrinking them to a one-size-fits-all instrument that forces higher education to conform to a standard created by someone external to the process. Instead, we require the instruments to conform to our unique needs.


References

A Grassroots Beginning. (n.d.). Retrieved from: https://www.qualitymatters.org/why-quality-matters/about-qm

Aguilar, E. (2018). Onward: Cultivating emotional resilience in education. San Francisco, CA: Jossey-Bass.

Allen, I. E., & Seaman, J. (2013). Changing course: Ten years of tracking online education in the United States. San Francisco, CA: Babson Survey Research Group and Quahog Research Group, LLC.

Attribution 3.0 United States (n.d.). Retrieved from: https://creativecommons.org/licenses/by/3.0/us/

Berg, M., & Seeber, B. K. (2017). The slow professor: Challenging the culture of speed in the academy. Toronto: University of Toronto Press.

Berry, R., Consol, A., & Hatcher, J. (2019). Course review vs. faculty evaluation for personal continuous improvement. Paper presented at the Distance Learning Administration Annual Conference 2019, Jekyll Island, GA.

Braidotti, R. (2006). Transpositions: On nomadic ethics. Cambridge, UK: Polity Press.

Conference on College Composition and Communication. (2018). Position statements. Retrieved from http://cccc.ncte.org/cccc/resources/positions

Crowdsourcing the future of higher education. (2019). New Models of Learning Task Force web site. Retrieved from: http://afternext.completega.org/

eLearning Preparedness Across the College: A Quality Enhancement Plan. (n.d.). Retrieved from: https://www.waketech.edu/sites/default/files/ieandresearch/QEPProposalEPIC-FINAL_0.pdf

eLearning Quality Standards Rubric. (n.d.). Retrieved from https://www.waketech.edu/sites/default/files/page-file-uploads/EPIC-Quality-E-Learning-Standards-2018_0.pdf

Ensuring Online Course Quality. (n.d.). Retrieved from: https://cei.umn.edu/ensuring-online-course-quality

Exemplary Online Instruction. (n.d.). Retrieved from: https://www.csuchico.edu/eoi/

Faculty Handbook. (n.d.). Retrieved from: https://www.ccga.edu/

Faculty Resources. (n.d.). Retrieved from: https://www.waketech.edu/epic/publications

Fussell, K., Popp, J., & Jordan, C. (2019). Preparing faculty for EPIC results: Online faculty certification and assessment. Paper presented at the Distance Learning Administration Annual Conference 2019, Jekyll Island, GA.

Gallagher, C. W. (2016). What writers do: Behaviors, behaviorism, and writing studies. College Composition and Communication, 68(2), 238-265.

Gray, J. (2019). ‘It’s my closest friend and my most hated enemy’: Students share perspectives on procrastination in writing classes. Student Success in Writing, 1(1).

Honore, C. (2004). In praise of slowness: Challenging the cult of speed. New York: Harper.

North, S. M. (1984). The idea of a writing center. College English, 46(5), 433-446.

Pfeffer, J. (2018). Dying for a paycheck: How modern management harms employee health and company performance—and what we can do about it. New York: Harper Collins.

Rose, M. (2012). Back to school. New York: The New Press.

Rubric for Online Instruction. (n.d.). Retrieved from: https://www.csuchico.edu/eoi/_assets/documents/rubric.pdf

Slow Food USA. (2018). Manifesto. Retrieved from https://www.slowfoodusa.org.

Walvoord, B. E. (2010). Assessment clear and simple: A practical guide for institutions, departments, and general education. San Francisco, CA: Jossey-Bass.

Wilson, M. (2006). Rethinking rubrics in writing assessment. Portsmouth, NH: Heinemann.

Wordle. (2014). Wordle. Retrieved from: http://www.wordle.net


Online Journal of Distance Learning Administration, Volume XXII, Number 4, Winter 2019
University of West Georgia, Distance Education Center