Uniformed Services University of the Health Sciences
Online programs are growing in number, and their success and sustainability depend on the quality of the courses they offer. Online program administrators need to ensure the quality of individual courses and of the program as a whole. While several course quality rubrics and evaluation instruments are available, administrators must select an instrument that best suits the needs of their institution. In addition, administrators need to secure faculty “buy-in” to the process to ensure its sustainability. This paper presents a participatory approach to developing a course quality rubric and designing a review process at both the individual course level and the programmatic level.
Student enrollment in online courses has reached a period of sustained growth as institutions cater to the needs of an increasingly non-traditional college student demographic (Seaman, Allen, & Seaman, 2018). Nearly a third of all college students are now enrolled in at least one online course (Seaman, Allen, & Seaman, 2018). The success and growth of individual online programs depend on the quality of the courses they offer (Wang, 2006). The need for quality control in online courses is now widely accepted and mandated by state and federal regulators and accrediting bodies (Wang, 2006). To ensure quality control, a plethora of course review rubrics have been developed. Recently, Baldwin and Trespalacios (2017) identified 28 rubrics, also called evaluation instruments, currently in use at various levels, including national, state-wide, and institutional levels. These instruments have provided measurable metrics for assessing course quality.
Institution-specific instruments are more frequently in use than national or organizational-level rubrics (Baldwin & Trespalacios, 2017). Institution-specific instruments can address the unique needs of each institution, and definitions of “high quality” can vary between and even within institutions (Bazluki, Gyabak, & Udermann, 2018). There are several examples of how various institutions have designed, developed, and implemented course review rubrics (Blood-Siegfried et al., 2008; McGahan, Jackson, & Premer, 2015; Ozdemir & Loose, 2014).
Developing and implementing institution-specific rubrics poses challenges for online program administrators. First, the instrument should have the capacity not only to assess the quality of individual courses but also to provide quality assurance across the program. Second, course review rubrics are often viewed by faculty as an administrative strategy to surveil their teaching practices (Goolnik, 2012), especially when instructional designers and administrators develop the rubrics without input from faculty. Third, mistrust of the rubric increases the challenge of getting faculty to own the process and accept these review tools as strategies for continuous improvement rather than faculty evaluation (Oliver, 2003, p. 89). If these challenges are not addressed, the result is course reviews that exist as discrete units with little participation from faculty. In the long term, this is detrimental to the quality of online courses.
The literature includes research on the development of course review rubrics (Blood-Siegfried et al., 2008) and the implementation of course reviews in various contexts (Hoffman, 2012; Little, 2009; Woods, 2014). Faculty want to be included in the development and deployment of online initiatives (Betts & Heaston, 2014). However, there is minimal research on engaging faculty in the course review process so that they can take ownership of the process and create a sustainable system. Thus, this paper presents a participatory approach to designing and implementing a course review rubric that was led by faculty, ensuring ownership and acceptance of the process.
The Uniformed Services University of the Health Sciences (USUHS) is a federal university operating under the United States Department of Defense (DoD). In 2016, the University launched the online Graduate Program in Health Professions Education (HPE) to cater to the needs of military medical personnel stationed around the country and the world. The HPE learners are almost all active-duty military or federal health professionals (e.g., physicians, dentists, nurses) associated with USUHS. At a minimum, the learners have earned an MD or a master’s degree prior to matriculation in the HPE program. They are faculty in the health professions, and more than 95% are practicing health professionals. The program now has 13 faculty members and 120 students.
The HPE program directors recognized from the outset that the program needed a reliable quality assurance instrument that could not only assess the quality of an individual course and guide course design but also provide quality assurance at a higher programmatic level. Furthermore, they required a tool scalable enough to accommodate the needs of a growing program and dynamic enough to handle the constantly changing technological landscape (Wang, 2006). USUHS is a subscribing member of the Online Learning Consortium (OLC), so the HPE program had access to various rubrics developed and adopted by the OLC. For online course reviews, OLC provides the Open SUNY COTE Quality Review (OSCQR) rubric (https://onlinelearningconsortium.org/consult/oscqr-course-design-review).
The OSCQR rubric was developed by the Open SUNY Center for Online Teaching Excellence and is openly licensed for use and adaptation. The tool was informed by other rubrics, such as the Chico rubric, Quality Matters, and the iNACOL National Standards for Quality Online Courses (Pickett, 2015). Theoretically, it draws variously from the Community of Inquiry model (Garrison, Anderson, & Archer, 2000), the Seven Principles for Good Practice in Undergraduate Education (Chickering & Gamson, 1987), and The Adult Learner (Knowles, 1990). OSCQR assesses 71% of the seven principles of good practice, placing it among the top two national-level rubrics to do so (Baldwin & Trespalacios, 2017). The OSCQR rubric comprises six sections (see Table 1) representing a total of 50 review criteria. The OSCQR process provides a comprehensive approach to course and program evaluation. It includes an interactive rubric tool to assess individual courses, robust resources for each criterion in the rubric, and a dashboard tool that aggregates all course reviews and provides a program-level overview of course quality.
As a DoD affiliate, USUHS must adhere to strict security guidelines for all software used within the university system. Therefore, the interactive rubric and dashboard tool could not be used within the university’s technological ecosystem. Furthermore, considering the learner population of the HPE program, administrators realized that implementing the rubric in its original form would not benefit the program. For example, the rubric prioritizes accessibility criteria as “Essential.” While accessibility is certainly important, these were not criteria that HPE needed to address immediately, given the physical performance requirements placed on its learners. The program preferred to categorize them as “Important,” recognizing that they needed to be addressed, but not immediately.
The following sections present the process by which the OSCQR review tool was adapted and adopted by the HPE program through a participatory faculty consensus process. Changes made to the tool are highlighted along with the implementation process and initial results.
The program directors identified adopting and utilizing a course review tool as a necessity for the program to meet accreditation requirements. A participatory approach framework was used to operationalize the initiative. The basic premise of this approach is that the people affected by a change should have a voice in designing and implementing it (Russ, 2008). Because utilizing a course review tool was a new initiative that required faculty to embrace it and feel comfortable using it, it resembled a change initiative, which led to the adoption of the participatory approach as a conceptual framework. A participatory approach involves stakeholders in the change process by soliciting their input during the development, decision-making, and implementation phases.
Russ (2010) explains that participatory change contexts have "general overarching change objectives" (p. 774); they encourage stakeholders to "discover and build their own theories about the change and implementation process" (p. 774); there is a process focus rather than an outcome focus; and, they are not prescriptive but flexible to incorporate stakeholder voices.
The adaptation process was a grassroots effort by a three-member HPE faculty team composed of AS, RC, and LM (referred to as the “lead team” from here on). All three hold doctoral degrees in education and represent the three levels of the faculty hierarchy: an assistant professor, an associate professor, and a full professor. RC and LM had both attended Quality Matters (QM) workshops, and AS is a certified QM reviewer. AS is also experienced in instructional design and various technology tools. In addition, all three had worked with other online course review rubrics and were familiar with several tools, ensuring that they brought a broad perspective to the adaptation process.
The lead team reviewed the literature on online course review processes. Through dialogue with other HPE stakeholders, including the program directors and faculty, the lead team identified four key objectives for the course review process.
The lead team reviewed each criterion of OSCQR through the lens of the program’s unique learner population and faculty. They proposed changes to the rubric, and these suggestions were sent to all HPE faculty. A meeting was then convened to foster an open dialogue in which faculty could provide feedback on the revisions. During these discussions, the faculty reached consensus on key issues; such consensus is an essential element of the participatory approach (Russ, 2008). Faculty requested a separate section addressing the alignment of course activities and assessments with learning objectives, agreed that the categorization of criteria as “Essential” or “Important” could be revised, and suggested moving all criteria related to technical elements into a separate section. The lead team incorporated these suggestions and revised the rubric twice before the program faculty fully approved it. “Essential” criteria were defined as needing to be addressed immediately, while “Important” criteria needed to be addressed, but not immediately. Table 1 compares the original OSCQR rubric sections with the sections in the adapted HPE rubric.
Table 1: OSCQR vs. HPE Rubric
Implementation of the course review process occurred on two levels: the course level and the programmatic level.
The lead team crafted an initial implementation plan, which was sent to HPE faculty for review and discussion. Through dialogue, faculty decided that each course would be reviewed by three to four reviewers: the course faculty, peer faculty (i.e., those teaching in the HPE program but not instructors of the course under review), a member of the lead team, and a current HPE learner or alumnus. Including the course faculty as the first reviewer allowed them to reflect on their own course, while peer faculty benefited from seeing how other courses were structured and could bring a fresh perspective. The learner perspective was crucial to providing a genuinely inclusive review, and, as end-users of the course, their experiences were informative. Finally, the lead team perspective was valuable because, as noted, members had detailed expertise in the capabilities of the institution’s technology, the design of all the courses in HPE, and course design as implemented at external institutions.

In the rubric, the “Technical Elements” section was marked as optional for faculty and learner reviewers. This was in response to faculty concerns that items such as font size, font type, and technology tools required a level of technical expertise that faculty reviewers might not possess. Making the “Technical Elements” section optional was also a selling point in getting faculty to adopt the course review process.

A crucial final decision concerned how results would be reported and what impact they would have on faculty evaluations. As the focus was on developing quality courses, it was agreed that individual course results would not be reported at the individual level. The program director would receive only an aggregate report from AS, who administered the review process. Once these items had been agreed upon, AS developed the rubric in Google Sheets, using the SUNY model as an exemplar.
The sheet provided space for all reviewers and aggregated reviewer comments in an ‘Action Items’ section.
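The sheet can be pictured as one row per rubric criterion, with each reviewer's rating and comment recorded alongside. The sketch below illustrates, in Python, how reviewer comments might roll up into an “Action Items” list; the reviewer roles, criteria, and rating labels are illustrative assumptions, not the program's actual sheet structure.

```python
# Hypothetical model of one course's review sheet: a row per rubric criterion,
# with each reviewer's (role, rating, comment). All values here are invented
# examples, not HPE data.
rows = [
    {"criterion": "Learning objectives are stated clearly",
     "reviews": [("Course faculty", "Met", ""),
                 ("Peer faculty", "Minor Revision", "Objectives missing in Module 3"),
                 ("Learner", "Minor Revision", "Hard to locate the objectives")]},
    {"criterion": "Navigation is intuitive",
     "reviews": [("Course faculty", "Met", ""),
                 ("Peer faculty", "Met", ""),
                 ("Learner", "Met", "")]},
]

def action_items(sheet_rows):
    """Collect comments on any criterion at least one reviewer flagged for revision."""
    items = []
    for row in sheet_rows:
        flagged = [(who, comment) for who, rating, comment in row["reviews"]
                   if rating != "Met"]
        if flagged:
            items.append({"criterion": row["criterion"], "comments": flagged})
    return items

for item in action_items(rows):
    print(item["criterion"])
    for who, comment in item["comments"]:
        print(f"  - {who}: {comment}")
```

In this sketch, criteria every reviewer rates “Met” drop out automatically, so the “Action Items” section surfaces only the points that need discussion in the debriefing meeting.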
Figure 1 presents the timeline for the review process. A course review begins three weeks after the end of the course. During week one, AS meets with the course faculty and explains the process. She shares the review sheet, explains the different criteria, and directs the faculty to the support resources embedded in the rubric. She also highlights a section on the sheet where course faculty can request specific feedback from other reviewers. In weeks two and three, AS identifies reviewers for the course. The reviewers are given weeks four through nine to complete their reviews. In week 10, AS meets with the course faculty again. Together, they work through the “Action Items” section, and the course faculty design an action plan, making final decisions on which action items to address first and which to defer; the priority rankings help guide these decisions. Once an action plan has been decided upon, AS creates a comprehensive course review report (see Appendix A) that helps course faculty when they are making changes to their course.
Figure 1: Timeline for course reviews
While it is important to ensure the quality of individual courses, from an administrative perspective it is equally important to get an aggregate picture of how the program as a whole is meeting quality assurance standards. To this end, AS created a dashboard for the HPE course reviews, again following the OSCQR dashboard example. However, since AS was recreating it, there was flexibility to change the priority ratings for criteria, adjust reporting metrics, and create the data visualizations best suited to the HPE program. During the individual course review process, AS tracks reviewer progress through the dashboard.
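Conceptually, the dashboard is an aggregation layer over the individual course review sheets. The following Python sketch shows one way such a roll-up could work; the records, criteria, and rating scale are hypothetical stand-ins for the program's actual Google Sheets data, not the HPE implementation itself.

```python
from collections import defaultdict

# Hypothetical review records: (course, criterion, priority, rating).
# The rating labels mimic a rubric-style scale and are invented examples.
reviews = [
    ("HPE-501", "Course includes a welcome announcement", "Essential", "Met"),
    ("HPE-501", "Objectives align with assessments", "Essential", "Minor Revision"),
    ("HPE-502", "Course includes a welcome announcement", "Essential", "Met"),
    ("HPE-502", "Tables are used appropriately", "Important", "N/A"),
]

def program_summary(records):
    """Aggregate course-level ratings into a program-level count per criterion."""
    summary = defaultdict(lambda: defaultdict(int))
    for course, criterion, priority, rating in records:
        summary[(criterion, priority)][rating] += 1
    return {key: dict(counts) for key, counts in summary.items()}

for (criterion, priority), counts in program_summary(reviews).items():
    total = sum(counts.values())
    met = counts.get("Met", 0)
    print(f"[{priority}] {criterion}: {met}/{total} courses rated 'Met'")
```

Grouping by criterion rather than by course is what turns individual reviews into a programmatic view: a criterion rated poorly across many courses signals a program-wide gap, while frequent “N/A” ratings flag criteria that may not be relevant to the program at all.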
The faculty agreed that each course would initially be reviewed twice: the first review to establish a baseline, and the second to assess the efficacy of changes made to the course. If items were still identified for action across those two iterations, the course would be reviewed a third time. For ongoing maintenance, it was decided that established courses would be reviewed once every three years and that all new courses and courses with major changes would also be reviewed.
The course review process was focused on the outcome of developing quality online courses. Therefore, the results of individual course reviews are not shared with the program director or other administrators. Program administrators are presented with a summary view of the performance of HPE courses across different criteria. This ensures that the review tool is not used as a part of faculty evaluation but remains solely a tool for course improvement.
Figure 2: Sample data charts
During the course debriefing meetings, informal feedback has been solicited from faculty regarding the tool and the process. Feedback from faculty has been overwhelmingly positive. Faculty reported that the review process provided them with a space to reflect on and critically assess their courses. Following a review, instructors felt confident that the review feedback would help them improve their online courses, which reflects findings in other studies (Bazluki et al., 2018). Most faculty incorporated the feedback they received and made changes to their courses. Faculty also asked about features they had seen in courses they reviewed and wanted to incorporate into their own courses.
Discussion and Conclusions
As a relatively small program, HPE has been able to effectively utilize the participatory approach to designing and implementing a course review process. This approach has been beneficial on multiple levels. First, the participatory approach proved effective in getting faculty buy-in for the course review process. Drawing from Russ’ (2010) participatory approach allowed all stakeholders to have a voice in the process. The focus was placed on identifying “overarching objectives” (p. 774), and all stakeholders were involved in developing the tool and designing the implementation process. This approach has allowed the focus to remain on the process of course review and improvement. In addition to the inclusive nature of the design and development process, the course review is also a peer-to-peer process. Faculty both give and receive constructive feedback. This has created an environment where everyone is invested in the success of the program rather than in their individual courses alone. Being exposed to each other’s courses has also resulted in faculty learning from each other and emulating best practices. This cross-pollination of ideas ensures a stronger program as a whole. For administrators, this process has helped secure faculty commitment to the long-term maintenance of the process.
Second, the selection of the openly licensed OSCQR rubric has ensured that the program bases course design quality on industry-accepted standards. As an open tool, it has afforded the flexibility to adapt it to USUHS’s unique needs. Implementation of the course review rubric has helped the HPE program offer well-designed, consistent, and easily navigable online courses (Hixon et al., 2016). The SUNY OSCQR process recommends completing reviews before courses are launched. However, beginning the course review after the completion of a course has removed time pressure, and faculty can use the review process as a reflective tool. Strictly adhering to reporting only aggregate results to the program director has ensured that faculty do not view the process as an evaluation of personal performance.
Third, at the programmatic level, adapting the OSCQR process has provided access to the robust support resources provided by SUNY while simultaneously allowing for the creation of a tool unique to the USUHS context. The dashboard compiles data allowing for (1) quality assurance across all courses in the program and (2) longitudinal analysis of compliance over several course offerings. Although assuring individual course quality is essential, at a programmatic level administrators need to ensure that quality is consistently maintained across all course offerings, and the dashboard provides this broad view of the program. For example, the number of “Not Applicable” ratings in the initial “Technical Elements” results raises questions about the relevance of those criteria to the program. Alternatively, it could simply mean that, for example, most HPE faculty do not use tables in their course sites. Assessments such as this are only possible through the dashboard view. The course review report generated at the end of every review guides the course redesign and serves as a historical record of the development and improvement of the course over time.
Finally, the rubric functions not only as an evaluation instrument but as a checklist when developing new courses. Faculty can use the rubric to guide their design, thereby ensuring a quality course. The rubric is not intended to highlight flaws in courses. Instead, it is a tool to assist faculty in developing their online courses.
There are some limitations to this process. In large institutions, the participatory approach can become a long, drawn-out process as it seeks to ensure that everyone’s voice is heard during development. By the time the process is implemented, faculty who helped develop the rubric may have been replaced by new members who were not part of the original effort. Technical expertise and robust administrative support are also needed to develop the tool and implement the process. Furthermore, because the process is institution-specific, particularly given the unique military learners in USUHS programs, the same outcomes reported here may not apply at other institutions, though the process itself can be adapted for different contexts.
Institution-specific quality control instruments can assure the quality of individual online courses while providing quality assurance across the program, an essential capability as online programs continue to grow in supply and demand. The participatory, faculty-led course review process in the HPE program at USUHS provides evidence of this effect. Using the input of multiple experts, the buy-in of multiple stakeholders, and an iteratively refined review process, HPE built a course review rubric that strengthens the program’s performance at both the course and programmatic levels. Other programs might consider how they can develop similar institution-tailored quality control instruments based on these findings.
Baldwin, S. J., & Trespalacios, J. (2017). Evaluation instruments and good practices in online education. Online Learning, 21(2). https://doi.org/10.24059/olj.v21i2.913
Bazluki, M., Gyabak, K., & Udermann, B. (2018). Instructor Feedback on a Formal Online Course Quality Assurance Review Process. Online Journal of Distance Learning Administration, 21(2).
Betts, K., & Heaston, A. (2014). Build it but will they teach? Strategies for increasing faculty participation & retention in online & blended education. Online Journal of Distance Learning Administration, 17(2), n2.
Blood-Siegfried, J. E., Short, N. M., Rapp, C. G., Hill, E., Talbert, S., Skinner, J., ... & Goodwin, L. (2008). A rubric for improving the quality of online courses. International Journal of Nursing Education Scholarship, 5(1), 1-13.
Chickering, A. W., & Gamson, Z. F. (1987). Seven principles for good practice in undergraduate education. AAHE Bulletin, 39(7), 3-7.
Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2-3), 87-105.
Goolnik, G. (2012). Change management strategies when undertaking eLearning initiatives in higher education. Journal of Organizational Learning and Leadership, 10(2).
Hixon, E., Barczyk, C., Ralston-Berg, P., & Buckenmeyer, J. (2016). Online Course Quality: What Do Non-traditional Students Value? Online Journal of Distance Learning Administration, 19(4).
Hoffman, G. L. (2012). Using the Quality Matters rubric to improve online cataloging courses. Cataloging & Classification Quarterly, 50(2-3), 158-171.
Knowles, M. (1990). The adult learner: A neglected species. Gulf Publishing.
Little, B. B. (2009). The use of standards for peer review of online nursing courses: A pilot study. Journal of Nursing Education, 48(7), 411-415.
McGahan, S. J., Jackson, C. M., & Premer, K. (2015). Online course quality assurance: Development of a quality checklist. InSight: A Journal of Scholarly Teaching, 10.
Oliver, R. (2003). Exploring benchmarks and standards for assuring quality online teaching and learning in higher education. Proceedings of the Open and Distance Learning Association of Australia Biennial Forum, 79-90. http://ro.ecu.edu.au/ecuworks/3279
Ozdemir, D., & Loose, R. (2014). Implementation of a quality assurance review system for the scalable development of online courses. Online Journal of Distance Learning Administration, 17(1).
Pickett, A. M. (2015). The Open SUNY COTE quality review (OSCQR) process and rubric. Online Learning Consortium. https://secure.onlinelearningconsortium.org/effective_practices/open-suny-cote-quality-review-oscqr-process-and-rubric
Russ, T. L. (2008). Communicating Change: A Review and Critical Analysis of Programmatic and Participatory Implementation Approaches. Journal of Change Management, 8(3–4), 199–211. https://doi.org/10.1080/14697010802594604
Russ, T. L. (2010). Programmatic and participatory: Two frameworks for classifying experiential change implementation methods. Simulation and Gaming, 41(5), 767–786. https://doi.org/10.1177/1046878109353570
Seaman, J. E., Allen, I. E., & Seaman, J. (2018). Grade Increase: Tracking Distance Education in the United States. Babson Survey Research Group.
Wang, Q. (2006). Quality Assurance - Best Practices for Assessing Online Programs. International Journal on ELearning, 5(2), 265–274.
Woods Jr, D. R. (2014). Applying the Quality Matters (QM)™ Rubric to Improve Online Business Course Materials. Research in Higher Education Journal, 23.
Online Journal of Distance Learning Administration, Volume XXIII, Number 3, Fall 2020
University of West Georgia, Distance Education Center