Implementation of a Quality Assurance Review System for the Scalable Development of Online Courses

Devrim Ozdemir, Ph.D.
George Mason University

Rich Loose
Black Hills State University


With the growing demand for quality online education in the US, developing quality online courses and programs, and more importantly maintaining that quality, have become pressing concerns for higher education institutions. The current literature on quality assurance in online education focuses mostly on the development of review models and frameworks, as well as review rubrics. Comprehensive models, together with valid and reliable quality assurance review rubrics, are essential to the development of quality online courses and programs. However, it is equally important to maintain that quality once it is attained. Factors such as the increasing number of online courses, the dynamic faculty body delivering these courses, and disruptive innovations in online education make the ongoing maintenance of quality particularly challenging. This article presents the development and implementation of an electronic quality assurance review system for the scalable development of online courses at a regional university in the Midwest US. In particular, we introduce the context of online course development at the university, present the reasons for developing such a system, outline the framework of the system, and describe its implementation. Finally, we discuss recommendations for the future of the system. Our goal is to present our case as a guide to other higher education institutions responding to the growing demand for quality online education.


In the last decade, online education in the US has grown steadily in popularity (Allen & Seaman, 2013). The improvement of technology has played a pivotal role in this expansion (Kim & Bonk, 2006). However, this new type of education has also brought challenges (Howell, Williams, & Lindsay, 2003), and the approach to online education has changed over the same period. Instructors took on new roles to provide quality online instruction (Yang & Cornelious, 2005). Myths about online education, such as a lack of human interaction, a limitation to content learning, and a requirement for technology proficiency, have been addressed (Li & Akins, 2004), and online education has matured into a more convenient, individualized, and engaging learning opportunity (Harasim, 2000). Part of this accomplishment is due to the increasing focus on the quality of online courses (Wang, 2006). Several organizations now focus specifically on the quality of online education, although more research in this area is needed (Mariasingam & Hanna, 2006). Institutional approaches to assuring quality in online education have changed as well (Britto, Ford, & Wise, 2013; Lee, Dickerson, & Winslow, 2012). To date, the quality assurance of online courses and programs has focused on the development of frameworks and benchmarks to initiate a quality assurance review process (Boston, Ice, & Gibson, 2011). Beyond the initiation of quality assurance reviews, this article also addresses their maintenance, considering the increasing number of courses, the dynamic faculty body, and the rapidly changing landscape of online education. In particular, we focus on the implementation of a quality assurance review system for the scalable development of online courses.

Quality Assurance Review Process

In this section, we present background information on the quality assurance review process for online courses at our university in the Midwest US. Unlike external reviews, such as those conducted by accreditation agencies or state boards in some regions, the quality assurance review process discussed in this article is internal, conducted by the instructional designer in collaboration with the online course instructor. Overall, initiating and maintaining an internal quality assurance review process for online courses has been both vital and challenging for higher education institutions. According to Jara and Mellar (2007), online education has four unique aspects that make quality assurance review vital: the disaggregated nature of the online course development process, the necessity of organizing course development teams, the visibility or openness of online courses to review without obtrusiveness, and the limited access of staff to students. These aspects distinguish online courses from traditional face-to-face courses and urge higher education institutions to establish an internal review process. However, developing a quality assurance framework is a challenging task. The expected challenges include "establishing quantifiable and meaningful metrics", "discovering strategies for assessing those measures which are hard to quantify", "coming to a common agreement and understanding on minimum standards", "having staff and schools take ownership of the process and standards and seeing them as agents for quality assurance and continuous improvement", and "establishing the degree to which the use of such a system contributes to the improvement of teaching and learning in the university" (Oliver, 2003). Considering these challenges, we used an adapted version of the Quality Matters rubric in our quality assurance review.

This adapted rubric was determined by a committee representing the member universities of the South Dakota Board of Regents. This was the first step towards establishing quantifiable and meaningful metrics, as Oliver (2003) recommended. We also treated the rubric as a checklist and conducted context-bound evaluations of the online courses with the course instructors, as Hosie, Schibeci, and Backhaus (2005) described. Our quality assurance review rubric provided us with a checklist and a framework for course design; after development, however, each online course was reviewed in its own context. The instructor of the course provided the context for the instructional designer, and the collaboration between the two resulted in a quality online course. Our approach can also be described as "the Basic Guidelines Approach" (Lee et al., 2012). Within this approach, we provided the quality assurance rubric to the course instructor in advance to communicate the basic guidelines. Later, the instructional designer reviewed the course with the instructor when the course was ready.

Before the development and implementation of a quality assurance review system, the reviews were handled through a laborious process. The instructional designer was informed by his supervisor upon receipt of a paper copy of the course development agreement form, then made personal contact with the instructor to schedule meetings. All of the relevant forms, including the quality assurance review rubric, were paper based. The records of the evaluations were kept electronically as scanned copies of the evaluation forms as well as the original paper copies, and the evaluation scores were also kept in a Microsoft Excel spreadsheet. This review process might have remained manageable had the number of online courses not increased each semester and had the courses not been due for re-review after three years. However, our university experienced substantial growth in the online delivery of courses. Scanning hundreds of course evaluation entries in a spreadsheet, determining which courses had expired reviews, and locating a particular completed evaluation form among many others in electronic folders started to take more time than the actual reviews. Therefore, we were compelled to adopt a more systematic approach and decided to develop a Quality Assurance Review System for the scalable development of online courses.

Development of the Quality Assurance Review System

After the decision was made to develop a quality assurance review system, the authors of this article started to work together, holding meetings over three months to discuss possible scenarios. The first meeting was held in July of 2009. Workflows for the overall system and its subsystems were developed to document the detailed steps involved in the quality assurance review process. After the approval of the former Dean of the Educational Outreach Office in the university, the development of the system started in September. Throughout the process, the development team continued to hold meetings to go over the project and make necessary modifications. The project was completed in October. Upon its completion, 283 former course evaluation scores were entered into the system manually from the previous records.

The Quality Assurance Review System was designed to automate the submission and approval of new online course agreements, to identify which courses needed a review and why, and to document and store the resulting evaluation records. The following sections describe these functions in turn.

New Online Course Agreement Submission

The first stage of the project was to develop a flowchart for the new online course agreement submission workflow. Figure 1 below depicts this workflow. A major task in creating the flowchart involved determining the state and institutional requirements for course reviews; these requirements were largely undocumented and required communication with many people to ascertain. According to the flowchart, the quality assurance review system was made available to all stakeholders through a login. Once the user entered the login information, the system asked the user the purpose of the visit. When the user chose "New Course Agreement", the system directed the user to an electronic form to submit a new online course agreement. After the user clicked the submit button, the Office of Educational Outreach (the former Dean of Educational Outreach, the instructional designer, and the secretary for the payment request) was informed by email. At this stage, the instructional designer verified that the course was new. We had cases where the course was, in fact, new but not in the state central course registration system, or was not new but considered new because of a typo in the submission. Once the instructional designer confirmed the course, the system sent the request to the department chair, the college dean, and the former Dean of Educational Outreach, in that order, one at a time, after the approval of each person. If the course was identified as not new, or the agreement was denied by any administrator, both the instructor and the instructional designer were automatically notified by the system via email. If the agreement submission was approved by all the stakeholders, the system made the quality assurance review rubric available to the instructional designer and the instructor.
At all times, the status of the application and its current stage in the approval process were available to the parties by logging into the system.
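The sequential approval chain above can be sketched in code. This is a minimal illustration, not the system's actual implementation: the approver names, the `Agreement` record, and the callback signatures are all assumptions made for the example.

```python
from dataclasses import dataclass, field

# Approvers are contacted one at a time, in this order (illustrative names).
APPROVAL_CHAIN = ["department_chair", "college_dean", "outreach_dean"]

@dataclass
class Agreement:
    course_id: str
    instructor: str
    decisions: dict = field(default_factory=dict)  # approver -> True/False

def process_agreement(agreement, get_decision, notify):
    """Walk the approval chain in order; a single denial stops the chain
    and notifies the instructor, while full approval unlocks the rubric."""
    for approver in APPROVAL_CHAIN:
        approved = get_decision(approver, agreement)
        agreement.decisions[approver] = approved
        if not approved:
            notify(agreement.instructor, f"Agreement denied by {approver}")
            return "denied"
    notify(agreement.instructor, "Agreement approved; QA rubric is now available")
    return "approved"
```

The key design point captured here is that approvers are not contacted in parallel: each request goes out only after the previous approval, so a denial early in the chain never wastes a later administrator's time.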

Figure 1. New Online Course Agreement Submission Workflow

Course Reviews

In the second stage, we focused on the development of the flowchart for the course review process. In particular, we tried to identify the behavior of the system depending on the status of each course. There were five conditions that triggered the system to label a course as needing a review.

For new courses, the criterion was that a new course development request had previously been submitted into the system. If this was the case, the submission was sent to a holding tank, a database separate from the State central course registration system. With this workflow, the instructor and instructional designer were able to start working together to develop and review the course without waiting for it to be posted in the State central course registration system. The holding tank also allowed the system to keep records of courses that were never taught even though a new course development request had been submitted. We had cases in which instructors had personal issues, or in which program changes altered the plans for particular courses. These courses never appeared in the State central course registration system, but we kept their records in the quality assurance review system via the holding tank.

Our system also identified some online courses that had been taught but never reviewed. This was a surprising but eye-opening experience for us. We confirmed these courses with their instructors and learned that the instructors had never been contacted about reviews and were very interested in having their courses reviewed.

The quality assurance system also handled applications for revision requests. The Office of Educational Outreach at the university provided revision stipends which were different from new course development stipends. Instructors who were planning to make major improvements in their online courses were able to submit requests for revision stipends. In addition to the existing quality assurance review rubric, we also reviewed these courses with a separate rubric which focused on the amount of work invested in the courses. With satisfactory scores, instructors were eligible for the stipend. Figure 3 depicts the workflow of the revision process in the system.

In our university, instructors had the freedom and flexibility to create their own course designs; we did not use a strict master course template. Due to this freedom, we found that some instructors made significant changes to a course once they were assigned to it, even if the course had been taught before. Therefore, we designed the system to flag courses with instructor changes so that we could review these courses with the new instructor.

Since strategies and applications in online education change rapidly, we also developed the system to identify courses with expired reviews. We believed that three years was a sufficient interval after which to re-review a course and make sure it was still effective. The expiration was also an opportunity for the instructional designer to contact these instructors and at least make sure that their questions were answered.
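The five review triggers described above can be summarized as a single decision function. This is an illustrative sketch only; the field names are assumptions, not the system's actual schema, and the ordering of the checks is our own simplification.

```python
from datetime import date, timedelta

# Reviews were considered valid for three years (approximated here in days).
REVIEW_VALIDITY = timedelta(days=3 * 365)

def needs_review(course, today):
    """Return the reason a course needs review, or None if it does not."""
    if course.get("new_agreement_submitted"):         # 1. new course development request
        return "new course"
    if course.get("last_review_date") is None:        # 2. taught but never reviewed
        return "never reviewed"
    if course.get("revision_requested"):              # 3. revision stipend request
        return "revision"
    if course.get("instructor") != course.get("last_reviewed_instructor"):
        return "instructor change"                    # 4. a different instructor assigned
    if today - course["last_review_date"] > REVIEW_VALIDITY:
        return "expired review"                       # 5. review older than three years
    return None
```

Returning the reason, rather than a plain yes/no, mirrors the interface described later, where different review types were shown to the instructional designer with distinct icons.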

Figure 2. Course Reviews Workflow

Figure 3. Course Revision Workflow

While all these processes ran in the background, the instructional designer used a simple yet effective interface for the course reviews. The design of the system allowed the instructional designer to quickly identify which courses needed attention through icons visualizing the different review types. The system also allowed the instructional designer to see the old reviews of an existing course or of previous courses taught by a particular instructor. We also developed an experience meter, which assigned a number of stars based on the number of online courses an instructor had taught at the university. Every two courses counted for one star, and an instructor could earn a maximum of 10 stars. This experience meter helped the instructional designer gauge how familiar the instructor was with the learning management system and the university procedures before the collaboration began. Figure 4 below depicts the interface design of the system for the instructional designer.

Figure 4. Instructional Designer Interface Design
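The experience-meter rule is simple enough to state as a one-line function. This sketch assumes, as the description suggests, that the count rounds down (one course yields zero stars) and caps at 10 stars:

```python
def experience_stars(courses_taught: int) -> int:
    """One star per two online courses taught, capped at 10 stars."""
    return min(courses_taught // 2, 10)
```

So an instructor with five prior online courses would show two stars, and anyone with twenty or more would show the full ten.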

Future Plans

As with any new project, many ideas came up after the system was put in place: the instructional designers requested new features, and communication with faculty and administrators proved to be difficult. First, our university has a common pattern in which two faculty members alternate teaching a specific course. Our current system identifies these courses as needing review every time they are offered because of the changing instructors. The next version of the system will check the previous offering history of the course and skip flagging it for review if this alternating pattern is found. Next, while the experience meter based on the number of online courses taught has been useful, we would like to create a version that better reflects faculty members' abilities in online education. We plan to link the experience meter to our faculty workshops, so that as faculty attend institutional workshops supporting online teaching, this new knowledge is reflected in the meter. We have also found that many of the interactions our instructional designers have with faculty are repetitive in nature and could be automated to a degree; we plan to create templates for specific interactions to save our instructional designers time. Finally, email itself has proven less than ideal as a communication tool with faculty and administrators. Emails are often lost, leading to a constant need to resend them. We have considered a feature that repeatedly checks each stage of a faculty submission to see whether it has been attended to; if not, another copy of the email would be sent. We have also considered alternatives to email as a communication tool but have not found a suitable replacement.
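The planned alternating-instructor check could be as simple as looking for the incoming instructor in the course's recent offering history. This is a hypothetical sketch of that heuristic; the history representation and the size of the look-back window are assumptions, not decisions the project has made.

```python
def is_alternating_pattern(offering_history, incoming_instructor, window=4):
    """Return True if the incoming instructor already appears in the last
    `window` offerings of the course, suggesting a two-faculty rotation
    rather than a genuinely new instructor (who would trigger a review)."""
    return incoming_instructor in offering_history[-window:]
```

With a history of ["smith", "jones", "smith", "jones"], assigning "smith" again would be recognized as the rotation and skipped, while assigning a new name would still flag the course for review.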


In this article, we presented our case for the implementation of a quality assurance review system for the scalable development of online courses. Although a number of issues remain to be addressed for a more efficient system, the current system allowed us to overcome many of the challenges we had faced in the past regarding the quality assurance of online courses. It is important to emphasize that our current approach focused on the initial quality assurance review of online courses. A more comprehensive quality assurance review might add a number of formative and summative evaluations, conducted by the course instructor, to the initial quality assurance review. This way, the standards and benchmarks identified in the quality assurance review rubrics could be verified against course and student data.

Our quality assurance rubric allowed us to emphasize important points to the instructors for developing a more effective online course. However, it was never used as a prescriptive tool. As instructional designers, it is our experience that the instructional design approaches needed to develop an effective course change with the context of the course. Although the content, course goals, learning objectives, instructional strategies, assessment methods, and instructional tools vary, the rubric allowed us to keep all these components aligned with one another for the most effective online course. It also reminded us of important features unique to online courses, such as the course communication policy and netiquette expectations.

Last, our current system allowed us to document and store a large number of evaluations. With the current system, the scalable development of online courses was no longer disrupted by laborious administrative tasks. Both the instructors and the instructional designers directed their efforts to the quality design and development of the online courses rather than to administrative work. Our work was also appreciated by many online instructors during the course reviews.


Allen, I. E., & Seaman, J. (2013). Changing course: Ten years of tracking online education in the United States. Babson Survey Research Group.

Boston, W. E., Ice, P., & Gibson, A. M. (2011). A review of paradigms for evaluating the quality of online education programs. Online Journal of Distance Learning Administration, 14(4).

Britto, M., Ford, C., & Wise, J.-M. (2013). Three institutions, three approaches, one goal: Addressing quality assurance in online learning. Journal of Asynchronous Learning Networks, 17(4).

Harasim, L. (2000). Shift happens: Online education as a new paradigm in learning. The Internet and Higher Education, 3(1), 41–61.

Hosie, P., Schibeci, R., & Backhaus, A. (2005). A framework and checklists for evaluating online learning in higher education. Assessment & Evaluation in Higher Education, 30(5), 539–553.

Howell, S. L., Williams, P. B., & Lindsay, N. K. (2003). Thirty-two trends affecting distance education: An informed foundation for strategic planning. Online Journal of Distance Learning Administration, 6(3).

Jara, M., & Mellar, H. (2007). Exploring the mechanisms for assuring quality of e-learning courses in UK higher education institutions. European Journal of Open and Distance Learning, 1.

Kim, K., & Bonk, C. J. (2006). The future of online teaching and learning in higher education: The survey says. Educause Quarterly, 29(4), 22.

Lee, C.-Y., Dickerson, J., & Winslow, J. (2012). An analysis of organizational approaches to online course structures. Online Journal of Distance Learning Administration, 15(1).

Li, Q., & Akins, M. (2004). Sixteen myths about online teaching and learning in higher education: Don't believe everything you hear. TechTrends, 49(4), 51–60.

Mariasingam, M. A., & Hanna, D. E. (2006). Benchmarking quality in online degree programs: Status and prospects. Online Journal of Distance Learning Administration, 9(3).

Oliver, R. (2003). Exploring benchmarks and standards for assuring quality online teaching and learning in higher education.

Quality Matters Program. (2013). Retrieved July 17, 2013.

Wang, Q. (2006). Quality assurance: Best practices for assessing online programs. International Journal on E-Learning, 5(2), 265–274.

Yang, Y., & Cornelious, L. F. (2005). Preparing instructors for quality online instruction. Online Journal of Distance Learning Administration, 8(1).

Online Journal of Distance Learning Administration, Volume XVII, Number 1, March 2014
University of West Georgia, Distance Education Center