A Review of Paradigms for Evaluating the Quality of Online Education Programs


Kaye Shelton Ph.D.
Dallas Baptist University
kaye@dbu.edu

Abstract

As the demands for public accountability increase for higher education, institutions must demonstrate quality within programs and processes, including those provided through online education. While quality may be difficult to quantify, there have been several recommendations for identifying and defining quality online education that address common areas of program development and administration. This paper explores and compares 13 recommendations and paradigms found in the literature for identifying and evaluating the quality of online education programs in higher education.

Introduction

In the early years of higher education, quality education was defined as a small group of elite students living together and learning under the guidance of a resident scholar. Later, quality was believed to primarily exist in those institutions that were expensive and highly exclusive (Daniel, Kanwar, & Uvalic-Trumbic, 2009). However, that is no longer the case; today, public scrutiny for higher education is greater than ever before (Wergin, 2005), with the number of stakeholders and constituencies—all of whom have a vested interest in quality and accountability—continuing to increase. Because of this interest in quality, many institutions are finding that their standard processes for quality assurance are now inadequate and often do not constitute a continuous process for improvement (Dill, 2000).

Quality assurance and accountability for higher education institutions in the United States have been addressed primarily by the regional accreditors and discipline-specific accreditation organizations such as the Association to Advance Collegiate Schools of Business (AACSB) for business programs, the National Council for Accreditation of Teacher Education (NCATE) for education programs and teacher certification, and various others. Regional accreditors emphasize the review process with an institution’s self-study report, which demonstrates that established standards (e.g., faculty credentials, financial performance, student satisfaction, and the achievement of learner outcomes) have been met. The regional accrediting bodies also have guidelines and standards for evaluation for distance education programs (Howell, Baker, Zuehl, & Johansen, 2007).

With the establishment of the Spellings Commission in 2005, the federal government became more heavily involved in institutional accountability. Institutions are being asked to provide more transparent evidence of student achievement and institutional performance, to establish methods for comparing to and benchmarking against other institutions, and to establish threshold levels for learning standards (Eaton, 2007). As if administrators needed more motivation, Rice and Taylor (2003) assert that “shrinking budgets, achievement-based funding, and demands for assessment of student learning” (p. 2) alone should encourage the implementation of quality-based management strategies for continuous improvement. Because of the changing landscape and increased call for accountability, higher education is now being challenged to reconceptualize methods and processes used to indicate quality and excellence, including those used for assessing and evaluating online education programs.

Quality Evaluation for Online Education Programs

It has been said that delivering higher education courses online “holds greater promise and is subject to more suspicion than any other instructional mode in the 21st century” (Casey, 2008, p. 45). Yet “quality is a complex and difficult concept, one that depends on a range of factors arising from the student, the curriculum, the instructional design, technology used, [and] faculty characteristics” (Meyer, 2002, p. 101). While the total concept of quality for all program elements may be difficult to grasp, that is no excuse to ignore the need for assessing and demonstrating quality online education. Moreover, if online education continues to grow as expected, the demand for quality will only increase (Cavanaugh, 2002). According to the literature, many different approaches exist to evaluate quality in online education. For example, Lee and Dziuban (2002) suggested that the overall success of online education greatly depends upon the quality evaluation strategies integrated with the program. Benson (2003) explored the different meanings of quality that stakeholders brought to the table when planning an online degree program. She found that the following perceptions of quality resonated with stakeholders: quality is overcoming the stigma associated with online learning; quality is accreditation; quality is an efficient and effective course development process; and quality is effective pedagogy. After paralleling the demise of some online education programs created as stand-alone units to the dotcom bust in 2000, Shelton and Saltsman (2004) postulated that the mark of quality for an online education program is not its growth rate but the combination of retention rate, academic outcomes, and success in online student and faculty support. However, after their study of program administrators, Husman and Miller (2001) argued that “administrators perceive quality to be based almost exclusively in the performance of faculty” (para. 17).
Online education has been heavily critiqued and compared to traditional teaching since its emergence as an instructional technique, with veiled suggestions of inadequacies and low quality. Responding to those suggestions, various approaches found in the literature propose guidelines for evaluating quality online education programs.

Methodology

For this paper, 13 paradigms for evaluating quality in online education programs were located within the literature and carefully examined and compared for similarities and differences. Each study or article reviewed listed certain areas of focus and themes considered basic for indicating quality in online education programs. Each paradigm was broken down into its primary areas of focus or themes, which were then coded in an Excel spreadsheet. The studies examined are not exhaustive but best represent the different efforts available to define and evaluate the quality of online education programs. The articles and studies examined are presented in chronological order of their appearance in the literature.

Existing Paradigms for Evaluating the Quality of Online Education Programs

The following studies and articles were examined:

IHEP’s 24 benchmarks for success in Internet-based distance education. Commissioned by the National Education Association and Blackboard, Inc., the Institute for Higher Education Policy (IHEP), in its report Quality on the Line: Benchmarks for Success in Internet-Based Distance Education (2000), identified 24 individual quality indicators—chosen as absolutely essential by respected online education leaders at higher education institutions—out of an original 45 indicators determined through a literature search. While the study called each indicator a benchmark, they are, in reality, attributes of an online education program that indicate overall quality; they are not measurable against other institutional results. The study sought to prove that “distance learning can be quality learning” (Institute for Higher Education Policy, 2000, p. vii).

Considered foundational to quality distance learning (Chaney et al., 2009), the IHEP research categorized the 24 quality indicators into seven themes: (1) institutional support, (2) course development, (3) teaching and learning, (4) course structure, (5) student support, (6) faculty support, and (7) evaluation and assessment. For example, under the Institutional Support (1) theme, the first indicator prescribed “a documented technology plan [in place] that includes electronic security measures to ensure both quality standards and the integrity and validity of information” (Institute for Higher Education Policy, 2000, p. 2). The Institutional Support (1) theme also included reliability of the technology infrastructure and assurance that support is maintained for continued growth.

The Course Development (2) theme examined whether guidelines are in place for the development of quality online course materials. Online course materials should engage the learner, encourage critical thinking, and undergo periodic revision. The Teaching/Learning (3) theme stipulated that interaction must occur during the teaching and learning process (student-instructor, student-student) and that timely and constructive feedback be provided.

The Course Structure (4) theme addressed the quality of information, such as a student readiness indicator and course objectives, provided to a student prior to enrollment in an online class. Included in this theme was the provision of library resources for online students, also a requirement of all regional accrediting bodies. The Student Support (5) theme considered the kind of information students receive about the program, admission requirements, proctoring requirements, and whether all student services were available to online students. Online programs should have a repository of materials, such as a list of frequently asked questions and information on where to get help, that online students will need to be successful in the program.

The Faculty Support (6) theme included the resources provided to faculty for developing and teaching an online course. Faculty also need policies and a support structure, as well as training and mentoring. The final theme, Evaluation and Assessment (7), was concerned with whether, and how, online education was being evaluated and what policies and procedures were in place for supporting an evaluation process. According to IHEP (Institute for Higher Education Policy, 2000), “data on enrollment, costs, and successful/innovative uses of technology are used to evaluate program effectiveness” (p. 3). Learning outcomes should be assessed and evaluated for clarity and appropriateness to support continued improvement.

Bates’ ACTIONS model of quality. To evaluate instructional technologies in education, Tony Bates (2000) coined the acronym ACTIONS: Access and flexibility, Costs, Teaching and learning, Interactivity and user friendliness, Organizational issues, Novelty, and Speed. The ACTIONS model was designed to help with the selection of instructional technologies and not to evaluate distance learning programs; however, each of these themes can be applied to online education. Bates’ ACTIONS model was one of the first to address cost factors, which affect both the institution and the student.

WCET’s best practices for electronically offered degree and certificate programs. One of the first attempts to identify and assess quality in online education was developed in 1995 by the Western Cooperative for Educational Telecommunications (WCET). Principles of Good Practice for Electronically Offered Academic Degree and Certificate Programs identified three primary categories for quality evaluation: curriculum and instruction, institutional context and commitment, and evaluation and assessment. Institutional context and commitment was further divided into five areas: role and mission, faculty support, resources for learning, students and student services, and commitment to support (Western Cooperative for Educational Telecommunications, 1997). A second report, developed in 2001 along with the regional accrediting bodies titled Best Practices for Electronically Offered Degree and Certificate Programs, expanded the prior report into five categories instead of three: institutional context and commitment, curriculum and instruction, faculty support, student support, and evaluation and assessment (Western Cooperative for Educational Telecommunications, 2001). In the prior report, faculty support and student support were considered subsets of the institutional context and commitment category. The 2001 report is one of the most frequently cited when quality indicators for online education programs are addressed.

The WCET standards developed in 2001 were not created as an evaluation instrument; rather, the standards demonstrated how basic principles of institutional quality already in place with the accreditors would apply to distance learning programs (Western Cooperative for Educational Telecommunications, 2001). These key elements of quality distance learning are still highly respected and have been used since their creation by regional accreditors to review programs for institutional accreditation.

Khan’s eight dimensions of e-learning framework. Badrul Khan (2001) examined the critical dimensions necessary for quality learning online and found eight primary categories: institutional, management, technological, pedagogical, ethical, interface design, resource support, and evaluation. Each dimension, presented in Table 1, is integral to a systems approach for evaluating quality.

Table 1

Khan’s Eight Dimensions of E-Learning Framework (2001)


Institutional: The institutional dimension is concerned with issues of administrative affairs, academic affairs, and student services related to e-learning.

Management: The management of e-learning refers to the maintenance of the learning environment and the distribution of information.

Technological: The technological dimension of the E-Learning Framework examines issues of technology infrastructure in e-learning environments, including infrastructure planning, hardware, and software.

Pedagogical: The pedagogical dimension of e-learning refers to teaching and learning. This dimension addresses issues concerning content analysis, audience analysis, goal analysis, media analysis, design approach, organization, and methods and strategies of e-learning environments.

Ethical: The ethical considerations of e-learning relate to social and political influence, cultural diversity, bias, geographical diversity, learner diversity, information accessibility, etiquette, and legal issues.

Interface Design: The interface design refers to the overall look and feel of e-learning programs. This dimension encompasses page and site design, content design, navigation, and usability testing.

Resource Support: The resource support dimension of the E-Learning Framework examines the online support and resources required to foster meaningful learning environments.

Evaluation: The evaluation dimension includes both assessment of learners and evaluation of the instruction and learning environment.

According to Khan, this comprehensive model, which has been widely adopted, may also be used for strategic planning and program improvement. Each dimension or category of quality indicators contained sub-dimensions (as shown in Table 2) that also may be used as quality indicators for program evaluation.

Table 2

E-Learning Framework Sub-Dimensions (Khan, 2001)


INSTITUTIONAL

  • Administrative Affairs
  • Academic affairs
  • Student services

MANAGEMENT

  • E‑Learning Content Development
  • E-Learning Maintenance

TECHNOLOGICAL

  • Infrastructure planning
  • Hardware
  • Software

PEDAGOGICAL

  • Content Analysis
  • Audience Analysis
  • Goal Analysis
  • Medium Analysis
  • Design approach
  • Organization
  • Methods and Strategies

ETHICAL

  • Social and Political Influence
  • Cultural Diversity
  • Bias
  • Geographical diversity
  • Learner diversity
  • Digital Divide
  • Etiquette
  • Legal issues

INTERFACE DESIGN

  • Page and site design
  • Content design
  • Navigation
  • Accessibility
  • Usability testing

RESOURCE SUPPORT

  • Online support
  • Resources

EVALUATION

  • Assessment of learner
  • Evaluation of the instruction/learning environment

Frydenberg’s quality standards in e-learning. Frydenberg (2002) summarized published quality standards for online education in the United States and found the following themes to be the most common in the literature: institutional and executive commitment; technological infrastructure; student services; instructional design and course development; instruction and instructors; program delivery; financial health; legal and regulatory compliance; and program evaluation. She observed the institutional and executive commitment theme to be one of the most common in the literature, and evaluation of a program to be the least written about, “since few fully developed programs have arrived at a stage where summative evaluation is possible” (p. 13).

Sloan Consortium’s five pillars of quality. The Sloan Consortium, an organization dedicated to improving the quality of online education, identified the Five Pillars of Quality Online Education (Bourne & Moore, 2002), which it considered the building blocks for quality online learning: Learning Effectiveness, Student Satisfaction, Faculty Satisfaction, Scale, and Access.

The Learning Effectiveness Pillar addressed the commitment to providing online students with a high-quality education at least equivalent to that of traditional students, including interactivity, pedagogy, instructional design, and learning outcomes (Sloan Consortium, 2009b). According to Lorenzo and Moore (2002), the Learning Effectiveness Pillar evaluates learning activities because success is related to student interactivity with the instructor and the creation of a learning environment of inquiry. The Student Satisfaction Pillar focused on the experience of the student, including the provision of necessary support services such as advising and counseling and opportunities for peer interaction (Sloan Consortium, 2009b). It also examined whether students were satisfied with what and how they learned in either the class or the overall program. In fact, “a number of studies show that online environments that effectively facilitate high levels of interaction and collaboration among learners typically result in successful online programs” (Lorenzo & Moore, 2002, p. 5).

The Faculty Satisfaction Pillar addressed the support and resources needed for faculty to have a positive experience in the online teaching environment. According to the Sloan Consortium (Sloan Consortium, 2009b), “Faculty satisfaction is enhanced when the institution supports faculty members with a robust and well-maintained technical infrastructure, training in online instructional skills, and ongoing technical and administrative assistance” (para. 5).

The Scale Pillar was originally entitled Cost Effectiveness and was later renamed Scale; a focus on cost-effective programs is considered central to institutions that desire to “offer their best educational value to learners and to achieve capacity enrollment” (Sloan Consortium, 2009a). The Consortium believed an institution should monitor costs to keep tuition as low as possible while providing a quality educational experience for both students and faculty. Strategies for quality improvement were also addressed in the Scale Pillar.

The Access Pillar assured that students have full access to the learning materials and services they need throughout their online degree program, including support for disabilities and online readiness assessment. This pillar examined barriers that may be in the way of online students having access to all resources necessary to achieve success.

Lee and Dziuban’s quality assurance strategy. Lee and Dziuban (2002) believed there were five primary components for evaluating quality in online education: administrative leadership and support, ongoing program concerns, web course development, student concerns, and faculty support.  Structured around the University of Central Florida’s online programs, their Quality Assurance Strategy (QAS) maintained the importance of administrative support and leadership for resources, training, and evaluation. They recommended that online programs be extensively planned through discussion, evaluation, and analysis, which is crucial to the overall success of the program.

Lockhart and Lacy’s assessment model. Lockhart and Lacy (2002) worked with faculty and administrators at several national conference meetings to develop a model that offered seven components needed to evaluate online education: institutional readiness/administration (budgets, priority, and management); faculty services (support, outcome measurement, and training effectiveness); instructional design/course usability (technology must be user friendly and accessible); student readiness (assessment for student readiness and preparation); student services (effectiveness of provided services); learning outcomes (measurement of learning outcomes); and retention (comparing rates to face-to-face delivery and enrollment monitoring). Focusing on data collection and analysis, they suggested surveying the areas of faculty support and training and student support. They also recommended that student grades and retention rates be examined, as well as results of online learning outcomes, which have proven to be essential to evaluation. Finally, they challenged higher education to understand that “the critical element is that institutions should plan, evaluate, and then revise programs based upon assessment results rather than just being another institution to deliver classes at a distance” (p. 104).

CHEA’s accreditation and quality assurance study. The Council for Higher Education Accreditation (CHEA) (2002) examined the 17 institutional accreditors recognized by the United States Department of Education (USDE) or by CHEA, because each reviewed distance learning programs within its constituency. Their work resulted in what they believed to be the seven most significant areas for assuring the quality of distance learning programs:

  1. Institutional Mission: Does offering distance learning make sense in this institution?
  2. Institutional Organizational Structure: Is the institution suitably structured to offer quality distance learning?
  3. Institutional Resources: Does the institution sustain adequate financing to offer quality distance learning?
  4. Curriculum and Instruction: Does the institution have appropriate curricula and design of instruction to offer quality distance learning?
  5. Faculty Support: Are faculty competently engaged in offering distance learning and do they have adequate resources, facilities, and equipment?
  6. Student Support: Do students have needed counseling, advising, equipment, facilities, and instructional materials to pursue distance learning?
  7. Student Learning Outcomes: Does the institution routinely evaluate the quality of distance learning based on evidence of student achievement?
    (p. 7)

The CHEA report (2002) described three challenges that must be addressed for assuring the quality of online education programs: the alternative design of instruction, the abundance of alternative providers of higher education, and an expanded focus on training. 

Osika’s concentric model. Osika (2004) developed a concentric model for supporting online education programs using seven themes: faculty support, student support, content support, course management system support, technology support, program support, and community support. She validated this model with a panel of experts that consisted of administrators and those with various roles in online education programs including faculty and staff members.

Moore and Kearsley’s assessment recommendations. Moore and Kearsley (2005) postulated that while everyone within the institution has a role to play in quality education, senior administrators should be responsible for measurement and quality improvements. While they did not offer a prescriptive plan for evaluation, they suggested assessment of the following areas: the number and quality of applications and enrollments; student achievement; student satisfaction; faculty satisfaction; program or institutional reputation; and the quality of course materials.

Haroff and Valentine’s six-factor solution. Haroff and Valentine (2006) explored web-based adult education programs and found six dimensions of program quality: quality of instruction, quality of administrative recognition, quality of advisement, quality of technical support, quality of advance information, and quality of course evaluation. Beginning with the IHEP (2000) 24 quality indicators as a foundation, they surveyed administrators and educators involved in teaching online, using 41 quality variables. The six dimensions identified accounted for 65% of the variance in responses.

Chaney, Eddy, Dorman, Glessner, Green, and Lara-Alecio’s quality indicators. In a recent review of the literature, Chaney et al. (2009) identified the following as common themes of quality indicators: teaching and learning effectiveness; student support; technology; course development/instructional design; faculty support; evaluation and assessment; and organizational/institutional impact. (Table 3 provides the individual quality indicators listed for each theme.) They recommended that “the next step for professionals in the field of distance education is to integrate these quality assurance factors into the design, implementation, and evaluation of current and future distance education efforts” (p. 60).

Table 3

Common Quality Indicators of Distance Education Identified in the Literature (Chaney et al., 2009)

Teaching and Learning Effectiveness
  • student-teacher interaction
  • prompt feedback
  • respect for diverse ways of learning

Student Support
  • student support services
  • clear analysis of audience

Technology
  • technology plan to ensure quality is documented
  • appropriate tools and media
  • reliability of technology

Course Development/Instructional Design
  • course structure guidelines
  • active learning techniques
  • implementation of guidelines for course development/review of instructional materials

Faculty Support
  • faculty support services

Evaluation and Assessment
  • program evaluation and assessment

Organizational/Institutional Impact
  • institutional support and institutional resources
  • strong rationale for distance education/correlates to institutional mission

Theme and Paradigm Comparison. The 13 articles and studies presented in this review of quality evaluation for online education programs have many commonalities among their findings. The Institutional Commitment, Support, and Leadership theme was the most cited when determining standards for online education programs: at least 10 of the paradigms examined pointed to it as a primary indicator of quality. Teaching and Learning was the second most cited theme for indicating quality. However, the literature as a whole has focused far more on the quality of teaching and pedagogy than on program quality. Early in the literature, most authors wrote about the overall design of the course, since individual courses moved online before complete programs.

Faculty Support, Student Support, and Course Development were the third most cited themes, each identified by eight of the paradigms examined. For success in teaching online, faculty require strong and ongoing support, training, motivation, compensation, and supportive policies. Institutional support should also be available for online course development and for keeping materials updated and current. Online students require the same support services as traditional students; however, it is often more challenging to find ways to deliver those services and support in an online environment.

Technology and Evaluation and Assessment were identified in only 6 of the 13 studies reviewed. This is interesting to note since technology is foundational to the infrastructure of online education and should be considered a critical component to quality and success. Cost Effectiveness and Management and Planning were only identified three times in the studies; and Faculty Satisfaction, Student Satisfaction, and Student Retention were only listed twice out of the 13 examined. Various indicators, such as advising, government and regulatory guidelines, and user friendliness, were suggested just once.

Conclusions and Recommendations

This review of the existing paradigms suggests a strong need for a common method for assessing the quality of online education programs. Specific indicators for quality online programs vary from institution to institution; however, this review sought to find the most common themes and domains identified today by program administrators that will assist them with evaluating and improving the overall quality of their online education programs. While some of the themes were strongly considered to be significant quality indicators, others, such as faculty satisfaction, were not. A more consistent approach is needed.

Until recently, a research-based rubric or scorecard designed to assess quality in online education programs, similar to such tools for online courses, could not be located. However, as a result of a recent research study conducted by the author, a tool is now available that defines 70 elements of quality that may be quantified to assess an online program. This interactive tool, which produces a numeric score sheet that quantifies quality, should become an important resource for program administrators seeking to identify and evaluate elements of quality within an online education program. The scorecard results may also point to recommended strategies for program improvement. A Quality Scorecard for the Administration of Online Education Programs may be accessed at the following website: http://sloanconsortium.org/quality_scoreboard_online_program. It is the desire of the author that the Quality Scorecard will facilitate a more consistent method by which administrators and educators may evaluate and improve the overall quality of their institutions’ online education programs.


References

Bates, A. W. (2000). Managing technological change: Strategies for college and university leaders. San Francisco: Jossey-Bass.

Benson, A. D. (2003). Dimensions of quality in online degree programs. The American Journal of Distance Education, 17(3), 145-149. doi: 10.1207/S15389286AJDE1703_2

Bourne, J., & Moore, J. (Eds.). (2002). Elements of quality in online education (Vol. 3). Needham, MA: Sloan-C.

Casey, D. M. (2008). A journey to legitimacy: The historical development of distance education through technology. TechTrends: Linking Research & Practice to Improve Learning, 52(2), 45-51.

Cavanaugh, C. (2002). Distance education quality: Success factors for resources, practices and results. In R. Discenza, C. D. Howard, & K. Schenk (Eds.), The design & management of effective distance learning programs (pp. 171-189). Hershey, PA: Idea Group.

Chaney, B. H., Eddy, J. M., Dorman, S. M., Glessner, L. L., Green, B. L., & Lara-Alecio, R. (2009). A primer on quality indicators of distance education. Health Promotion Practice, 10(2), 222-231.

Council for Higher Education Accreditation. (2002). Accreditation and assuring quality in distance learning. CHEA Monograph Series 2002 (Vol. 1). Washington DC: Author.

Daniel, J., Kanwar, A., & Uvalic-Trumbic, S. (2009). Breaking higher education’s iron triangle: Access, cost and quality. Change, 41(2), 30-35. doi: 10.3200/CHNG.41.2.30-35

Dill, D. D. (2000). Is there an academic audit in your future? Reforming quality assurance in U.S. higher education. Change, 32(4), 35-41. doi: 10.1080/00091380009601746

Eaton, J. (2007). Institutions, accreditors, and the federal government: Redefining their “appropriate relationship.” Change, 39(5), 16-23. doi: 10.3200/CHNG.39.5.16-23

Frydenberg, J. (2002). Quality standards in e-learning: A matrix of analysis. International Review of Research in Open and Distance Learning, 3(2).

Haroff, P. A., & Valentine, T. (2006). Dimensions of program quality in web-based adult education. The American Journal of Distance Education, 20(1), 7-22. doi: 10.1207/s15389286ajde2001_2

Howell, S. L., Baker, K., Zuehl, J., & Johansen, J. (2007). Distance education and the six regional accrediting commissions: A comparative analysis. Manuscript (ERIC Document Reproduction Service No. ED495650).  Retrieved from http://www.eric.ed.gov/ERICWebPortal/contentdelivery/servlet/ERICServlet?accno=ED495650

Husmann, D. E., & Miller, M. T. (2001). Improving distance education: Perceptions of program administrators. Online Journal of Distance Learning Administration, IV(III). Retrieved from http://www.westga.edu/~distance/ojdla/fall43/husmann43.html

Institute for Higher Education Policy. (1998). Assuring quality in distance learning: A preliminary review. Washington, DC: Author. Retrieved from http://www.ihep.org/assets/files/publications/a-f/AssuringQualityDistanceLearning.pdf

Khan, B. (2001). A framework for web-based learning. In B. Khan (Ed.), Web-based training (pp. 75-98). Englewood Cliffs, NJ: Educational Technology.

Kuh, G. D., & Pascarella, E. T. (2004). What does institutional selectivity tell us about educational quality? Change, 36(5), 52-58. doi: 10.1080/00091380409604986

Lee, J., & Dziuban, C. (2002). Using quality assurance strategies for online programs. Educational Technology Review, 10(2), 69-78.

Lockhart, M., & Lacy, K. (2002). An assessment model and methods for evaluating distance education programs. Perspectives, 6(4), 98-104. doi: 10.1080/136031002320634998

Lorenzo, G., & Moore, J. C. (2002). The Sloan Consortium Report to the Nation: Five pillars of quality online education. Retrieved from http://sloanconsortium.org/publications/books/vol5summary.pdf

Meyer, K. A. (2002). Quality in distance education: Focus on on-line learning. San Francisco: Jossey-Bass.

Moore, M. G., & Kearsley, G. (2005). Distance education: A systems view. Belmont, CA: Thomas Wadsworth.

Osika, E. R. (2004). The Concentric Support Model: A model for the planning and evaluation of distance learning programs (Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses. (UMI No. 3150815)

Pond, W. K. (2002). Distributed education in the 21st century: Implications for quality assurance. Online Journal of Distance Learning Administration, V(II). Retrieved from http://www.westga.edu/~distance/ojdla/summer52/pond52.pdf

Rice, G. K., & Taylor, D. C. (2003). Continuous-improvement strategies in higher education: A progress report. EDUCAUSE Center for Applied Research Bulletin, 2003(20), 1-12.

Shelton, K., & Saltsman, G. (2004). The dotcom bust: A postmortem lesson for online education. Distance Learning, 1(1), 19-24.

Sloan Consortium. (2009a). The Sloan Consortium: A consortium of individuals, institutions and organizations committed to quality online education.  Retrieved from http://www.sloan-c.org/

Sloan Consortium. (2009b). The Sloan Consortium: The 5 pillars.  Retrieved from http://www.sloan-c.org/

Wergin, J. F. (2005). Higher education: Waking up to the importance of accreditation. Change, 37(3), 35-41.

Western Cooperative for Educational Telecommunications. (1997). Principles of good practice for electronically offered academic degree and certificate programs. Boulder, CO: Western Interstate Commission for Higher Education (WICHE).

Western Cooperative for Educational Telecommunications. (2001). Best practices for electronically offered degree and certificate programs. Boulder, CO: Western Interstate Commission for Higher Education (WICHE).


Online Journal of Distance Learning Administration, Volume IV, Number I, Spring 2011
University of West Georgia, Distance Education Center