Toward an Effective Quality Assurance Model of Web-Based Learning: The Perspective of Academic Staff


Davey Yeung
Lecturer in Business and Information Technology
The Open University of Hong Kong
Hong Kong
dyeung@ouhk.edu.hk
Doctor of Business Administration candidate
The University of South Australia

 

Abstract

The use of Web-based learning has grown at a speed that has outpaced proper control of its effectiveness and justification. Many institutions have invested heavily, primarily to maintain a high-technology image in the eyes of the public. The aim of this paper is to identify, through a survey, the commonly agreed factors that contribute to and ensure quality in the Web-based learning environment. By understanding these key factors, management at distance education institutions can evaluate and enhance their quality assurance mechanisms and so improve the quality of the services they provide to students.

 

1. Introduction

Online education has generated tremendous excitement both inside and outside higher education. For some, it offers the potential to provide learning to new audiences; for others, it offers the opportunity to fundamentally transform learning delivery and the competitive landscape (Poehlein, 1996). The rapid expansion of the Web as a potential course delivery platform, combined with increasing interest in lifelong learning and budget constraints, has created a significant incentive for universities to develop online programs. As the technology is now available and relatively user-friendly, those universities which do not embrace it will be left behind in the race for globalization and technological development. If we want universities to make the utmost use of the Web, it is essential to identify and understand the critical success factors affecting the online delivery of education. Indeed, if we continue to re-implement traditional models borrowed from classroom-based or distance education focused on passive transmission, we can expect only marginal improvements and may well simply increase the costs.

It has been widely recognized in the research literature (Bates, 1995; Fullan, 1993) that effective Web-based learning should include not just training in the use of technology. It must be integrated with pedagogical uses of technology to bring about learning for the development of lifelong learning skills and other emerging goals of education to meet the demands of the information age (He, 1998). Whenever Web technology is used in educational settings, it is vital to reflect on how this affects students, faculty members, courses and institutions (Barr & Tagg, 1995).

Geographically, Hong Kong is very small; no one has difficulty getting to campus because of distance. However, life in the city is hectic and fast. The number of distance learners has increased rapidly in recent years due to the demand for learning and to improvements in technology that make distance learning easier through the Web. Furthermore, Web-based learning is seen by many as a transformative vehicle for increasing the pace of change and reform in higher education. For these and other reasons, analysis of quality assurance in Web-based learning is an essential topic for education policy makers. The common problem in the higher education sector in Hong Kong is to identify an effective model for assuring quality in the delivery of Web-based learning that fits the various stakeholders’ expectations. The first stage in doing this is to identify the critical success factors that contribute to quality assurance in Web-based learning.

This research focused on the perspective of academic staff who have been involved in managing, developing, monitoring or teaching Web-based courses in local tertiary institutions.

Research Questions

After a review of the relevant literature on the quality assurance of Web-based learning, this paper seeks to answer two questions: which benchmarks do academic staff consider important for ensuring quality in Web-based learning, and to what extent are those benchmarks present in their institutions?

The research was done by surveying a group of academic staff from various local tertiary institutions in Hong Kong. The process mainly collected their opinions on various key issues related to quality assurance of Web-based learning.

 

2. Literature Review

Since the Open University in the United Kingdom first offered undergraduate degrees via a "virtual classroom" in 1969 (Educom Staff, 1996), many other universities around the world have moved in a similar direction. The California Virtual University, which lists 1,000 distance education courses, and the Western Governors University, a consortium of 18 western states in the US, are both classic examples of the partnerships being formed to promote distance education as a viable alternative to classroom instruction (Koss-Feder, 1998).

As the use of technology to facilitate and deliver distance learning courses has increased, new challenges have emerged for the administration, faculty, staff and students of universities developing and implementing distance learning programs (Drazdowski, 1998; Fulford, 1993). Many faculty members fear distance learning is just a means of reducing their ranks, or of solving budget problems. Others fear the dehumanization and alienation of students, as well as the loss of critical thinking and social skills (Novek, 1996).

On the other hand, Swalec (1993) suggests that rather than feeling threatened, faculty should embrace distance learning as a way for more students to access their courses, resulting in a greater intellectual audience and less chance of a course being cancelled due to low enrollment.

It has been only ten years since the Web was developed by Tim Berners-Lee in Switzerland and Wide Area Information Servers became the first tool for surfing the net. Since that time, educational institutions, research centers, libraries, government agencies, commercial enterprises, and advocacy groups have rushed to connect to the Web (Johnson, 1999). It seems clear to most observers that the Web profoundly influences society in general and universities in particular. One consequence of this tremendous surge in online communication has been the rapid growth of technology-mediated distance learning at the higher education level.

This extraordinary growth of technology-mediated distance learning in higher education has prompted several agencies to develop principles, guidelines, or benchmarks to ensure quality distance education. These organizations include the American Council on Education, the National Education Association, the Global Alliance for Transnational Education (GATE), the Southern Regional Electronic Campus, the Commission on Higher Education of the Middle States Association of Colleges and Schools, and the Western Cooperative for Educational Telecommunications. The quality assurance benchmarks promoted by these organizations are designed to apply to a wide variety of institutional contexts and consist of fairly broad statements. However, virtually all of the strategies include such factors as course development, faculty training, student services, learning resources, infrastructure and assessment of outcomes.

These benchmarks, initially developed for all types of distance learning, have existed in various forms for a number of years. The crucial question that arises is whether they are applicable to Web-based learning in a distance education setting. In short, are the current benchmarks appropriate and necessary to ensure quality for Web-based learning?

In addition to reviewing benchmarks that have been published by policy and educational organizations, several articles by prominent scholars in distance education were also examined. (The selected references found at the end of this paper provide a better understanding of the array of resources reviewed.)

To address this question, the Institute for Higher Education Policy carried out a comprehensive study in 2000 in an attempt to validate the benchmarks published by the various agencies, with specific attention to Web-based distance learning. The outcome was a list of 24 benchmarks considered essential to ensure quality in Web-based distance learning. The benchmarks can be grouped into the following seven categories:

Institutional Support

This category includes those activities by which the institution helps to ensure an environment conducive to maintaining and developing quality Web-based learning.

Course Development

This category includes those activities for the development of courseware, whether produced by faculty members on campus, by subject experts within the organization, or by commercial enterprises.

Teaching / Learning Process

This category includes those activities related to pedagogy or the art of teaching.

Course Structure

This category includes those policies and procedures that support and relate to the teaching / learning process.

Student Support

This category includes those services normally found on a university campus, including admissions and financial aid.

Faculty Support

This category includes those activities that assist faculty in teaching online.

Evaluation and Assessment

This category includes those policies and procedures that address how the institution evaluates Web-based distance learning.

 

Based on the results of the study conducted by the Institute for Higher Education Policy, it is clear that there are seven major areas to examine in ensuring the quality of Web-based learning: Institutional Support, Course Development, Teaching/Learning Process, Course Structure, Student Support, Faculty Support, and Evaluation and Assessment. Since many questions about quality assurance for Web-based learning in Hong Kong have not yet been clearly answered by previous studies, the study by the Institute for Higher Education Policy offers a useful approach for examining this issue in the local setting and provides guidelines for exploring the factors that contribute to a successful quality assurance model for Web-based learning.

 

3. Research Design and Method

A survey questionnaire was developed to measure academic staff’s perception of quality assurance in Web-based learning. The items were drawn from the previous study carried out by the Institute for Higher Education Policy on the same issue. The questionnaire was structured using a 5-point Likert scale. The two rating questions, a sample item and the two scales are shown below:

    1. In your opinion, are the following benchmarks important to ensure quality? (answered on scale A)

    2. In your opinion, are the following benchmarks present in the University? (answered on scale B)

    Sample item: A documented technology plan that includes electronic security measures (i.e. password protection, encryption, back-up systems) is in place to ensure both quality standards and the integrity and validity of information.

Scale (A): 1 = Not Important, 2 = Somewhat Unimportant, 3 = Not Sure, 4 = Important, 5 = Very Important

Scale (B): 1 = Strongly Disagree, 2 = Disagree, 3 = Not Sure, 4 = Agree, 5 = Strongly Agree

 

 

The Likert-scale questionnaire listed the 24 quality benchmarks and asked each respondent to rate each benchmark on two criteria. First, to what extent is the benchmark important to ensure quality for Web-based learning (rated from 1 = not important to 5 = very important)? Second, to what extent is the benchmark present in the institution (rated from 1 = strongly disagree to 5 = strongly agree)? Respondents who did not have sufficient knowledge or experience of a benchmark could check the "Not Sure" category.

At the extremes, this process could result in the following four hypothetical scenarios:

 

  1. A benchmark could be very important and completely present.

  2. A benchmark could be not important and still completely present.

  3. A benchmark could be very important but completely absent.

  4. A benchmark could be not important and also completely absent.

The actual results are provided in the next section of this study. The full list of survey questions is provided in Appendix A.

Participants were identified and selected only if they had been involved with Web-based learning or teaching. They were sampled from universities, distance learning institutions and academic research circles. There was no prerequisite in years of experience, as long as participants understood the research intention and the general features of Web-based learning. A total of 50 questionnaires were sent to the selected participants working in local tertiary institutions, and 34 were returned, all of which were used for the study and further analysis. This was very encouraging to the researcher, since a return rate of 68% can be considered high for any kind of management research.

Limitations

In undertaking this research study, the researcher encountered only two limitations. The first was the small sample size of 34 academic staff; the second was that few respondents were willing to respond to the open-ended question at the end of the questionnaire. That may be because all of the plausible quality benchmarks were already listed in the questionnaire.

 

4. Findings and Analysis

The 34 respondents came from 8 local tertiary institutions in Hong Kong. The questionnaire survey was conducted from January to February 2002. The following sections provide a summary of the quantitative analysis of this survey. It should be noted that all of the institutions are included in the data presented in this study. Because the intention of this study is to validate the benchmarks for the higher education sector in general, it serves no purpose to separate the data for each institution. The following discussion represents a consensus of a majority of the institutions in the study; it is therefore not appropriate to assume that the attributes outlined always represent each and every institution. No statistical tests were applied to ascertain the degree of importance of a benchmark, its presence at the institutions, or the difference between the two. Instead, the researcher used simple descriptive statistics to guide the analysis. The section is organized around the seven categories of benchmarks: Institutional Support, Course Development, Teaching/Learning Process, Course Structure, Student Support, Faculty Support, and Evaluation and Assessment. The responses to the open-ended question are listed in Appendix B.
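The simple descriptive statistics referred to above amount to "top-two-box" percentages: a benchmark's importance score is the share of respondents choosing Important or Very Important, and its presence score is the share choosing Agree or Strongly Agree. A minimal sketch of the calculation, using hypothetical response counts back-calculated from the Q1 percentages in Tables 1 and 2 (n = 34) rather than the raw survey data:

```python
def top_two_box(counts):
    """Percentage of respondents choosing the top two points of a 5-point scale.

    counts: five response counts, ordered from scale point 1 to scale point 5.
    Returns a percentage rounded to one decimal place.
    """
    total = sum(counts)
    return round(100 * (counts[3] + counts[4]) / total, 1)

# Hypothetical counts for benchmark No. 1, reconstructed from the rounded
# percentages reported in Tables 1 and 2 -- an assumption, not the raw data.
q1_importance = [0, 0, 0, 19, 15]   # Not Important ... Very Important
q1_presence   = [0, 0, 4, 19, 11]   # Strongly Disagree ... Strongly Agree

imp = top_two_box(q1_importance)    # 100.0
pre = top_two_box(q1_presence)      # 88.2
gap = round(imp - pre, 1)           # importance-presence gap: 11.8
print(imp, pre, gap)
```

Note that summing the rounded cell percentages in Table 2 gives 88.3% for Q1's presence, while the underlying fraction 30/34 rounds to 88.2%; the small discrepancy is rounding, not a different method.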

Institutional Support

All of the benchmarks in this category were considered important for ensuring quality in Web-based learning. The benchmark addressing a documented technology plan (No. 1) received exceptionally high ratings for both importance (100%) and presence (88.3%) at the institutions. There was a marked difference between the importance of the benchmark regarding a centralized support system (No. 3) and its actual presence at the institutions. The benchmark addressing the reliability of the technology delivery system (No. 2) received quite high ratings for importance (91.2%) and presence (76.5%). This can be explained with a simple scenario: if the lights go out in a traditional classroom it may be just an inconvenience, but if the system fails it is a disaster for the Web-based learning environment.

Table 1. Survey statistics on the importance of the Institutional Support benchmarks

 

        Not Important   Somewhat Unimportant   Not Sure   Important   Very Important
Q1(A)   0%              0%                     0%         55.9%       44.1%
Q2(A)   0%              0%                     8.8%       61.8%       29.4%
Q3(A)   0%              2.9%                   14.7%      58.8%       23.5%

 

Table 2. Survey statistics on the presence of the Institutional Support benchmarks

 

        Strongly Disagree   Disagree   Not Sure   Agree    Strongly Agree
Q1(B)   0%                  0%         11.8%      55.9%    32.4%
Q2(B)   0%                  2.9%       20.6%      55.9%    20.6%
Q3(B)   0%                  2.9%       47.1%      50%      0%

 

Course Development

All three benchmarks relating to course development received a mixed reaction from the respondents. The benchmark addressing guidelines for minimum standards (No. 4) received a split rating, scoring (50.2%) on importance but (70.6%) on presence at the institutions. The benchmark addressing instructional materials (No. 5) received a quite high rating for importance (94.1%) but scored only (58.8%) on presence. The benchmark addressing course design (No. 6) likewise received (94.1%) on importance but only (58.8%) on presence. This reflects that local academics agreed instructional materials and course design were important benchmarks, but that there was still room to strengthen the quality assurance mechanism so as to increase the presence of those benchmarks in the local setting.

Table 3. Survey statistics on the importance of the Course Development benchmarks

 

        Not Important   Somewhat Unimportant   Not Sure   Important   Very Important
Q4(A)   5.9%            29.4%                  14.7%      35.5%       14.7%
Q5(A)   0%              5.9%                   0%         55.9%       38.2%
Q6(A)   0%              0%                     5.9%       44.1%       50%

Table 4. Survey statistics on the presence of the Course Development benchmarks

 

        Strongly Disagree   Disagree   Not Sure   Agree    Strongly Agree
Q4(B)   0%                  2.9%       26.5%      55.9%    14.7%
Q5(B)   0%                  14.7%      26.5%      38.2%    20.6%
Q6(B)   0%                  2.9%       38.2%      50%      8.8%

 

 

Teaching / Learning Process

The majority of benchmarks regarding the teaching / learning process were considered important but were only modestly endorsed as present in the local environment. The two benchmarks addressing interactivity (Nos. 7 and 8) received high scores for importance (91.1% and 94.1%) but lower scores (61.8% and 76.5%) for presence at the institutions. It has become increasingly evident that interactivity is a condition for quality in Web-based distance education; indeed, many would say that it is crucial for any type of learning.

As Otto Peters, author of Learning and Teaching in Distance Education, wrote: "If we take distance education seriously and understand it to be something more than the mere distribution and reading of study materials, we must provide sufficient opportunities for dialogues. If, in addition, we understand academic studies as a process in which the aim is education through knowledge, we cannot do without a considerable proportion of dialogical learning and teaching in distance education." (Peters, 1999, p. 39) The notion of interactivity is highlighted here not only because it is central to the quality of Web-based distance education, but also because it leads to the realization that Web-based distance education is evolving its own pedagogy. The benchmark addressing methods of effective research (No. 9) scored (76.4%) on importance and only (41.2%) on presence at the institutions.

 

Table 5. Survey statistics on the importance of the Teaching / Learning Process benchmarks

 

        Not Important   Somewhat Unimportant   Not Sure   Important   Very Important
Q7(A)   0%              0%                     8.8%       67.6%       23.5%
Q8(A)   0%              0%                     5.9%       55.9%       38.2%
Q9(A)   0%              8.8%                   14.7%      52.9%       23.5%

Table 6. Survey statistics on the presence of the Teaching / Learning Process benchmarks

 

        Strongly Disagree   Disagree   Not Sure   Agree    Strongly Agree
Q7(B)   0%                  11.8%      26.5%      50%      11.8%
Q8(B)   0%                  5.9%       17.6%      64.7%    11.8%
Q9(B)   0%                  17.6%      41.2%      41.2%    0%

 

Course Structure

In general, the course structure benchmarks received modest ratings in terms of both importance and presence. The benchmark addressing time requirements (No. 13) received (70.6%) on importance and (55.9%) on presence at the institutions. Given the dynamic and innovative nature of the Web-based learning environment, particularly the capacity for students to pace themselves in a variety of ways, hard and fast rules on how much work should be accomplished in a specific time period, or on the precise response time of a faculty member, are totally inappropriate. The benchmark addressing student advising (No. 10) scored (79.4%) on importance and (44.1%) on presence, and the benchmark addressing supplemental course information (No. 11) received (70.5%) on importance and (64.7%) on presence. All of this suggests that the institutions need to put more effort into student advising. The low ratings for the benchmark regarding library resources (No. 12), which scored (67.6%) on importance and (47.1%) on presence, are also worth noting. This may be because the technology and infrastructure for virtual libraries are not yet as well developed locally as in Western countries.

Table 7. Survey statistics on the importance of the Course Structure benchmarks

 

         Not Important   Somewhat Unimportant   Not Sure   Important   Very Important
Q10(A)   0%              11.8%                  8.8%       52.9%       26.5%
Q11(A)   8.8%            14.7%                  5.9%       52.9%       17.6%
Q12(A)   2.9%            23.5%                  5.9%       50%         17.6%
Q13(A)   0%              8.8%                   20.6%      64.7%      5.9%

Table 8. Survey statistics on the presence of the Course Structure benchmarks

 

         Strongly Disagree   Disagree   Not Sure   Agree    Strongly Agree
Q10(B)   2.9%                23.5%      29.4%      41.2%    2.9%
Q11(B)   2.9%                20.6%      11.8%      38.2%    26.5%
Q12(B)   0%                  17.6%      35.3%      35.3%    11.8%
Q13(B)   0%                  20.6%      23.5%      47.1%    8.8%

 

Student Support

All four benchmarks relating to student support received a mixed reaction from the respondents. The benchmark addressing information for students (No. 14) received high ratings (85.3%) on both importance and presence at the institutions. The benchmark addressing training for students (No. 15) scored a low (50%) on importance and (20.6%) on presence.

The benchmark addressing technical assistance (No. 16) also received a low rating (61.8%) on importance and (35.3%) on presence. The benchmark addressing a complaint system (No. 17) received (82.4%) on importance but only (26.5%) on presence. It appears that, for at least the three benchmarks concerning training for students, technical assistance and the complaint system, the institutions still have a long way to go.

Table 9. Survey statistics on the importance of the Student Support benchmarks

 

         Not Important   Somewhat Unimportant   Not Sure   Important   Very Important
Q14(A)   0%              5.9%                   8.8%       61.8%       23.5%
Q15(A)   5.9%            23.5%                  20.6%      41.2%       8.8%
Q16(A)   2.9%            8.8%                   26.5%      50%         11.8%
Q17(A)   0%              2.9%                   14.7%      61.8%       20.6%

Table 10. Survey statistics on the presence of the Student Support benchmarks

 

         Strongly Disagree   Disagree   Not Sure   Agree    Strongly Agree
Q14(B)   0%                  0%         14.7%      55.9%    29.4%
Q15(B)   8.8%                29.4%      41.2%      20.6%    0%
Q16(B)   8.8%                11.8%      44.1%      32.4%    2.9%
Q17(B)   2.9%                14.7%      55.9%      26.5%    0%

 

 

Faculty Support

All four benchmarks relating to faculty support received modest ratings on importance and low ratings on presence at the institutions. The benchmark addressing technical assistance (No. 18) received (67.7%) on importance and (64.7%) on presence. The benchmark addressing transitional assistance (No. 19) scored a modest (38.3%) on importance and a very low (14.7%) on presence. The benchmark addressing training for instructors (No. 20) received (70.6%) on importance but a low (32.4%) on presence. The benchmark addressing written resources (No. 21) scored (64.7%) on importance and a very low (23.5%) on presence. Judging from these ratings, few of the institutions in the study have systematic processes for assisting faculty members in the transition from traditional teaching to a Web-based teaching environment; the low level of such assistance can perhaps only be attributed to the limited resources devoted to this area.

Table 11. Survey statistics on the importance of the Faculty Support benchmarks

 

         Not Important   Somewhat Unimportant   Not Sure   Important   Very Important
Q18(A)   0%              8.8%                   23.5%      55.9%       11.8%
Q19(A)   0%              17.6%                  44.1%      32.4%       5.9%
Q20(A)   0%              11.8%                  17.6%      58.8%       11.8%
Q21(A)   0%              17.6%                  17.6%      47.1%       17.6%

Table 12. Survey statistics on the presence of the Faculty Support benchmarks

 

         Strongly Disagree   Disagree   Not Sure   Agree    Strongly Agree
Q18(B)   0%                  8.8%       26.5%      55.9%    8.8%
Q19(B)   0%                  32.4%      52.9%      14.7%    0%
Q20(B)   5.9%                17.6%      44.1%      32.4%    0%
Q21(B)   5.9%                23.5%      47.1%      23.5%    0%

 

 

Evaluation and Assessment

All three benchmarks relating to evaluation and assessment received high ratings on importance and modest to low ratings on presence at the institutions. The benchmark addressing the evaluation process (No. 22) scored (76.5%) on importance and a modest (41.2%) on presence. The benchmark addressing program effectiveness (No. 23) received (64.7%) on importance and a low (23.5%) on presence. The benchmark addressing learning outcomes (No. 24) scored a high (88.3%) on importance but only (53%) on presence. By and large, all participating institutions had systems in place to address evaluation and assessment. In Western countries, institutions collect huge amounts of data on financial efficiency, student achievement, faculty satisfaction, student satisfaction, student retention and student demand in order to evaluate program effectiveness. It was therefore surprising to see program effectiveness score such a low rating in the local environment.

Table 13. Survey statistics on the importance of the Evaluation and Assessment benchmarks

 

         Not Important   Somewhat Unimportant   Not Sure   Important   Very Important
Q22(A)   0%              2.9%                   20.6%      50%         26.5%
Q23(A)   0%              20.6%                  14.7%      55.9%       8.8%
Q24(A)   0%              2.9%                   8.8%       41.2%       47.1%

Table 14. Survey statistics on the presence of the Evaluation and Assessment benchmarks

 

         Strongly Disagree   Disagree   Not Sure   Agree    Strongly Agree
Q22(B)   11.8%               5.9%       41.2%      29.4%    11.8%
Q23(B)   2.9%                20.6%      52.9%      23.5%    0%
Q24(B)   2.9%                5.9%       38.2%      47.1%    5.9%

 

 

5. Conclusion

The Web is a major technological advancement reshaping not only society at large but also universities worldwide. In light of this, universities have to capitalize on the Web for both teaching and learning, and one progressive development is the use of Web-based learning in distance education settings.

This study for the most part revealed that the benchmarks for quality assurance of Web-based learning were considered important, and that in general the participating institutions strove to incorporate them into their policies, practices and procedures. At the same time, there were a few benchmarks that did not enjoy consensus among academic staff and in some instances were not even considered mandatory for ensuring quality in Web-based learning. In this sense, the quality benchmarks identified in the literature can be considered valid for the higher education sector in Hong Kong. Based solely on the feedback from the open-ended question of the survey, it would be difficult to conclude that any additional benchmarks need to be included in the model.

As Davey Yeung, author of Quality Assurance of Web-based Learning in Distance Education Institutions, wrote: "There are at least two key stakeholders in any educational setting, namely the academic staff and the students." (Yeung, 2001) Therefore, to build an effective quality assurance model for Web-based learning, a further study of students' perceptions of this issue is needed; its results can then be combined with the perceptions of academic staff to form a more complete picture of the whole quality assurance model.

In conclusion, the author wishes to reiterate that each quality assurance system needs to be tailored carefully to the situation of the specific institution. It needs to be very flexible in its approach, and the combination of process and technology needs to be carefully considered, as unintended consequences in one area can originate from a bad choice in another. It is also worth noting that firm adherence to an explicit view of what constitutes good Web-based learning, together with an explicit view of issues of change and culture, will further influence the specific approach taken to assuring the quality of Web-based learning.


References

Barr, R. B., & Tagg, J., (1995), From teaching to learning: A new paradigm for undergraduate education, Change: The Magazine of Higher Learning, Nov./Dec., pp. 13-24.

Bates, A. W. (1995), Technology, open learning and distance education. London, UK: Routledge.

Boaz, M., (1999), Teaching at a Distance: A Handbook for Instructors. Los Angeles, CA: League for Innovation in the Community College, a Division of Harcourt Brace & Company.

Boettcher, J. V., & Conrad, R., (1999), Faculty Guide for Moving Teaching and Learning to the Web, Los Angeles, CA: League for Innovation in the Community College, p. 16.

Center for Adult Learning and Educational Credentials. (1996), Guiding Principles for Distance Learning in a Learning Society. Washington, DC: American Council on Education, December.

Chickering, A.W., & Ehrmann, S.C., (1996), "Implementing the Seven Principles." AAHE Bulletin. Vol. 49, No. 2 (October). From AAHE website (www.aahe.org/technology/ehrmann.htm).

Commission on Higher Education (CHE), Middle States Association of Colleges and Schools. (1997), Policy Statement on Distance Education. Philadelphia, PA: CHE, February.

Daniel, J.S., (1996), Mega-Universities and Knowledge Media: Technology Strategies for Higher Education. London: Kogan Page Limited.

Drazdowski, T.A., Holodick, N.A., & Scappaticci, F.T., (1998), Infusing technology into a teacher education program: three different perspectives. Journal of Technology and Teacher Education, 6(2/3), pp. 141-149.

Educom Staff (1996), Should distance learning be rationed? Point counterpoint with Larry Gold and James Mingle. Educom Review, 31(2), 48-50, 52.

Fulford, C., & Zhang, S., (1993), Predicting student satisfaction from perceptions of interaction in distance learning. In Teleteaching, ed. G. Davies and B. Samways, pp. 259-268. North Holland: Elsevier Science Publishers.

Fullan, M. (1993), Change forces: Probing the depths of educational reform. New York: The Falmer Press.

Gilbert, S.W. (1996), "How to Think about How to Learn." Trusteeship. Special Issue, pp. 16-19.

Hazle, L. & Jess, J., (1998), Quality Assurance in Distance Education. Washington, DC: Academy for Educational Development, April.

He, K., (1998), Modern education technologies and education innovation.

Johnson, J., (1999), "The thread of a great and long tradition." TechKnowLogic. Vol. 1, No. 1, pp. 9-12.

Koss-Feder, L. (1998, July 20), Brushing up. Time, 15-19.

Lewis, L., Snow, K., & Farris, E., (1999), Distance Education at Postsecondary Education Institutions: 1997-98. National Center for Education Statistics (NCES), U.S. Department of Education, NCES #2000-013. Washington, DC: U.S. Government Printing Office.

Novek, E.M., (1996), Do professors dream of electronic sheep? Academic anxiety about the information age (ERIC Clearinghouse on Resources ED399594).

Pascarella, E., & Terenzini, P., (1991), How College Affects Students. San Francisco, CA: Jossey-Bass.

Peters, O., (1999), Teaching and Learning in Distance Education: Analysis and Interpretations from an International Perspective. London: Kogan Page Limited.

Phipps, R. A., & Merisotis, J. P., (1999), What’s the Difference? A Review of Contemporary Research on the Effectiveness of Distance Learning in Higher Education. Washington, DC: American Federation of Teachers and National Education Association.

Phipps, R. A., Wellman, J. V., & Merisotis, J. P., (1998), Assuring Quality in Distance Learning: A Preliminary Review. Washington, DC: Council for Higher Education Accreditation.

Poehlein, G. W. (1996), "Universities and information technologies for instructional programmes: issues and potential impacts", Technology Analysis & Strategic Management, Vol. 8 No. 3, pp, 283-90.

Southern Regional Electronic Campus. (1997), Principles of Good Practice. Atlanta, GA: Southern Regional Education Board, October.

Sumler, D. & Zirkin, B., (1995), "Interactive or Not Interactive? That is the Question." Journal of Distance Education. Spring, pp. 95-112.

Swalec, J.J., (1993), Engaging faculty in telecommunications-based instructional delivery systems (ERIC Clearinghouse on Information Resources ED368418).

Tapscott, D., (1996), The Digital Economy: Promise and Peril in the Age of Networked Intelligence. New York: McGraw-Hill.

U.S. Army, (1997), Army Distance Learning Program Master Plan: Coordinating Draft. Volumes I-III, June. From U.S. Army website (www.tadlp.monroe.army.mil/dlmasterplan.htm).

Volery, T. & Lord, D., (2000), "Critical success factors in online education." The International Journal of Educational Management, Volume 14, Issue 5, pp. 216-223.

Western Cooperative for Educational Telecommunications. (1997), Distance Education: A Consumer’s Guide. Boulder, CO: Western Interstate Commission for Higher Education, April.

Wisher, R.A., (1999), Training Through Distance Learning: An Assessment of Research Findings. Alexandria, VA: United States Army Research Institute for the Behavioral and Social Sciences.

Yeung, D., (2001), "Quality Assurance of Web-based Learning in Distance Education Institutions", Online Journal of Distance Learning Administration, Winter 2001, Volume 4, Issue 4. (http://www.westga.edu/~distance/jmain11.html).

 

Appendix A – Survey Questions

Institutional Support Benchmarks

  1. A documented technology plan that includes electronic security measures (i.e. password protection, encryption, back-up systems) is in place to ensure both quality standards and the integrity and validity of information.

     

  2. The reliability of the technology delivery system is as failsafe as possible.

     

  3. A centralized system provides support for building and maintaining the distance education infrastructure.

     

Course Development Benchmarks

  4. Guidelines regarding minimum standards are used for course development, design, and delivery, while learning outcomes – not the availability of existing technology – determine the technology being used to deliver course content.

     

  5. Instructional materials are reviewed periodically to ensure they meet program standards.

     

  6. Courses are designed to require students to engage in analysis, synthesis, and evaluation as part of their course and program requirements.

     

Teaching / Learning Process Benchmarks

     

  7. Student interaction with faculty and other students is an essential characteristic and is facilitated through a variety of ways, including voice-mail and / or email.

     

  8. Feedback to student assignments and questions is constructive and provided in a timely manner.

     

  9. Students are instructed in the proper methods of effective research, including assessment of the validity of resources.

     

Course Structure Benchmarks

  10. Before starting an online program, students are advised about the program to determine (1) if they possess the self-motivation and commitment to learn at a distance and (2) if they have access to the minimal technology required by the course design.

     

  11. Students are provided with supplemental course information that outlines course objectives, concepts and ideas, and learning outcomes for each course are summarized in a clearly written, straightforward statement.

     

  12. Students have access to sufficient library resources that may include a "virtual library" accessible through the World Wide Web.

     

  13. Faculty and students agree upon expectations regarding times for student assignment completion and faculty response.

     

Student Support Benchmarks

  14. Students receive information about programs, including admission requirements, tuition fees, books and supplies, technical and proctoring requirements, and student support services.

     

  15. Students are provided with hands-on training and information to aid them in securing material through electronic databases, interlibrary loans, government archives, news services and other sources.

     

  16. Throughout the duration of the course/program, students have access to technical assistance, including detailed instructions regarding the electronic media used, practice sessions prior to the beginning of the course, and convenient access to technical support staff.

     

  17. Questions directed to student service personnel are answered accurately and quickly, with a structured system in place to address student complaints.

     

Faculty Support Benchmarks

  18. Technical assistance in course development is available to faculty, who are encouraged to use it.

     

  19. Faculty members are assisted in the transition from classroom teaching to online instruction and are assessed in the process.

     

  20. Instructor training and assistance, including peer mentoring, continue through the progression of the online course.

     

  21. Faculty members are provided with written resources to deal with issues arising from student use of electronically accessed data.

     

Evaluation and Assessment Benchmarks

  22. The program’s educational effectiveness and teaching/learning process are assessed through an evaluation process that uses several methods and applies specific standards.

     

  23. Data on enrollment, costs and successful / innovative uses of technology are used to evaluate program effectiveness.

     

  24. Intended learning outcomes are reviewed regularly to ensure clarity, utility and appropriateness.

     

  25. List the three most important quality benchmarks that are not included in this questionnaire but that you feel are relevant to Web-based learning.

     

Appendix B – Important quality benchmarks not listed in the questionnaire that respondents feel are relevant to Web-based learning

  • Attractiveness
  • Accuracy
  • Capacity
  • Consistency
  • Creativeness
  • Flexibility
  • Feasibility
  • Fun
  • Informative
  • Interesting
  • Interaction
  • Innovation
  • Motivation
  • Popularity
  • Reliability
  • Rich content
  • Stability
  • Technical Support
  • User Friendliness
   

Online Journal of Distance Learning Administration, Volume V, Number II, Summer 2002

State University of West Georgia, Distance Education Center
