Measuring Student Perceptions in Web-Based Courses: A Standards-Based Approach


Joe Jurczyk, Susan N. Kushner Benson, John R. Savery
College of Education
The University of Akron
jpj2@uakron.edu

Abstract

This paper outlines a method of identifying student perceptions throughout a distance learning course. Using a questionnaire based on standards from the Institute of Higher Education Policy (IHEP) as a guide, instructors and administrators can measure various aspects of the distance education experience and their importance to students. By administering the questionnaire before, during, and after a course, instructors can analyze the results at each individual point in time and also in terms of change over time. Similarly, the results can be compared to the published IHEP benchmarks to assess class perceptions relative to other students. Such measurements can provide insight into the perceptions of distance learning students during the educational process, where few other accepted measurement methods exist.

Introduction

The purpose of this article is to describe a method that instructors and administrators can use to measure student perceptions in a distance learning environment using a set of standards from the Institute of Higher Education Policy (IHEP). Examining the perceptions of a target population is a widely used strategy based on the premise that perceptions matter and often influence behaviors. This approach has been used to study faculty perceptions of distance education (Belcheir & Cucek, 2002) as well as student perceptions of online learning (O'Malley & McCraw, 1999; Peters, 2001; Schönwetter & Francis, 2003). These studies used surveys and traditional course evaluations to identify the critical elements in effective online instruction as perceived by students. Cope and Ward (2002) used a phenomenological research approach to examine the importance of high school teacher perceptions on the integration of learning technology in the classroom and concluded that “teacher perceptions of learning technologies are likely to be key factors in the successful integration of learning technologies” (p. 72). They further noted that successful integration is more likely to occur when “teachers perceive learning technologies as part of a student-centered/conceptual change teaching approach” (p. 72). The questionnaire developed for this study is based on the IHEP standards and is thus closely aligned with the issues and concerns of distance learning. This approach allows teachers and administrators to conduct formative evaluations and gain an understanding of their learners in a setting where formal and informal feedback may otherwise be scarce, and where instructors must maintain two-way communication with students or risk losing them. The method we describe in this paper may be applicable to other formal learning environments; however, this work is grounded in a higher education distance learning context. Specifically, a standards-based questionnaire was used in a graduate course in social science research methods.

As documented by educational leaders in the academic and corporate sectors, by the U.S. government (Web-based Education Commission, 2000), by publications, and by students themselves, distance learning is growing. Higher education institutions use distance learning both as an alternative channel for providing instruction to their existing students and as a way to expand their reach to new students. Learners enjoy the benefits of being able to take classes while still maintaining their jobs and their personal and family commitments (Scollin & Tello, 1999). Distance education offers institutions a way to increase enrollment, to reach more students, and to compete with other institutions, both academic and corporate, that offer similar learning opportunities. Because distance learning is still relatively new and because of the remote nature of the teaching, instructors and administrators may have little insight into how students are faring during these courses. Unlike traditional courses, where the instructor stands in front of students in a classroom, lectures, and receives physical cues or verbal feedback, an online instructor with remote students does not always get such information about the course content and the instruction process. Similarly, online instructors may not know how to "read" students during a course. Identifying the expectations of the students and their perceptions of this learning environment can provide the instructor with valuable feedback. Although the learning may be at a distance, the interaction between the teacher and students should not be.

Institute of Higher Education Policy (IHEP) Standards

The ability of instructors and students to adapt to the capabilities and constraints of the online learning environment will directly influence the success of the undertaking. One way to promote quality in online instruction is to base both the development and the evaluation of online learning on established standards.

In 1999 the Institute of Higher Education Policy (IHEP), with support from Blackboard, a maker of web-based course management systems, and the National Education Association, conducted a study of quality in Internet-based distance education (Phipps & Merisotis, 2000). The purpose of the work was to identify a list of standards within the distance education field that addressed issues involved in the process for students, instructors, and administrators. The standards were based on literature reviews and on interviews conducted with 147 individuals (faculty members, students, and administrators) at six accredited institutions that are leaders in distance education. These schools all offered multiple degrees through online distance learning. The standards were designed to provide a set of best practices for distance learning within higher education environments. Forty-five benchmarks were identified by the IHEP and organized into seven major categories. The seven categories are described in Table 1.

Table 1. IHEP Distance Learning Benchmark Categories

Institutional Support: Activities by the institution that help to ensure an environment conducive to maintaining quality distance education, as well as policies that encourage the development of Internet-based teaching and learning, including technological infrastructure issues, a technology plan, and professional incentives for faculty.

Course Development: The development of courseware, which is produced largely by individual faculty members (or groups of faculty) on campus, by subject experts in organizations, and/or by commercial enterprises.

Teaching/Learning Process: Activities related to pedagogy, including interactivity, collaboration, and modular learning.

Course Structure: Policies and procedures that support and relate to the teaching/learning process, including course objectives, availability of library resources, types of materials provided to students, response time to students, and student expectations.

Student Support: Student services normally found on a college campus, including student training and assistance while using the Internet.

Faculty Support: Activities that assist faculty in teaching online, including policies for faculty transition help as well as continuing assistance throughout the teaching period.

Evaluation and Assessment: Policies and procedures that address how, or if, the institution evaluates Internet-based distance learning, including outcomes assessment and data collection.

Our Experience Using the IHEP Standards to Survey Students

We first used the IHEP standards in a web-based, graduate research methods course offered in the Summer and Fall semesters of 2002. The course was offered on WebCT, a widely used course management system. As an instructor teaching her first web-based course, the second author was both exhilarated and nervous at the prospect: exhilarated by the opportunity to teach material in a new environment, and nervous about not knowing how the students would take to the learning environment or how she would be able to identify and address their concerns. The IHEP standards offered an established framework for measuring opinions at various intervals for comparative purposes.

Using the IHEP benchmarks as a guideline, a questionnaire was developed to measure student attitudes before, during, and after the course. Content validity was of primary concern for the questionnaire. Content validity is defined as the degree to which items match the content domain from which they are being sampled (American Educational Research Association, 1999). The IHEP benchmarks were developed using a comprehensive literature review to identify benchmarks recommended by other organizations and groups and those suggested in articles and publications, thus providing evidence of the content validity of the IHEP benchmarks and of our questionnaire. To circumvent any suggestion of researcher bias, the first and third authors managed the entire data collection and analysis process. Questionnaires were distributed electronically to students in one class section, and the other class received the questionnaire by mail. The first questionnaire was distributed (electronically and by mail) two weeks before the beginning of the course, the second questionnaire during the middle of the semester, and the final questionnaire one week before the course was complete.

We determined that including all 45 benchmarks would make the questionnaires too long, so we selected the three general categories related most directly to student learning (i.e., the teaching/learning process, course structure, and student support). Each of the 22 benchmarks in these categories was reworded in the appropriate tense (future, present, past) to match the timing of the survey. For example, one of the standards used in the study was "student interaction with faculty is facilitated through a variety of ways." On the pre-course survey this benchmark read "My interaction with the instructor will be facilitated through a variety of ways"; at mid-semester it read "My interaction with the instructor is facilitated through a variety of ways"; and on the final survey it read "My interaction with the instructor was facilitated through a variety of ways."

Two sets of questions were developed for each of the 22 standards; thus, the overall questionnaire consisted of 44 questions (see Appendix A). For the first set of questions, students were asked to consider their satisfaction with the benchmark along a 7-point scale from Strongly Disagree to Strongly Agree. For the second set of questions, students were asked to consider the importance of the benchmark along a 5-point scale from Not Important to Very Important. We chose these rating scales because they were the same as those used in the IHEP study. This allowed us to make comparisons between our data and the IHEP data. Each standard in the IHEP report includes mean scores and standard deviations for satisfaction and mean scores for importance. The IHEP scores provided reference points that allowed us to understand the instructor and students in our study and to compare our results to the distance learning population in the IHEP study.
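To make the structure of the instrument concrete, the sketch below shows one way the paired satisfaction and importance questions could be represented in code. It is purely illustrative; the field names are ours, only two of the 22 benchmark statements are shown, and this is not a description of how the original questionnaire was actually built.

    # Each benchmark statement yields two questions: satisfaction on a 7-point
    # scale (1 = Strongly Disagree ... 7 = Strongly Agree) and importance on a
    # 5-point scale (1 = Not Important ... 5 = Very Important).
    benchmarks = [
        "My interaction with the instructor will be facilitated through a variety of ways.",
        "Feedback about my assignments and questions will be provided in a timely manner.",
        # ... the remaining benchmark statements from Appendix A
    ]

    def build_questionnaire(statements):
        questions = []
        for text in statements:
            questions.append({"statement": text, "dimension": "satisfaction", "scale": (1, 7)})
            questions.append({"statement": text, "dimension": "importance", "scale": (1, 5)})
        return questions

    # With all 22 benchmark statements, this produces the 44-question instrument.
    print(len(build_questionnaire(benchmarks)))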

Analyzing and Evaluating Instructor and Student Responses

Descriptive Statistics

One method for analyzing and evaluating instructor and student responses is to calculate means, standard deviations, and effect sizes. Table 2 displays instructor ratings and student means and standard deviations for five of the questionnaire items at each administration, along with the corresponding IHEP benchmark means and standard deviations. Because the purpose of this study is to describe a process for evaluating perceptions, we have chosen to report the results for only these five questionnaire items.
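For readers who want to reproduce these descriptive statistics from their own raw responses, the short sketch below illustrates the calculation for a single item at a single administration. It is our own illustration, not part of the original study, and the list of ratings is hypothetical.

    import statistics

    # Hypothetical 7-point satisfaction ratings for one item at one administration
    ratings = [6, 7, 5, 6, 7, 6, 5, 7, 6, 6]

    mean = statistics.mean(ratings)   # student mean (M)
    sd = statistics.stdev(ratings)    # sample standard deviation (SD)
    print(f"M = {mean:.1f}, SD = {sd:.2f}")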

Table 2. Student Support: Student and Instructor Satisfaction Ratings & IHEP Benchmarks
(Satisfaction rated on a 7-point scale, 1 = Strongly Disagree to 7 = Strongly Agree; Inst. = instructor rating, M = student mean, SD = student standard deviation)

                                        Time #1           Time #2           Time #3           IHEP
                                        Semester Start    Mid-Semester      End-of-Semester   Benchmark
                                        (n=21)            (n=25)            (n=24)
                                        Inst.  M    SD    Inst.  M    SD    Inst.  M    SD    M    SD

1. I am able to obtain assistance to help me use electronically accessed data successfully.
                                        2      6.1  0.76  7      6.0  1.37  7      6.2  0.94  5.2  1.65

2. I have been provided with hands-on training and information to aid me in securing materials through online databases.
                                        2      5.3  1.12  5      5.6  1.55  5      5.6  1.53  5.1  1.83

3. Written information is supplied to me about the course.
                                        4      5.5  1.39  5      5.8  1.83  5      6.0  1.41  6.2  1.22

4. Easily accessible technical assistance is available to me throughout the duration of the course.
                                        1      5.6  1.10  6      5.8  1.27  6      5.9  1.12  5.4  1.74

5. A structured system is in place to address my concerns.
                                        3      6.1  0.79  7      6.1  1.47  7      6.5  0.68  5.3  1.66

The five questions fall under the IHEP category of Student Support. The first item, “I am able to obtain assistance to help me use electronically accessed data successfully,” is displayed on the first line of the table. By reading across the table, an instructor can compare her own ratings of the item with her students' ratings throughout the semester. Keeping in mind that this item was rated on a 7-point scale (1 = Strongly Disagree and 7 = Strongly Agree), it is apparent that at the beginning of the semester the instructor in this study was concerned about providing her students with sufficient assistance (instructor rating = 2). As the semester progressed, however, the instructor grew considerably more confident (instructor rating = 7). In contrast, the students had high expectations and ratings throughout the semester, with a mean score of 6.0 or higher at every administration. As the semester progressed, the instructor's and students' perceptions related to this standard converged. Throughout the semester, student ratings for this item were higher than the mean IHEP score of 5.2. A similar pattern can be observed for the remaining four items in this category; thus, the instructor concluded that the support provided for students appeared to be quite satisfactory.

It might be tempting for instructors to compare student and instructor ratings using mean scores alone, because this involves only simple subtraction. Mean scores lack precision, however, because they do not take into consideration the variability within each measure. To gain further insight, effect sizes can be calculated to compare the differences between mean scores more precisely. An effect size is a descriptive statistic that estimates the magnitude of the difference between two groups. We used an online effect size calculator hosted by the University of Colorado at Colorado Springs (Becker, 1998). The effect sizes were calculated as follows:

d = (M1 - M2) / SDpooled

where M1 is the mean score for Group 1, M2 is the mean score for Group 2, and SDpooled is the square root of the average of the squared standard deviations, i.e., SDpooled = sqrt[(SD1^2 + SD2^2) / 2].
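The same formula is easy to apply directly, without the online calculator. The sketch below is our own illustration (not part of the original study); it reproduces the effect size reported in Table 3 for the first item, comparing the end-of-semester student ratings from Table 2 with the IHEP benchmark.

    import math

    def effect_size(mean_1, sd_1, mean_2, sd_2):
        # d = (M1 - M2) / SDpooled, where SDpooled is the square root of the
        # average of the squared standard deviations
        sd_pooled = math.sqrt((sd_1 ** 2 + sd_2 ** 2) / 2)
        return (mean_1 - mean_2) / sd_pooled

    # Item 1, end-of-semester student ratings (M = 6.2, SD = 0.94) versus the
    # IHEP benchmark (M = 5.2, SD = 1.65); Table 3 reports this comparison as .74.
    print(round(effect_size(6.2, 0.94, 5.2, 1.65), 2))  # 0.74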

Table 3 presents the effect sizes calculated for 20 comparisons involving student ratings of the five questionnaire items in the Student Support category. The five items are listed in the rows, and the comparisons are listed across the columns. Effect sizes are reported in the cells and can be interpreted as follows: an effect size of .20 is considered small, .50 medium, and .80 large (Cohen, 1988).

Table 3. Student Support: Effect sizes between student ratings at three points of time during the semester and with IHEP Benchmark.

                                        Student       Student       Student       Student &
                                        Time 1 x 2    Time 2 x 3    Time 1 x 3    IHEP Benchmark

1. I am able to obtain assistance to help me use electronically accessed data successfully.
                                        -.09          .17           .12           .74*

2. I have been provided with hands-on training and information to aid me in securing materials through online databases.
                                        .22*          0             .22*          .30*

3. Written information is supplied to me about the course.
                                        .18           .12           .35*          -.15

4. Easily accessible technical assistance is available to me throughout the duration of the course.
                                        .16           .08           .27*          .34*

5. A structured system is in place to address my concerns.
                                        0             .34*          .54*          .95*

* Comparisons in which the effect sizes are greater than plus/minus .20.

For example, the effect sizes for the first item, “I am able to obtain assistance to help me use electronically accessed data successfully,” are reported across the first row. Comparing students' perceptions of this item between their first rating (before the semester began) and their second rating (halfway through the semester) produced an effect size of -.09, meaning that there was no noticeable difference in student perceptions over this period of time. Continuing to read across the row, the instructor concluded that the small effect sizes were evidence that students' perceptions of this item did not change during the semester. In contrast, the effect size of .74 comparing students' end-of-course perceptions with the IHEP data indicated a large positive difference between how her students perceived the assistance they received and how online students in general rated this standard.

Graphic Display

In addition to reporting descriptive statistics, instructors can graph the results to provide a visual means of understanding the same data. Figure 1 displays the mean scores for the five questionnaire items in the Student Support category:

    1. Ability to obtain assistance to use electronic data
    2. Provided with hands-on training to conduct online research
    3. Written information is provided with the course
    4. Technical assistance is easily available
    5. Structured system is in place to address questions

Figure 1. Student Support -Satisfaction and Importance

 

Students' ratings of the importance of each item were plotted along the abscissa, and students' ratings of their satisfaction with the item were plotted along the ordinate. On the 7-point Satisfaction scale, mean scores for all five items were greater than 5.0, indicating that students had high expectations for the support they would receive during the semester. Similarly, on the 5-point Importance scale, students perceived that the five items would be important features of the class.
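Instructors who want to produce a similar display can do so with a few lines of plotting code. The sketch below is our own illustration rather than the authors' procedure: the satisfaction values are the semester-start means from Table 2, but the importance means are hypothetical placeholders because those values are not reported in the text.

    import matplotlib.pyplot as plt

    labels = ["Assistance with electronic data", "Hands-on training",
              "Written course information", "Technical assistance",
              "System to address questions"]
    importance = [4.3, 4.1, 4.0, 4.5, 4.4]    # hypothetical 5-point importance means
    satisfaction = [6.1, 5.3, 5.5, 5.6, 6.1]  # Table 2, Time #1 satisfaction means

    fig, ax = plt.subplots()
    ax.scatter(importance, satisfaction)
    for x, y, label in zip(importance, satisfaction, labels):
        ax.annotate(label, (x, y), textcoords="offset points", xytext=(5, 3), fontsize=8)
    ax.set_xlabel("Importance (1 = Not Important, 5 = Very Important)")
    ax.set_ylabel("Satisfaction (1 = Strongly Disagree, 7 = Strongly Agree)")
    ax.set_xlim(1, 5)
    ax.set_ylim(1, 7)
    ax.set_title("Student Support: Satisfaction and Importance")
    plt.show()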

Displays can also be generated to show trends over time. Figure 2 shows how student perceptions of the first item varied during the semester and how they compared to the IHEP standard. For this item the instructor noted that student satisfaction and the perception of importance initially decreased between the week before the course began and halfway through the semester.

Figure 2. Student Support Ratings Over Time and IHEP Benchmark

Because many of the students in this class were taking their first or second distance learning course, the instructor concluded that initial expectations about the course may have differed greatly from the later survey results, by which time students had gained experience and familiarity with their learning environment. In contrast, student satisfaction and perception of importance increased during the last half of the semester. At the end of the semester, students rated their satisfaction with the support they received higher than the IHEP comparison rating; however, both groups of students believed in the importance of providing student support.
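A trend display of this kind can be generated in much the same way. The sketch below (again our own illustration, not the authors' code) plots the item 1 student satisfaction means from Table 2 against the corresponding IHEP benchmark.

    import matplotlib.pyplot as plt

    times = ["Semester start", "Mid-semester", "End of semester"]
    class_means = [6.1, 6.0, 6.2]   # Table 2, item 1 student satisfaction means
    ihep_mean = 5.2                 # IHEP benchmark mean for the same standard

    fig, ax = plt.subplots()
    ax.plot(times, class_means, marker="o", label="Class mean")
    ax.axhline(ihep_mean, linestyle="--", label="IHEP benchmark")
    ax.set_ylim(1, 7)
    ax.set_ylabel("Satisfaction (1 = Strongly Disagree, 7 = Strongly Agree)")
    ax.set_title("Assistance with electronically accessed data")
    ax.legend()
    plt.show()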

Some Important Considerations

Distance learning changes three important dynamics of instruction – the modalities of communication, the management of time, and the formats for assessment. Increasing the distance between the instructor and the student presents both challenges and opportunities that must be addressed by online instructors. See Appendix B for a more detailed examination of these three significant differences. Distance learning professionals can use the IHEP standards to gain an understanding of how their students feel toward their instruction, the support they receive, and the class in general.

Response Rate

In a traditional face-to-face classroom, course evaluations are typically distributed to students in class. Although it is rare for 100 percent of the students to complete the course evaluation, the response rate is very high, with perhaps only a few students absent from class. Response rates are extremely important. If only a small number of your students complete a survey, and that group consists mostly of students who were satisfied with the course (with non-completers being dissatisfied), the results will disproportionately portray your class as satisfied and will be missing representative information from the dissatisfied students. In a traditional classroom, students are a “captive audience”. In an online classroom, it is considerably easier for students to opt out of completing the course evaluation, whether it is a paper-and-pencil instrument that is mailed to students or an instrument that is available online. In our study, we used both a mailed questionnaire and an online questionnaire, and we did not find a statistically significant difference in the response rates for the two methods; however, our response rates appeared to be somewhat lower than what we would have expected in a face-to-face classroom. The response rates for the three administrations were 57%, 68%, and 65%, respectively. The mailed questionnaire was more costly and time-intensive to administer, and in subsequent classes evaluations have been conducted solely online.
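One simple way to check a claim like this is a two-proportion z-test on the completion counts for each administration method. The sketch below is a generic illustration with hypothetical counts; it is not the test or the data used in the study.

    import math

    def two_proportion_z(completed_a, n_a, completed_b, n_b):
        # z-statistic for the difference between two response-rate proportions
        p_a, p_b = completed_a / n_a, completed_b / n_b
        p_pool = (completed_a + completed_b) / (n_a + n_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        return (p_a - p_b) / se

    # Hypothetical counts: 8 of 12 online students and 7 of 12 mailed students responded.
    z = two_proportion_z(8, 12, 7, 12)
    print(round(z, 2))  # 0.42; |z| < 1.96, so not significant at alpha = .05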

Length of the Questionnaire

In our study, some students complained that the questionnaire was too long. Our questionnaire had 22 items, but we asked students to rate both their satisfaction with each item and their perception of its importance. Monitoring the length of the questionnaire is especially important if you want to collect data throughout the semester rather than only at the end of the semester. You may also want to ask questions outside the scope of the standards (e.g., On average, how much time do you spend online each week for this class?). Such questions can be added to the beginning or end of the questionnaire depending on their importance and the sensitivity of the content. Ideally, it should take students no longer than 15 minutes to complete the questionnaire. In addition to the number of questions, other factors to consider include the complexity of the reading material and the response format. Short, clearly written forced-choice questions that permit students to simply “click” on their response take considerably less time to complete than lengthier, open-ended questions that require students to type out their responses. If your questionnaire will take longer to complete, let students know ahead of time how much time they should allot for it. Either way, before actually administering the questionnaire, test it out yourself to see how long it will take students to complete.

Security and Privacy

Students will not complete the questionnaires honestly if they do not trust the process. As we learned from follow-up questioning of students in the class, some students decided not to complete the online questionnaires because they believed that their answers could be traced back to them and that their ratings and any comments they made would be seen by the instructor. Although we assured students that their responses would not be shared with the instructor until after the semester had concluded (and grades had been submitted), some students were still concerned about the anonymity of their responses. This may have been more problematic for our study because of the instructor's relationship to the research study (she is the second author). Instructors who choose to administer course evaluations online will need to address the issue of anonymity in a forthright manner.

Administration Method

Instructors will need to decide who will administer the questionnaires and who will tabulate and report the data. The decision to use web-based surveys or mailed questionnaires depends on a number of factors. Web-based surveys may provide time and cost savings, but they assume that you are able to develop and host them and that your students will not have problems accessing or completing them. Web-based surveys can be set up using the internal survey feature of the course management software, through third-party survey software, or through a custom-developed survey. Student responses can often be tabulated quickly and efficiently and made available in a timely manner. This is especially important if you use pre-course or mid-semester surveys.

Mailed questionnaires require less technical skill, but they usually require more time and money. Paper instruments must be printed, envelopes must be addressed to students, and sufficient postage must be used. In addition, stamped return envelopes must be provided. Students should be assured of the anonymity of their responses. Although student addresses should not appear on the returned questionnaires, students might be concerned that postmarks could provide a clue to their identity. This can be avoided by having the questionnaires returned to a central office rather than directly to the instructor. If you require students to record their responses on a scantron form, the data can be tabulated relatively quickly. In contrast, hand-scored surveys may take considerably longer to tabulate.

Summary

The process we have described allows distance learning instructors and administrators to better understand how their students are faring in online classes. The IHEP standards fill a need for feedback among online teachers who may not get this information in any other manner. They provide objective criteria and a framework that allows comparisons over time, against the IHEP benchmarks, or even against the ratings of the instructor. Using measurements at three points in time, instructors are able to identify trends in a class and the impact of any changes made during the class. This approach also allows for faster responses and changes than an end-of-class survey, by which time it is too late to help the students.


References

American Educational Research Association (1999). Standards for Educational and Psychological Testing. Washington, DC.

Becker, L. A. (1998). Effect size calculators. University of Colorado at Colorado Springs. Retrieved from http://www.uccs.edu/~lbecker/psy590/escalc3.htm

Belcheir, M. J., & Cucek, M. (2002). Faculty perceptions of teaching distance education courses. Research Report 2002. (Boise State University Institutional Assessment Report 2002-02). Retrieved from http://www2.boisestate.edu/iassess/Reports/Reports%202002/RR%202002-02.pdf

Cope, C., & Ward, P. (2002). Integrating learning technology into classrooms: The importance of teachers' perceptions. Educational Technology & Society, 5(1), 67-74.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

O'Malley, J., & McCraw, H. (1999). Students perceptions of distance learning, online learning, and the traditional classroom. Online Journal of Distance Learning Administration, 2(4). Retrieved on October 11, 2004, from http://www.westga.edu/~distance/omalley24.html

Peters, L. (2001). Through the Looking Glass: Student Perceptions of Online Learning. The Technology Source, September/October 2001. Retrieved from http://ts.mivu.org/default.asp?show=article&id=907

Phipps, R., & Merisotis, J. (2000). Quality on the line: Benchmarks for success in Internet-based distance education. Blackboard and National Education Association. Retrieved from http://www.ihep.org

Scollin, P. A., & Tello, S. F. (1999). Implementing distance learning: Frameworks for change. The Internet and Higher Education, 2(1), 11-20.

Schönwetter, D., & Francis, H. (2003). Student Perceptions of Learning Success with Technology. McGraw Hill Ryerson. Retrieved from http://www.mcgrawhill.ca/highereducation/images/allstudents.pdf

Web-based Education Commission (2000). The power of the Internet for learning: Moving from promise to practice. Washington, DC: U.S. Department of Education. Retrieved from http://interact.hpcnet.org/webcommission

Appendix A – Survey

Part I – Your Expectations About the Teaching and Learning Process

1. My interaction with the instructor will be facilitated through a variety of ways.

2. My interaction with other students will be facilitated through a variety of ways.

3. Feedback about my assignments and questions will be provided in a timely manner.

4. Feedback will be provided to me in a manner that is constructive and non-threatening.

5. Courses will be separated into self-contained modules that can be used to assess my mastery before moving forward in the course.

6. The modules will be of varying lengths determined by the complexity of learning outcomes.

7. Each module will require me to engage in analysis, synthesis, and evaluation as part of the course assignments.

8. E-mail addresses and a course message board will be provided to encourage students to work with each other and the instructor.

9. The course will be designed to require students to work in groups utilizing problem-solving activities in order to develop topic understanding.

10. Course materials will promote collaboration among students.

11. I will be provided with supplemental course information that outlines course objectives, concepts, and ideas.

Part II – Your Expectations about the Structure of the Course

12. Specific expectations will be set for me with respect to a minimum amount of time per week for study and homework assignments.

13. The instructor will grade and return all assignments within a certain time period.

14. Sufficient online resources will be made available to me.

15. I will be instructed in the proper methods of effective research, including assessment of resource validity.

16. Before starting the course, I will be advised about the course to determine if I have the self-motivation and commitment to learn at a distance.

17. Learning outcomes for the course will be summarized in a clearly written, straightforward statement.

Part III – Your Expectations about the Support Provided for Students

18. I will be able to obtain assistance to help me use electronically accessed data successfully.

19. I will be provided with hands-on training and information to aid me in securing material through online databases.

20. Written information will be supplied to me about the course.

21. Easily accessible technical assistance will be available to me throughout the duration of the course.

22. A structured system will be in place to address my questions.


Appendix B

Differences Between Face-to-Face and Distance Learning Environments

Distance learning changes three important dynamics of instruction – the modalities of communication, the management of time, and the formats for assessment. Increasing the distance between the instructor and the student presents both challenges and opportunities that must be addressed by online instructors.

Changes to the Modalities of Communication

Given current bandwidth constraints and technology limitations, the nature of distance learning reduces much of the online communication between the students and the instructor to text messages rather than verbal conversation and the body language exhibited in a face-to-face classroom. As noted by communication researchers, nonverbal cues are the dominant source of meaning in interpersonal communication (Mehrabian, 1972). This does not mean that teachers and students understand each other better; only that they communicate on several levels when engaging face-to-face. With fewer channels of communication available it is essential that the teacher and the students practice writing for clarity and meaningfulness. Lewis (2000) proposes the WRITE way to communicate online. WRITE is an acronym for a style of writing that is Warm, Responsive, Inquisitive, Tentative, and Empathetic. Successful online instructors must model appropriate language and tone if they wish students to respond in a similar fashion.

Other issues that can affect successful online communication include the typing skills of students and instructors. Although slow typing may hold some participants back, the ability to use tools such as spell checkers or a thesaurus to compose responses off-line, and then paste those responses into the online forums, mitigates this problem somewhat. Students in online environments tend to be more uninhibited (Siegel, Dubrovsky, Kiesler & McGuire, 1986) and willing to share personal information and become intimate with their classmates more quickly than in a face-to-face classroom. Students also tend to be less aware or respectful of status differences in the online environment, particularly in classes where the students are adults (Harasim, 1988; Sproull & Kiesler, 1992). Truly expressing oneself in an online setting can be difficult. When instructors believe there has been a breakdown in communication, they may need to schedule phone calls or visits to re-establish the dialog.

Changes to the Use of Time in Online Teaching and Learning

Face-to-face instruction requires the student and the instructor to be in the same place at the same time. Online instruction removes time and place constraints. This asynchronous mode of communication requires both students and the instructor to be self-regulating and to check in on the instructional conversation on a regular basis. Unlike a face-to-face class where regular communication may be limited to once or twice a week during class meetings, distance-learning participants can send messages, ask questions and post responses at any time. There is an expectation that messages posted to a discussion forum or sent via email will receive a response within an agreed upon timeframe – usually within 24 hours. When properly implemented, students and teachers can communicate more quickly and easily throughout the class.

Instructors new to online teaching report increased time and effort is required to prepare instructional materials for online delivery (Web-based Education Commission, 2000). It is necessary for instructors to have the majority of the materials ready for the entire course on the first day of class – especially the detailed course syllabus. Online learners need to schedule their time and that assumes they have a complete schedule of assignments and learning tasks. Instructors must then monitor student learning during the rest of the course.

Students new to online learning often anticipate that an online class will be easier (Hughes, 2001); however, the reality is that the workload is often higher. Students with no experience in a distance-learning environment can be unsure of how things are done and what is expected of them compared to a traditional class. It takes time for online students to truly understand the need to get organized and stay current with assignments. Many online instructors pre-screen their students to ensure that they are sufficiently self-regulated to manage their time and be successful in the online learning environment. The University of Illinois Online (2003) provides several self-assessment surveys for students to take.

Changes to the Formats for Assessment

Traditional classroom-based assessment, where the instructor is present to proctor the exam, is not possible in the online environment. However, most course management systems provide the instructor with tools for creating timed tests using a variety of question formats (multiple choice, true-false, short answer, etc.) that are suitable for assessing some forms of learning. Some instructors choose to replace these quizzes and tests with team assignments, papers, and/or participation in the class. The assessment of learning in online courses cannot be effective or efficient if the content within the online course is not appropriate for online delivery (Bothel, 2002). Simply stated, online learning should not be limited to factual recall; it should emphasize the application of skills and knowledge to problem situations that resonate with the knowledge domain, and the assessment strategies should allow for the demonstration of that learning.

References for Appendix B

Bothel, R. T. (2002). A cautionary note about online assessment. In Anderson, R., Bauer, J., & Speck, B. (Eds.), Assessment Strategies for the Online Teacher: From Theory to Practice. San Francisco: Jossey-Bass.

Harasim, L. M. (1988). Online Group Learning/Teaching Methods (Technical Paper #7). Education Evaluation Centre, The Ontario Institute for Studies in Education.

Hughes, A. (2001). Online learning: Is it for me? Rochester, NY: Monroe Community College. Retrieved February 5, 2003, from http://www.monroecc.edu/depts/distlearn/minicrs/

Lewis, C. (2000). Taming the Lions and Tigers and Bears: The WRITE WAY to Communicate Online. In K. White & B. Weight (Eds.), The Online Teaching Guide. Needham Heights, MA: Allyn and Bacon.

Mehrabian, A. (1972). Tactics of Social Influence. Englewood Cliffs, NJ: Prentice-Hall.

Siegel, J., Dubrovsky, V., Kiesler, S., & McGuire, T. (1986). Group processes in computer-mediated communication. Organizational Behavior and Human Decision Processes, 37, 157-187.

Sproull, L., & Kiesler, S. (1992). Connections. Cambridge, MA: MIT Press.

The University of Illinois Online (2003). Retrieved from http://www.online.uillinois.edu

Web-based Education Commission (2000). The power of the Internet for learning: Moving from promise to practice. Washington, DC: U.S. Department of Education. Retrieved from http://interact.hpcnet.org/webcommission/


Online Journal of Distance Learning Administration, Volume VII, Number IV, Winter 2004
State University of West Georgia, Distance Education Center