Distance Education: Better, Worse, Or As Good As Traditional Education?


Shelia Tucker
East Carolina University
tuckers@mail.ecu.edu

Abstract

This study examined pre-test and post-test scores, homework grades, research paper grades, final exam scores, final course grades, learning styles, and ages of distance education and traditional students enrolled in a business communications class to determine if distance education is better, worse, or as good as traditional education. Significant differences were found for post-test scores, final exam scores, and age. There were no significant differences in pre-test scores, homework grades, research paper grades, and final course grades. Both groups preferred clearly organized coursework and performing at an above-average level, ranking in the top 25 to 33% of their class. Recommendations for research include investigating student social interaction and increasing the number of classes studied to compare results.

Introduction

Distance education is becoming a more vital part of the higher education family. Nearly every major American university now offers distance education courses. Distance education reaches a broader student audience, better addresses student needs, saves money, and, more importantly, applies the principles of modern learning pedagogy (Fitzpatrick, 2001). Public as well as political interest in distance education is especially high in geographic regions where the student population is widely distributed (Sherry, 1996). In fact, public policy leaders in some states are recommending the use of distance education as opposed to traditional learning.

As distance education increasingly becomes a vital part of higher education, one must ask whether distance education is in fact better, worse, or as good as traditional education. A vehement argument is being waged, pitting distance education against traditional face-to-face education. Some argue that distance education is different from other forms of education. Many educational technologists view it as being linked to technology (Garrison, 1987), an aspect that may play a role in course development and acceptance problems (Jeffries, 1996). According to Fox (1998), what is in dispute is not whether distance education is ideal, but whether it is good enough to merit a university degree, and whether it is better than receiving no education at all. He alludes to an argument that students learn far too little when the teacher is not personally present because the student has more to learn from the teacher than from the texts. Thus, in order for the student to be taught well, must the teacher be personally present?

Many advocates of distance education are ardent about their venue and very critical of traditional education. These online education devotees view traditional classes as unchangeable, inflexible, teacher-centered, and static (Fitzpatrick, 2001). Moreover, proponents argue that many simply would not be able to get a degree without distance education—the full-time police officer, the mother of four, or the individual living in a rural area 100 to 200 miles away from any educational institution. Many individuals desperately need distance education courses because they "have jobs, families, civic responsibilities. They are thirsting. But some want us to say, 'Sorry you don’t want to drink the water there, but we can’t bottle our fresh spring water, so you’ll have to come here or drink nothing'" (Fox, 1998, p. 5). Proponents contend that distance education is "as good as" traditional education; in other words, learning occurs as much in distance education as it does in traditional education. However, is this really so? Does distance education work better for some students than for others? Does student assessment in distance education differ from that in the traditional classroom (Phipps and Merisotis, 1999)?

Opponents of distance education may agree that it is possible for some learning to occur through this medium, but they argue that this is not enough. They stress focusing on the fullness of learning (Fox, 1998).

Review of Literature

A profusion of online articles presents arguments both for and against distance education. Why such a dichotomy of opinions? In spite of all the research studies conducted and the large amount of written material focusing on distance education, "there is a relative paucity of true, original research dedicated to explaining or predicting phenomena related to distance learning" (Phipps and Merisotis, 1999, p. 2). Most original research focuses on student outcomes (grades, test scores), student attitudes, and overall student satisfaction toward distance education, and most of these studies conclude that distance education compares favorably with classroom-based instruction. In fact, Fox (1998) stated that only theories, not proof, suggest that distance education students’ education is not worthy of a degree. He stated that he found no actual evidence from a single study, from distance education teaching experiences, or from students that proves such a deficiency. Fox, along with other distance education supporters, students, and professionals, supports the idea that distance education classes are good enough and that students are not sacrificing an on-campus education in order to get an education through distance education.

With few exceptions, students using technology in distance education have similar learning outcomes to students in the traditional classroom setting (Beare, 1989; McCleary & Egan, 1989; Sonner, 1999). Souder (1993) conducted a natural experiment that compared traditional students and distance education students in management of technology master’s degree programs. Results indicate that distance learners should not be viewed as disadvantaged in their learning experiences. Further, distance learners can perform as well as or better than traditional learners as measured by homework assignments, exams, and term papers. Equally important, as noted by researchers, is the fact that students in distance learning courses earned higher grades than those in the traditional classroom setting (Bartlett, 1997; Bothun, 1998; Heines & Hulse, 1996; Kabat & Friedel, 1990; Schutte, 1996; Souder, 1993). Gubernick and Ebeling (1997) stated that distance education students scored from five to ten percent higher on standardized achievement tests than did students in the traditional classroom setting. Conversely, as reported by other researchers, there are no significant differences in grades for distance education students versus traditional students (Freeman, 1995; Mortensen, 1995; McKissack, 1997).

Wiesner (1983) notes that an important question still remaining to be answered is, what factors account for student success or failure in distance learning programs? Is it possible that student learning style preferences have an effect on whether students succeed or fail? Students whose learning preferences (that is, strengths) were not supported were identified by their instructors as slow or poor achievers (Marshall, 1991). According to Sherry (1996), student preference for a particular mode of learning is an important variable in learning effectiveness, and effective learning requires knowledge of learner styles. What works for one type of learner may not necessarily work for another. Learning style, as defined by Canfield (1992), is the moving component of educational experience that motivates students to perform well. Recognizing the existence of alternate learning styles may help the instructor develop a local instructional theory, and, according to Owens and Straton (1980), a localized theory has a greater prospect of success than a general instructional theory. According to Dunn, Beaudry, and Klavas (1989), if learning preferences were supported by altering educational conditions to meet learning style preferences, statistically significant improvements in behaviors, grades, and attitudes would be observed. This philosophy can be referred to as "the match of critical learning style factors to environment and instruction" (Marshall, 1991, p. 226). In addition, there is a relationship between learning style variables and the satisfaction and completion of distance learning programs (Thompson, 1984; Moore, 1976).

Purpose of the Study

This study, which was conducted in 1999, compared traditional face-to-face education and distance education in an attempt to determine if distance education is better, worse, or as good as traditional education. Both groups were studied to determine whether there were significant differences in preferred learning styles, age, homework grades, research paper grades, final exam scores, final course grades, and subject matter knowledge as measured by a pre-test and post-test.

Methodology

Research participants were 47 undergraduate students enrolled in a business communications class at a large urban university in North Carolina. The university offers doctoral, master's, and baccalaureate degrees in liberal arts, professional fields, and sciences. The business communications course was designed to develop an understanding of the need for effective communications in business. Application of basic principles of written communications was used to solve specific business problems. Twenty-three students were enrolled in the traditional face-to-face class. Their ages ranged from 19 to 33, with the average age being 23. These students came from different majors that included Business, Vocational, and Technical Education; Social Work; General College; Geology; and Library Science. Four percent of the students were freshmen, 13% were sophomores, 35% were juniors, and 48% were seniors. The distance education class consisted of 24 students. Their ages ranged from 22 to 51, with the average age being 38. These students came from different majors that included Foreign Language, General College, Nutrition and Hospitality Management, Social Sciences, and University College. Two percent of the students were classified as visiting students, 9% were freshmen, 29% were sophomores, 21% were juniors, and 33% were seniors. Students opted to take the course either because it was required or because it counted as a free elective. A quasi-experimental research design was used to collect data for the study.

The researcher and the instructor of the course are the same person. Both classes had the same instructor, studied the same course content, used the same course materials, completed the same assignments, and were allotted the same time frame for completing assignments. All students were given the same pre-test, post-test, homework, research project, and final exam. The pre-test and post-test were designed by the author of the course textbook to test students’ knowledge of grammar and punctuation as well as the basic concepts crucial to business communication. Students were assigned the same homework problems taken from the end of each chapter covered during the course, and all were graded using the same criteria. Every student was required to complete a research project dealing with international business. Students selected a country other than the United States and prepared a seven- to nine-page paper that focused on the culture of that country as well as how to successfully conduct business there. The same grading matrix was used for both classes. Students completed the same final exam. Three-fourths of the final exam was composed of multiple-choice questions designed as case scenarios or situations that required students to apply the knowledge learned throughout the course; one-fourth consisted of true/false questions. In addition, both classes could contact the instructor by e-mail, telephone, fax, during office hours, or by appointment. Both classes were also required to participate in class discussions: the traditional class participated orally, and the distance education class participated through electronic threaded discussions. The traditional class handed in their assignments; the distance education class submitted assignments as attachments to e-mail. Assignments for both classes were graded in color—the traditional class's by colored ink pen, the distance education class's by colored font.

The classes differed in scheduling, class location, instructional method, accessibility to the instructor, and instructional media. The campus class met on Tuesday evenings from 7:00 to 9:00 with the instructor present. Course material for the distance education class was posted by 6:30 p.m. on Monday evenings; however, students were not required to be online at that time or to meet together as a class with the instructor. Once course material was posted, students were given one week in which to log into the course and complete assignments. Traditional students met in the classroom, while distance education students worked from home or a nearby community college computer lab. The traditional class was taught by lecture; the distance education class received lecture notes in the form of audio links and written notes. Instructional media used in the traditional class included computers, PowerPoint presentations, and transparencies. The distance education class downloaded course content that included audio links, which allowed students to hear voice recordings of the instructor's lectures; video links, which allowed students to both hear and see the instructor delivering lectures; text links, which provided typewritten lecture notes; PowerPoint slide shows; and other technology such as RealPlayer (downloadable free from the Internet), which made it possible for students to play the audio and video links. The instructor used QuickCam to record the audio and video files.

The Canfield Learning Styles Inventory (CLSI) was used to determine students' preferred learning styles. The CLSI is a 30-item assessment using a 4-point rank-order procedure for each item: students rank the four choices on each item in the order that best describes their preferences or reactions, with 1 = most preferred and 4 = least preferred. Raw scale scores are obtained by summing these ranks, so the lower the score, the stronger the preference; the lowest possible scale score of 6 denotes the strongest preference, and the highest possible score of 24 denotes the weakest. Ranking the four responses on each item is equivalent to six paired-comparison items in which the student chooses one option from each pair. For example, Peer, Organization, Goal Setting, and Competition are each ranked on a total of six items within the inventory (items 1, 6, 11, 16, 21, and 26). A brief scoring sketch follows the category list below. The CLSI has 21 subscale variables that are grouped into four major categories:

1. Conditions for Learning (Peer, Organization, Goal Setting, Competition, Instructor, Detail, Independence, Authority) - constitutes about two-fifths of the items in the inventory. These items, phrased in typical classroom situations, are designed to measure student motivational qualities. These motivational areas center on affiliation, structure, eminence, and achievement.

2. Area of Interest (Numeric, Qualitative, Inanimate, People) measures students’ preferred subject matter or objects of study.

3. Mode of Learning (Listening, Reading, Iconic, Direct Experience) concentrates on identifying the specific modality through which students learn best.

4. Expectation for Course Grade (A, B, C, D, and Total Expectation) is designed to predict the failure or success of a learner. The A- through D-Expectation scales reflect the level of performance anticipated. See Appendix A (Canfield, 1977).
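
To make the rank-order scoring procedure concrete, the following is a minimal sketch, in Python, of how a raw score for one CLSI scale could be computed from a student's item rankings. The item numbers (1, 6, 11, 16, 21, 26) for the Peer, Organization, Goal Setting, and Competition scales come from the description above; the response data and function names are illustrative only and are not part of the published instrument.

    # Minimal sketch: scoring one CLSI scale from rank-order responses.
    # Each item presents four choices that the student ranks 1 (most
    # preferred) through 4 (least preferred).  A scale score is the sum
    # of the ranks given to that scale's choice on the six items where
    # it appears, so scores range from 6 (strongest preference) to 24
    # (weakest preference).

    # Hypothetical responses: responses[item_number][scale_name] = rank given
    responses = {
        1:  {"Peer": 2, "Organization": 1, "Goal Setting": 4, "Competition": 3},
        6:  {"Peer": 1, "Organization": 2, "Goal Setting": 3, "Competition": 4},
        11: {"Peer": 3, "Organization": 1, "Goal Setting": 2, "Competition": 4},
        16: {"Peer": 2, "Organization": 1, "Goal Setting": 4, "Competition": 3},
        21: {"Peer": 1, "Organization": 3, "Goal Setting": 2, "Competition": 4},
        26: {"Peer": 2, "Organization": 1, "Goal Setting": 3, "Competition": 4},
    }

    def scale_score(scale, items=(1, 6, 11, 16, 21, 26)):
        """Sum the ranks a student assigned to one scale across its six items."""
        return sum(responses[item][scale] for item in items)

    for scale in ("Peer", "Organization", "Goal Setting", "Competition"):
        print(scale, scale_score(scale))   # lower total = stronger preference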

Validity

Validity is the extent to which an instrument measures what it is supposed to measure. Traditionally, validity refers to testing the relationship of a given measure to some standard measure of success, such as comparing results from a new measure of math aptitude with performance on a widely accepted math achievement test or in a math course. The CLSI is not this type of traditional test. For example, there is no expectation that consequences in any broadly defined area will follow from a student's preference on the Iconic or Competition scale. Instead, the CLSI provides students with a detailed description of their characteristic preferred learning styles. The expected outcome is that students will experience greater success and satisfaction when their learning style is matched to the instructional environment.

Collecting learning style preferences in a group for whom one has prior expectations is the most obvious test of whether those preferences are sensibly estimated. Is the Numeric scale a preference for math majors, or the Direct Experience scale for trade school students? Research studies reveal that there is a relationship between the academic and career choices of those tested and the preferences revealed by scales and sets of scales of the CLSI. For example, Llorens and Adams (as cited in Canfield, 1992) studied occupational therapy students and found that they had a higher preference for Direct Experience, Instructor, Goal Setting, People, and Independence than the normed group. These students had a lower preference for Numeric and Reading than the normed group. Additionally, Pettigrew and Zakrajsek (as cited in Canfield, 1992) studied physical education majors and found that they, in comparison to the normed group, had a higher preference for Direct Experience, Iconic, and Organization, but a lower preference for Numeric and Reading. These reports collectively reflect hundreds of administrations of the CLSI providing solid preliminary evidence that the academic and career choices of those tested are related to the preferences discriminated by scales and sets of scales.

A more critical kind of validity test is the ability to demonstrate that teaching students through techniques congruent with their learning style preferences enhances achievement and satisfaction with the learning experience (Canfield, 1992). Studies that used different curricular content and a variety of student characteristics demonstrated this concept in the affirmative. For example, Pettigrew and Heikkinen (as cited in Canfield, 1992) compared psychomotor learning in junior high school students taught through techniques congruent with their learning style preferences with that of students taught using eclectic techniques. Students taught through techniques congruent with their learning style preferences performed better on 9 of 12 tasks and performed no worse on the remaining three.

Reliability

Brainard and Ommen (as cited in Canfield, 1977) conducted numerous standardization and reliability studies using the CLSI at a community college in Missouri in 1976, administering the inventory to over 3,000 community college students. A sample of 1,397 students was used to study internal consistency. Correlation coefficients, corrected for the fact that the reliability of a larger scale was being estimated from a reduced number of items, ranged from a low of .87 to a high of .965. Split-half reliability scores obtained for each scale were higher than those from the analyses of individual items; the high was .99, and the low was .96.
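
The manual excerpt above does not spell out the correction formula that was applied. Purely as an illustration, the sketch below shows one common way to compute a split-half reliability and step it up with the Spearman-Brown correction, the usual adjustment when a full-scale reliability is estimated from a reduced number of items. The response matrix is a random placeholder, not the Brainard and Ommen data.

    import numpy as np

    # Illustrative split-half reliability with the Spearman-Brown step-up
    # correction.  The responses below are random placeholders (ranks 1-4),
    # not CLSI data; real item responses would yield positively correlated
    # halves and therefore a meaningful reliability estimate.
    rng = np.random.default_rng(0)
    responses = rng.integers(1, 5, size=(200, 6))   # 200 students, 6 items

    odd_half = responses[:, 0::2].sum(axis=1)    # score from items 1, 3, 5
    even_half = responses[:, 1::2].sum(axis=1)   # score from items 2, 4, 6

    r_half = np.corrcoef(odd_half, even_half)[0, 1]   # half-test correlation
    r_full = 2 * r_half / (1 + r_half)                # Spearman-Brown estimate
    print(round(r_half, 3), round(r_full, 3))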

Results and Discussion

To determine whether distance education is better, worse, or as good as traditional education, this study examined pre-test and post-test scores, age, preferred learning styles, homework grades, research paper grades, final exam scores, and final course grades. T-statistics, means, and standard deviations were obtained for both classes. Significant differences were found at the .05 alpha level for post-test scores, final exam scores, and age. No significant differences at the .05 alpha level were found between the two groups with regard to pre-test scores, homework grades, research paper grades, and final course grades. See Table 1 for the results. Effect size (practical significance) was measured using Cohen’s d. According to Cohen (1988), effect sizes can be categorized as small (d = 0.2), medium (d = 0.5), and large (d = 0.8); a value of d = 1 indicates a difference in magnitude equivalent to one standard deviation.

 

 

Table 1

Variable          On-Campus Mean (SD)    Distance Ed Mean (SD)    F-ratio    Sig. (2-tailed)*
Pre-test          55.52 (13.50)          59.21 (9.96)             1.96       .291
Post-test         65.55 (10.91)          72.43 (9.12)             1.18       .026
Final exam        78.26 (12.63)          85.92 (8.16)              .93       .017
Final grade       80.57 (16.16)          85.42 (13.11)             .92       .263
Age               23.13 (5.12)           37.79 (8.72)             9.05       .000
Homework          78.55 (15.99)          85.22 (12.02)            1.42       .120
Research paper    87.45 (28.60)          91.39 (12.32)            1.42       .549

*Significant difference at the 0.05 level

Practical significance was small for the pre-test (d = .31), homework grades (d = .47), final course grades (d = .33), and research paper grades (d = .17). For post-test scores (d = .68) and final exam scores (d = .72), practical significance was medium. The practical significance for age was large (d = 2.0). The possibility of intercorrelation among the dependent variables was reviewed. Correlations were strongest and significant at the .05 level for final exam scores, final course grades, research paper grades, homework grades, and post-test scores. With the exception of homework, these variables may be related because they all occur close together near the end of the semester.
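
As a check on these figures, the sketch below computes Cohen's d for the final exam from the summary statistics in Table 1, assuming the common pooled-standard-deviation form of the formula (the study does not state which variant was used) and the class sizes of 23 on-campus and 24 distance education students reported in the Methodology section.

    import math

    # Cohen's d from summary statistics, assuming the pooled-SD form:
    #   d = (mean2 - mean1) / s_pooled
    def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
        pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
        return (mean2 - mean1) / math.sqrt(pooled_var)

    # Final exam means and SDs from Table 1; n1 = 23 (on-campus),
    # n2 = 24 (distance education) from the Methodology section.
    print(round(cohens_d(78.26, 12.63, 23, 85.92, 8.16, 24), 2))  # ~0.72

Applying the same function to the other rows of Table 1 yields values close to the ones reported in this paragraph.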

T-test results revealed no significant difference between the pre-test scores of the two groups (p = .291), suggesting that the two groups were similar in knowledge of course content at the beginning of the semester. Because the sample sizes were small and the variances of the two groups differed, Levene's test for equality of variances was used to determine whether there was heterogeneity of variance between the two groups. The only variable for which the variances of the two groups were heterogeneous was age (F = 9.1, significant at an alpha of .004). There was, however, a significant difference between the post-test scores of the two groups (p = .026), with distance education students scoring higher than traditional students. Looking solely at the pre-test scores, the distance education students outscored the traditional students by about 3.69 points. When the pre-test is considered together with the post-test, the traditional students gained 10.03 points from pre-test to post-test and the distance education students gained 13.22 points. Thus, both groups improved from the pre-test to the post-test while keeping the same relative position.
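
For readers who wish to reproduce these comparisons from the published summary statistics, the sketch below uses scipy.stats.ttest_ind_from_stats, which runs an independent-samples t-test directly from means, standard deviations, and sample sizes. The pooled-variance (equal_var=True) form is an assumption about how the original analysis was run, but it approximately reproduces the reported two-tailed significance values; rerunning Levene's test itself would require the raw scores, which are not published.

    from scipy import stats

    # Independent-samples t-tests from the Table 1 summary statistics.
    # equal_var=True gives the pooled-variance (Student's) t-test; the
    # study's exact procedure is assumed, not documented.
    summary = {
        # variable: (on-campus mean, SD, n, distance mean, SD, n)
        "Pre-test":  (55.52, 13.50, 23, 59.21, 9.96, 24),
        "Post-test": (65.55, 10.91, 23, 72.43, 9.12, 24),
    }

    for name, (m1, s1, n1, m2, s2, n2) in summary.items():
        t, p = stats.ttest_ind_from_stats(m1, s1, n1, m2, s2, n2, equal_var=True)
        print(f"{name}: t = {t:.2f}, p = {p:.3f}")
    # Pre-test p comes out near .29 (not significant); post-test p near .02.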

T-test results revealed a significant difference in students' ages (p < .001). The ages of distance education students ranged from 22 to 51, with an average of 38; the ages of traditional students ranged from 19 to 33, with an average of 23. Age is the one variable that does not correlate with the other variables. As noted above, the variance in age between the two groups is heterogeneous, and the variance is larger for the distance education students. This may be due in part to the fact that students self-selected into the classes: older students may have chosen distance education because it fits well with their schedules, and the instructor could not control which class students enrolled in. Age did not play a significant role in how well students did on the pre-test, post-test, final exam, final course grade, research paper, or homework. Older students did not do systematically better or worse than younger students; in essence, being younger or older did not predict doing better or worse. For example, older students may not do consistently better than younger students because they may have been out of school for a while and may not realize the amount of time a task will take, while younger students may not do consistently better than older students because of a lack of life experience. Thus, while age may be bound up with motivation, life experience, or recency of educational experience, it did not make a difference between these two groups.

T-test results revealed a significant difference between the final exam scores of the two groups (p = .017). The mean score of distance education students was 86; the mean score of traditional students was 78.

No significant differences were found between the two groups with regard to homework grades (p = .120). Homework requirements were identical for both groups.

Both groups were required to develop a research paper on international business communication. T-test results revealed no significant difference between the two groups with regard to research paper grades (p = .549).

T-test results revealed no significant difference between the two groups with regard to final course grades (p = .263).

Learning style preferences for both groups, as measured by the CLSI, were obtained. Both groups preferred Organization and B-Expectation for Course Grade. Both groups desire clearly organized course work, meaningful assignments, and a logical sequence of activities, and they work well with lecture notes and with course, chapter, and topical outlines. Both groups expect to perform at an above-average, though not necessarily superior, level in a learning situation, ranking within the top 25 to 33 percent of the class. Additionally, both groups least preferred the Numeric scale.

Distance education students also preferred working with People and Direct Experience whereby they can have direct contact with materials, topics, or situations. They least preferred Authority and Listening. They tend not to like classroom discipline or maintenance of order, nor do they like listening to lectures, tapes, and speeches. The traditional students preferred Inanimate and Iconic. They like working with things, and they like interpreting movies, slides, and illustrations. They least preferred Independence and Reading.

Conclusion

It is important that research be done to determine whether distance education is as effective as traditional education. The major goal of this study was to determine whether distance education is better, worse, or as good as traditional education. The same instructor taught both classes and ensured that the requirements for both were the same. Both classes required the use of technology and brought considerable rigor and value to the educational process. This study concurs with the general body of knowledge that distance education can be just as good as traditional face-to-face education. No significant differences were found in pre-test scores, homework grades, research paper grades, and final course grades. However, there were significant differences between the two groups with regard to age, post-test scores, and final exam scores, with distance education students scoring higher in all three categories. Yet this is not sufficient evidence to conclude that distance education is superior to traditional education; other factors may have contributed to these results. For example, the distance education delivery method catered, in part, to students' preferred learning styles. They preferred Direct Experience, and the structure of the course allowed for considerable hands-on experience in learning course content. They least preferred Authority, and the structure of the course allowed them the freedom to work independently on course material.

Recommendations for Further Research

Policymakers, faculty, and students need to make properly informed judgments with regard to key issues in distance education. Distance education students preferred the People learning style scale. Thus, a study should be conducted to investigate the issue of social interaction within distance education courses. Do these students feel as though they are alone while taking courses, or are they made to feel part of a community of learners? Additionally, this study focused on only one course. Further study should be done to determine whether the kind of knowledge acquired is the same, particularly when more than one or two courses, or possibly an entire academic program, are offered through distance education.

The age variable was intriguing because it did not produce the expected results. The variance in age was heterogeneous, Cohen's d was very large (two standard deviations), and there was no significant correlation with the other variables. Further study should be conducted to determine why. Other variables to study should include motivation, discipline, and gender.

Summary

This study adds to the growing body of research regarding distance education. It is important to note that the lack of a significant difference in final course grades may indicate that one delivery method is not superior to the other. Thus, this study concludes that while distance education may not be better than traditional face-to-face education, it is not worse. It can be an acceptable alternative because it is just as good as traditional education.


References

Bartlett, T. (1997). The hottest campus on the Internet. Business Week, 3549, 77-80.

Beare, P. L. (1989). The comparative effectiveness of videotape, audiotape, and telecture. The American Journal of Distance Education 3(2), 57-66.

Bothun, G. D. (1998). Distance education: Effective learning or content-free credits? Cause/Effect, 21(2), 28-31, 36-37.

Canfield, A. A. (1992). Canfield Learning Styles Inventory Manual. Los Angeles: Western Psychological Services.

Clark, T. (1983). Attitudes of higher education faculty toward distance education: A national survey. The American Journal of Distance Education, 7(2), 19-31.

Dunn, R., Beaudry, J.S., & Klavas, A. (1989). Survey of research on learning styles. Educational Leadership, 46(6), 50-58.

Fitzpatrick, R. (2001). Is distance education better than the traditional classroom? Retrieved July 31, 2001 from http://www.clearpnt.com/accelepoint/articles/r_fitzpatrick_060101.shtml.

Fox, J. (1998). Distance education: Is it good enough? The University Concourse, 3(4), 3-5.

Freeman, V. S. (1995). Delivery methods, learning styles and outcomes for distance medical technology students. (Doctoral Dissertation, University of Nebraska-Lincoln, 1993).

Garrison, D. R. (1989). Understanding distance education: A framework for the future. New York: Routledge.

Gubernick, L., & Ebeling, A. (1997). I got my degree through e-mail. Forbes, 159(12), 84-92.

Heines, R. A., & Hulse, D. B. (1996). Two-way interactive television: An emerging technology for university level business school instruction. Journal of Education for Business, 71(2), 74-76.

Henson, K. T., & Borthwick, P. (1984). Matching styles: A historical look. Theory Into Practice, 23(1), 3-9.

Jeffries, M. (n.d.) IPSE – Research in Distance Education. Retrieved July 28, 2001 from http://www.ihets.org/consortium/ipse/fdhandbook/resrch.html.

Kabat, E. J., & Friedel, J. (1990). The development, pilot-testing, and dissemination of a comprehensive evaluation model for assessing the effectiveness of a two-way interactive distance learning system. ERIC, ED 322690.

Kaplan, E. J., & Kies, D. A. (1993). Together: Teaching styles and learning styles improving college instruction. College Student Journal, 27(4), 509-13.

Lane, C. (1992). Distance education. In P. S. Portway and C. Lane (Eds.), Guide to teleconferencing and distance learning. 2nd Ed. San Ramon, CA: Applied Business Telecommunications.

McCleary, I.D., & Egan, M. W. (1989). Program design and evaluation: Two-way interactive television. The American Journal of Distance Education, 3(1), 50-60.

McKissack, C. E. (1997). A comparative study of grade point average (GPA) between the students in traditional classroom setting and the distance learning classroom setting in selected colleges and universities. (Doctoral Dissertation, Tennessee State University, 1997).

Marshall, C. (1991). Teachers' learning styles: How they affect student learning. The Clearing House, 64(4), 225-227.

Moore, M. (1976). Investigation of the interaction between cognitive style of field independence and attitude to independent study among adult learners who use correspondence independent study and self-directed independent study. (Doctoral Dissertation, University of Wisconsin-Madison, 1976).

Mortensen, M. H. (1995). An assessment of learning outcomes of students taught a competency-based computer course in an electronically-expanded classroom (distance education). (Doctoral Dissertation, University of North Texas, 1995).

Perraton, H. (1988). A theory for distance education. In D. Sewart, D. Keegan, & B. Holmberg (Eds.), Distance education: International perspectives (pp. 34-45). New York: Routledge.

Phipps, R., & Merisotis, J. (1999). What’s the difference? A review of contemporary research on the effectiveness of distance learning in higher education. Washington, DC: The Institute for Higher Education Policy.

Schramm, W. (1977). Big media, little media. Beverly Hills, CA: Sage.

Schutte, J. G. (1996). Virtual teaching in higher education: The new intellectual superhighway or just another traffic jam? Retrieved July 7, 2001 from http://www.csun.edu/sociology/virexp.htm.

Sherry, L. (1996). Issues in distance learning. Retrieved July 7, 2001 from http://www.cudenver.edu.public/education/edschool/issues.html

Sonner, B. (1999). Success in the capstone business course—assessing the effectiveness of distance learning. Journal of Education for Business. 74(4), 243-248.

Souder, W. E. (1993). The effectiveness of traditional vs. satellite delivery in three management of technology master’s degree programs. The American Journal of Distance Education, 7(1).

Thompson, D. G. (1984). Designing for diversity: The interaction between cognitive style of Field Dependence/Independence and the provision or absence of systematic telephone tutoring upon persistence and course evaluation scores of students enrolled in correspondence study. (Doctoral Dissertation, University of Wisconsin-Madison, 1984).

Verduin, J. R., Jr. & Clark, T. A. (1991). Distance education: The foundations of effective practice. San Francisco: Jossey-Bass.

Wiesner, P. (1983). Some observations on telecourse research and practice. Adult Education Quarterly 33(4), 215-221.

Willis, B. (1993). Distance education: A practical guide. Englewood Cliffs, NJ: Educational Technology Publications.


Appendix A

Description of Scales – Canfield Learning Styles Inventory (Canfield, 1992).

Conditions for Learning (8 Scales): Preferred situation or context of instruction.

Peer: Enjoys teamwork, maintaining good relations with other students, having student friends, etc.

Organization: Desires clearly organized course work, meaningful assignments, and a logical sequence of activities.

Goal Setting: Wants to set own objectives, use feedback to modify goals or procedures, and make his or her own decisions on objectives.

Competition: Desires comparison with others, needs to know how he or she is doing in relation to others.

Instructor: Wants to know the instructor personally and have a mutual understanding and liking for him or her.

Detail: Likes to know specific information on assignments, requirements, rules, etc.

Independence: Prefers working alone, determining his or her own study plan, and doing things independently.

Authority: Desires classroom discipline, maintenance of order, and having informed knowledgeable instructors.

 

Areas of Interest (4 Scales): Preferred subject matter or objects of study.

Numeric: Prefers working with numbers and logic, solving mathematical problems, etc.

Qualitative: Likes working with words or language—writing, editing, talking.

Inanimate: Enjoys working with things—building, repairing, designing, operating.

People: Prefers working with people—interviewing, counseling, selling, helping.

 

Mode of Learning (4 Scales): Preferred manner of obtaining new information.

Listening: Prefers hearing lectures, tapes, speeches, etc.

Reading: Enjoys examining written information, reading texts, pamphlets, etc.

Iconic: Likes interpreting illustrations, movies, slides, graphs, etc.

Direct Experience: Desires hands-on or performance situations, such as shop, field trips, practice exercises, etc.

 

Expectation for Course Grade (5 Scales): Level of performance anticipated.

A-expectation: Outstanding or superior level.

B-expectation: Above average or good level.

C-expectation: Average or satisfactory level.

D-expectation: Below average or unsatisfactory level.

Total Expectation: Weighted sum of A, B, C, and D expectations.


Online Journal of Distance Learning Administration, Volume IV, Number IV, Winter 2001

State University of West Georgia, Distance Education Center
