Putting the Distance Learning Comparison Study in Perspective: Its Role as Personal Journey Research



Dr. Katrina A. Meyer
Assistant Professor of Educational Leadership
University of North Dakota
P.O. Box 7189
Grand Forks, ND 58202
katrina_meyer@und.nodak.edu



Abstract

Administrators of distance learning are faced with the challenge of encouraging reluctant faculty to consider online teaching. Could supporting faculty in conducting modest research studies be one such avenue? Currently, many research studies compare student grades or scores from an online course to those from a more traditional one, ignoring confounding variables such as individual differences and the impact of instructional design. Yet despite criticism of the comparison study, it continues to be used. Are faculty using these studies to produce replicable results, or are they using this design as a way to explore web-based learning and to prove to themselves that it is an acceptable innovation? If so, these studies may more properly be seen as the province of qualitative research that chronicles a personal journey, focusing on what the researcher has learned rather than on producing valid and replicable research results.

Introduction

Research on the effectiveness of online learning is surely needed by faculty, administrators, and students. Students need to find the right online courses for their learning needs and look to research for guidance. Administrators use research findings to help make wise use of constrained resources while expanding and improving online courses. Faculty look to research to convince them that the effort devoted to this new instructional technique will result in student learning. And yet, some of this important research is based on a comparison of a course delivered once over the web and once through more traditional (i.e., classroom) means. Why does the comparison study recur so regularly in the literature, and why is it often poorly executed and roundly criticized? Answering these questions requires that we understand the history of research on technology use, issues of technology and research design, and finally the personal needs of faculty.

No Significant Differences

Perhaps the most quoted body of research on distance education has been the work of Russell (1999), who reviewed 355 studies produced from 1928 to 1998. To a large extent, the studies compared instruction delivered over videotape, interactive video, or satellite -- whether telecourses or television -- with on-campus, in-person courses. Students were compared on test scores, grades, or performance measures unique to each study, and statistical tests consistently found no significant difference between the comparison groups.

Surprisingly, a large number of studies completed after Russell's compilation continue to compare student achievement between technology-based and in-person delivery models. Over 50 studies of this type were located for a more recent literature review (Meyer 2002) that focused on studies posted or published in 1999, 2000, and 2001 in peer-reviewed online journals, more traditional paper journals, and conference web sites. These studies -- all prepared and conducted by college and university faculty -- compared the same course taught in a more traditional format and in a web-based model. Not surprisingly, the results of these studies remain largely the same as Russell's: comparing the two types of delivery methods leads to a conclusion of no significant difference in student achievement.
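
To make the mechanics concrete, here is a minimal sketch of the statistical test underlying most of these comparisons -- a two-sample t-test on final scores. The scores, section labels, and sample sizes below are hypothetical, not drawn from any study cited here.

    # Hypothetical illustration of the test behind a typical comparison study:
    # Welch's two-sample t-test on final exam scores from two course sections.
    from scipy import stats

    web_scores = [78, 85, 72, 90, 66, 81, 77, 88, 70, 83]        # web-based section
    classroom_scores = [80, 74, 86, 69, 91, 76, 82, 71, 87, 79]  # in-person section

    # equal_var=False selects Welch's test, which does not assume the two
    # groups have equal variances.
    t_stat, p_value = stats.ttest_ind(web_scores, classroom_scores, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

    # A p-value at or above 0.05 gets reported as "no significant difference."

Note that with self-selected sections and samples this small, the test has little power, so "no significant difference" may say more about the study's design than about the two delivery modes -- which is precisely the critics' point.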

The long life and persistence of this research model is the more surprising given that it has been soundly criticized by Phipps and Merisotis (1999) and Clark (1994) for its unsophisticated design and inability to achieve conclusive results (which may be setting the bar a little too high). Phipps and Merisotis (1999) attacked this research as lacking the elements that distinguish quality research, such as control groups, randomization of treatment groups, matching of student populations, statistical sophistication, and consistency in treatments (among others). They also faulted these studies for focusing on courses rather than programs, failing to account for student differences (especially learning styles) and the interaction of multiple technologies, and lacking theoretical frameworks.

Moore and Thompson (1997) have also noted the weak research designs and lack of control elements in the early comparison studies. More recently, Joy and Garcia (2000), in a meta-analysis of studies comparing courses delivered by technology to traditional classroom models, also found poorly designed research that did not control for many important variables. In other words, much of this research does not account for the kinds of complicating variables -- student learning styles, maturity, multiple intelligences (to name but a few) -- that may be important factors in student learning.

Administrators who review these research studies looking for help in identifying appropriate innovations must ask themselves three questions: a) does technology itself impact learning? b) can a good comparison study be designed? and c) why do faculty choose the comparison study to evaluate web-based instruction? The next three sections address these questions in turn.

Does Technology Impact Learning?

Many early writers concluded that the technology does not have an independent impact on learning. "There is nothing inherent in the technologies that elicits improvements in learning" (Russell 1999, p. xiii). In other words, learning is not caused by the technology, but by the instructional method "embedded in the media" (Clark 1994, p. 22). Windschitl (1998) makes a similar point that web-based learning duplicates existing instructional activities, and Morrison (1999) is even more blunt: "It is the instructional strategy, not the medium, that makes a difference."

These conclusions point to a confounding of technology with how it is used. Smith and Dillon (1999) call this the "media/method confound": the inability to separate the technology from the way it is used in instruction. Many comparison studies describe the technology used, but not the pedagogical techniques. But if it is the instructional strategy that elicits learning, why would one expect differences in outcome when the same instructional design (e.g., lecture, question-and-answer) is used in both the technology-based and the traditional course?

The question remains whether instructional design can be separated from the technology used to implement it. Media and instructional approaches are essentially integrated; therefore, method must be confounded with medium. Kozma (1994) stated that both medium and method influence learning and that they do so synergistically, by influencing each other (p. 11). This raises the question of whether the impact of media or method can be separately estimated. Clearly, we need better research that attempts to disentangle these relationships, if possible. But it is doubtful that the comparison study can do this.

Is There a Good Comparison Study?

So far, this discussion has been very critical of the comparison study, and the question is whether this criticism is deserved or whether the problem lies in how the study is implemented. Administrators may need some training in research design to judge whether a given design is seriously flawed or merely poorly executed. If research is to be useful, it must avoid both problems.

Many research designs test treatments and thus make comparisons. There is no reason that a good comparison study could not be designed, if there were attempts to identify and control the many intervening influences, randomly assign students to treatments, and collect data on several treatments multiple times. The challenge would be to find ways to unravel the "media/method" confound in order to isolate the unique impact of the web on learning, which might be done by scrupulously duplicating the instructional method (e.g., lecture) across both treatments. One would stay away from using a full course -- delivered over an entire semester -- in the two treatments, since it is unlikely that one can control all other factors over 16 weeks. In other words, a good comparison study is possible, and it should be one approach to evaluating online learning.
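
As an illustration of the random assignment such a design requires -- the step the typical self-selected comparison omits -- here is a minimal sketch in Python; the roster, group sizes, and seed are hypothetical.

    # Hypothetical sketch of random assignment to treatments, the step that
    # distinguishes a controlled comparison from self-selected enrollment.
    import random

    def assign_treatments(students, seed=2004):
        """Randomly split a roster into web-based and classroom sections."""
        rng = random.Random(seed)   # a fixed seed keeps the assignment auditable
        roster = list(students)
        rng.shuffle(roster)
        midpoint = len(roster) // 2
        return roster[:midpoint], roster[midpoint:]

    web_group, classroom_group = assign_treatments(
        [f"student_{i}" for i in range(40)])
    print(len(web_group), len(classroom_group))  # 20 20

Random assignment does not by itself untangle the media/method confound, but it does ensure that differences in learning styles, maturity, or prior knowledge are distributed across the two sections by chance rather than by students' choice of delivery mode.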

But unfortunately, the comparison study is likely to be poorly designed and implemented. In most cases, the study is a one-time comparison of two courses, where both courses are taught by the same person (most often the researcher), where students self-select into the course they want (which, admittedly, would be difficult to alter), and where the dependent variables are either grades or final exam scores, which may not discriminate among the specific skills or concepts learned. These conditions are not likely to produce useful or reliable results.

Why Do Faculty Choose the Comparison Study?

Finding an answer to this question may well require a lengthy discussion of why faculty question the use of technology, why they wish to research technology, and, finally, how they come to use the comparison study as a way to answer their questions about web-based instruction. These questions are taken up in the next three sections.

Why faculty question technology
Whether faculty use technology depends on how well it can be reconciled with the faculty's rationalized myths (Jaffee 1998): the belief that classroom instruction is the "single best and necessary means" for student learning. (Others call this romanticizing the reality of the classroom, which for some students may be dull, boring, and frustrating.) These myths are powerful not because there is empirical evidence to support them, but because they confirm "deep-seated consensual beliefs and long-standing tradition" (Jaffee 1998). These myths may explain the significant resistance many faculty have to online learning: it violates their identity as professor and expert, a source of knowledge and information, and a performer at the classroom lectern. They may also explain why faculty research into using the web must deal with more than whether students learned as much as in another setting; in other words, faculty must spend substantial time understanding and adjusting their identities in order to see the usefulness of web-based instruction.

Jaffee's (1998) assertion may be the reason so many faculty look longingly to the time when there will be sufficient network bandwidth to allow streaming video as a way to deliver their courses. In other words, they long for the same experience (for them) of a class: to lecture, to perform, and to be seen as the expert. However, when videos of faculty teaching were compared to audio without video, Wisher and Curnow (1999) found that student learning did not improve with the video capability. Video, then, may be essential more for helping faculty feel effective -- for satisfying their self-identity -- than for any independent role in improving learning. In other words, the desire for video streaming may have less to do with the technology than with the way the technology nearly duplicates the current classroom instructional experience for faculty. This means that administrators should be interested in finding ways to help faculty effectively question their professional role and instructional myths.

Why faculty research technology
Rogers' (1995) theory of the diffusion of innovations posits five factors that influence the adoption of an innovation, one of which -- relative advantage -- is an assessment of whether an innovation is better than its predecessor. Khan (1995) identified 10 obstacles to institutional change, one of which is a lack of understanding about the innovation. For both authors, a major factor influencing whether faculty begin to use technology appears to be the faculty member's ability to understand the innovation and then have an opportunity to judge whether it is a good one. Given the importance of research to faculty and their preference for evidence (whether qualitative or quantitative), this may be a sufficient explanation for why so many comparison studies continue to be conducted. In other words, a comparison study may be how faculty come to understand the innovation and prove to themselves whether it works well for student learning.

Furthermore, faculty satisfaction with a web-based class is strongly tied to evidence of student learning (Wegner et al. 1999; Hartman et al. 2000). This direct connection between student learning and faculty satisfaction may be crucial to faculty continuing to use the web, despite the greater workload it creates and the difficulties of technological glitches and network breakdowns. The importance of student learning to faculty satisfaction is a compelling reason for faculty to study online learning: to prove to themselves that students really are learning.

Why faculty choose the comparison study
If there have been no conclusive results from comparison studies, why does this study design continue to be so popular? First, it may well be that the widespread attention paid to Russell's (1999) compilation of studies has promoted the visibility of comparison studies. Or perhaps the lack of consistent criticism of the design has encouraged others to try it.

Second, it may be that faculty are ignorant of good research design. This seems counterintuitive, since many faculty are trained extensively in research methods during their doctoral education and many are expected to produce research (albeit in their own discipline) to receive promotion and tenure at their institution. Of course, their training in research may be dated or inadequate, but most likely their training was in the preferred methods of research for their discipline.

Third, are faculty unaware of the volumes of previous research? This is a possible explanation, since much of the research has been published in education journals, and some in educational technology journals, that may be unfamiliar to faculty in the disciplines.

Finally, are faculty oblivious to the many important elements of instruction besides whether it involves technology? This seems more plausible, as some faculty preparing these studies -- particularly faculty in the disciplines -- may be unaware of the roles that student learning aptitudes and motivations, prior knowledge and theories of knowledge construction, different pedagogical approaches, and instructor epistemological beliefs play in the effectiveness of a course.

Or are some researchers simply using the study to assess an innovation as Rogers (1995) and Khan (1995) suggested, in an attempt to convince themselves that the innovation is a useful one? What else may explain the pervasiveness of this research design?

A Possible Explanation

It is plausible that the comparison study may be the faculty person's first foray into evaluating whether web-based instruction works for student learning. Such a study is relatively simple to conduct, as it usually ignores requirements for matching student samples or controlling other variables, may ignore instructional design as a factor in the analysis, and can be performed by individuals on their own classes. Perhaps its primary usefulness is that it allays faculty fears by showing that the experiment works as well as other, more traditional options. And this may be the design's main contribution: helping faculty test the technology for themselves and see the results with their own eyes. If this is the motivation behind the enduring presence of these studies in conferences and online journals, then perhaps comparison studies will continue until all faculty have tested this mode of instruction for themselves.

In this view, the comparison study may be acceptable if understood as the right of every faculty person to explore using the web. In other words, administrators may need to accept this study design as a natural and normal first step for faculty who must test for themselves whether this new technology is as good as other models. However, placing the comparison study in this personal context does not mean it is always (or necessarily) good research, just that the choice of doing a comparison study may be understandable in the context of faculty development.

Personal Journey Research

The question now becomes how an administrator or other consumer of research should understand and evaluate the comparison study. While the comparison study looks like experimental research, it is actually a self-study, where personal experiences are the legitimate object of the research questions (Goetz & LeCompte 1984). Faculty researchers may believe they are making methodological and data collection decisions based on a classic experimental design, but in fact the project is undertaken in pursuit of a better personal understanding of the technology rather than to research the technology itself.

As such, the comparison study is more clearly the faculty person's individual journey, a metaphor for a quest for meaning. Such journeys are basic to our inquisitive nature and fundamental to what makes us uniquely human. But because the journey can only be undertaken by one individual, it is a personal journey: the search of one individual for meaning, and in our specific case, the meaning that results from use of the web in education.

The individual may use a number of tools to elucidate meaning, and here the administrator may have a role in encouraging faculty to use the comparison study as a way to learn from their experiences with online learning. Administrators may also encourage faculty to consider other tools, including periodic interviews with students, online assessments of student learning and/or student perceptions, personal journal entries, even anthropological research methods for understanding the environment of web classes, as well as small, discrete action research studies that clarify their evolving understanding of web-based learning. In fact, in some sense it does not matter what tool is used -- qualitative or quantitative -- but only how meaning is developed and interpreted by the faculty researcher. This is what makes personal journey research -- undertaken through the tool of a comparison study -- align more closely with qualitative research.

This metaphor also allows us to conceive of journeys that are difficult, circuitous, and/or stalled. Despite the respectability of any particular tool, meaning may escape us, or the search may leave us more confused rather than less. And if we conceive of the journey as taking place over our own personal mental landscapes, we can get lost, stuck, or waylaid, or be forced to grapple with ignorance or ennui. But a successful journey, like the myths of old, vanquishes ignorance and uncovers the gold of meaning within. (Unfortunately, meaning can also be lost, or situations change, and we must set off on the journey again.)

This approach aligns better with Mooney's (1957) description of research as "a personal venture which . . . is worth doing for its direct contribution to one's own self-realization" (p. 155). In other words, research is a journey one takes to learn about something, and it may involve travel through foreign terrain (like the web). For example, learning to use the web is an adventure akin to Lewis and Clark's trek to the Pacific Ocean, which involved crossing unknown territory. This analogy may more honestly capture the experience of faculty learning about the web and may actually lead to better research.

This is because once comparison studies are characterized as self-studies, they should be evaluated by the guidelines for qualitative research (see Bullough & Pinnegar 2001). One such guideline asserts that these studies should promote insight and interpretation, understanding of ourselves and others, and the experience and analysis of "nodal moments" (p. 16). Another states, "Quality autobiographical self-studies offer fresh perspectives on established truths." In other words, a quality self-study must possess some novel insight, something new to help readers understand a situation or see issues in a different light.

There are certain advantages to characterizing comparison studies as personal journey research. It places the focus of the research on the growing understanding of the researcher, not the technology. It encourages faculty to experiment and better understand pedagogy and instructional design, which may be an important outcome for the administrator concerned about improving the quality of online learning. The disadvantage may be that personal journey research will not be influential in changing the perceptions of other faculty, especially those who prefer quantitative data or insist on evaluating the technology for themselves. In that case, we will not likely see the end of these studies until every faculty person has made his or her comparison study of web-based instruction.

Conclusion

Administrators and faculty concerned about the quality of web-based learning need to recognize that a poorly designed comparison study is not a true evaluation of web-based learning. And yet there may be a good reason this research design will persist: faculty must pursue answers to their legitimate questions about using the web. In this case, it may be more helpful to cast the comparison study as a personal journey, a journey undertaken to better understand whether and how to use the web in instruction, and a journey each person must undertake alone, since no one can take a journey for another. This recognizes the faculty person's legitimate need to learn about the web and to pursue personal evidence that it works in his or her class or discipline. The most important result will be what the faculty person learns about the process of learning and teaching online. In other words, the most likely outcome of the comparison study is a faculty person who has changed his or her understanding of the web and can now support its use.

These are important outcomes for the distance learning administrator: faculty who have tested online education, learned how to use it, applied it in their own courses, seen its value, and can now support its use more broadly. Such outcomes might justify distance learning administrators' providing modest funding for faculty research or supporting campus forums where faculty discuss ongoing research studies. Administrators might also consider finding ways for faculty to share their developing understanding of online education: creating weblogs for faculty to contribute to, having experienced faculty develop training for their peers, and/or asking faculty to serve as mentors to new faculty. These strategies both legitimize the research faculty are doing and create the conditions whereby faculty can extend their learning, make it public, and discuss it with others. The aim would be to use faculty research studies as a tool for professional development and to encourage more faculty to experiment with and support greater application of online learning to distance and on-campus programs.


References

Bullough, R. V., Jr., and S. Pinnegar. 2001. Guidelines for quality in autobiographical forms of self-study research. Educational Researcher 30(3): 13-21. [http://www.aera.net/pubs/er/pdf/vol30_03/AERA300304.pdf]

Clark, R. E. 1994. Media will never influence learning. Educational Technology Research and Development 42(2): 21-29.

Goetz, J. P., and M. D. LeCompte. 1984. Ethnography and qualitative design in educational research. New York: Academic Press.

Hartman, J., C. Dziuban, and P. Moskal. 2000. Faculty satisfaction in ALNs: A dependent or independent variable? JALN 4(3). [http://www.aln.org/alnweb/journal/Vol4_issue3/fs/hartman/fs-hartman.htm]

Jaffee, D. 1998. Institutionalized resistance to asynchronous learning networks. JALN 2(2). [http://www.aln.org/alnweb/journal/Vol2_issue2/jaffee.htm]

Joy, E. H., and F. E. Garcia. 2000. Measuring learning effectiveness: A new look at no-significant-difference findings. JALN 4(1). [http://www.aln.org/alnweb/journal/jaln_vol4issue1/joygarcia.htm]

Khan, B. H. 1995. Obstacles encountered during stages of the educational change process. Educational Technology 35(1): 43-46.

Kozma, R. 1994. A reply: Media and methods. Educational Technology Research and Development 42(3): 11-14.

Meyer, K. A. 2002. Quality in distance education: Focus on on-line learning. ASHE-ERIC Higher Education Report Series 29(4).

Mooney, R. L. 1957. The researcher himself. In Research for curriculum improvement, Association for Supervision and Curriculum Development, 1957 Yearbook (pp. 154-186). Washington, D.C.: Association for Supervision and Curriculum Development.

Moore, M. G., and M. M. Thompson. 1997. The effects of distance learning. (ACSDE Research Monograph No. 15). University Park, PA: American Center for the Study of Distance Education.

Morrison, J. L. 1999. The role of technology in education today and tomorrow: An interview with Kenneth Green, part II. On The Horizon 7(1). [http://horizon.unc.edu/horizon/online/html/7/1/editor.asp]

Phipps, R., and J. Merisotis. 1999. What's the difference? Washington, D.C.: The Institute for Higher Education Policy.

Rogers, E. M. 1995. Diffusion of innovations. New York: The Free Press.

Russell, T. L. 1999. The no significant difference phenomenon. Raleigh, NC: North Carolina State University.

Smith, P. L., and C. L. Dillon. 1999. Comparing distance learning and classroom learning: Conceptual considerations. The American Journal of Distance Education 13(2): 6-23.

Wegner, S. B., K. C. Holloway, and E. M. Garton. 1999. The effects of internet-based instruction on student learning. JALN 3(2). [http://www.aln.org/alnweb/journal/Vol3_issue2/Wegner.htm]

Windschitl, M. 1998. The WWW and classroom research: What path should we take? Educational Researcher 27(1): 28-33.

Wisher, R. A., and C. K. Curnow. 1999. Perceptions and effects of image transmissions during Internet-based training. The American Journal of Distance Education 13(3): 37-51.

Biographical Note
Dr. Meyer is currently assistant professor of educational leadership at the University of North Dakota, specializing in online learning and higher education. She is the author of Quality in Distance Education: Focus on On-line Learning, a 2002 publication of the ASHE-ERIC Higher Education Report Series. For over three years, she was Director of Distance Learning and Technology for the University and Community College System of Nevada. Prior to this, she served eight years as Associate Director of Academic Affairs for the Higher Education Coordinating Board in the state of Washington, where she was responsible for technology planning and online learning issues.


Online Journal of Distance Learning Administration, Volume VII, Number I, Spring 2004
State University of West Georgia, Distance Education Center