Distance Education Readiness Assessments: An Overview and Application


Carolyn Gascoigne
University of Nebraska at Omaha
cgascoigne@unomaha.edu

Juliette Parnell
University of Nebraska at Omaha
jparnell@unomaha.edu

Abstract

With the rise in online and hybrid courses at the post-secondary level, many institutions are offering various online learning readiness assessments to students who are considering these instructional formats. Following a discussion of the characteristics often attributed to successful online learners, as well as a review of a sample of the publicly available online readiness surveys, an application of one representative tool is described. Specifically, the Distance Education Aptitude and Readiness Scale was administered in both hybrid and face-to-face sections of beginning post-secondary French across a two-year span. Differences in scores between groups, as well as the relationship between scores and grades, are examined.

Introduction

As offerings of hybrid and online courses at the post-secondary level have increased over the past decade (Allen & Seaman, 2011), practitioners, researchers, and certainly administrators have noticed that both completion and persistence rates for these formats are often low (Dray, Lowenthal, Miszkiewicz, Ruiz Primo, & Marczynski, 2011; Gascoigne & Parnell, 2014; Hall, 2011; Roper, 2007). Indeed, several side-by-side comparisons of traditional face-to-face sections with hybrid and online sections of the same course have found lower completion rates for the online versions, at times by 20-50% (Fryenberg, 2007; Patterson & McFadden, 2012; Wojciechowski & Bierlein Palmer, 2005).

In response to this trend, many institutions have developed or adopted student self-assessment surveys that are either recommended or required of any student who is considering enrolling in an online course or program. However, with limited research on their efficacy available, these instruments may be more useful in appeasing accreditation bodies than in actually differentiating between students who will or will not be successful in online or hybrid courses. While some studies of content validity or internal consistency are available for certain instruments (Dray et al., 2011; Watkins, Leigh, & Triner, 2004), reports on their predictive ability are, with very few exceptions (Bernard, Brauer, Abrami, & Surkes, 2004; Hall, 2011), virtually non-existent.

Following a discussion of the characteristics commonly ascribed to successful online learners, as well as a review of a sample of the publicly available readiness surveys, an application of one representative tool in both hybrid and face-to-face sections of beginning post-secondary French is described.

The “Successful” Online Learner

Accounts of the general characteristics of the “successful” online learner abound. Following a review of the literature, several researchers have offered meta-analyses of the skills and characteristics often attributed to successful online learners. Dabbagh (2007), for example, offers the following list of characteristics of the successful online learner:

  1. Has a strong academic self-concept.
  2. Exhibits fluency in the use of online learning technologies.
  3. Possesses interpersonal and communication skills.
  4. Understands and values interaction and collaborative learning.
  5. Possesses an internal locus of control.
  6. Exhibits self-directed learning skills.
  7. Exhibits a need for affiliation (p. 220).

In a similar review summary, Smith, Murphy, and Mahoney (2003) recommended that online learners:

  1. Use past experiences to develop new learning.
  2. Be motivated by intrinsic rather than extrinsic factors.
  3. Set their own goals for learning.
  4. Evaluate and monitor their own learning.
  5. Develop a problem-solving approach.
  6. Select their own learning strategies and materials (p. 58).

Roper (2007) surveyed successful post-secondary online learners directly (those receiving a grade of 3.5 or better) to determine what they would recommend to other students. Among the skills and actions recommended by students were:

  1. Developing a time-management strategy.
  2. Being active in online discussions.
  3. Using the materials, or finding a way to apply newly-learned concepts.
  4. Asking questions.
  5. Staying motivated.
  6. Sharing what works best for you with the instructor.
  7. Making connections with other students (pp. 63-64).

Maki and Maki (2003) studied nearly 1,000 introductory psychology students, 479 enrolled in a face-to-face lecture course and 437 in a web-based version of the same course in order to identify student characteristics that might predict both learning and satisfaction with each format. They found that when allowed to self-select course formats, humanities, business, science, and technology majors were over-represented in the online sections, whereas pre-professional and undecided majors were over-represented in the traditional sections. There were also more freshmen enrolled in the face-to-face sections. However, Maki and Maki found no significant differences between groups on any of the “big five” (p. 206) personality characteristics: extraversion, agreeableness, conscientiousness, emotional stability, or intellect and imagination. Older students were more successful in the course than younger students, but this did not correlate with course delivery format.

In the same study, Maki and Maki (2003) went on to target characteristics often attributed to successful online learners, as evidenced by the lists generated by Dabbagh (2007), Roper (2007), and Smith et al. (2003) above: computer skills, learning independence, and organizational skills. Maki and Maki found independence and organizational skills to be significantly related to performance. However, this held true for both instructional formats. Overall, they found that most variables were correlated with learning and satisfaction in both web-based and face-to-face courses, such that they “did not differentially predict learning and satisfaction in the two types of courses” (p. 216).

Wojciechowski and Bierlein Palmer (2005) also set out to determine whether or not specific individual characteristics might be predictive of online learning success. Wojciechowski and Bierlein Palmer, however, targeted 179 online business students, with no face-to-face comparison group. Similar to Maki and Maki (2003), Wojciechowski and Bierlein Palmer (2005) also found the age of the learner to be significantly related to success in the online course. They also found significant relationships between success, or final grade, in the online business class and the students’ overall grade point averages, participation in an orientation session for the course, the number of other online courses completed in the past, and ACT English scores. The two variables that were the best predictors of success were a student’s overall grade point average and participation in the pre-class orientation.

Of all of the traits catalogued and studied, the two overarching factors that are most often targeted in various self-readiness questionnaires are: comfort with technology and self-management skills (Bernard et al., 2004; Kerr, Rynearson, & Kerr, 2006; McVay, 2001; Dray et al., 2011; Smith, Murphy, & Mahoney, 2003; Watkins, Leigh, & Triner, 2004).

Online Readiness Surveys: A Survey

Myriad online learning readiness surveys are available. Among the most researched instruments available is the McVay Online Readiness Survey (McVay, 2000; 2001). The revised 2001 version of this tool consists of 13 questions to which students respond using a four-point Likert scale. According to Smith, Murphy, and Mahoney (2003), the McVay instrument is “congruent with more broadly-based work on the readiness of learners for research-based learning” (p. 59), specifically citing the traits outlined by Smith et al. (2003) above.

Several factor analyses of the McVay instrument have been conducted, each supporting convergence upon two factors: one related to comfort with e-learning, or non-face-to-face communication, and the second to self-management of learning (Blankenship & Atkinson, 2010; Smith, 2005; Smith, Murphy, & Mahoney, 2003). Hall (2011) sought to determine the extent to which the McVay instrument could predict students’ final course grades. Targeting both face-to-face and online students in an Introduction to Microcomputers course (n = 164), Hall found that while a student’s declared major explained most of the variance, the McVay instrument “explained 10% of the observed variance in the final grade for the distance education student group” (p. 1).

Watkins (2003) and Watkins, Leigh, and Triner (2004) documented their development of an online self-assessment instrument. Their original instrument contained 40 statements grouped into ten scales: technology access, technology skills, online relationships, motivation, online readiness, online audio/video, internet chat, discussion boards, online groups, and importance to one’s success. Their revised instrument was reduced to 27 statements grouped into six scales: technology access, online skills and relationships, motivation, online audio/video, internet discussions, and importance to one’s success. The authors determined that the questions “consistently measure the desired scales that were initially derived from the e-learning literature” (Watkins et al., 2004, p. 75). They did not, however, examine the instrument’s predictive ability.

Dray and Miszkiewicz (2007) and Dray et al. (2011) sought to develop both a more detailed and a more “current instrument that combined learner characteristics and technology capabilities” (Dray et al., 2011, p. 31). Building upon earlier instruments, such as the McVay instrument, Dray et al. produced a 32-question survey with 14 items devoted to learner characteristics (belief in abilities, self-efficacy in writing and expression, time management, and behavior regulation) and 18 items related to technology capabilities, plus eight demographic questions. A validity study found strong translation and criterion-referenced validity for the questions in the learner characteristics subscale but produced inconsistent results for the technology capabilities subscale.

In addition to instruments developed and validated by faculty-researchers, commercial tools are available as well. One such instrument, the SmarterMeasure Learning Readiness Indicator, claims to have offered nearly 3,000,000 assessments at over 350 institutions as of this writing. Subscribers include various technical colleges, online institutions, community colleges, and state university systems. The SmarterMeasure tool contains 124 questions, offered in a web-based format and divided into seven components: individual attributes (24 items), life factors (20 items), learning styles (21 items), technical competency (10 items), technical knowledge (23 items), on-screen reading rate and recall (11 items), and typing speed and accuracy (1 item). Individuals can currently take the SmarterMeasure indicator for $24.95. Institutions purchasing a site license pay according to a scalable formula based on the number of expected tests per year.

While both research-backed instruments, such as the McVay instrument, and commercial tools, such as SmarterMeasure, are available, numerous institutions and individual programs within institutions have opted to create their own in-house tools instead. Even the most cursory web search will generate dozens of examples of these instruments. Certainly, building one’s own survey allows an institution, or even an individual faculty member, to tailor questions to fit the characteristics of a specific institution or a particular course. And, while commercially-available tools often come with a wealth of research and support, the price, as well as the length, can be prohibitive for some.

A Brief Summary of In-house Instruments

A comparison of the characteristics of a sample of in-house online readiness self-assessment instruments used at various institutions of higher learning is presented below. Quite simply, the first 10 distance education readiness surveys produced by a web search were selected for review. Eight of the 10 instruments were created by and employed at public, four-year, brick-and-mortar universities, and two were used at public community colleges. Each of the 10 instruments was web-based and contained anywhere from 10 to 45 questions. Two instruments contained yes/no questions only, while the other eight used Likert-scaled items with three to five response options per statement. All 10 instruments contained at least one question devoted to each of the following topics: internet access, comfort with technology, self-directed learning, and time management. Eight of the 10 included questions pertaining to communication skills, and seven of the 10 contained at least one question targeting students’ beliefs about online learning. Only one of the websites offered research on, as well as a history of, the development of its in-house tool. However, given the types of questions included on each instrument, the creators of each most likely consulted the available research on characteristics commonly ascribed to successful online learners (Dabbagh, 2007; Roper, 2007; Dray et al., 2011; Smith, Murphy, & Mahoney, 2003; Wojciechowski & Bierlein Palmer, 2005).

In addition to the survey instrument itself, each site also provided instant feedback to the test-taker. In some cases the feedback was simple and direct, such as “You seem to be an excellent candidate for hybrid or online learning,” or “Your learning style and skills may not be fully compatible with hybrid or online learning.” In other cases, specific feedback was provided for each subarea of the survey, for example, including individualized feedback on self-direction, learning preferences, study habits, technology skills, and equipment capabilities. One site also included a bibliography of readings on online learning, as well as general suggestions for learners not specifically tied to one’s score on the self-assessment.

An Application to Post-Secondary French

The University of Nebraska at Omaha does not employ a campus-wide online student readiness survey. As its Department of Foreign Languages is only in the beginning stages of developing and offering hybrid and online courses, it has not instituted a department policy or recommendation on the matter either. To date, the department has offered only two courses online: Spanish Grammar and Composition (SPAN 3040) and Advanced French Composition and Stylistics (FREN 4040), and four courses in a hybrid format: beginning Spanish 1 and 2 (SPAN 1110 and SPAN 1120) and beginning French 1 and 2 (FREN 1110 and FREN 1120). With the exception of French 4040, all online and hybrid courses have been offered alongside traditional face-to-face sections, such that students have the option of selecting the format they prefer. Hybrid and online sections are clearly labeled in the schedule of courses. In the absence of institutional guidance on the matter, faculty offering hybrid sections of beginning French sought to offer some type of online readiness feedback to students. While not an online course, the hybrid section reduced face-to-face contact time with the instructor from 240 minutes per week to 120 minutes. To compensate for this reduction, students were required to complete two additional hours of preselected online review and practice through the textbook’s companion website. In the traditional sections, much of this online work was completed in class under the instructor’s guidance rather than outside of class. Given this format, we believed that students in the hybrid sections needed to:

  1. be comfortable completing exercises independently, without the guidance of the instructor,
  2. possess self-management skills for completing assignments, and
  3. believe in the possibility of learning a language in a hybrid environment.

Because both the face-to-face and the hybrid sections employed the same limited technologies (a course management website, email, and the textbook’s website), we were less concerned with assessing students’ technology skills. Our internal criteria for selecting an online readiness survey were to find a non-commercial instrument that was brief, easy to administer, and offered immediate feedback. We also sought a tool that was not overly occupied with technology skills, and instead focused on students’ self-management skills, their comfort with learning and working independently, and their belief that learning in an online or hybrid environment was possible. For these reasons, along with instrument availability and ease of use, we ultimately adopted the Distance Education Aptitude and Readiness Scale (DEARS) (Kizlik, 2007). The DEARS is a brief 15-item self-assessment with a five-point Likert response scale (see Appendix A for items) designed to provide students with feedback on their predisposition and temperament for successful distance education experiences. While Kizlik states that the DEARS is intended only “to provide general guidance for those considering taking courses or even obtaining a degree via distance education regardless of the source” (p. 1), the DEARS’ focus on autonomy, self-management, and learning beliefs aligns with both the research and the many other instruments available (Dabbagh, 2007; Dray et al., 2011; Roper, 2007; Smith, Murphy, & Mahoney, 2003; Wojciechowski & Bierlein Palmer, 2005). The DEARS has been used in approximately 600 courses to date (B. Kizlik, personal communication, April 17, 2014). A student’s response to each item on this instrument is awarded between 1 and 5 points (1 point = never, 2 points = very infrequently, 3 points = sometimes, 4 points = frequently, 5 points = always). An overall DEARS score is calculated by adding the point values assigned to each item’s response, yielding a maximum possible score of 75 and a minimum possible score of 15. Results of the DEARS are then matched to one of four possible feedback categories (a brief sketch of the scoring procedure follows the list below):

  1. You should have no difficulty with a distance education course. You have a pronounced sense of autonomy and self-direction (55-75 points);
  2. You will probably do well in a distance education course, but you will have to remind yourself to stay on task (45-54 points);
  3. Distance education will be a challenge. You will miss the interaction context a great deal (30-44 points); and
  4. Distance Education is probably not a good idea for you (29 and below) (Kizlik, 2007, p. 3).
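
To make the scoring and feedback procedure concrete, the following Python sketch implements it as described above. The function names and the example responses are our own illustration and are not part of Kizlik’s materials.

```python
# A minimal sketch of the DEARS scoring procedure described above.
# The response-to-point mapping and the four feedback cut-offs follow the
# text; the function names and example are hypothetical illustrations.

RESPONSE_POINTS = {
    "never": 1,
    "very infrequently": 2,
    "sometimes": 3,
    "frequently": 4,
    "always": 5,
}

def dears_score(responses):
    """Sum the point values of the 15 item responses (range 15-75)."""
    if len(responses) != 15:
        raise ValueError("The DEARS has 15 items")
    return sum(RESPONSE_POINTS[r] for r in responses)

def dears_feedback(score):
    """Map a total score to one of Kizlik's (2007) four feedback categories."""
    if score >= 55:
        return "You should have no difficulty with a distance education course."
    if score >= 45:
        return "You will probably do well, but will have to stay on task."
    if score >= 30:
        return "Distance education will be a challenge."
    return "Distance education is probably not a good idea for you."

# Example: a student who answers "frequently" to every item scores 60,
# which falls into the top feedback category.
print(dears_feedback(dears_score(["frequently"] * 15)))
```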

During the 2012-2013 academic year, a slightly modified version of the DEARS was administered in four sections of beginning French across two semesters at the University of Nebraska at Omaha. The only change to the original instrument was the substitution of the word “hybrid” for “online” throughout. Forty-three hybrid students and, as a point of comparison, 39 traditional students agreed to complete the survey. Paper copies of the survey were administered in class during the first week of each semester. Students were given about 10-15 minutes to complete the survey, tally scores, and consider their individual feedback. Surveys were then collected and an average score for each group (hybrid and traditional) was recorded. The average DEARS score for the traditional group was 57.3; the average DEARS score for the hybrid group was 60.3. Although this difference was not statistically significant, the average score was marginally higher for the hybrid students.

During the 2013-2014 academic year, we sought not only to administer the survey but also to further examine its utility in our program. To this end, the DEARS was again administered in four sections of beginning French across two semesters: two sections of hybrid French and two sections of a traditional face-to-face course. This time, 33 students from the hybrid groups agreed to participate, as did 33 students from the traditional groups. Again, paper copies of the survey were administered in class during the first week of each semester, and students were given about 10 minutes to complete the survey. Surveys were then collected and scores were tallied and recorded by the instructor. The original surveys, along with the total score and personalized feedback according to Kizlik’s scale, were returned to students during the second week of class. During this second application, the average DEARS score for the traditional group was 57.78 and the average score for the hybrid group was 58.18. Although once again higher for the hybrid students, this difference was not only statistically non-significant but also negligible.
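
For readers who wish to reproduce this kind of group comparison, the sketch below shows one way to compute group means and test the difference. It assumes an independent-samples (Welch) t-test and the SciPy library; the study did not name the test it used, and the score lists shown are hypothetical placeholders rather than the study’s data.

```python
# Illustrative sketch only: assumes SciPy is available and that a Welch
# two-sample t-test is a reasonable way to compare the two group means.
from scipy import stats

def compare_groups(hybrid_scores, traditional_scores):
    """Return the two group means and the p-value of a Welch t-test."""
    hybrid_mean = sum(hybrid_scores) / len(hybrid_scores)
    traditional_mean = sum(traditional_scores) / len(traditional_scores)
    _, p_value = stats.ttest_ind(hybrid_scores, traditional_scores, equal_var=False)
    return hybrid_mean, traditional_mean, p_value

# Hypothetical DEARS scores for illustration only (not the study's data).
hybrid = [58, 61, 55, 60, 62, 57, 59, 63]
traditional = [56, 59, 57, 58, 60, 55, 61, 57]
print(compare_groups(hybrid, traditional))
```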

At the end of the semester, final course grades for participants in the hybrid group were compared to their DEARS scores. Of the 33 participating hybrid students, 10 received a final course grade of A, 12 received a B, seven received a C, two a D, and two failed the class. The average DEARS score for the A students was 59.1, for the B students 60.6, for the C students 59.3, for the D students 61, and for the failing students 63 (see Table 1).

Table 1
Average DEARS Score by Final Course Grade

A         B         C         D         F
59.1      60.6      59.3      61        63
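
The tabulation in Table 1 can be reproduced with a short grouping routine such as the sketch below; the (grade, score) pairs shown are hypothetical placeholders, not the study’s records.

```python
# Minimal sketch: average DEARS score by final course grade.
from collections import defaultdict

def average_by_grade(records):
    """records: iterable of (letter_grade, dears_score) pairs."""
    by_grade = defaultdict(list)
    for grade, score in records:
        by_grade[grade].append(score)
    return {grade: sum(scores) / len(scores) for grade, scores in by_grade.items()}

# Hypothetical records for illustration only.
records = [("A", 58), ("A", 60), ("B", 61), ("B", 60), ("C", 59), ("D", 61), ("F", 63)]
print(average_by_grade(records))
```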

 

Surprisingly, the students receiving the highest average DEARS score performed the worst in terms of final course grade, and it was the A students who produced the lowest average DEARS score. This outcome is precisely the opposite of what one would expect given that “the higher the number [on the DEARS], the higher […] the potential to benefit from distance education experiences” (Kizlik, 2007, p. 3).

Limitations

While exploratory in nature, this investigation has several limitations. First, Kizlik (2007) offers his instrument to provide general guidance for students as a means of reflection, consciousness-raising, and decision-making. He makes no claims as to its predictive validity, and, while it has been employed by nearly 600 educators to date, there are no available studies devoted to it. Second, the DEARS was intended for use by distance education students, and as such its questions focus on self-direction and autonomy. Here, however, it was applied in a hybrid context where, although some autonomy and self-direction are important, students still met with their instructor twice per week. Third, the sample size was small, with only 33 hybrid students participating in the second, more detailed application. Finally, there was remarkably little variation in the DEARS scores produced by students in either the traditional or the hybrid groups. For example, of the 33 hybrid students, 31 scored a 55 or higher, placing them into the top “You should have no difficulty with distance education courses” category. Of the remaining two, one student scored a 53, placing him or her into the second-highest, or “You will probably do well in a distance education course,” category, and only one student scored a 37, placing him or her into the third, or “Distance education will be a challenge. You will miss the interaction context a great deal,” category. A similar lack of variation was found among the traditional course group. Essentially, nearly all of the participants viewed themselves as possessing the skills and attributes associated with successful online/hybrid learning.

Discussion and Future Steps

A review of the literature found no models of distance education readiness self-assessments applied to foreign language courses in either an online or a hybrid context. In contrast, the use of such self-assessment tools within the disciplines of business, computer science, and education is more common (Maki & Maki, 2003; Roper, 2007; Wojciechowski & Bierlein Palmer, 2005). This application of the DEARS instrument to hybrid post-secondary French did not find a significant difference between the scores produced by hybrid and traditional face-to-face groups, nor did it find a positive correlation between the DEARS score and success in the course as measured by final course grades. Nevertheless, it represents a first attempt at documenting the application of one such online learning readiness instrument to the language learning context. Before moving on to another instrument, we plan to continue examining the utility of the DEARS in hybrid post-secondary French by interviewing students about the impact of the survey feedback. For example, did they reflect upon their compatibility with the hybrid format as a result of taking the survey? Did they change their study habits as a result of either the act of taking the survey or the act of considering their feedback? And, do they believe that taking the survey was useful?

According to Palloff and Pratt (2003), “the very elements that draw students to online classes—convenience in a busy work schedule, ability to continue to attend to family demands—are the elements that interfere” (p. 5) with their success. It could be that students who are drawn to online and hybrid courses will score more highly on an online readiness self-assessment because of this inclination, without realizing that what has drawn them to the format may also be an obstacle to their success. Therefore, despite these findings, we continue to believe that the simple act of taking a self-assessment survey at the very least gives students an occasion to reflect upon the expectations of a hybrid or online course. What remains to be determined, however, is whether any reflection prompted by the self-assessment can be translated into action throughout the remainder of the semester.

References

Allen, I. E., & Seaman, J. (2011). Going the distance: Online education in the United States, 2011. Babson Survey Research Group Report, pp. 1-26. The Sloan Consortium. Available: http://olc.onlinelearningconsortium.org/publications/survey/going_distance_2011

Bernard, R., Brauer, A., Abrami, P., & Surkes, M. (2004). The development of a questionnaire for predicting online learning achievement. Distance Education, 25(1), 31-47.

Blankenship, R., & Atkinson, R. (2010). Undergraduate student online readiness. International Journal of Education Research, 1-12.

Dabbagh, N. (2007). The online learner: Characteristics and pedagogical implications. Contemporary Issues in Technology and Teacher Education, 7(3), 217-226.

Dray, B., Lowenthal, P., Miszkiewicz, M., Ruiz Primo, M., & Marczynski, K. (2011). Developing an instrument to assess student readiness for online learning: A validation study. Distance Education, 32(1), 29-47.

Dray, B., & Miszkiewicz, M. (2007). The intersection of learner characteristics and technology capabilities: Implications for online learning. Paper presented at the 2007 AERA Annual Meeting, Chicago, IL.

Fryenberg, J. (2007). Persistence in university continuing education and online classes. International Review of Research in Open and Distance Learning, 8(3), 1-9.

Gascoigne, C., & Parnell, J. (2014). Comparing enrollment and persistence rates in hybrid and traditional post-secondary French. Online Journal of Distance Learning Administration. Retrieved April 1, 2014 from http://www.westga.edu/~distance/ojdla/spring171/gascoigne_parnell171.html

Hall, M. C. (2008). Predicting student performance in web-based distance education courses based on survey instruments measuring individual traits and technical skills. Online Journal of Distance Learning Administration. Retrieved April 1, 2014 from http://www.westga.edu/~distance/ojdla/fall113/hall113.html

Hall, M. C. (2011). A predictive validity study of the revised McVay readiness for online learning questionnaire. Online Journal of Distance Learning Administration. Retrieved April 1, 2014 from http://www.westga.edu/~distance/ojdla/fall143/hall143.html

Kerr, M.S., Rynearson, K., & Kerr, M.C. (2006). Student characteristics for online learning success. The Internet and Higher Education, 9(2), 91-105.

Kizlik, B. (2007). Getting Ready for Distance Education. Distance Education Aptitude and Readiness Scale (DEARS). Adprima. Retrieved August 10, 2012 from http://www.adprima.com/dears.htm

Maki, R., & Maki, W. (2003). Prediction of learning and satisfaction in web-based and lecture courses. Journal of Educational Computing Research, 28(3), 197-219.

McVay, M. (2000). How to be a successful distance learning student. Needham Heights, MA: Pearson Publishing.

McVay, M. (2001). How to be a successful distance learning student: Learning on the Internet. New York, NY: Prentice Hall.

Palloff, R., & Pratt, K. (2003). The virtual student: A profile and guide to working with online learners. San Francisco: Jossey-Bass.

Patterson, B., & McFadden, C. (2012). Attrition in online campus degree programs. Online Journal of Distance Learning Administration. Retrieved April 1, 2014 from http://www.westga.edu/~distance/ojdla/summer122/patterson112.pdf

Roper, A. (2007). How students develop online learning skills. Educause Quarterly, 1, 62-65.

Smith, P.J. (2005). Learning preferences and readiness for online learning. Educational Psychology, 25(1), 3-12.

Smith, P.J., Murphy, K.L., & Mahoney, S.E. (2003). Identifying factors underlying readiness for online learning: An exploratory study. Distance Education, 24, 57-68.

Watkins, R. (2003). Readiness for online learning self assessment. In E. Biech (Ed.), The 2003 Pfeiffer Annual Training. San Francisco: Jossey-Bass-Pfeiffer.

Watkins, R., Leigh, D., & Triner, D. (2004). Assessing readiness for e-learning. Performance Improvement Quarterly, 17(4), 66-79.

Wojciechowski, A., & Bierlein Palmer, L. (2005). Individual student characteristics: Can any be predictors of success in online classes? Online Journal of Distance Learning Administration. Retrieved April 1, 2014 from http://www.westga.edu/~distance/ojdla/summer82/wojciechowski82.pdf

Appendix A

DEARS Items

  1. I take responsibility for my own learning.
  2. I am regarded by my peers as a self-starter.
  3. I can stay on task without constant feedback about my performance.
  4. I am a person who is curious about many things.
  5. I often figure out novel ways to solve problems.
  6. I enjoy helping others who have learning needs.
  7. Once I have goals or a set of objectives, I can determine what I need to do to reach them.
  8. I recognize and know how to use feedback about progress on a learning task that I have undertaken.
  9. I am good at visualizing how things would be when they are the way I want them to be.
  10. I am good at logistics. I can determine what is needed and devise a plan for getting it.
  11. I believe that knowledge is largely constructed by the learner, and that teachers are more facilitators of learning than dispensers of information.
  12. I understand how I learn best and often think of ways I can improve.
  13. I know what I believe but I am open to other opinions that may be contrary to my beliefs.
  14. I enjoy learning that is both interesting and challenging, and I am motivated in such situations to go beyond the minimum requirements.
  15. I am able to translate learning objectives that have been set for me into objectives that reflect my own personal style of learning.


Online Journal of Distance Learning Administration, Volume XVII, Number IV, Winter 2014
University of West Georgia, Distance Education Center