Is a Quality Course a Worthy Course? Designing for Value and Worth in Online Courses


Robin E. Youger
Pennsylvania State University
ryouger@gmail.com

Terence C. Ahern
West Virginia University
terence.ahern@mail.wvu.edu

Abstract

There are many strategies for estimating the effectiveness of instruction. Most methods are based on student evaluations. Recently, a more standardized, objectives-based approach, Quality Matters (QM), has been developed. QM, however, accounts for neither the learning process nor the value and worth of the learning experience. Learning is a complex and individualized process that course designers and instructors can capitalize on to increase the value, and subsequent worth, of a course for all stakeholders. This article explores the concepts of value, worth, and quality in online education, seeking a method to improve outcomes by increasing a course’s value and worth.

Introduction

The number of students taking online and distance education courses has exploded over the past decade. Currently, 32% of students at degree-granting post-secondary institutions take at least one online course, while online-only students represent 49.9% of all new college enrollments (Allen & Seaman, 2013). With this explosion of online instruction, colleges and universities are committed to improving the design and delivery of their online offerings. This is a difficult task, however, because the many factors that make a course successful can be elusive (Smyth, 2014). The success of a course depends on a confluence of key elements such as structure, content, tasks, instructional strategies, and consideration of the learner (Koohang, 2004). The course must then be successfully deployed and delivered to students.

The concept of quality in online learning, which provides both a means of accountability and a route to improvement, is as complex as online learning itself (Smyth, 2014). One approach intended to improve online instruction has been the Quality Matters (QM) movement. While useful for designing objectives-based online courses, however, standardized methods account for neither the learning process (Swan et al., 2012) nor the value and worth of the learning experience itself. Determining the worth and value of an online course is possible, but the process is complex and multidimensional.

Courses, online or face-to-face, do not exist in a vacuum. In most institutions, there are four primary stakeholders, each with a vested interest in the success or failure of the instruction: administrators, instructional designers, faculty, and students. Administrators provide the central resources to develop and deliver a course. Instructional designers create the course using accepted techniques and tools, together with content provided by subject-matter experts. Faculty also serve as subject matter experts and specialize in the delivery of the education to students, the consumers of instruction. This makes each course multidimensional as each stakeholder group sees it from a different perspective for a different purpose (Harvey & Green, 1993; Tam, 2001). Consequently, it is a difficult task to create successful, quality instruction.

Today’s course designers have at their disposal sophisticated technologies for designing and delivering a truly remarkable learning experience. However, as Churches (2008) points out, it is not just about the tools; “it’s about using the tools to facilitate learning.” Education technologies continue to evolve and improve, yet online instructional design remains inconsistent and varied across institutions.

Value

Determining the “value” of an online course is an individual decision that varies by stakeholder. At their core, all courses are valuable to students in degree programs for the credits earned through completion. Many courses are popular not because they are of high or low quality, but because the subject matter is interesting or required to complete a program of study. Consider that a low-quality course can have high value simply because it is required for the program of study. Conversely, even the highest-quality course will hold no value for a student with no reason to take it. Online availability in itself can provide value to a student or instructor facing a long commute to campus, again regardless of “quality.” Similarly, asynchronous online availability can be highly valuable to a student with time obligations that conflict with a more traditional class schedule. Instructional designers have many options available to help students build an increased sense of value. Keller’s ARCS Model of Motivational Design is a proven tool to help accomplish this complex task (Kuan-Chung & Syh-Jong, 2010). The model is based on the expectancy-value theory of Tolman and Lewin, which holds that people are motivated to participate in activities that fulfill personal needs and that they have a reasonable chance of completing successfully (Keller, 1987b).
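In its simplest form, expectancy-value theory is often summarized multiplicatively (a common shorthand rather than Keller’s exact formulation):

    Motivation = Expectancy × Value

The multiplicative relationship captures why both terms matter: a student who perceives either no chance of success or no personal value in a course will have little motivation, regardless of how high the other term is.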

Worth

A course’s perceived “worth” is even more subjective than its value. Value causes the student to enroll in the course, but worth determines the effort they put forth to be successful in it: the greater the value, the higher the worth. For example, an Art History course may be extremely worthwhile to an Art major, regardless of its “quality.” That student may gain new knowledge critical to future courses and a career, and will therefore sustain effort in assignments and assessments. In contrast, the only worth an Art History course may hold for a Business major is the fulfillment of a Humanities requirement for their degree program. That Business major has little motivation to excel in such a course and may exert only the minimal effort required to “pass” the class and fulfill the degree and credit requirements. Yet even an undeclared major may find great worth in an Art History course if the course design stimulates their interest.

Motivation accounts for between 16% and 38% of the variance in student achievement (Means, Jonassen, & Dwyer, 1997). Keller’s ARCS Model of Motivational Design provides a proven strategy for accommodating learning differences by focusing on attention, relevance, confidence, and satisfaction (Hodges, 2004; Keller, 1987a, 1987b, 2010). Keller defines motivational design as “the process of arranging resources and procedures to bring about change in motivation” (Keller, 2006). Open-ended course design that allows flexibility in course resources and presentation can help address some of the factors that predetermine effective student learning and can allow students to develop a deeper sense of a course’s value. High attrition rates can be directly attributed to low motivation (Kuan-Chung & Syh-Jong, 2010); it is therefore vital that instructional designers understand the principles of motivational design (Hodges, 2004).

Motivational design can help transform the worth of a course by increasing its value to stakeholder groups. Capturing student attention is the first step in encouraging motivation, and specific design techniques such as variety and inquiry can help maintain attention throughout the course. Hodges (2004) points to several studies finding that relevance to work or personal goals is an effective motivator for learning. Establishing relevance allows students to see the future value of course content and may increase the course’s worth and their motivation to excel.

Quality

“Quality” can be seen as an objective measure of how something is done: the types of materials used, the design format, or the process followed. The quality of any course can thus be measured through the notion of “best practice,” assessed against a defined process (Harvey & Green, 1993).

Over the last decade, Quality Matters (QM) has developed nationally recognized standards for online course assessment, presented through customized rubrics and trainings for higher education, K-12 education, continuing and professional education, and educational publishing (Quality Matters, 2013a). The higher education rubric contains 40 individual standards organized into eight categories: course overview, learner objectives, assessment and measurement, resources and materials, learner engagement, course technology, learner support, and accessibility (Quality Matters, 2013a). Peer reviewers trained in the QM evaluation procedure use the applicable rubric to assess online courses. Quality as measured by the QM rubric is more a production issue than an assessment of the learning experience: it measures a course against a set of standards, and quality becomes an institutional matter defined objectively through the rubric. However, the goal of higher education is not an assembly-line process that churns out identical widgets. Rather, the goal is to educate, train, and motivate people (Hunt, 1998). The quality of a course matters only if the course has value and is worthwhile to students. The value and worth of a course are constantly being reevaluated as each group of students brings its own set of needs and wants to the learning experience. Effective design requires a level of customization and tailoring in order to provide students with learning opportunities that have value and worth (Ahern, 2002).
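To make the “production” character of a rubric-based review concrete, the following minimal sketch in Python models a course review as a standards checklist. The eight categories are those listed above; the standard count, per-standard point values, and passing threshold are illustrative assumptions, not official QM scoring policy.

    # A simplified model of a rubric-based course review. The categories come
    # from the article; the point values and 85% threshold are illustrative
    # assumptions, not official QM policy.
    CATEGORIES = [
        "course overview", "learner objectives", "assessment and measurement",
        "resources and materials", "learner engagement", "course technology",
        "learner support", "accessibility",
    ]

    def meets_standards(ratings, points_possible, threshold=0.85):
        """Return True if the points awarded reach the passing threshold."""
        return sum(ratings.values()) / points_possible >= threshold

    # Hypothetical review: 36 of 40 one-point standards judged as met.
    ratings = {f"standard_{i}": (1 if i <= 36 else 0) for i in range(1, 41)}
    print(meets_standards(ratings, points_possible=40))  # True

Note what such a checklist cannot capture: whether any particular student finds the course valuable or worthwhile, which is precisely the gap this article identifies.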

Quality Matters but…

The QM process does not measure “worth”; it was designed, through research, to reflect commonly accepted “best practices.” Simply adhering to QM standards does not ensure that an instructional module is worthwhile; student engagement is vital to the worth of a course. To illustrate, we will explore a graduate research project conducted to determine the quality and usability of a demonstration course, “Workplace Nutrition” (Youger, 2013).

The course was evaluated by a panel of three instructional design experts. Data for this mixed-methods study come from two sources: a quantitative online survey and a qualitative cognitive walk-through (Wharton et al., 1994). The online survey is a quality assessment of the course based on nationally recognized QM Standards; it was created, distributed, administered, and analyzed using Qualtrics (http://www.qualtrics.com/) online survey software. The cognitive walk-through (Wharton et al., 1994) was designed to assess the effectiveness and usability of the course from an expert instructional designer’s perspective in real time, using a concurrent think-aloud protocol (Van Den Haak, De Jong, & Schellens, 2003) to record the experts’ instructional design perceptions while they completed a learning task within the module.

“Workplace Nutrition” was designed as an 8-week online adult continuing-education demonstration course. It was designed to maximize learning outcomes on workplace nutrition and to help business-oriented students move beyond merely exploring the topic into more advanced stages of learning and application (Swan et al., 2008) by tasking them with matching course resources to a known company’s environment and attitudes.

Resources and assignments are designed to promote higher-order thinking skills (Churches, 2008) by building on users’ existing knowledge or experience. New concepts are presented using a scaffold approach that builds on previous materials (Shambaugh & Magliaro, 1997). Weekly discussions encourage an experience-sharing process designed to build a sense of community (Akyol & Garrison, 2011). Course content and assignments, designed to allow peer-tutoring and feedback elements, are intended to help students advance from knowledge acquisition to knowledge application (Swan et al., 2008). The education experience culminates in a final project that reflects overall learning achievements (Akyol & Garrison, 2011).

Expert participants were asked to view the “Workplace Nutrition” demonstration course using an institutional Learning Management System (LMS). To measure quality, the experts were asked to access and review the syllabus and two completed learning modules within the demonstration course and then complete the online Qualtrics survey based on nationally recognized QM Standards.

The second data source, the cognitive walk-through, was designed to evaluate the usability of the learning modules using the embedded voice-recording tool within the LMS. The experts were instructed to access the course syllabus and to navigate and review the lessons. They were then asked to complete a brief reading in Unit 1 and navigate to the discussion section of the course to create a brief response to an assignment question. The recording documented individual efforts in real time as the course was reviewed and the learning tasks completed.

Implications

The experts found the demonstration course to be of high quality, based on the requirements of the QM rubric. However, the results of the cognitive walk-through seemed to indicate that the course may not be of high value to learners.

The experts were asked to rate how well the course syllabus adhered to QM guidelines, and their ratings ranged from good to excellent. However, responses from the cognitive walk-through indicated some dissatisfaction with the actual experience of the course. These responses indicate a shift in perspective when the experts were experiencing the course as students rather than as expert instructional designers. When viewing the course from a student perspective, the experts suggested ways to create a more “valuable” syllabus, such as adding due dates and clarifying point values. Interestingly, Expert A said the University policies (QM Standard 1.4), typically required on all syllabi, were “there and clear,” but admitted, “honestly, I don’t read this part as a student.” Active observation research confirms this is true for many students, who seem most interested in assignments, due dates, and point values (R. E. Youger, personal communication, October 24, 2014).

When discussing lesson activities and course progression (QM Standards 4.1 and 4.2), the same expert stated that they had been seeking a relationship between the units and suggested that the course progression be provided in the syllabus. The fact that this progression was in fact provided in the syllabus reinforced their earlier admission that, as a student, they skim the syllabus quickly. This reflects one limitation of the study: the experts had no instructor or peer interaction that might have clarified these issues.

Another expert raised an interesting dilemma facing all instructional designers: learning styles. While Expert A expressed a need for more bullet points in the course introduction, Expert C stated they would like the introduction “in plain English format with no bullet points,” adding that they “have ADHD and need to be drawn in to focus” (Youger, 2013). Here again, instructor input and peer interaction might alleviate some of these concerns, but the experts raise a valid point: designers must be aware of differing learning styles when creating modules and activities.

In reviewing comments made during the cognitive walk-through from a student perspective, it became obvious that customization and instructor and peer interaction were needed to add value to the course and enhance the worth of the individual learning experience. This important facet was not apparent from the QM-based survey alone.

Recommendations

Online courses can vary greatly in organizational structure and should be evaluated section by section to best meet the needs of the learners in each particular group. Courses can be designed to run asynchronously, with learning goals achieved through directed independent readings and assessed via submitted assignments. Online courses can also be delivered synchronously, requiring students and instructors to be online at an established time to participate actively in course activities in real time. While some form of standard delivery structure may be desirable, the value of the learning experience depends on many factors that cannot be measured on a standards-based rubric, including student motivation and socioeconomic circumstances.

Motivation plays a large role in effective adult education. Student engagement is heightened when tasks and assignments allow students to make connections between course resources and their existing knowledge or experience. New knowledge must hold some value for the student to maintain motivation. One method of ensuring the “worth” of an online course is to build a needs-assessment phase into the instructional design process, as sketched below. Assignments can be structured so that students discuss each lesson in terms of their personal experience with, or reaction to, the concepts. Active observation research (R. E. Youger, personal communication, October 24, 2014) has shown that students become more vested in their own learning when they expand their knowledge of something that has personal value to them.
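As a minimal sketch of how a first-week needs-assessment might be tallied, the following Python snippet counts the reasons students report for enrolling. The response categories are hypothetical examples, not a validated instrument.

    # Tally hypothetical first-week needs-assessment responses so the
    # instructor can see at a glance why students enrolled.
    from collections import Counter

    responses = [
        "degree requirement", "career change", "personal interest",
        "degree requirement", "employer training", "degree requirement",
    ]
    for reason, count in Counter(responses).most_common():
        print(reason, count)

A section dominated by “degree requirement” responses might call for more explicit relevance-building, whereas a section full of “career change” responses might support more applied assignments.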

Socioeconomic factors can greatly impact student learning (Forsyth & Furlong, 2003). Resources can be interpreted differently, and interactions will certainly vary among class sections. A successful learning experience requires each student to develop a connection between the course, the content, and their reason for taking the course. QM guidelines provide a standard, predictable format for online courses, but they cannot increase the worth of a course to students. Needs-assessment measures can alert instructors to socioeconomic factors that contribute greatly to a student’s success or failure in an online course. Despite an institution’s best connectivity efforts, for example, some students have outdated equipment and software or intermittent online access. Financially struggling students will place greater value and worth on having a properly functioning computer and reliable internet access than those without financial concerns, and instructors must be sensitive to these issues in order to provide a valuable learning experience.

Active instructor involvement is an important aspect of effective online learning (Rovai, 2002). Ongoing instructor participation is vital in creating a course that fulfills university requirements while keeping students actively involved in the learning process. Just as students are required to show up for a face-to-face course, they demonstrate an online “presence” through regular participation in online learning activities. Instructors are responsible for establishing an online classroom or community that encourages communication and learning opportunities. Exchanges and interactions can be designed and presented to enhance the learning process by creating a positive learning experience, particularly through the use of motivational design techniques.

Developing and teaching online courses can be a big adjustment for many instructors (Smyth, 2014; Hunt et al., 2014). Many dislike the lack of visual interaction available in face-to-face courses (Shea, 2007). Some have poor technical skills or are uncomfortable with technology in general, while others lack an instructional design background and are unsure how to transfer course content online and present it so that it achieves the course’s learning goals. For these instructor-designers, a standardized rubric such as QM’s can serve as a guide for setting up an online course shell. Additional work and customization of content are still necessary to create a valuable learning experience, and instructors may become frustrated because they do much more than merely “teach.” The motivation to produce and teach high-quality online courses can be overshadowed by the university’s simultaneous demands for ongoing research, conference, and publication activity.

Finally, confidence-enhancing materials such as personalized progress reports and emails may help increase the “value” of a course for students by recognizing their individual ideas and achievements and grounding them in the structure of the course (Keller, 1987a, 1987b, 2010). Successful course design is an iterative process in which the notion of value is assessed and actions are taken to make the course worthwhile while maintaining curriculum alignment. The needs-assessment process provides a student profile that can be measured against past course feedback to reach an aggregate notion of worth, and each course can be refined based on past “success” and current “need.” Because this process should be data-driven, it is vital that faculty learn to read and use the reports available in learning management systems to stay apprised of student performance and progress, which can lead to better design of the content, assignments, and presentation of the online course.
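As a minimal sketch of such data-driven monitoring, the Python snippet below reads a hypothetical CSV export of weekly LMS activity and flags students whose participation lags. The file name, column names, and thresholds are all assumptions for illustration; real export formats vary by LMS vendor.

    # Flag students whose weekly LMS activity falls below simple thresholds.
    # The CSV layout (student, logins, submissions) is a hypothetical export
    # format, not that of any particular LMS.
    import csv

    def flag_inactive(report_path, min_logins=2, min_submissions=1):
        """Return names of students whose weekly activity is below thresholds."""
        flagged = []
        with open(report_path, newline="") as f:
            for row in csv.DictReader(f):
                if (int(row["logins"]) < min_logins
                        or int(row["submissions"]) < min_submissions):
                    flagged.append(row["student"])
        return flagged

    # e.g., contact flagged students before the next unit begins
    for name in flag_inactive("weekly_activity.csv"):
        print("Check in with", name)

Even a report this simple turns raw LMS data into an actionable outreach list, which is the kind of routine report-reading recommended above.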

Conclusion

Each stakeholder audience interprets the words “quality,” “value,” and “worth” differently when evaluating an online course, depending on their relationship to that course. Courses can be custom-designed to meet the needs of each target audience when education techniques and design knowledge are used to offset factors that can negatively impact adult learning. Motivational design attempts to make instruction more intrinsically interesting (Keller, 2010). The regulated quality of an online course has no bearing on student motivation; what matters is good course design, which creates value and worth.

Each learning experience is a unique event centered on a designed “course” and the perceived worth to individual students. Just as a course will change in some ways when a new instructor takes over, the course also changes with the students in each section. The individuals that make up each learning event have different motivators and background knowledge; therefore, exchanges, ideas, and sense of worth will vary within class sections.

One strategy for blending quality with value, and increasing a course’s potential worth, is to conduct an informal evaluation of student needs in the opening week of each course section. Instructors may notice patterns or trends that allow them to customize course resources or presentation to have the greatest impact on student interests and pre-existing knowledge. The motivational design strategies suggested by Keller can help designers capture and hold learner attention and provide a perception of relevance, confidence, and satisfaction that increases the value and worth of learning the content. This in turn leads to enhanced cognitive performance (Means, Jonassen, & Dwyer, 1997; Keller, 1987a, 1987b, 2006, 2010).

Standardizing educational courses for measurable quality can be accomplished by applying a rubric such as QM’s. Customization, however, is required to increase the value and worth of a course: instructors need to know and understand their audience. When a survey-based or textual needs-assessment is conducted during the first week of classes, there is a greater chance of creating a more valuable individual learning experience. This needs-assessment can be as informal as asking students to address specific questions when creating an introduction posting for the course. By investigating the motivation behind student enrollment, instructors can fine-tune the content, presentation, or sequencing so that course and student goals align to provide high-quality, high-value educational experiences for both students and instructors.

Children are renowned for asking, “Why?” We do not grow out of this curiosity as adults: our whole lives are spent in search of answers to this elusive question. Adult learners want to know why something is important, and more specifically, why it is important to us personally and what we gain from learning this “thing.” That is part of our instinctive effort to determine value and worth.


References

Ahern, T. C. (2002). Learning by design: Engineering the learned state. In N. Callaos & W. Lesso (Eds.), Proceedings of the 6th World Multiconference on Systemics, Cybernetics and Informatics (Vol. XX, pp. 308-313). Orlando, FL: International Institute of Informatics and Systemics.

Akyol, Z., & Garrison, D. (2011). Understanding cognitive presence in an online and blended community of inquiry: Assessing outcomes and processes for deep approaches to learning. British Journal of Educational Technology, 42(2), 233-250. doi:10.1111/j.1467-8535.2009.01029.x

Allen, I. E., & Seaman, J. (2013). Changing course: Ten years of tracking online education in the United States. Newburyport, MA: Sloan Consortium.

Churches, A. (2008). Bloom's taxonomy blooms digitally. Tech & Learning, 1. Retrieved from http://edorigami.wikispaces.com/Bloom%27s+Digital+Taxonomy

Forsyth, A., & Furlong, A. (2003). Socio-economic disadvantage and experience in higher education. Joseph Rowntree Foundation.

Harvey, L., & Green, D. (1993). Defining quality. Assessment and Evaluation in Higher Education, 18(1), 9-34.

Hodges, C. B. (2004). Designing to motivate: Motivational techniques to incorporate in e-learning experiences. The Journal of Interactive Online Learning, 2(3), 1-7.

Hunt, D., Davis, K., Richardson, D., Hammock, G., Akins, M., & Russ, L. (2014). It is (more) about the students: Faculty motivations and concerns regarding teaching online. Online Journal of Distance Learning Administration, 17(2).

Hunt, J. B., Jr. (1998). Organizing for learning: The view from the governor's office.

Keller, J. M. (1987a). The systematic process of motivational design. Performance & Instruction, 26(9-10), 1-8.

Keller, J. M. (1987b). Development and use of the ARCS model of instructional design. Journal of Instructional Development, 10(3), 2-10.

Keller, J. M. (2006). What is Motivational Design? Florida State University. Retrieved from http://www.arcsmodel.com/#!motivational-design/c2275

Koohang, A. (2004). Expanding the concept of usability. Informing Science, 7, 129-141.

Kuan-Chung, C., & Syh-Jong, J. (2010). Motivation in online learning: Testing a model of self-determination theory. Computers in Human Behavior, 26, 741-752.

Means, T. B., Jonassen, D. H., & Dwyer, F. M. (1997). Enhancing relevance: Embedded ARCS strategies vs. purpose. Educational Technology Research and Development, 45(1), 5-17.

Quality Matters. (2013a). About Us. Retrieved from https://www.qualitymatters.org/about

Quality Matters. (2013b). Welcome. Retrieved from https://www.qualitymatters.org/welcome

Rovai, A. P. (2002). Building sense of community at a distance. The International Review of Research in Open and Distance Learning, 3(1).

Shambaugh, R. N., & Magliaro, S. (1997). Mastering the possibilities: A process approach to instructional design. Needham Heights, MA: Allyn and Bacon.

Shea, P. (2007). Bridges and barriers to teaching online college courses: A study of experienced online faculty in thirty-six colleges. Journal of Asynchronous Learning Networks, 11(2), 73-128.

Smyth, R. (2014). Review of A guide to quality in online learning, by J. Daniel & S. Uvalić-Trumbić (Eds.), 2013. British Journal of Educational Technology, 45, E1. doi:10.1111/bjet.12127. Retrieved from http://www.contactnorth.ca/tips-tools/guide-quality-online-learning

Swan, K., Matthews, D., Bogle, L., Boles, E., & Day, S. (2012). Linking online course design and implementation to learning outcomes: A design experiment. The Internet and Higher Education, 15(2), 81-88.

Swan, K., Shea, P., Richardson, J., Ice, P., Garrison, D., Cleveland-Innes, M., & Arbaugh, J. B. (2008). Validating a measurement tool of presence in online communities of inquiry. E-Mentor, 2(24), 1-12. Retrieved from http://www.e-mentor.edu.pl/artykul/index/numer/24/id/543

Tam, M. (2001). Measuring quality and performance in higher education. Quality in Higher Education, 7(1).

Van Den Haak, M., De Jong, M., & Schellens, P. J. (2003). Retrospective vs. concurrent think-aloud protocols: Testing the usability of an online library catalogue. Behaviour & Information Technology, 22(5), 339-351.

Wharton, C., Rieman, J., Lewis, C., & Polson, P. (1994). The cognitive walkthrough method: A practitioner's guide. In J. Nielsen & R. Mack (Eds.), Usability inspection methods. New York, NY: John Wiley & Sons. Retrieved from http://www.usabilityfirst.com/usability-methods/cognitive-walkthroughs/

Youger, R. E. (2013). Evaluating usability in online learning design. Unpublished manuscript, West Virginia University, Morgantown, WV.

 


Online Journal of Distance Learning Administration, Volume XVIII, Number I, Spring 2015
University of West Georgia, Distance Education Center