Examining the Elements of Online Learning Quality in a Fully Online Doctoral Program


Nathan R. Templeton
Texas A&M University
nate.templeton@tamuc.edu


Julia N. Ballenger
Texas A&M University
julia.ballenger@tamuc.edu


J. Ray Thompson
Texas A&M University
ray.thompson@tamuc.edu

Abstract

The purpose of this descriptive quantitative study was to examine the quality elements of online learning in a regional doctoral program. Utilizing the six quality dimensions of Hathaway’s (2009) theory of online learning quality as a framework, the study investigated instructor-learner, learner-learner, learner-content, learner-interface, learner-instructional strategies, and social presence in order to explore the frequency and importance of these elements. A Likert-style survey administered through Qualtrics was used to report the self-perceptions of the doctoral students and faculty members. Descriptive statistics for the survey and subscales indicated alignment with the review of literature. Course design, the instructor’s facilitation, and student interaction were factors impacting learning outcomes (Eom, Wen, & Ashill, 2006). Faculty participation was also found to dramatically improve the performance and satisfaction of students (Arbaugh & Rau, 2007; Hrastinski, 2009). As a result, five conclusions emerged from the study. First, doctoral students and faculty valued frequent interaction, clear and prompt feedback, and multiple opportunities to learn and demonstrate learning. Second, instructor to learner interaction must be an intentional practice. Third, the inclusion of learning technologies is necessary for building relationships, making connections, and giving credibility to the learning environment. Fourth, students were more concerned with the quality of assignments than faculty were; and finally, faculty responses to students’ discussions are an area for improvement in the online program.

Introduction to the Study

State University (a pseudonym) is a regional university located in Northeast Texas with an enrollment in excess of 12,000, of which 40 percent are graduate students. For years, State University enjoyed statewide acclaim for quality educational administration programs (pseudonymous source, personal communication, August 2012). During the period of fall 2007 to August 2011, 194 doctoral degrees were conferred. As a regional university, many of these students were the first generation in their families to pursue higher education. In an effort to continue the commitment to quality education, State University explored avenues to meet the challenges of a changing world. At the core of this change was the offering of programs with the flexibility to attract working practitioners. Consequently, the doctoral program in the Department of Educational Leadership at State University was migrated to an online format, and State University partnered with a national marketing firm to recruit students (pseudonymous source, personal communication, June 11, 2013). Subsequently, beginning with the fall 2012 cohort, enrollment numbers spiked by 200 percent (pseudonymous source, personal communication, October 29, 2015). While a significant increase in enrollment was realized, program faculty at State University desired to engage in assessment practices to evaluate content design and teaching effectiveness.

Statement of the Problem

While quality teacher characteristics have become a known standard, quality online educational programs have been challenged since their inception (Gardiner, Corbitt, & Adams, 2010; Allen & Seaman, 2010; Moloney & Oakley, 2010). Many published research studies have examined factors that affect the effectiveness of online learning in higher education. For example, Eom, Wen, and Ashill (2006) found that course design, the instructor's facilitation, and student interaction were factors impacting learning outcomes. Student participation was also found to dramatically improve the performance and satisfaction of students (Arbaugh & Rau, 2007; Hrastinski, 2009). The researchers used Hathaway's (2009) theory of online learning quality, which includes six quality dimensions (instructor-learner, learner-learner, learner-content, learner-interface, learner-instructional strategies, and social presence), as a framework for this research study.


Purpose of the Study

The purpose of this quantitative study was threefold. First, the researchers examined the elements of online learning quality; second, they reported the frequency with which these quality elements are present within the online doctoral program; and third, they explored the ability of students and faculty to structure their time and to access and manage the technology in order to successfully engage in the online learning environment. The quality of the online learning environment and experiences was determined by quality elements such as (a) Instructor-Learner, (b) Learner-Learner, (c) Instructor-Learner Instructional Strategies, (d) Learner-Content, and (e) Learner-Support.


Research Questions

The following research questions guided this quantitative study: (a) What do doctoral students report about the frequency of the quality elements used in the online doctoral program? (b) What do doctoral students report about the importance of these quality elements for their learning? (c) What do faculty members report about the frequency of these quality elements in the online doctoral program? (d) What do faculty members report about the importance of these quality elements for student learning? (e) What differences, if any, are there in doctoral students and faculty responses regarding the frequency and importance of quality elements in the doctoral program?

Significance of the Study

The original online doctoral program was implemented in the fall semester of 2011 (pseudonymous source, personal communication, May 31, 2013). The impetus for this educational endeavor was students' request to introduce flexibility into their face-to-face program (pseudonymous source, personal communication, June 18, 2013). This motive aligns with the finding of Allen and Seaman (2011), who found that 80 percent of their study respondents viewed the online education program as superior to the face-to-face program due to the flexibility of course scheduling. Once the program was implemented with nineteen students (pseudonymous source, personal communication, June 11, 2013), it became evident that the online format was the students' preference. Thus, the face-to-face educational administration doctoral program was retired and the online format was embraced for doctoral studies.

Allen and Seaman's (2013) research, involving ten years of tracking online education in the United States, revealed that the view that online education is just as good as face-to-face instruction is decidedly mixed. Their data from the period of 2003 through 2009 revealed a small decrease in the proportion of academic leaders reporting the learning outcomes for online education as inferior to those of comparable face-to-face courses. Furthermore, they found that from 2011 to 2012, the proportion of academic leaders with a positive view of the relative quality of the learning outcomes for online courses, as compared to comparable face-to-face courses, increased. However, a sizable minority continues to see the online option as inferior (Allen & Seaman, 2013). Thus, the significance of this study is to add to the body of research on online instruction by examining the quality elements of online learning in a fully online doctoral program.

Review of the Literature

Characteristics of Quality Online Programs

While quality teacher characteristics have become a known standard, quality online educational programs have been challenged since their inception. In a compilation of existing research focusing on the effectiveness of online learning programs (Barbour & Reeves, 2009; Romero & Barbera, 2011; Duncan & Cator, 2010), a variety of themes surfaced about the impact of online learning on student achievement. Research has repeatedly shown that student achievement in a blended learning model combining teacher instruction and online instruction is comparable to that of fully online learning (Mashaw, 2012; Watson & Gemin, 2009; Allen & Seaman, 2011).

Likewise, Waters and Leong (2011) found that there were no significant differences in test scores, assignments, or participant grades between online and traditional educational programs. Specifically, 96% of online learners reported that their educational experience was as effective or more effective than their experiences in traditional learning programs. Moreover, when compared with traditional approaches, online learners reported more positive attitudes toward learning.

Effectiveness of Online Learning 

Studies are emerging that focus on the effectiveness and quality of online course design for learning (Mashaw, 2012). Mashaw (2012) noted that the goal of such studies is to improve the quality of learning by focusing on factors that affect the effectiveness of delivery systems, including instructional strategies, program content, and student and teacher social presence. Similarly, Watson and Gemin (2009) studied program quality and asserted that online program quality is strongly connected to the design of individual online courses. Designing a quality online course included six key areas: course content, instructional design, technology, student assessment, course evaluation and management, and 21st-century skills. The authors further contended that course quality is a result of forethought and diligent management.

Aspects of Online Learning

The research of Romero and Barbera (2011) examined the ways in which online learners engage with the content being taught.  Their research revealed quality-of-learning time as a key indicator of learner achievement. Key among their findings included: (a) frequency of interaction with instructors, (b) clear prompt feedback, and (c) multiple opportunities to learn and demonstrate learning.  In addition, learners benefited from clear goals, high expectations, and flexible options to satisfy course objectives.

Personalization of Instruction

Another aspect of quality online learning programs is the personalization of instruction for the learner through multiple representations. Multiple representations refer to the ability of staff and students to use multiple modalities or points of view to instruct or demonstrate learning. Engaging students in meaningful learning activities that facilitate multiple perspectives increases the likelihood of ownership of the learning (Chen, 2007). Further, Moloney and Oakley (2010) found that successful online learning programs also focused on individual assessment and continual feedback on progress.

Student-Teacher Interaction


Interaction with instructors remains a key factor in evaluating the quality of educational programs (Chaney et al., 2009). Students in online learning programs were more concerned with regular communication with their instructors than were students in a traditional program (Moore & Shelton, 2013). Furthermore, the concept of immediacy is necessary in the relationship between instructors and students and has been found to increase both cognitive and affective learning. For clarity of discussion, immediacy is achieved by participating in discussions with students, responding directly and promptly, and regularly monitoring student learning (Puzziferro & Shelton, 2008; Ward, Peters, & Shelley, 2010).

Student-Student Interaction and Social Presence


Finally, another strongly correlated indicator of quality online learning programs is the interaction between classmates. Aragon (2003) reported that learning is ultimately a social endeavor and that without social presence in the learning environment, achievement is less likely to occur. Aragon (2003) presented a number of instructor strategies to increase social presence, including (a) welcome messages, (b) use of student profiles, (c) incorporating audio in lesson development, and (d) keeping class sizes to no more than 30:1. Likewise, Moore and Shelton (2013) identified social presence as creating a sense of community for students. This sense of community helps students support each other, keeps students motivated, and improves success.

Research Design and Methods


Quantitative research is “the collection and analysis of numerical data to describe, explain, predict, or control phenomena of interest” (Gay, Mills, & Airasian, 2012, p. 7). Since the researchers were not interested in comprehensive narrative and visual data to gain insights into a particular phenomenon of interest, quantitative methodology was most appropriate for the study. Likewise, survey design was most appropriate for the study, as it provided “a quantitative or numeric description of trends, attitudes, or opinions of a population by studying a sample of that population” (Creswell, 2009, p. 145).

Sampling


A purposive (non-random) sample is a non-representative subset of some larger population, constructed to serve a very specific need or purpose. Ary (2010) describes purposive sampling as a form of non-probability sampling in which decisions concerning the individuals to be included in the sample are made by the researcher, based upon a variety of criteria, which may include specialist knowledge of the research issue or capacity and willingness to participate in the research. A purposive sampling technique was used to select the participants for this study.

Participants

The criteria for selection were that student participants had to have been consistently enrolled in doctoral classes during Fall 2012, Spring 2013, Fall 2013, and Spring 2014. The selection criteria for faculty included full-time faculty and adjuncts assigned to teach courses in the doctoral program during the same periods. Of the potential pool, the actual study sample comprised 29% of the students (n = 52) and 38% of the faculty (n = 15). The student sample was 67% female and 33% male. Faculty demographic information is masked to ensure confidentiality.

Procedures

The lead faculty researcher and the coordinator for partnership initiatives obtained the names and email addresses from the Department of Educational Leadership doctoral students’ database. These students were invited via email to participate in the study.  The Qualtrics Mailer was used to distribute the online student and faculty surveys. Qualtrics automatically sent a custom link to each student and faculty member to access the survey. Clicking on the link to the survey demonstrated student and/or faculty agreement to take the survey. Reminders were sent to students and faculty members until the desired participation rate of 30% was achieved.

Quantitative Survey Instrument

To answer the quantitative research questions, the researchers utilized an online survey to gather the perceptions of doctoral students and faculty regarding the quality of the online doctoral program at State University. The survey was administered via Qualtrics, a web-based survey program designed to capture customer insights in real time. The quantitative survey instrument was developed from the online learning research of Dr. Dawn M. Hathaway (George Mason University, 2009). The instrument is housed in the George Mason University Dissertation and Thesis Collection, Collection #R0122 (storage box 334), Special Collections and Archives, George Mason University Libraries. The collection is open to research; requests should be submitted to speccoll@gmu.edu.

Written permission was obtained from the author to use the survey in its entirety. The survey is a 40-item self-report instrument that gathered data in five sections: (a) Instructor to Learner Quality Elements, (b) Learner to Learner Quality Elements, (c) Instructor to Learner Instructional Strategies, (d) Learner to Content Quality Elements, and (e) Learner Support Quality Elements. Excluding Learner Support Quality Elements, which did not appear in the faculty survey, each of the 40 Likert-style questions was presented in two parts: one that sought to understand the frequency of specific descriptors within the online environment, and one that addressed the importance of the item to the respondent regardless of frequency of use.

Regarding survey distribution, 382 e-mail invitations were sent to students requesting participation in the online survey, and of that number, 183 (48%) were successfully delivered, meaning they were identified as received and opened. Therefore, 183 student surveys were distributed online and, of that number, 52 participants agreed to participate in the survey for a response rate of 29%. Comparatively, there were 40 e-mail invitations to full-time faculty and adjuncts, and of this number, 15 agreed to participate in the survey for a response rate of 38%. Response rates bear more importance when the purpose of the study is to measure effects or make generalizations to a larger population; conversely, the response rate is less important if the purpose of the study is to gain insight. As Ary (2010) reported, the average response rate for surveys administered online is 30%; however, a response rate of no less than 25% is acceptable in survey research. Therefore, the response rate achieved aligns with the purpose of the study.
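As a quick check on the arithmetic above, the following minimal Python sketch recomputes the reported response rates from the counts given in this section; the variable names are ours, for illustration only.

    # Response-rate arithmetic from the counts reported above.
    delivered_student_surveys = 183   # of 382 invitations, 183 were received and opened
    student_respondents = 52
    faculty_invitations = 40
    faculty_respondents = 15

    student_rate = student_respondents / delivered_student_surveys  # 52/183 ~= 0.284 (reported as 29%)
    faculty_rate = faculty_respondents / faculty_invitations        # 15/40 = 0.375 (reported as 38%)

    print(f"Student response rate: {student_rate:.1%}")   # 28.4%
    print(f"Faculty response rate: {faculty_rate:.1%}")   # 37.5%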

Quantitative Data Analyses

The Statistical Package for the Social Sciences (SPSS) version 23 was used for the quantitative analysis of this research.  All participant surveys and corresponding data were imported into SPSS from Qualtrics.  All survey responses and demographic data were coded in Qualtrics prior to exporting to SPSS.  Each participant was assigned a random numerical identifier by Qualtrics to ensure participant anonymity.  Descriptive statistics for the survey and subscales were used to report self-perceptions regarding each of the quality learning elements and experiences of the doctoral students and faculty members.
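To illustrate the kind of per-item descriptive statistics reported in Tables 1 through 4.1, the sketch below computes the mean, sample variance, standard deviation, and a percent-agree summary for one hypothetical survey item. The responses are invented, and the assumption that "Agree" aggregates the upper two scale points is ours, not stated in the article.

    import numpy as np

    # Hypothetical responses to one item on the study's 4-point scale (1 = never ... 4 = often).
    item = np.array([4, 3, 4, 4, 2, 3, 4, 4, 3, 4])

    mean = item.mean()
    variance = item.var(ddof=1)   # sample variance, matching SPSS defaults
    sd = item.std(ddof=1)         # sample standard deviation

    # Assumption: "Agree" counts responses at scale points 3 and 4.
    pct_agree = (item >= 3).mean() * 100

    print(f"Mean = {mean:.2f}, Variance = {variance:.2f}, SD = {sd:.2f}, Agree = {pct_agree:.0f}%")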

Responses to the survey were given using a 4-point Likert scale ranging from “1” (never) to “4” (often); missing or NA responses received a score of “0.” To determine the internal consistency, or average correlation, of items in the survey instrument, reliability analyses using Cronbach’s alpha were performed at the .05 level. As noted by Lunenburg and Irby (2008), an internal consistency coefficient of .80 is acceptable for a survey instrument. Therefore, the survey and each subscale were highly reliable, with Cronbach’s alpha values ranging from .885 to .912.
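For readers unfamiliar with the reliability statistic, Cronbach's alpha for k items is alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores). The Python sketch below is a minimal illustration of this calculation, not the authors' SPSS procedure, and the demonstration matrix is invented.

    import numpy as np

    def cronbach_alpha(scores):
        """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
        k = scores.shape[1]                          # number of items in the (sub)scale
        item_vars = scores.var(axis=0, ddof=1)       # per-item variance across respondents
        total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Invented 5-respondent x 4-item demonstration matrix on the study's 1-4 scale.
    demo = np.array([[4, 4, 3, 4],
                     [3, 3, 3, 2],
                     [4, 4, 4, 4],
                     [2, 3, 2, 3],
                     [4, 3, 4, 4]])
    print(f"alpha = {cronbach_alpha(demo):.3f}")   # ~0.859 for this invented matrix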

Overview of Quantitative Findings


The quality of the online learning environment and experiences was determined by respondents’ ratings of the categorized quality elements of online learning as posed on the survey instrument: (a) Instructor to Learner Quality Elements, (b) Learner to Learner Quality Elements, (c) Instructor to Learner Instructional Strategies, (d) Learner to Content Quality Elements, and (e) Learner Support Quality Elements. Using data from the Likert-style questions on the survey instrument, student and faculty findings are presented according to each descriptor of online learning quality (Tables 1 through 4.1).

Table 1. Instructor to Learner Quality Elements (Student Responses)

Survey items:
1. Instructor-learner interaction is valued and facilitated in a variety of ways
2. Instructors provide timely responses to concerns
3. Instructors provide frequent and timely feedback on assignments and activities
4. Instructors provide meaningful and relevant feedback on assignments and activities
5. Instructors led synchronous discussions (i.e., chat sessions)
6. Instructors led asynchronous discussions (i.e., discussion forums)
7. Learners are provided opportunities to ask questions of the instructors
8. Instructors provide opportunities to be engaged through emergent technology (i.e., Google+)

Statistic             Item 1  Item 2  Item 3  Item 4  Item 5  Item 6  Item 7  Item 8
FREQUENCY
  Agree                  68%     76%     67%     66%     20%     57%     82%     40%
  Mean                  3.58    3.70    3.59    3.58    2.80    3.43    3.82    3.24
  Variance              0.45    0.34    0.41    0.41    0.69    0.54    0.15    0.51
  Standard deviation    0.67    0.58    0.64    0.64    0.83    0.74    0.39    0.72
IMPORTANCE
  Agree                  80%     96%     92%     98%     40%     68%     90%     58%
  Mean                  3.78    3.96    3.92    3.98    3.16    3.58    3.90    3.50


Table 1.1. Instructor to Learner Quality Elements (Faculty Responses)

Note. Survey items 1-8 as in Table 1.

Statistic             Item 1  Item 2  Item 3  Item 4  Item 5  Item 6  Item 7  Item 8
FREQUENCY
  Agree                  93%    100%     86%     86%     21%     86%    100%     57%
  Mean                  3.93    4.00    3.86    3.86    2.79    3.86    4.00    3.57
  Variance              0.07    0.00    0.13    0.13    0.80    0.13    0.00    0.51
  Standard deviation    0.27    0.00    0.36    0.36    0.89    0.36    0.00    0.72
IMPORTANCE
  Agree                  93%     86%     93%     36%     14%     62%      7%     79%
  Mean                  3.78    3.96    3.92    3.98    3.16    3.58    3.90    3.50

Table 2. Learner to Learner Quality Elements (Student Responses)

Survey items:
1. Learner to learner interaction is valued and facilitated in a variety of ways
2. Opportunities/tools are provided to encourage learner to learner collaboration
3. Interaction with other students through threaded discussion board
4. Interaction with other students through web video
5. Interaction with other students through real-time chat
6. Interaction with other students through collaborative learning teams
7. Interaction with other students through blogs or wikis
8. Opportunity to evaluate and provide/receive constructive feedback to other learners

Statistic             Item 1  Item 2  Item 3  Item 4  Item 5  Item 6  Item 7  Item 8
FREQUENCY
  Agree                  73%     71%     98%     25%     22%     45%     16%     47%
  Mean                  3.69    3.67    3.96    2.65    2.57    3.37    2.57    3.39
  Variance              0.30    0.35    0.08    1.03    1.09    0.40    0.81    0.40
  Standard deviation    0.55    0.59    0.28    1.02    1.04    0.63    0.90    0.63
IMPORTANCE
  Agree                  64%     72%     72%     36%     34%     56%     26%     64%
  Mean                  3.54    3.66    3.66    2.86    2.90    3.44    2.66    3.56

Table 2.1. Learner to Learner Quality Elements (Faculty Responses)

Note. Survey items 1-8 as in Table 2.

Statistic             Item 1  Item 2  Item 3  Item 4  Item 5  Item 6  Item 7  Item 8
FREQUENCY
  Agree                  93%     86%     93%     36%     14%     62%      7%     79%
  Mean                  3.93    3.86    3.93    2.93    2.57    3.46    2.00    3.71
  Variance              0.07    0.13    0.07    0.99    0.88    0.60    0.92    0.37
  Standard deviation    0.27    0.36    0.27    1.00    0.94    0.78    0.96    0.61
IMPORTANCE
  Agree                 100%    100%    100%     86%     14%     79%      7%    100%
  Mean                  4.00    4.00    4.00    3.86    2.57    3.71    2.00    4.00


Table 3. Instructor to Learner Instructional Strategies (Student Responses)

Survey items:
1. Opportunity for critical thinking about course content
2. Opportunities for applying knowledge to professional practice
3. Clear connection between course content and professional practice
4. Use of simulations and case studies
5. Use of collaborative group projects
6. Activities designed to promote reflection on content
7. Availability of ample information and resources to complete assignments
8. Current and emerging technologies are integrated for online teaching and learning

Statistic             Item 1  Item 2  Item 3  Item 4  Item 5  Item 6  Item 7  Item 8
FREQUENCY
  Agree                  90%     96%     94%     80%     50%     84%     92%     76%
  Mean                  3.78    3.69    3.65    3.37    3.34    3.70    3.78    3.55
  Variance              0.33    0.42    0.00    0.52    0.33    0.38    0.21    0.37
  Standard deviation    0.58    0.65    0.00    0.72    0.58    0.61    0.46    0.61
IMPORTANCE
  Agree                  90%    100%    100%     76%     50%     76%    100%    100%
  Mean                  3.78    4.00    4.00    3.55    3.34    3.55    4.00    4.00


Table 3.1. Instructor to Learner Instructional Strategies (Faculty Responses)

Note. Survey items 1-8 as in Table 3.

Statistic             Item 1  Item 2  Item 3  Item 4  Item 5  Item 6  Item 7  Item 8
FREQUENCY
  Agree                  93%     86%    100%     57%     57%     93%    100%     71%
  Mean                  3.93    3.86    4.00    3.50    3.50    3.93    4.00    3.64
  Variance              0.07    0.13    0.00    0.42    0.42    0.07    0.00    0.40
  Standard deviation    0.27    0.36    0.00    0.65    0.65    0.27    0.00    0.63
IMPORTANCE
  Agree                  93%    100%    100%     86%     57%     93%    100%    100%
  Mean                  3.93    4.00    4.00    3.86    3.50    3.93    4.00    4.00


Table 4. Learner to Content Quality Elements (Student Responses)

Survey items:
1. Use of instructor-generated content summaries
2. Use of timelines and due dates
3. Course objectives are clearly stated
4. Use of online quizzes and/or tests
5. Alignment of learning outcomes from course to course exists
6. Learning objectives describe outcomes that are measurable
7. Learning objectives are appropriately designed for the level of the course
8. Learning outcomes are applicable to professional practice

Statistic             Item 1  Item 2  Item 3  Item 4  Item 5  Item 6  Item 7  Item 8
FREQUENCY
  Agree                  49%     92%     86%     51%     57%     67%     82%     76%
  Mean                  3.25    3.90    3.84    3.33    3.49    3.61    3.78    3.71
  Variance              0.71    0.13    0.17    0.63    0.41    0.36    0.25    0.37
  Standard deviation    0.81    0.36    0.42    0.79    0.64    0.60    0.50    0.61
IMPORTANCE
  Agree                  76%     98%     96%     56%     86%     88%     94%     92%
  Mean                  3.74    3.98    3.96    3.34    3.80    3.84    3.92    3.88


Table 4.1. Learner to Content Quality Elements (Faculty Responses)

Note. Survey items 1-8 as in Table 4.

Statistic             Item 1  Item 2  Item 3  Item 4  Item 5  Item 6  Item 7  Item 8
FREQUENCY
  Agree                  71%    100%    100%     36%     86%     93%    100%    100%
  Mean                  3.71    4.00    4.00    3.14    3.71    3.93    4.00    4.00
  Variance              0.22    0.00    0.00    0.59    0.68    0.07    0.00    0.40
  Standard deviation    0.47    0.00    0.00    0.77    0.83    0.27    0.00    0.63
IMPORTANCE
  Agree                  93%    100%    100%     36%     57%     93%    100%    100%
  Mean                  3.93    4.00    4.00    3.14    3.50    3.93    4.00    4.00



Discussion of Quantitative Findings

A descriptive summary of findings is presented in response to each research question.
(a) What do doctoral students report about the frequency of the quality elements used in the online doctoral program?

Instructor-to-Learner Quality Elements

As indicated in Table 1, a majority of respondents (93%) agreed that instructor to learner interaction was frequently valued and facilitated in a variety of ways. Moreover, 76% agreed that instructors provided timely responses to student concerns, while 66% (mean score of 3.59) agreed that they received timely and relevant feedback. Likewise, 82% of the respondents agreed that learners were provided opportunities to pose questions. However, an area of concern was evident in the lack of synchronous discussions, with 74% of the respondents reporting this was an issue that needed to be addressed. Additionally, 60% disagreed when asked if instructors provided opportunities for students to be engaged through emergent technologies.

(b) What do faculty members report about the frequency of these quality elements in the online doctoral program?

By contrast, instructors agreed that the use of synchronous discussions is an area that needs to be improved. Seventy-eight percent (mean score 2.79 out of a possible 4) stated that they rarely or never used this strategy. Otherwise, instructors’ scores were relatively high in this subset, with mean scores averaging 3.86 out of 4. Instructors overwhelmingly agreed that instructor to learner interaction is not only valued but intentional within the online learning platform.

(c) What do doctoral students and faculty report about the importance of these quality elements for their learning?

Interestingly, over 85% of the students agreed that instructor to learner interaction was an important factor in the online learning environment (see Table 1). By contrast, student respondents did not think that participating in synchronous discussion was a pressing need in terms of importance, as represented in the mean score of 3.16. Similarly, being provided opportunities to be engaged through emergent technology was not expressed as an important element for learning.

Alternatively, other than the use of blogs and wikis to facilitate learning, faculty responses closely mirrored student responses in the area of instructor to learner interaction, with a mean score variance of less than .001. On the blogs and wikis item, faculty responses had an average mean score of 2.0 as compared to 2.7 for students.

(d) What differences, if any, are there in doctoral students and faculty perceptions of the frequency and importance of quality elements in the doctoral program?

Faculty members viewed the importance, or value, of using learning technologies as essential to extending learning in the online environment. While a majority of students (88%) agreed, students preferred a blended format, in which some contact with the instructor was maintained throughout the course. Students reported apprehension with the online platform and often found themselves with little or no support from instructors. Several suggestions included creating uniform expectations by having course shells reflect the same layout and use of material.

Most notably, students were more concerned with the quality of assignments than instructors were. With a mean of 3.14 and a standard deviation of 0.77, students reported that assignments often lacked rigor and that activities were frequently mundane within the sequence of courses. While these were not rated as important quality elements for learning, students did desire more real-time interaction with each other through Google+, streaming videos, and Skype (mean score 2.9). Students missed the meaningful conversations of face-to-face interactions or chat. Interestingly, 100% of the instructors completing the survey reported offering Google+ as a viable option for learners to connect with them, but not with other students.

Doctoral Students Report on the Frequency of the Quality Elements

The quantitative findings revealed the participants’ self-perceptions regarding each of the quality learning elements. When posed with the question of what doctoral students report on the frequency of the quality elements used in the online doctoral program, a majority of the respondents agreed that instructor to learner interaction is valued and facilitated in a variety of ways, that instructors provide timely responses to concerns, and that learners are provided opportunities to pose questions. However, 74% of the respondents felt that the use of synchronous discussion was an issue that needed to be addressed. Additionally, the findings revealed that instructors providing opportunities for engagement through emergent technologies scored low; the mean score was 2.8 of a possible 4.0 with a standard deviation of 0.83.

Faculty Report on the Frequency of Quality Elements

Conversely, when faculty members were presented with the question of what they report on the frequency of these quality elements in the online doctoral program, the findings revealed that instructors agreed that the use of synchronous discussions is an area that needs to be improved. Seventy-eight percent (mean score 2.79 out of a possible 4) stated that they rarely or never used this strategy. Otherwise, instructors overwhelmingly agreed that instructor to learner interaction is not only valued but intentional within the online learning platform.

Report of Doctoral Students and Faculty about Importance of Quality Elements

In response to the question of what doctoral students and faculty report about the importance of these quality elements, the findings revealed that over 85% of the students agreed that instructor to learner interaction was an important factor in the online learning environment. Student respondents, by contrast, did not think that participating in synchronous discussion was a pressing need. With the exception of blogs and wikis to facilitate learning, faculty responses closely mirrored student responses in the area of instructor to learner interaction. It is noteworthy that on the blogs and wikis item, faculty responses had an average mean score of 2.0 as compared to 2.7 for students.

Report of Doctoral Students and Faculty on the Differences of Perceptions

When posed with the question of what differences, if any, exist in doctoral students’ and faculty members’ perceptions of the frequency and importance of quality elements in the doctoral program, the findings revealed that faculty members viewed the importance, or value, of using learning technologies as essential to extending learning in the online environment. While a majority of students (88%) agreed, students preferred a blended format, in which some contact with the instructor was maintained throughout the course. Students reported apprehension with the online platform and often found themselves with little or no support from instructors. While these were not rated as important quality elements for learning, students did desire more real-time interaction with each other through social networks, streaming videos, and wireless applications (mean score 2.9). Students reported a strong need for the meaningful conversations of face-to-face interactions or chat. Interestingly, 100% of the instructors completing the survey reported offering Google+ as a viable option for learners to connect with them, but not with other students.


Conclusion

The researchers drew several conclusions for a quality online program from the quantitative findings of this study. Conclusion one revealed that doctoral students and faculty value interaction. Over 85% of the students agreed that instructor to learner interaction was an important factor in the online learning environment. This interaction is facilitated in a variety of ways, especially timely responses to concerns. It is noteworthy to mention that 82% of the respondents agreed that learners are provided opportunities to pose questions. This conclusion is supported by Duncan and Cator’s (2010) contention that student achievement is related to the frequency of interaction with instructors, prompt feedback, and multiple opportunities to demonstrate learning.

Conclusion two revealed that instructor to learner interaction is intentional. Instructors’ scores were relatively high in this subset with mean scores averaging 3.86 out of 4. From the reported data, all ten (100%) faculty members overwhelmingly agreed that the use of online learning technologies and the use of pedagogical methods were important and of value to increase instructor to learner interaction. This conclusion is supported by Watson and Gemin (2009) who posit that course quality is a result of forethought and diligent management.

Conclusion three revealed that instructor to learner interaction was an important factor in the online learning environment. This is evidenced by over 85% of students and 100% of the faculty expressing agreement. Moreover, faculty members viewed the importance of using learning technologies as essential to extending learning in the online environment, with 88% of doctoral students in agreement. Faculty viewed learning technologies as necessary for building relationships and making connections between the student and the instructor. Furthermore, faculty members focused on giving credibility to the learning environment by creating a learning environment similar to face-to-face instruction. Students indicated that they missed the meaningful conversations of face-to-face interactions. Interestingly, 100% of the instructors completing the survey reported offering Google+ as a viable option for learners to connect with them, but not with other students.

Conclusion four revealed that students were more concerned with the quality of assignments than faculty. With a mean of 3.14 and a standard deviation of 0.77, students reported assignments often lacked rigor and activities were frequently mundane within the sequence of courses. This conclusion is supported by Eom, Wen, and Ashill (2006), who found that course design was a major factor impacting learning outcomes. Moreover, lessons must be created that allow students to explore resources and demonstrate their expertise through a variety of means. Teachers are no longer just delivering information, their role is now that of facilitator and collaborator in each student’s learning pathway (Duncan & Cator, 2010).

Conclusion five revealed that faculty responses to students’ discussions are an area for improvement. One area of concern is evident in the use of synchronous discussions, with 74% of the respondents indicating this was an issue that needed to be addressed. The mean score was 2.8 of a possible 4.0 with a standard deviation of 0.83. By contrast, instructors agreed that the use of synchronous discussions is an area that needs to be improved, with 78% (mean score 2.79 out of a possible 4) stating that they rarely or never used this strategy. Other than the use of blogs and wikis to facilitate learning, faculty responses closely mirrored student responses in the area of instructor to learner interaction, with a mean score variance of less than .001; on the blogs and wikis item, faculty responses had an average mean score of 2.0 as compared to 2.7 for students. In support of this conclusion, Gardiner et al. (2010) asserted that interaction with instructors remains a key factor in evaluating the quality of educational programs.

Implications/Recommendations for Practice and Future Research

Three implications/recommendations for future study emerge that are noteworthy. The first implication is that professional development on course design, use of a variety of pedagogical methods such as case studies and simulations, and the use of emergent online technologies would be quite beneficial to faculty.   Duncan and Cator (2010) contend that what is taught is directly related to how it is taught.  Therefore, professional development is critical to the instructors’ growth in the use of technologies and pedagogical methods to ensure student-instructor engagement and learning.  Professional development would seek to minimize dissatisfaction with inconsistent course designs that fail to provide rigor and challenge. Therefore, we recommend a series of professional development sessions on how to: design courses, write learning objectives, use streaming videos, and use effective technologies and pedagogical methods in the online environment.

The second implication emphasizes the importance of having tools in place for the purpose of determining the strategies that encourage student/faculty engagement and interaction.  Allen and Seaman (2013) surveyed more than 2,800 colleges and universities for the purpose of determining the opinions of academic officers regarding online education. The study reported that academic leaders expressed substantial satisfaction with the quality of online learning. This is based on the belief that effective tools are in place to assess the online instructional program. Duncan and Cator (2010) contend that the infrastructure of learning must be adjusted so that educators and students can access each other or resources at any time.  Students seek face-to-face interaction, immediate feedback, and the social presence of the instructor. This interaction gap can be filled by offering synchronous tools as a viable option for learners to connect with faculty and for faculty to connect with learners. Chen (2007) noted that while student engagement and student/faculty interaction requires more time from the instructor, as well as the learner, the learning that occurs is truly meaningful.

The third implication is the periodic evaluation of the online program for quality. Given the changes in technology infrastructure, program evaluation serves to maintain program focus and emphasize student outcomes. Assessment information should be regularly gathered and evaluated to help educators improve delivery (Kiener, 2013). Duncan and Cator (2010) advance the notion that if education programs are more productive, they will create students who are more productive and capable. Moloney and Oakley (2010) reported that online enrollment is expected to grow 20% over the next decade; therefore, online education programs must be regularly evaluated for quality, just as traditional schools are.
 
Thus, it is recommended that the online program at State University continue to be evaluated using both formative and summative assessments. First, evaluation tools must evaluate content design and teaching practices, with equal emphasis on both. Typically, faculty excel in teaching rather than course design; therefore, the practice of having the same faculty serve as both course designers and teachers should be avoided. Instead, master course developers should be utilized to design the content and environment, while many others teach from it. This shift empowers faculty to focus on teaching quality (Pena & Bohn, 2014). Another method of evaluation could occur at the time the student exits the program; structured, open-ended questions could be designed to obtain students’ perceptions of program effectiveness related to content, teacher and student presence, engagement, and learning outcomes. Finally, State University and all educational institutions must endeavor to incorporate best practices for teaching evaluation into a larger institution-wide scheme of evaluation and planning. As Gardiner, Corbitt, and Adams (2010) posit, online teaching should be melded into the institution’s overall program assessment model.

 


References

Allen, E., & Seaman, J. (2011). Going the distance: Online education in the United States, 2011. Wellesley, MA: Babson Survey Research Group and Quahog Research Group.

Allen, E., & Seaman, J. (2013). Changing course: Ten years of tracking online education in the United States. Wellesley, MA: Babson Survey Research Group and Quahog Research Group.

Allen, I. E., & Seaman, J. (2009). Learning on Demand: Online Education in the United States. Babson Park, MA: Babson Survey Research Group.

Arbaugh, J. B., & Rau, B. L. (2007). A study of disciplinary, structural, and behavioral effects on course outcome in online MBA courses. Decision Sciences Journal of Innovative Education, 5, 65-95.

Barbour, M. K., & Reeves, T. C. (2009). The reality of virtual schools: A review of the literature. Computers and Education, 52(2), 402-416.

Chaney, B. H., Eddy, J. M., Dorman, S. M., Glessner, L. L., Green, B. L. & Lara-Alecio, R. (2009). A Primer on Quality Indicators of Distance Education. Health Promotion Practice, 10(2), 222-231.

Chen, S. (2007). Instructional Design Strategies for Intensive Online Courses: An objectivist-constructivist blended approach. Journal of Interactive Online Learning, 6(1). 72-86.

Creswell, J. (2009). Research design: Qualitative, quantitative, and mixed methods approaches (3rd ed.). Los Angeles, CA: Sage.

Duncan, A., & Cator, K. (2010). Transforming American education learning: Powered by technology. National Education Technology Plan. US Department of Education.

Eom, S. B., Wen, H. J., & Ashill, N. (2006). The determinants of students’ perceived learning outcomes and satisfaction in university online education: An empirical investigation. Decision Sciences Journal of Innovative Education, 4, 215-235.

Gardiner, L.R., Corbitt, G., & Adams, S.J. (2010). Program assessment: Getting to a practical how-to model. Journal of Education for Business, 85, 139-144.

Gay, L. R., Mills, G. E., & Airasian, P. (2012). Educational research: Competencies for analysis and application (10th ed.). Upper Saddle River, NJ: Pearson.

Hathaway, D. (2009). Assessing quality dimensions and elements of online learning enacted in a higher education setting (Doctoral dissertation). George Mason University, University Libraries Mason Archival Repository Service. Retrieved from http://hdl.handle.net/1920/4593

Hrastinski, S. (2009). A theory of online learning as online participation. Computers & Education, 52(1), 78-82.

Kopinak, J. (1999). The use of triangulation in a study of refugee well-being. Quality Quantity, 33(2). 169-183.

Leech, N. L., & Onwuegbuzie, A. J. (2007). An array of qualitative analysis tools: A call for data analysis triangulation. School Psychology Quarterly, 22, 557-584. doi:10.1037/1045-3830.22.4.557

Lunenburg, F. C., & Irby, B. J. (2008). Writing a successful thesis or dissertation: Tips and strategies for students in the social and behavioral sciences. Thousand Oaks, CA: Corwin Press.

Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded source book (2nd ed.). Thousand Oaks, CA: Sage.

Miyazoe, T. M., & Anderson, T. (2010). Empirical Research on Learners’ Perceptions: Interaction equivalency theorem in blended learning. European Journal of Open, Distance and E-Learning, 9(2). Retrieved from http://www.eurodl.org/

Moloney, J. F., & Oakley, B. (2010). Scaling Online Education: Increasing access to higher education. Journal of Asynchronous Learning Networks, 14 (1), 55-70.

Moore, J. C., & Shelton, K. (2013). Social and Student Engagement and Support: The Sloan-C quality scorecard for the administration of online programs. Journal of Asynchronous Learning Networks, 17(1), 53-72.

Pena, A., & Bohn, L. (2014). Assessing online faculty: More than student surveys and design rubrics. Quarterly Review of Distance Education, 15(3), 25-34.

Puzziferro, M., & Shelton, K. (2008). A Model for Developing High-Quality Online Courses: Integrating systems approach with learning theory. Journal of Asynchronous Learning Networks, 12(3-4), 119-136.

Romero, M., & Barbera, E. (2011). Quality of Learners’ Time and Learning Performance Beyond Quantitative Time-On-Task. The International Review of Research in Open and Distance Learning, 12(5), 125-137.

Ward, M. E., Peters, G., & Shelley, K. (2010). Student and Faculty Perceptions of the Quality of Online Learning Experiences. The International Review of Research in Open and Distance Learning, 11(3), 55-77.

Waters, L., & Leong, P. (2011, June). New roles for the teacher and learning coach in blended learning for K-12. In World Conference on Educational Media and Technology (Vol. 2011, No. 1, pp. 2716-2725).

Watson, J., & Gemin, B. (2009). Management and operations of online programs: Ensuring quality and accountability. Promising Practices in Online Learning. International Association for K-12 Online Learning.


Online Journal of Distance Learning Administration, Volume XVIII, Number 4, Winter 2015
University of West Georgia, Distance Education Center