Faculty Technology Usage Resulting from Institutional Migration to a New Learning Management System


Ryan Rucker
Midlands Technical College
ryan@ryandrucker.com


Steve Downey
Valdosta State University
sedowney@valdosta.edu

Abstract

The research literature is replete with articles discussing how teachers use individual learning management systems. However, very few studies examine how faculty are affected as they move from one platform to another. This study addresses that gap and examines how faculty adapt their online teaching practices as they migrate between systems. In doing so, faculty usage levels were examined for 10 common teaching tools found in two prominent systems, Blackboard and Desire2Learn. Two broad conclusions emerged from the study. First, faculty usage is highly dependent upon system affordances; systems perceived as cumbersome will suppress faculty's use of tools and subsequently alter their teaching practices. Second, faculty adoption rates are not equal across disciplines. As a result, additional training and support may be required for certain units.

Introduction

As technology becomes increasingly integral to the delivery of face-to-face as well as online instruction, universities and colleges are forced to update and migrate systems on a recurring basis.  Consequently, faculty members are required to change from one instructional platform to another, with most of those transitions occurring over the span of one or two semesters.

Because of the complexities involved, universities and colleges must decide when and how to migrate to a completely new learning management system (LMS) to improve the access and quality of instruction used to support student learning (Ryan, Toye, Charron, & Park, 2012).  In addition to solving technological issues such as sustaining legacy data, porting live courses, and integrating new LMSs into existing enterprise systems, institutions are faced with the uncertainty of how, and to what degree, faculty will adapt their teaching practices based upon the tools, interfaces, and systemic affordances of a new LMS.

Understanding why faculty members change their teaching practices during an LMS migration is important because it helps administrators and other key individuals determine the type of training and support to provide during the migration and what additional training and support are needed afterward.  Furthermore, it can help administrators identify areas (e.g., LMS tools) where faculty members did not change their practice and create incentives that encourage faculty to explore those tools or features.

Researchers in the past 15 years have studied the phenomenon of faculty acceptance and use of LMSs in teaching online, hybrid, and traditional face-to-face courses (Al-Busaidi, 2009; Black, Beck, Dawson, Jinks, & DiPietro, 2007; Little-Wiles & Naimi, 2011).  Unfortunately, most of the studies presented in the literature investigated the LMS that the university or college was using at the time of the study.  Very few studies have been conducted to determine faculty members' use of an LMS, and the issues they experience, after a university or college has migrated to a new LMS.

Purpose and Framework

The purpose of the research described in this article was to determine whether faculty members' patterns of teaching with various tools within an LMS changed after a major LMS migration, as compared to the previously used LMS.  Factors expected to influence the study included how many years of experience each faculty member had using an LMS, which tools he or she used in the old system compared to the new system, and what types of courses (online, face-to-face, and hybrid) he or she typically taught.

The problems described above were examined closely by comparing the old LMS, Blackboard Vista, to the new LMS, Desire2Learn.  The overarching research question guiding this study was:  To what extent have faculty members changed their patterns in their use of tools (e.g., announcements/news, discussions, grades) within Desire2Learn compared to Blackboard Vista?

The framework underlying this study incorporated two well-established technology adoption models:  (1) Venkatesh and Davis's (1996, 2000) Technology Acceptance Model (TAM) and (2) the Levels of Use model (LoU) by Hall, Loucks, Rutherford, and Newlove (1975).  TAM has been used in many research studies to predict end-user acceptance of various information technology systems by determining the user's perceived ease of use and perceived usefulness of the system (Venkatesh & Davis, 1996, 2000; Venkatesh, Morris, Davis, & Davis, 2003).  At its essence, TAM suggests that if the user's perceived ease of use and usefulness is high, the user will accept and use the system.  For this study, only those factors assessing perceived ease of use were incorporated.  The factors assessing perceived usefulness were omitted because they have the greatest value where adoption of unfamiliar technologies is being considered.  That is not the case in this study, as faculty already are familiar with the general nature and purpose of LMS technologies.

As with TAM, Hall et al.'s LoU has been used in numerous studies to "account for the individual variation in use of an innovation" (Hall et al., 1975, p. 52).  LoU recognizes that outside variables can affect a user's acceptance of an innovation, but "what actually happens in the individual application of the innovation is open to tremendous variation" (p. 52) and needs to be investigated.  The attribute of the LoU model that makes it particularly appropriate for this study is that it was developed specifically for use with educators, in order to appraise how teachers went about adopting new practices and technologies into their teaching.  Details regarding both of these theories are provided in the following section.

Review of Literature

A review of the literature found that Everett Rogers' (1962, 2003) Diffusion of Innovation theory serves as the foundation on which additional models have been developed to help determine the adoption of technology systems and innovations.  Two prominent models drawing upon Rogers' work include Venkatesh et al.'s (2003) Technology Acceptance Model and Hall et al.'s (1975) Levels of Use of the Innovation.  Both of these models are explained further in the sections below.

Adoption Behaviors

One model used to determine adoption of instructional technology in educational environments is the diffusion of innovation theory (Medlin, 2001).  This theory, developed by Rogers (2003), defines diffusion as "the process in which an innovation is communicated through certain channels over time among the members of a social system" (p. 5) and innovation as "an idea, practice, or project that is perceived as new by an individual or other unit of adoption" (p. 12).  Rogers suggested that the innovation-decision process consists of five stages:  knowledge, persuasion, decision, implementation, and confirmation.  These stages are similar to those of other multi-stage models, including Hall's.  During the knowledge stage, "an individual learns about the existence of innovation and seeks information about the innovation" (Sahin, 2006, p. 16).  In the persuasion stage, the individual forms his or her own opinion of the innovation.  In the decision stage, the individual determines whether to adopt or reject the innovation.  If the individual accepts the innovation, it is during the implementation stage that "an innovation is put into practice" (p. 17).  Finally, in the confirmation stage, the individual who has adopted the innovation "look[s] for support for his or her decision" (p. 17).

In addition, Rogers (2003) classified adopters into one of five categories based on their innovativeness:  innovators, early adopters, early majority, late majority, and laggards.  Innovators are risk takers, being the first to adopt the new idea, practice, or project.  Like innovators, early adopters will adopt the new idea but "tend to look to innovators for information, guidance, and validation" (Jordan, Doherty, Jones-Webb, Cook, Dubrow, & Mendenhall, 2012, p. 67).  The early majority are not the first to adopt the new idea but do not like being among the last to try it.  The late majority "begin the adoption process after a large majority of the society has adopted the innovation" (Celik, Sahin, & Aydin, 2014, p. 302).  Finally, laggards are very traditional and are the last to adopt the idea, if they adopt it at all.

Technology Acceptance Model

The Technology Acceptance Model, otherwise known as TAM, is an established framework that has been used in many studies to predict technology acceptance and use by end-users (especially the adoption of various software applications/systems).  Overall, TAM draws widespread support and has been shown to predict about 40% of a system's use (Elwood et al., 2006).  The model, initially created by Davis (1989), was derived as an "adaptation of the TRA (Theory of Reasoned Action) specially tailored for modeling user acceptance of information systems" (p. 985).  Davis, Bagozzi, and Warshaw (1989) describe the model as a framework "used to provide an explanation of the determinants of computer acceptance that is general, capable of explaining user behavior [i.e., use] across a broad range of end-user computing technologies and user populations, while at the same time being both parsimonious and theoretically justified" (p. 985).

While TAM has gone through modifications over the last twenty years, Figure 1 highlights the latest version of the model.  As shown in Figure 1, TAM identifies two particular beliefs that determine end-user acceptance:  perceived usefulness and perceived ease of use.  Ultimately, if an individual determines that a technology system/product is useful and easy to use, that individual has a high likelihood of adopting the system.

Figure 1. The final Technology Acceptance Model.  Adapted from "A model of the antecedents of perceived ease of use: Development and test," by V. Venkatesh and F. D. Davis, 1996, Decision Sciences, 27(3), p. 453.


Perceived ease of use can be defined as "the degree to which the prospective user expects the target system to be free of effort," and perceived usefulness as "the prospective user's subjective probability that using a specific application system will increase his or her job performance within an organizational context" (Park, Lee, & Cheong, 2008, p. 165).  External variables play a critical role in determining a user's perceived ease of use and perceived usefulness.  These variables often include user training, involvement of the user during the design and development process, and the nature of the implementation process (Venkatesh & Davis, 1996).

Levels of Use of the Innovation

The Levels of Use of the Innovation, otherwise known as LoU, is another framework used to predict end-users' acceptance and adoption.  This model, developed by Hall et al. (1975), is an "attempt to define operationally various states of innovation user behavior" (p. 52).  These states are divided into seven levels; Table 1 describes the knowledge of the innovation, or the tasks performed, that characterize a user at each level.

Table 1

LoU Level Descriptions

Level            Description
Non-use          The user has little to no knowledge of the innovation.
Orientation      The user has acquired, or is in the process of acquiring, information about the innovation.
Preparation      The user has acquired knowledge and is preparing to use the innovation.
Mechanical use   The user has begun using the innovation and is focused on short-term, day-to-day use.
Routine          Use of the innovation has stabilized, and few changes are being made to improve it.
Integration      The user is combining his or her own efforts to use the innovation with related activities.
Renewal          The user reevaluates the quality of use of the innovation and seeks major modifications.

Users start at the non-use level as they are introduced to the innovation and move through the remaining levels as they become more familiar with, and adopt, the innovation in their daily lives.  Since all users are unique individuals, a user may not experience all seven levels because "it must be acknowledged that some individuals will be at less efficient levels" (Hall et al., 1975, p. 56).  In comparison to TAM, which recognizes that certain external variables can be a factor in the perceived ease of use and usefulness of an innovation, LoU does not consider variables such as "organizational climate, intervention strategies, and characteristics of decision makers" (p. 52).
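
To make the level numbers concrete, the following minimal sketch (in Python) shows one plausible 1-7 coding of the LoU scale.  This coding is an assumption on our part, but it is consistent with the level numbers cited in the Findings section (e.g., Orientation = level 2, Mechanical use = level 4).

    import math

    # Assumed 1-7 coding of Hall et al.'s (1975) Levels of Use (hypothetical,
    # but consistent with the level numbers cited in the Findings section).
    LOU_LEVELS = {
        1: "Non-use",
        2: "Orientation",
        3: "Preparation",
        4: "Mechanical use",
        5: "Routine",
        6: "Integration",
        7: "Renewal",
    }

    def interpret(mean_score: float) -> str:
        """Map a mean usage score onto the LoU level it falls within."""
        level = min(7, max(1, math.floor(mean_score)))
        return f"mean {mean_score:.2f} falls within level {level} ({LOU_LEVELS[level]})"

    print(interpret(3.85))  # e.g., level 3, Preparation
    print(interpret(4.78))  # e.g., level 4, Mechanical use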

Hall and Hord (2011) later expanded the LoU by establishing the Concerns-Based Adoption Model (CBAM).  This model comprises two major dimensions:  Stages of Concern (SoC) and the LoU.  The SoC dimension "describes the feelings and concerns experienced with regard to an innovation" (Hosman & Cvetanoska, 2013, p. 31), while the LoU dimension "involves individuals' behaviors" as they move through the process of change (p. 31).

Design and Methods

To fulfill the study's purpose and understand factors affecting faculty's adaptation of teaching practices as they migrate from one LMS platform to another, more than 1,200 faculty members were surveyed regarding their technology usage levels and the barriers and issues they faced when migrating their courses from Blackboard Vista to Desire2Learn.

Participants

The participants in this study consisted of faculty members, both fulltime and adjunct, within a large university system in the southeastern United States.  This system comprises 14 institutions of higher education, all of which utilized the same LMS and all of which went through the migration from Blackboard Vista to Desire2Learn at the same time.  All 14 institutions were invited to participate in the study.  Accepting the invitation were two universities located in the southern and southwestern parts of the state in which the university system operates.

The university in the southern part of the state serves approximately 10,000 undergraduate students and 2,200 graduate students.  This university is classified under the Carnegie Classification as a large, four-year, primarily residential university and employed 728 fulltime and part-time faculty.  In comparison, the university in the southwest serves approximately 8,164 undergraduate and graduate students.  This university is classified under the Carnegie Classification as a medium, four-year, primarily residential university and employed 508 fulltime and part-time faculty.

To deliver appropriate opportunities to learn more about the tools and features of D2L (the new LMS), both universities provided faculty members with multiple training options.  One option was face-to-face workshops, which addressed the new tools and how each tool's interface differed from Blackboard Vista (the previous LMS).  In addition, both universities provided asynchronous video tutorials for viewing.  These tutorials covered the same information presented in the face-to-face workshops but without real-time collaboration.  Finally, both universities offered one-on-one assistance, by appointment with an instructional designer/technologist, to help faculty members migrate course content from Blackboard Vista to D2L.  During these sessions, faculty members had the opportunity to ask questions about D2L tools, the new interface, or other areas of concern to better learn the system.

Data Collection Procedures

Once formal approval and cooperation were obtained from campus administrators at each of the participating sites, emails were sent to all fulltime and part-time faculty members at these institutions soliciting their responses to an online survey, described below.  Participants had an average of two weeks to respond to the survey before another email reminder was sent.  A total of four solicitation emails were sent during a three-month span, from mid-October to mid-December, in the semester in which the LMS migration formally occurred.  This time span was intentionally selected in order to provide faculty with sufficient time to acclimate to the new LMS platform and to seek out solutions to technical challenges they encountered and/or to adapt their teaching practices to the affordances and limitations of the new system.

Survey Instrument

To gather the necessary data for this study, a survey design was utilized with an instrument that had both Likert-scale and open-ended questions.  The survey itself was divided into five sections:  (1) Demographic Information, (2) Faculty's Level of Use of LMS Tools, (3) Experience Using Desire2Learn, (4) Learning How to Use Desire2Learn, and (5) Open-Ended Questions.  To ensure high construct validity for the instrument, each of the sections, except for Demographic Information, was directly mapped to the TAM and LoU models.  Internal reliability was measured using Cronbach's alpha.  The overall rating of α = .89 (M = 102.36, SD = 26.46) indicated a high degree of internal consistency.
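
For readers unfamiliar with the statistic, the sketch below shows how Cronbach's alpha can be computed from an item-response matrix.  The data are hypothetical and serve only to illustrate the calculation; they are not the study's survey responses.

    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]                         # number of survey items
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical Likert responses: 5 respondents x 4 items.
    responses = [
        [4, 5, 4, 4],
        [2, 3, 2, 3],
        [5, 5, 4, 5],
        [3, 3, 3, 2],
        [4, 4, 5, 4],
    ]
    print(f"alpha = {cronbach_alpha(responses):.2f}")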

Section 2, Faculty's Level of Use of LMS Tools, was comprised of two subscales, with each subscale assessing faculty's use of tools (e.g., discussion boards, grade books) in one of the two LMSs.  Subscale 1 examined usage of Blackboard tools and Subscale 2 examined usage of Desire2Learn tools.  These two subscales were used to make direct comparisons of equivalent tools between the two systems.


Data Analysis

To answer the overarching research question, “To what extent have faculty members changed their patterns in their use of tools (e.g., announcements/news, discussions, grades, etc.) within Desire2Learn compared to Blackboard Vista?” two hypotheses were formed.  Hypothesis 1.1 focused on tool usage levels and Hypothesis 1.2 examined if factors such as faculty rank and types of courses taught served as confounding variables affecting tool usage levels.

To address Hypothesis 1.1, a paired t test was executed on the ratings from the two subscales for each tool located within Section 2 of the survey.  This approach was used because a paired t test is appropriate "when there are two conditions and the same participants took part in both conditions" (Field, 2009, p. 325).  It also allowed for direct comparisons of equivalent tools on a one-to-one basis.  To investigate Hypothesis 1.2, i.e., whether a relationship existed between tool usage and (1) faculty status and (2) types of courses taught, a two-way factorial multivariate analysis of variance (MANOVA) was constructed.  The MANOVA examined two independent variables (types of courses taught and faculty status) against the ratings on the two subscales located within Section 2 of the survey.  The findings for both hypotheses are presented below.
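
As a concrete illustration of the first analysis, the sketch below runs a two-tailed paired t test on hypothetical LoU ratings for a single tool, where each position represents the same (simulated) faculty member rated in both systems.  The data are invented for illustration; scipy's ttest_rel implements the paired test described by Field (2009).

    from scipy import stats

    # Hypothetical LoU ratings (1-7) for one tool; each position represents
    # the same faculty member rated in both systems (the paired condition).
    blackboard   = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]
    desire2learn = [4, 5, 3, 5, 4, 5, 4, 3, 5, 4]

    t_stat, p_value = stats.ttest_rel(blackboard, desire2learn)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # negative t => higher D2L mean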

Findings

As stated previously, the survey was distributed to 1,236 faculty members over the span of three months, specifically mid-October through mid-December.  Of these 1,236 faculty members, 305 responded to the survey, for an initial response rate of 24.7%.  A total of 51 entries were removed due to incomplete responses or missing sections within the survey, resulting in a final response rate of 20.55% with an estimated margin of error of 5.5%.
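
As a check on these figures, the short sketch below reproduces the two response rates and the approximate 5.5% margin of error.  The margin-of-error calculation assumes a 95% confidence level, p = .5, and a finite population correction; those assumptions are ours, as the original calculation parameters are not stated.

    import math

    invited, responded, usable = 1236, 305, 254
    print(f"initial response rate: {responded / invited:.1%}")  # 24.7%
    print(f"final response rate:   {usable / invited:.2%}")     # 20.55%

    # Margin of error for a proportion, with finite population correction.
    z, p = 1.96, 0.5  # assumed: 95% confidence, worst-case proportion
    fpc = math.sqrt((invited - usable) / (invited - 1))
    moe = z * math.sqrt(p * (1 - p) / usable) * fpc
    print(f"margin of error: {moe:.1%}")                        # ~5.5%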

Participant Demographics

More female faculty members completed the survey than males, 154 (60.6%) to 100 (39.4%).  Of the 254 faculty members who completed the survey, Institution #1 (henceforth 'Southern U.') completers included 97 females (63%) and 57 males (37%), which is similar to Southern's faculty population breakdown of 52% female versus 48% male.  Institution #2 (i.e., Southwestern U.) completers included 57 females (57%) and 43 males (43%), with males showing a somewhat lower participation rate than their share of the faculty population, 51% male to 49% female.  Although there are minor differences between the gender breakdown of the faculty population and that of the sample, the differences are not large enough to undermine the credibility of the study's findings.

In terms of teaching experience, women and men had similar distribution patterns when it came to teaching online/hybrid courses, with the majority of the faculty having 10 years or less of experience with online delivery; see the '% of Males' and '% of Females' figures in Table 2.  In terms of face-to-face courses, males tended to have more experience, with 25% of male respondents having 20+ years of classroom-based teaching compared to 16.2% of females; see Table 3.

Table 2
Years of Experience Teaching Online/Hybrid Courses

Teaching Experience   Males   % of Males   % of All   Females   % of Females   % of All
< 1 yr.                 37      (37)        (14.6)       55       (35.7)        (21.6)
1-5 yrs.                40      (40)        (15.8)       62       (40.3)        (24.4)
6-10 yrs.               12      (12)         (4.7)       23       (14.9)         (9.1)
11-15 yrs.              10      (10)         (3.9)       11        (7.1)         (4.3)
16-20 yrs.               1       (1)         (0.4)        2        (1.3)         (0.8)
> 20 yrs.                0       (0)         (0.0)        1        (0.7)         (0.4)
Total                  100     (100)        (39.4)      154      (100.0)        (60.6)

Table 3
Years of Experience Teaching Face-to-Face Courses

Teaching Experience   Males   % of Males   % of All   Females   % of Females   % of All
< 1 yr.                  2       (2)         (0.8)       10        (6.5)         (3.9)
1-5 yrs.                20      (20)         (7.9)       42       (27.3)        (16.5)
6-10 yrs.               22      (22)         (8.7)       30       (19.5)        (11.8)
11-15 yrs.              16      (16)         (6.3)       26       (16.9)        (10.3)
16-20 yrs.              15      (15)         (5.9)       21       (13.6)         (8.3)
> 20 yrs.               25      (25)         (9.8)       25       (16.2)         (9.8)
Total                  100     (100)        (39.4)      154      (100.0)        (60.6)

When examining the breakdown of the types of courses taught by participants, women tended to teach online or combinations of online/hybrid courses more than males, who tended to teach face-to-face courses more frequently; see Table 4.  This is consistent with the previous finding regarding males' longer experience in the classroom.

Table 4
Frequency of Faculty by Gender and Types of Courses Taught

Types of Courses   Male (%)     Female (%)
All face-to-face    46 (18.1)    49 (19.3)
All hybrid          27 (10.6)    51 (20.1)
F2F/Hybrid          13  (5.1)    21  (8.3)
F2F/Online           6  (2.4)    12  (4.7)
Online/Hybrid        8  (3.2)    21  (8.3)
Total              100 (39.4)   154 (60.6)

The remainder of this section is divided into two parts:  (1) findings related to null hypothesis 1.1; and, (2) findings related to null hypothesis 1.2.

Findings Related to Null Hypothesis 1.1

To test Null Hypothesis 1.1 “There are no significant differences in faculty members’ patterns in their use of tools within Blackboard Vista compared to Desire2Learn,” two-tailed, paired t tests were conducted for each of the 10 tools commonly found in both systems.  For each of these tools, tests were conducted at the overall, institution, and college/school levels to determine if any significant differences or discernable patterns could be found that would denote meaningful changes in teaching practices as faculty migrated from one LMS platform to another.  To evaluate the use of each tool within each LMS, LoU indicators from Section 2 of the survey were examined to determine how often each instructor used a given tool within the Blackboard Vista system compared to the Desire2Learn system.  The higher the level indicated, the greater the usage of that tool, i.e., the greater the adoption.  Results of these tests are described below.
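
To make the multi-level testing concrete, the sketch below loops the same paired test over tools and institutions in a hypothetical long-format data set.  The column names (tool, group, bb, d2l) and the values are illustrative only, not the study's data.

    import pandas as pd
    from scipy import stats

    # Hypothetical long-format data: one row per respondent per tool, with
    # paired LoU ratings for Blackboard (bb) and Desire2Learn (d2l).
    df = pd.DataFrame({
        "tool":  ["Grade Book"] * 6 + ["Email"] * 6,
        "group": (["Southern"] * 3 + ["Southwestern"] * 3) * 2,
        "bb":    [3, 4, 3, 4, 5, 4, 2, 3, 3, 4, 4, 3],
        "d2l":   [5, 5, 4, 5, 5, 5, 3, 4, 5, 5, 4, 4],
    })

    # Overall test per tool, then the same test within each institution.
    for tool, sub in df.groupby("tool"):
        t, p = stats.ttest_rel(sub["bb"], sub["d2l"])
        print(f"{tool}: overall t = {t:.2f}, p = {p:.3f}")
        for group, g in sub.groupby("group"):
            t, p = stats.ttest_rel(g["bb"], g["d2l"])
            print(f"  {group}: t = {t:.2f}, p = {p:.3f}")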

The first thing to be noted is that faculty overwhelmingly indicated that their levels of use for each of the tools were in the middle to lower levels of Hall et al.'s Levels of Use scale.  As shown in Table 5, the mean level of use scores ranged from 1.42 (SCORM in Blackboard Vista) to 4.78 (Grade Book in D2L).  This indicates that faculty still consider their comfort levels relatively low, with room for additional skill development for each of these tools.

The second interesting finding is that, after the migration from Blackboard Vista to Desire2Learn, faculty significantly increased their level of use for all but two of the tools common to both systems (see Table 5).  The only tools that did not see a statistically significant increase in faculty usage were the SCORM and Synchronous Session (i.e., Wimba/Collaborate) tools.  In both cases, these were the least used of all the tools evaluated, with mean usage scores of 2.04 or lower in both systems.  Given the lack of initial use of these tools, it is not surprising that they saw little change in usage levels after the migration.

Of those tools where faculty usage significantly increased, the Grade Book and Email tools led the way.  Grade Book usage rose almost a full level, with mean usage levels rising from 3.85 to 4.78.  Based upon Hall et al.'s Levels of Use scale, this indicates that faculty progressed from high-level 'preparation' (level 3) users to nearly 'routine' (level 5) users.  The comparably lower standard deviation associated with the 4.78 usage rating indicates that many of the faculty who considered themselves 'level 2' Grade Book users in Blackboard increased their confidence and frequency of use in Desire2Learn to the point that they advanced to confident 'level 4' users.  As with the Grade Book, there was a significant increase in faculty usage levels for Email, with mean usage levels rising from 3.23 to 4.07.  Reflecting back on the Levels of Use summary in the literature review, this rise indicates that faculty evolved from 'preparation' (level 3) users to 'mechanical' (level 4) users.

Granted, Email did not attain the comparably high 'level 4' status seen with the Grade Book, i.e., a 4.07 usage rating versus 4.78, but usage rates still went up markedly.  The same can be said for faculty usage rates of the Assessment and Discussion Board tools, both of which were associated with significant increases in usage, with faculty rising from 'level 3' users to 'level 4' users.  The final two tools where statistically significant increases in usage occurred were the Announcements/News and Selective Release tools.  The mean usage level associated with the Announcements/News tool rose 0.41, while the mean usage level for Selective Release increased only 0.25.  Although these changes were statistically significant, neither was large enough to cause a change in LoU levels, i.e., from 'level 3' to 'level 4'.  Faculty simply increased their usage from the low end of a usage level to the higher end of that same level.

Table 5

Summary of t-Tests Overall Results

                       Blackboard        Desire2Learn
LMS Tool               Mean     SD       Mean     SD       t-value   p-value
Announcement/News      3.35     2.01     3.76     2.07      -2.79     .006*
Assessment             3.34     2.16     4.10     2.03      -6.73     .000*
Discussion Board       3.31     2.20     4.04     2.04      -6.01     .000*
Email                  3.23     2.15     4.07     2.08      -6.86     .000*
Grade Book             3.85     2.21     4.78     1.83      -7.82     .000*
Groups                 2.25     1.80     2.57     1.90      -3.22     .001*
Learning Module        3.31     2.21     4.19     2.08      -7.11     .000*
SCORM                  1.42     1.40     1.49     1.21      -0.75     .454
Selective Release      2.52     2.07     2.77     2.03      -2.02     .044*
Synchronous Session    2.04     2.00     2.04     1.69       0.03     .976

* denotes statistically significant results, where p < .05

To determine whether the changes in faculty usage levels were consistent across institutions and across the various colleges/schools within them, additional analyses were conducted, beginning with an examination of usage levels at each of the participating institutions, denoted as Southern and Southwestern in reference to their geographic locations within the state.

For all but one LMS tool, both institutions generated similar results, i.e., both significant or both non-significant, for each of the LMS tools reviewed.  Both institutions witnessed significant increases in usage levels for the Assessment, Discussion Board, Email, Grade Book, Groups, and Learning Module tools.  Although the degree of change varied from institution to institution (see Table 6 for details), both campuses generated t-scores that were large enough to produce probability levels below .05.  Both institutions similarly generated non-significant results for the SCORM, Selective Release, and Synchronous Session tools.  The one LMS tool associated with differing results between institutions was the Announcement/News tool:  Southwestern saw little change in usage levels, while usage at Southern increased from 2.87 to 3.71, generating a t-score of -3.89 and a p-value < .05.

Given the general consistency in the results across institutions, save the Announcement/News tool, it is reasonable to argue that no single institution was driving the overall results and thereby skewing interpretation of the data.  Conversely, the overall consistency of results strengthens the potential generalizability of these findings to the other institutions in the population.

Table 6
Summary of Means and t-Tests by Institution

                       Southwestern U.             Southern U.
LMS Tool               BB      D2L     t-value     BB      D2L     t-value
Announcement/News      3.66    3.79    -0.67       2.87    3.71    -3.89*
Assessment             3.63    4.32    -4.63*      2.89    3.76    -5.04*
Discussion Board       3.64    4.06    -2.71*      2.81    3.99    -6.60*
Email                  3.65    4.42    -5.17*      2.58    3.54    -4.50*
Grade Book             4.17    4.74    -3.94*      3.36    4.85    -7.74*
Groups                 2.38    2.66    -2.29*      2.06    2.42    -2.28*
Learning Module        3.47    4.29    -5.14*      3.05    4.03    -4.94*
SCORM                  1.53    1.58    -0.41       1.26    1.35    -0.76
Selective Release      2.56    2.79    -1.44       2.46    2.74    -1.43
Synchronous Session    2.14    2.25    -0.79       1.89    1.71    -0.89

* denotes statistically significant results, where p < .05

Whereas the institutional analyses produced little variance, analyses of usage levels by college/school discipline produced clear and consistent trends in how faculty in one college/school varied from the others.  In particular, faculty in the colleges of education (EDU) and schools of nursing and health sciences (NH) consistently produced statistically significant results denoting meaningful increases for the majority of the LMS tools; see Table 7 for details.  Faculty in the colleges of arts and sciences (A&S), schools of business (BIZ), and schools of fine arts (FA) did generate some statistically significant results but did so on a far less consistent basis (see Table 7).  The one LMS tool that produced significant results across all five areas was the Grade Book:  in each of the five areas, faculty increased their usage levels at a rate high enough to produce t-scores with probabilities < .05.

Table 7
Summary of t-Tests by College/School

LMS Tool               A&S    BIZ    EDU    FA           NH
Announcement/News                    *      Southern*    *
Assessment             *             *      *            *
Discussion Board                     *      *            *
Email                  *      *      *
Grade Book             *      *      *      *            *
Groups                               *                   *
Learning Module        *             *                   *
SCORM
Selective Release                                        Southern*
Synchronous Session

* denotes statistically significant results, where p < .05; "Southern*" denotes a result that was significant at Southern U. only

Findings Related to Null Hypothesis 1.2

To analyze Null Hypothesis 1.2, "There are no significant differences in faculty members' patterns in teaching between the types of courses the faculty member usually teaches and their faculty status while using tools within Blackboard Vista compared to Desire2Learn," a two-way factorial Multivariate Analysis of Variance (MANOVA) was constructed.  The MANOVA was calculated using the responses for the level of use within Blackboard and Desire2Learn from Section 2 of the survey.
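
A sketch of this type of analysis appears below, using the MANOVA class from statsmodels on a small, hypothetical data frame.  The variable names (bb_use, d2l_use, status, course_type) and the values are illustrative assumptions, not the study's actual data.

    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA

    # Hypothetical data: each row is one respondent's mean LoU subscale
    # scores, plus the two independent variables used in the MANOVA.
    df = pd.DataFrame({
        "status":      ["fulltime"] * 6 + ["adjunct"] * 6,
        "course_type": ["f2f", "f2f", "hybrid", "hybrid", "online", "online"] * 2,
        "bb_use":      [2.4, 2.1, 3.5, 3.1, 3.9, 3.6, 1.6, 1.5, 2.4, 2.5, 3.2, 3.8],
        "d2l_use":     [2.7, 2.8, 4.3, 3.7, 4.3, 4.0, 1.9, 1.8, 3.0, 2.9, 3.9, 4.1],
    })

    # Two-way factorial MANOVA: both dependent subscales modeled against
    # faculty status, course type, and their interaction.
    fit = MANOVA.from_formula("bb_use + d2l_use ~ C(status) * C(course_type)", data=df)
    print(fit.mv_test())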

As shown in Table 8, fulltime faculty members who teach online and hybrid courses had higher mean levels of use than nearly all other faculty members teaching in other course categories.  The one exception is a small group (n = 5) of adjunct faculty who teach both face-to-face and online courses; this small group generated slightly higher means than their fulltime counterparts teaching the same combination of courses.  Overall, faculty's level of use responses ranged between Orientation (level 2) and Preparation (level 3) for Blackboard Vista and between Preparation (level 3) and Mechanical (level 4) for Desire2Learn.  When examined by status, adjunct faculty indicated a lower level of use overall for both Blackboard Vista and Desire2Learn than fulltime faculty.  Based on a comparison of means, Desire2Learn had a higher level of use than Blackboard Vista for both fulltime and adjunct faculty members.

Table 8

Descriptive Statistics Associated with MANOVA

                     Blackboard                             Desire2Learn
                     Fulltime           Adjunct             Fulltime           Adjunct
Delivery Format      M     SD    N      M     SD    N       M     SD    N      M     SD    N
F2F only             2.36  1.35   74    1.55  0.83   21     2.72  1.20   74    1.86  0.92   21
F2F and hybrid       3.53  1.32   28    3.10  1.70    6     4.31  0.87   28    4.03  1.60    6
F2F and online       3.62  1.23   13    3.84  0.48    5     4.00  1.23   13    4.10  0.69    5
Hybrid only          3.10  1.37   64    2.41  1.53   14     3.74  1.05   64    2.97  1.02   14
Online and hybrid    3.89  1.09   16    3.21  1.18   13     4.31  0.90   16    3.89  0.91   13
All Formats          2.98  1.42  195    2.47  1.40   59     3.50  1.26  195    2.98  1.35   59

Discussion

Educators continuously have to change their patterns in teaching in order to adopt new roles, new technologies, and new practices.  This is especially true for online/hybrid courses, where technologies and best practices are constantly evolving (Harper et al., 2004; Moore & Kearsley, 1996; Skylar, 2004; Tallent-Runnels et al., 2006).  These studies, and others like them, argue that system migrations are a two-party dance.  Institutions need to make a conscious effort to identify the types of tools, technologies, and methods currently used in distance education in order to support and augment them.  At the same time, faculty members need to be willing to explore and adopt new tools, technologies, and methods into their own teaching.  Without contributions and concessions from both parties, toes will get stepped on, and ultimately it is the students who will suffer.

Little-Wiles and Naimi (2011) stated that 87% of faculty members almost always use an LMS to support delivery of instruction.  In addition, Black et al. (2007) stated that faculty members will adopt an LMS if the system supports their different content areas, philosophies, and instructional styles.  The results from the current study support both of these claims and further support the notion that faculty will adopt a variety of LMS tools when the perceived 'ease of use' is apparent (Venkatesh et al., 2003).

In addition to perceived 'ease of use', faculty recognized another adoption factor from Venkatesh et al.'s TAM framework, 'perceived utility', in the tools of the new LMS.  In particular, a number of faculty members stated in their open-ended responses that they changed their usage levels of LMS tools to adopt and conform to the Quality Matters rubric.  These faculty members indicated that the tools within the LMS helped them better align the design of the course to meet these standards.  Collectively, the analyses of faculty usage levels (see Tables 5-7), along with supporting evidence from the open-ended questions in Section 5 of the survey, suggest that some faculty members drastically changed their use of the LMS and its tools, especially those in the colleges of education and the schools of nursing and health sciences.

As much as faculty in education and nursing and health sciences were quick to recognize the potential of the new LMS tools and increase their usage of them, it remains to be seen whether the same levels of widespread adoption will be attained by faculty in arts and sciences, business, and fine arts.  Increased usage levels were seen with some tools, but they were largely inconsistent when comparing usage rates across all of the colleges at each institution.  These results align with previous research suggesting that, due to their professional training, education faculty members have a stronger background in instructional design and integrate technologies into their curriculum and teaching more readily (Sahin, 2008).

Gautreau (2011) stated that one major factor in LMS tool adoption is a faculty member's status.  To determine whether faculty status and the types of courses typically taught had an effect on overall levels of use, a MANOVA was performed.  Results indicated that fulltime faculty members had a significantly higher level of use for both Blackboard Vista and Desire2Learn compared to adjunct faculty members.  Not surprisingly, when it came to 'types of courses taught', i.e., face-to-face versus online/hybrid, faculty members who teach face-to-face courses had a significantly lower level of use than those who teach online or hybrid courses.

All in all, the findings from this study yield two difficult-to-refute conclusions.  First, faculty's usage of a system is highly dependent upon the affordances, i.e., ease of use, of the system.  Systems viewed as cumbersome or unnecessarily complicated will suppress faculty's usage of the tools and features within that system.  Conversely, systems with better interface usability, i.e., greater ease of use, can actually stimulate faculty to adopt the technology and potentially enhance their teaching practices.  The second conclusion is that faculty adoption rates are not equal across disciplines.  As a result, additional training and support may be required for certain units on campus.  Administrators need to plan for these needs and allocate their resources accordingly.

 


References

Al-Busaidi, K. (2009). The impact of learning management system characteristics and user characteristics on the acceptance of e-learning. International Journal of Global Management Studies, 1(2), 75-91.

Black, E., Beck, D., Dawson, K., Jinks, S., & DiPietro, M. (2007). The other side of the LMS: Considering implementation and use in the adoption of an LMS in online and blended learning environments. TechTrends, 51(2), 35-39.

Celik, I., Sahin, I., & Aydin, M. (2014). Reliability and validity study of the mobile learning adoption scale developed based on the diffusion of innovations theory. International Journal of Education in Mathematics, Science and Technology, 2(4), 300-316.

Davis, F. D. (1989). Perceived usefulness, perceived ease of use and user acceptance of information technology. MIS Quarterly, 13(3), 319–340.

Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982-1003.

Elwood, S., Changchit, C., & Cutshall, R. (2006). Investigating students' perceptions on laptop initiative in higher education: An extension of the technology acceptance model. Campus-Wide Information Systems, 23(5), 336-349. doi:10.1108/10650740610714099

Field, A. (2009). Discovering statistics using SPSS (3rd ed.). Thousand Oaks, CA: Sage Publications Inc.

Gautreau, C. (2011). Motivational factors affecting the integration of a learning management system by faculty. Journal of Educators Online, 8(1), 1-25.  Retrieved from http://files.eric.ed.gov/fulltext/EJ917870.pdf

Hall, G., & Hord, S. (2011). Implementing change: Patterns, principles, and potholes. Boston, MA: Allyn and Bacon.

Hall, G., Loucks, S., Rutherford, W., & Newlove, B. (1975). Levels of use of the innovation: A framework for analyzing innovation adoption. Journal of Teacher Education, 26(1), 52-56.

Harper, K. C., Chen, K., & Yen, D. C. (2004). Distance learning, virtual classrooms, and teaching pedagogy in the Internet environment. Technology in Society, 26(4), 585-598.

Hosman, L., & Cvetanoska, M. (2013). Technology, teachers, and training: Combining theory with Macedonia's experience. International Journal of Education and Development using Information and Communication Technology, 9(3), 28-49.

Jordan, C., Doherty, W. J., Jones-Webb, R., Cook, N., Dubrow, G., & Mendenhall, T. J. (2012). Competency-based faculty development in community-engaged scholarship: A diffusion of innovation approach. Journal of Higher Education Outreach and Engagement, 16(1), 65-95.

Little-Wiles, J., & Naimi, L. L. (2011). Faculty perceptions of and experiences in using the blackboard learning management system. Conflict Resolution & Negotiation Journal, 4(1), 1-13.

Medlin, B. D. (2001). The factors that may influence a faculty member's decision to adopt electronic technologies in instruction (Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses database. (UMI No. 3095210)

Moore, M. G. & Kearsley, G. (1996).  Distance education:  A system view.  Belmont, CA:  Wadsworth.

Park, N., Lee, K. M., & Cheong, P. H. (2008).  University instructors’ acceptance of electronic courseware: An application of the technology acceptance model.  Journal of Computer-Mediated Communication, 13, 163–186.

Rogers, E. M. (1962). Diffusion of innovations. New York, NY: Free Press.

Rogers, E. M. (2003). Diffusion of innovations (5th ed.). New York, NY: Free Press.

Ryan, T., Toye, M., Charron, K., & Park, G. (2012). Learning management system migration: An analysis of stakeholder perspectives. International Review of Research in Open & Distance Learning, 13(1), 220-237.

Sahin, I. (2006). Detailed review of Rogers' diffusion of innovations theory and educational technology-related studies based on Rogers' theory. The Turkish Online Journal of Educational Technology, 5(2), 14-23.

Sahin, I. (2008). From the social-cognitive career theory perspective: A college of education faculty model for explaining their intention to use educational technology. Journal of Educational Computing Research, 38(1), 51-66.

Skylar, A. A. (2004). Distance education: An exploration of alternative methods and types of instructional media in teacher education (Doctoral dissertation). Available from ProQuest Dissertations and Theses database. (UMI No. 3144530)

Tallent-Runnels, M. K., Lan, W. Y., Cooper, S., Ahern, T. C., Shaw, S. M., & Liu, X. (2006). Teaching courses online: A review of the research. Review of Educational Research, 76(1), 93-135. Retrieved from http://www.jstor.org/stable/3700584

Venkatesh, V., & Davis, F. D. (1996). A model of the antecedents of perceived ease of use: Development and test. Decision Sciences, 27(3), 451-481.

Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186-204.

Venkatesh, V., Morris, M. G., Davis, F. D., & Davis, G. B. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425-478.


Online Journal of Distance Learning Administration, Volume XIX, Number 1, Spring 2016
University of West Georgia, Distance Education Center