Help at 3:00 AM! Providing 24/7 Timely Support to Online Students via a Virtual Assistant


Phu Vu
University of Nebraska at Kearney
vuph@unk.edu


Scott Fredrickson
University of Nebraska at Kearney
fredricksons@unk.edu


Richard Meyer
University of Nebraska at Kearney
meyerdc@unk.edu

Abstract

Given the dearth of research on human-robot interaction in education and the relatively high non-completion rates of online students, this study was conducted to determine the feasibility of using a virtual assistant (VA) to respond to students' questions and concerns and to provide 24/7 online course content support. During a 16-week academic semester, four hundred and seventy-five interactions between the virtual assistant and learners in two online course sections were generated. On average, the virtual assistant had 4.2 interactions with the students each day. Three hundred and twenty-one of the 475 interactions (67.6%) were made on weekends, and 422 interactions (88.8%) were conducted between 6:00 PM and 6:00 AM, times when the human instructor was usually unavailable, thereby providing better and more efficient access for the students. The students were almost unanimous that they enjoyed and appreciated working with the virtual assistant, felt they had better and more immediate support from the VA than they had had in previous online classes, and believed that the VA helped them understand the material better.

Introduction

The growth of online learning enrollment in higher education has far exceeded the growth of traditional, face-to-face enrollment in recent years (Allen & Seaman, 2015; Vu & Fadde, 2014). However, several studies have also shown that online students tend to have lower completion rates than their on-campus peers (Jenkins, 2012; Patterson & McFadden, 2009). The lack of interaction and of timely support are considered primary reasons for the high dropout rate in online learning, especially in asynchronous learning environments. In asynchronous learning, instructors and learners interact mainly via email and discussion boards. If a learner has a question and needs a quick explanation to complete an assignment, he or she generally has two options: post the question to a designated course forum, or email the instructor or graduate assistant. If the student posts the question, he or she must wait for an answer from peers, graduate assistants, or the instructor, and the turnaround time depends on the dynamics of the learning community and the course policy. Alternatively, the learner can email the instructor and wait for a reply, which may be slow if the message is sent on a weekend, at night, or on a holiday. According to Clinefelter and Aslanian (2014), sixty-five percent of online learners logged into their class at night, after work, and/or on the weekend, since most of them were employed while taking online courses. This result is in line with what we found in an internal analysis of online students' behaviors in eight online undergraduate courses, conducted for program evaluation purposes at the university where the researchers worked: nighttime and weekends were the prime times for students to enter their online courses. Conversely, it is a challenge for instructors to respond to students' emails or questions during those same periods. Neither truly dedicated instructors nor dedicated graduate assistants can be online and available around the clock simply waiting for students' inquiries. To address this issue, we created a virtual assistant (VA) that could offer 24/7 support to undergraduate students in two online courses. A design-based research paradigm was used to constantly monitor the implementation of the VA in an authentic learning setting. We did so in order to answer our question: "Can adding a VA be a solution to the issue of providing 24/7 timely support to our online students?"

Virtual Assistant

A virtual assistant may be a foreign concept in educational settings. Before starting this research project, we conducted many searches with a variety of key words and terms, but we could not find any academic reports on the use of VAs in education. Interestingly, however, VAs are quite common in the field of commerce. According to Miller (2014), the VA is the go-to resource for customer care, tech support, and even marketing. Many corporations, such as Verizon, United Services Automobile Association (USAA), AT&T, and Charter, are implementing different types of VAs on their websites to serve as virtual customer service representatives and provide 24/7 support to their online customers. These virtual customer service representatives go by different names, such as "Virtual Assistant" or "Intelligent Assistant"; in this research project, we uniformly call them "Virtual Assistant" (VA). The complexity and functions of these VAs vary. Some have both voice and text-based interaction features, predictive functions capable of making recommendations about purchases and content interaction, and deep knowledge databases. Others have very basic features, such as text-based interaction and general knowledge databases.

Literature Review

As discussed above, the VA is quite common in the field of business. However, as far as we were able to determine, there are no academic reports about the use of VAs in business, such as studies of their effectiveness or of users' perceptions. Due to this lack of literature, and within the scope of this study, we examined a related area of VA application: human-robot interaction (HRI).

According to Goodrich and Schultz (2007), human-robot interaction emerged as a multidisciplinary field in the mid-1990s and early 2000s with the involvement of researchers, practitioners, and professionals from robotics, cognitive science, human factors, natural language, and psychology. However, the robotics discipline is still the main player in HRI, with applications such as robot-assisted search and rescue, assistive robots, and space exploration. Goodrich and Schultz identified five attributes that affect the interactions between humans and robots: 1) level and behavior of autonomy, 2) nature of information exchange, 3) structure of the team, 4) adaptation, learning, and training of people and the robot, and 5) shape of the task. These attributes lean toward HRI applications such as assistive robots and robot-assisted search-and-rescue tasks. For instance, the third attribute, "structure of the team," specifically addresses search-and-rescue applications managed by two or more people, each with a special role on the team.

Few human-robot interaction studies focusing on how people interact with a robot have been reported in the last two decades (Gockley et al., 2005; Goodrich & Schultz, 2007; Torrey, Powers, Marge, Fussell, & Kiesler, 2006). Reports of studies investigating how people interacted with robots orally and/or via text were even rarer. Gockley et al. (2005) were among the first researchers to report a study addressing human-robot verbal interaction. Gockley's research team built a robot named "Valerie," which served as a receptionist who could provide verbal instructions. She was placed in a high-traffic area of the research team's university so that anyone walking through the building, including students, faculty, and visitors, could interact with her. The study's findings revealed that during the nine-month experiment period, on average, over 88 visitors interacted with Valerie daily. However, the average length of those interactions was short, just under half a minute, and the number of people who interacted with Valerie decreased significantly toward the end of the experiment. According to the researchers, personality characteristics and a lack of emotion were the two main problems that robots like Valerie face if they are to attract and engage people on a daily basis.

A more recent and relevant study about how people interact orally and/or via text with a robot was reported by Lee and Makatchev (2009). One hundred and ninety-seven individual interactions, conducted by random passersby with a robot placed in a high-traffic zone of a university over one week, were collected and analyzed. The researchers identified four main categories that emerged from the data: seeking information (such as asking for directions), chatting, greetings (mainly saying hello), and nonsense words/insults. In terms of dialogue style, the researchers found that most people followed the social norms of human conversation while chatting with the robot. In other words, they observed a minimum level of the social norms of human-human dialogue, such as using greetings, saying farewell, and thanking the interlocutor.

In summary, early attempts to examine the nature of HRI provided a preliminary understanding of how HRI operates; however, many questions about HRI applications, especially in educational settings, remain unanswered. To that end, this research project aimed to address some of the questions of HRI application in the online learning environment, using a design-based research paradigm.

Research Method

Design Based Research

According to Barab and Squire (2004), design-based research is not a single approach but rather a series of approaches intended to produce new theories, artifacts, and practices that account for, and potentially affect, teaching and learning in natural settings. As shown in Table 1, Collins (1999) identified seven major differences between traditional research methods and the design-based method. Central to this distinction is that design-based research puts an emphasis on examining real-world practices. In addition, it includes multiple flexible design revisions, many dependent variables, and social interactions. Furthermore, unlike in traditional research methods, participants in design-based research are often not "subjects" assigned to specific treatments; they are viewed as co-participants in both the design and even the analysis process. Finally, researchers in design-based research can systematically modify many features or aspects of their designed context, so that each modification serves as a type of experimentation allowing the researchers to test and generate theories in the natural setting.

Table 1
Comparing Traditional Experimental Research Methods and Design-Based Research Method

Design Decisions

Driven by the need to provide 24/7 online course content support in one of the researchers' undergraduate classes, and given the lack of literature on HRI applications in education, the researchers designed a chatterbot-based virtual assistant, placed the assistant in a real learning setting (two online courses), closely monitored the assistant's conversational logs, and constantly modified the assistant's database (knowledge) during an academic semester. Whenever modifications were made to the virtual assistant, we compared the data generated before and after each modification was implemented to confirm whether the modification met our goal of improving the virtual assistant's competence and, ultimately, creating an effective interaction experience for the learners. All of these steps were grounded in the design-based research paradigm. We considered our learners co-participants in the whole process who helped us make data-based decisions to modify our virtual assistant.

The first step in designing our virtual assistant was conducting a quick, internal needs assessment to identify what we needed to achieve by having a virtual assistant instead of a physical graduate assistant. We learned that most of the students who took the online course offered by one of the researchers were non-traditional students in the online Early Childhood Inclusive program. This program was designed for in-service teachers, assistants in preschools, staff in Head Start programs, and other non-traditional students who were unable to attend regular face-to-face classes on campus. Students' performance data from previous courses, as well as findings cited in similar but non-educational studies, indicated that these students often logged into the courses during evening/night hours and/or on weekends. They were not as technologically savvy as their traditional peers and had numerous daily work- and family-related commitments. In other words, these adult students had busy lives and were time-sensitive. When they logged into the course, they needed to get things done quickly, and if they had questions, they needed quick responses to their inquiries. We hypothesized that a virtual assistant might meet those unique learning needs.

Virtual Assistant Design Process

Based on our hypothesis that a virtual assistant might partly meet the unique learning needs of our students, we created a chatterbot-based virtual assistant named Mary. Mary was placed into two online course sections, each of which had 28 undergraduate students. Because we understood that the students were not technologically savvy, we made it easy and simple for them to chat with our virtual assistant, with no additional login or registration requirement beyond being enrolled in the Blackboard version of the class. They could chat with her at any time after logging into their online courses and selecting the section "Virtual Assistant." A screenshot of Mary's interface is shown in Figure 1.

Figure 1. Mary’s Interface in the Online Course

We also made it clear that students were not discouraged from contacting the instructor directly if they had any course-related questions; the inclusion of the virtual assistant was just one of our efforts to provide faster support to the students. We tested different web-based applications and software to find a virtual assistant that 1) could handle text-based interaction; 2) had a simple interface that required no login and/or registration for users to interact with it; 3) was compatible with the Blackboard learning management system and could be embedded into Blackboard; and 4) gave the administrator full control of the virtual assistant and the ability to monitor it from the back-end. Chatbot4u, a web service for building and deploying chatbots, was selected because it met all of our requirements.

Description of the Virtual Assistant

We created our virtual assistant over two weeks, from identifying her gender and designing her appearance to developing a database she could use to find the answer to a specific question by matching the question's keywords. Most of the information we entered into our virtual assistant's database was based on questions that students in previous offerings of the courses had asked. We examined and categorized all of the questions students in the previous courses had sent to the instructor or posted to the course forums, wrote a detailed answer to each of those questions, and put all of the answers into our virtual assistant's database. Mary was deployed in the two online courses at the beginning of the semester. In addition, to draw students' attention to Mary's presence, the instructor sent a notification to all of the learners explaining what Mary could do and making it clear that students were always welcome to contact the instructor instead of chatting with the virtual assistant. Whenever a student clicked on the "Virtual Assistant" tab, Mary would appear, as shown in Figure 1, with the following greeting.

Hello, how are you today? My name is Mary. I am [instructor’s name deleted] Virtual Graduate Assistant. Noted that I am still learning your language, so please be specific and brief. For instance, instead of asking "What is the assignment deadline?" you may ask "[Course name deleted] assignment 1 deadline". Thank you for your patience and understanding!
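For readers interested in how such a keyword-driven assistant can work, the following is a minimal sketch, in Python, of the kind of lookup described above. It is purely illustrative: it is not Chatbot4u's implementation, whose internals were not available to us, and every question, answer, and name in it is hypothetical.

    # Illustrative sketch only: a keyword-matching FAQ lookup of the kind
    # described above. Not Chatbot4u's implementation; all data is hypothetical.
    FAQ = {
        ("assignment", "1", "deadline"): "Assignment 1 is due Sunday at 11:59 PM.",
        ("assignment", "submit"): "Submit assignments through the Assignments tab in Blackboard.",
    }
    FALLBACK = ("I am sorry, I do not know the answer to that yet. "
                "Please email the instructor, and I will learn it for next time.")
    unanswered = []  # questions the assistant could not match, reviewed later

    def reply(question):
        """Return the stored answer whose keywords all appear in the question."""
        words = set(question.lower().split())
        for keywords, answer in FAQ.items():
            if all(k in words for k in keywords):
                return answer
        unanswered.append(question)  # flagged so an answer can be added later
        return FALLBACK

    print(reply("assignment 1 deadline"))   # matched: returns the canned answer
    print(reply("How old are you?"))        # unmatched: logged, returns FALLBACK

In this sketch, the daily back-end review described below corresponds to inspecting the list of unmatched questions and adding new keyword-answer pairs to the database.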

During the first month of Mary's deployment, one of the researchers supervised Mary daily from the back-end in order to monitor the content of the conversations and to modify her appearance and behavior. In addition, the researcher added more knowledge to her database by finding all of the questions that Mary could not answer and providing answers to them. Details of all of the conversations between Mary and the students, stored under anonymous names, including how long each conversation lasted, when it started, and its content, were kept on the website's server. For the purposes of our research project, in addition to collecting all of the data we could retrieve from the back-end of our virtual assistant, such as time reports and conversation content, we conducted a survey of the students enrolled in the courses that had the virtual assistant. We expected that using different types of data collection would make the data sources more reliable and valid. In addition, we expected that the survey would provide more insight from the students' perspective than we could gain from the data generated by the virtual assistant's back-end. The following research questions were our focus.

  1. What was the frequency of interactions between the virtual assistant and the learners?
  2. What types of questions did the students ask of the virtual assistant?
  3. What did students think about using that 24/7 online support from the virtual assistant?

Results

Research question 1: What was the frequency of interactions between the virtual assistant and the learners?

The answer to this research question was completely based on the data retrieved from the back-end of our virtual assistant. We treated each complete conversation as one interaction regardless of how many messages one interaction had. As demonstrated in Figure 2, some interactions had only one message or piece of information while others had 24 messages or pieces of information.

Figure 2. Data Sample of Virtual Assistant-Learner’s Interactions

During the 16-week academic semester, four hundred and seventy-five (475) interactions between the virtual assistant and the learners in the two online course sections were generated. On average, the virtual assistant had 4.2 interactions with the students each day. Three hundred and twenty-one of the 475 interactions (67.6%) were made on weekends, and 422 of the 475 interactions (88.8%) were conducted between 6:00 PM and 6:00 AM. The data also showed that the learners interacted with the virtual assistant more frequently during the first three weeks of the semester than during the rest of the semester (179 interactions, accounting for 37.7%). The number of interactions decreased significantly toward the middle of the semester and increased again toward the end. On some days in the middle of the semester, no interactions between the virtual assistant and the learners took place.
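As a quick check, the headline figures above can be reproduced directly from the raw counts; the short computation below is included only for transparency.

    # Reproducing the reported frequencies from the raw counts.
    total = 475                          # interactions over the semester
    days = 16 * 7                        # days in the 16-week semester
    print(round(total / days, 1))        # 4.2 interactions per day
    print(round(321 / total * 100, 1))   # 67.6 (% on weekends)
    print(round(422 / total * 100, 1))   # 88.8 (% between 6:00 PM and 6:00 AM)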

Research question 2: What types of questions did the students ask of the virtual assistant?

The second research question examined two aspects of the interactions between the virtual assistant and learners: the format or structure of the interaction and the content of the interaction. The answer to this research question was also completely based on the data retrieved from the back-end of our virtual assistant.

In terms of the interaction format, we found that most of the learners (74%) followed the social norms of daily life conversation when they had conversations with the virtual assistant. Specifically, they tended to start the conversation by greeting the virtual assistant with “Hello” or “Hi”, and ended the conversation with expressions such as “Thank you”, “Thanks”, “Bye” or “BB”.  In some cases, we found that the students did not greet the virtual assistant but at the end of the conversation, they used expressions such as “Thank you” or “Thanks” when the virtual assistant answered their questions.

In terms of the content of the interactions, three main categories of conversation topics emerged from the analysis, as shown in Table 2.

Off-topic Chatting: Although the students were notified at the beginning of the semester that the virtual assistant did not have sufficient answers for off-topic questions, twenty-nine percent of the interactions were off-topic chats unrelated to the course content. The most common questions were about the virtual assistant's age and emotions, about the instructor, and about the university. The following excerpt is an example of one learner's off-topic chat with the virtual assistant.

Virtual assistant: Hello, how are you today? My name is Mary. I am [instructor’s name deleted] Virtual Graduate Assistant. Noted that I am still learning your language, so please be specific and brief. For instance, instead of asking "What is the assignment deadline?” you may ask "[Course name deleted] assignment 1 deadline". Thank you for your patience and understanding!
Student: Hi, how old are you?
Virtual assistant: I was created by [name deleted] in 2013.

Information Seeking: Sixty-five percent of the interactions were seeking assignment information. Specifically, learners tended to ask the virtual assistant about assignment deadlines and how to submit assignments. A few learners asked the virtual assistant about the content of the quizzes.

Non sequitur comments: We classified six percent of the interactions as non sequitur because they did not belong to either the off-topic category or the information-seeking category. Some of them consisted of a single message, such as "I love you" or "Weird."

In summary, we found that most of the learners followed the social norms of daily conversation while interacting with the virtual assistant. In terms of content, most of the conversations between the learners and our VA were about information seeking, some were off-topic, and very few were non sequiturs.

Research question 3: What did students think about using that 24/7 online support from the virtual assistant?

To find the answer to this research question, we conducted an online survey of the 56 students who took the courses that had the virtual assistant. The survey had eight Likert-type statements designed to elicit students' perceptions of their interactions with the virtual assistant; each statement was scored from lowest (1) to highest (5). A small-scale pre-test with five undergraduate students was conducted to help the researchers identify questions that might not make sense to participants, or problems with the survey that might lead to biased answers. We expected that the results of the survey would shed light on some of the questions we could not answer with the first two research questions. After one month, during which we contacted the students three times via email, we had 39 anonymous responses to the survey (out of 56 students), a 69.64% response rate. We consider this number of responses sufficient for analysis, given that this is a small-scale design-based research study with a limited body of literature. Three responses were removed from the pool because they were incomplete, so 36 complete responses were included in the data analysis for this research question.

Table 3
Summary of the Survey

Statement | N | Min | Max | Mean | S.D.
I feel more comfortable interacting with the virtual assistant (VA) than the instructor. | 36 | 1 | 5 | 2.5 | 0.8
The VA answered most of my questions. | 36 | 1 | 5 | 2.4 | 0.9
I chatted with the VA even though I did not have any specific question. | 36 | 1 | 5 | 4.7 | 0.9
The VA provided me with immediate supports that I did not have before in other/previous online courses I took. | 36 | 1 | 5 | 4.8 | 1.0
The VA support helped me learn better. | 36 | 1 | 5 | 3.1 | 0.8
I feel frustrated when chatting with the VA because her answers did not make sense to me. | 36 | 1 | 5 | 2.5 | 0.9
I feel foolish chatting with a robot. | 36 | 1 | 5 | 2.2 | 0.8
I preferred chatting with the VA than emailing the instructor if I had a question. | 36 | 1 | 5 | 3.3 | 0.8

Data from the survey showed that the learners had a quite favorable attitude toward interacting with the virtual assistant. Although the learners indicated that the VA did not answer most of their questions, they agreed that the VA provided them with immediate support that they had not had in previous online courses.

Discussion

In the first research question, we found that, on average, the VA had 4.2 interactions with the learners each day. Three hundred and twenty-one of the 475 interactions (67.6%) were made on weekends, and 422 of the 475 interactions (88.8%) were conducted between 6:00 PM and 6:00 AM. These time frames for student-VA interactions are in line with what Clinefelter and Aslanian (2014) reported about the course login habits of online students. According to those researchers, sixty-five percent of online learners logged into their class at night, after work, and/or on weekends, since most of them were employed while taking online courses. Although we know online learners' habits from Clinefelter and Aslanian and from our internal data analysis presented in the introduction, online instructors simply cannot be available at all times during nights and weekends, or even during all daytime hours, to respond to learners' inquiries. Therefore, we think the use of a VA in online courses is an appropriate solution. In addition, by answering about four questions per day on her own, and at a faster pace than the instructor could, the VA saved time for both the instructor and the learners: the instructor did not have to spend time responding to questions the VA had already answered, and the learners did not have to pause their work or wait for an answer, because their questions were answered immediately. Finally, if one interaction between the instructor and a learner takes approximately 10 minutes on average, then over the 16-week semester the VA saved the instructor about 79 hours.
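The 79-hour estimate follows directly from those assumptions, as the short calculation below shows.

    # Back-of-the-envelope estimate of instructor time saved, assuming
    # roughly 10 minutes per instructor-learner interaction (as stated above).
    interactions = 475
    minutes_each = 10
    print(round(interactions * minutes_each / 60))  # about 79 hours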

The data also suggested that the learners interacted with the virtual assistant more frequently during the first three weeks of the semester (179 interactions, accounting for 37.7%) than during the rest of the semester. We suspect two reasons why the learners interacted more with the VA during the first three weeks: 1) it is common knowledge that students in both online and onsite learning environments tend to have more questions during the first few weeks of a semester, and 2) the "novelty effect" also played a role in attracting the learners to interact with our VA more often during the first few weeks. The novelty of learner-VA interactions was also reported in the study by Gockley et al. (2005). According to those researchers, during the first week after their robot was introduced, people tended to interact with her for longer periods of time; in some cases, a few spent as long as an hour or more simply testing the robot's capabilities.

In analyzing the format and content of the interactions between the learners and the VA, we found that most of the conversations followed basic, everyday conversational etiquette, with a greeting at the beginning and a thank-you and/or good-bye at the end. In addition, sixty-five percent of the interactions were identified as information seeking, such as looking for assignment information. These findings indicate that the learners took the VA seriously and used it as additional support when needed. Results from our survey support this interpretation: most students indicated that the VA provided them with immediate support that they had not had in other/previous online courses they had taken.

In summary, our initial findings from a design-based research paradigm confirmed that including a virtual assistant in an online course has strong potential as a solution to the issue of providing 24/7 timely support to online students. We also want to be clear that adding a VA to an online course does not mean further isolating online learners from human interaction with the instructor. We continue to believe that online learners should be strongly encouraged to interact with the instructor: as is well documented in the literature, the more learners interact with the instructor, the lower the drop-out rate and the more successful they tend to be (Bhuasiri, Xaymoungkhoun, Zo, Rho & Ciganek, 2012; Kang & Im, 2013; Kuo, Walker, Belland & Schroder, 2013; Vu & Fadde, 2013). Adding a VA to an online course enhances an instructor-led online class by providing an optional communication channel. When online learners think their instructor cannot answer a question immediately, the virtual assistant is an excellent option, and one with which our students enjoyed interacting. Virtual assistants should be considered a highly useful add-on for assisting and improving the learning performance and experiences of online learners.

Finally, we think that institutions and administrators of online programs should consider the benefits of providing virtual assistants to online instructors. While it is clear that the virtual assistant is not a replacement for the instructor, it can certainly be beneficial in providing timely responses to routine and commonly asked questions. Consequently, instructors may spend more time providing feedback to students or interacting with them about the course content. In our study, students were intrigued by the virtual assistant and were not hesitant to chat with it even when they did not have a specific question, so it is likely that students will feel at ease using a virtual assistant in their classes. The use of virtual assistants in online classes is certainly worth further consideration, funding, and research in an effort to make online classes more interactive and engaging.

 


References

Allen, I. E., & Seaman, J. (2015). Grade level: Tracking online learning in the United States. Wellesley, MA: Babson Survey Research Group and Quahog Research Group, LLC. Retrieved from http://onlinelearningconsortium.org/read/survey-reports-2014/

Barab, S., & Squire, K. (2004). Design-based research: Putting a stake in the ground. The Journal of the Learning Sciences, 13(1), 1-14.

Bhuasiri, W., Xaymoungkhoun, O., Zo, H., Rho, J. J., & Ciganek, A. P. (2012). Critical success factors for e-learning in developing countries: A comparative analysis between ICT experts and faculty. Computers & Education, 58(2), 843-855.

Clinefelter, D. L., & Aslanian, C. B. (2014). Online college students 2014: Comprehensive data on demands and preferences. Louisville, KY: The Learning House, Inc.

Collins, A. (1999). The changing infrastructure of education research. In E. C. Lagemann & L. S. Shulman (Eds.), Issues in education research: Problems and possibilities (pp. 289-298). San Francisco: Jossey-Bass Publishers.

Fong, T., Thorpe, C., & Baur, C. (2003). Collaboration, dialogue, human-robot interaction. In Robotics research (pp. 255-266). Springer Berlin Heidelberg.

Gockley, R., Bruce, A., Forlizzi, J., Michalowski, M., Mundell, A., Rosenthal, S., & Wang, J. (2005, August). Designing robots for long-term social interaction. In 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005) (pp. 1338-1343). IEEE.

Goodrich, M. A., & Schultz, A. C. (2007). Human-robot interaction: A survey. Foundations and Trends in Human-Computer Interaction, 1(3), 203-275.

Jenkins, R. (2012, March 13). Online classes and college completion. The Chronicle of Higher Education. Retrieved from http://www.chronicle.com/article/article-content/131133/

Kang, M., & Im, T. (2013). Factors of learner-instructor interaction which predict perceived learning outcomes in online learning environment. Journal of Computer Assisted Learning, 29(3), 292-301.

Kuo, Y. C., Walker, A. E., Belland, B. R., & Schroder, K. E. (2013). A predictive study of student satisfaction in online education programs. The International Review of Research in Open and Distributed Learning, 14(1), 16-39.

Lee, M. K., & Makatchev, M. (2009, April). How do people talk with a robot? An analysis of human-robot dialogues in the real world. In CHI '09 Extended Abstracts on Human Factors in Computing Systems (pp. 3769-3774). ACM.

Miller, D. (2014). Evaluating enterprise virtual assistants. Retrieved from http://info.intelliresponse.com/rs/intelliresponse/images/Opus_EvaluatingEnterpriseVirtualAssistants_Jan2014 (2).pdf

Patterson, B., & McFadden, C. (2009). Attrition in online and campus degree programs. Online Journal of Distance Learning Administration, 12(2). Retrieved from http://www.westga.edu/~distance/ojdla/summer122/patterson112.html

Torrey, C., Powers, A., Marge, M., Fussell, S. R., & Kiesler, S. (2006, March). Effects of adaptive robot dialogue on information exchange and social relations. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction (pp. 126-133). ACM.

Vu, P., & Fadde, P. (2013). When to talk, when to chat: Student interactions in live virtual classrooms. Journal of Interactive Online Learning, 12(2), 41-52.


Online Journal of Distance Learning Administration, Volume XIX, Number 1, Spring 2016
University of West Georgia, Distance Education Center