A Faculty Observation Model for Online Instructors: Observing Faculty Members in the Online Classroom
Michael T. Eskey
Henry "Hank" Roehrich
Maintaining academic standards, retaining quality online instructors, and establishing a measure for instruction can all be enhanced through faculty observation and evaluation. As Park University entered the online market, increased course offerings required an increased number of adjunct faculty members. To ensure that these faculty members used best practices and maintained the high standards of teaching that are important to student satisfaction, Park University developed an evaluation process for online adjunct faculty that was similar to the in-class observation of full-time faculty but focused on the unique factors of online course facilitation. The focus of this paper is the evolution and current usage of the Faculty Online Observation (FOO) method, an evaluation model developed from the original system used in annual observations of online adjunct faculty. The FOO process, carried out by a team of evaluators, ensures that best practices in online teaching are addressed continuously in the online delivery program. This focus on best practices, specific institutional policies for online teaching, and technology plays a significant role in the growth of online degree programs and the success of the Park University online program.
The rapid and continued growth of distance learning has established an important role in educational programs worldwide. Distance education has a long and storied history with the first distance education offerings emerging over one hundred years ago in the form of correspondence courses and low-tech media (Holmberg, 1977; Matthews, 1999). While not online or steeped in technology, early distance education sought to provide opportunities for diverse and dispersed populations. Over the past decade, most colleges and universities in the United States have experienced a dramatic increase in the growth and popularity of online degree programs (Allen & Seaman, 2011).
According to research conducted by the Sloan Consortium, distance learning is growing rapidly, with 83% of higher education institutions offering some form of distance learning (Allen & Seaman, 2011). Likewise, community colleges reported an 11.3 percent increase in distance education enrollments, substantially ahead of overall national campus enrollments, which grew less than two percent (Lokken, 2009). Further, in 2012 online learning in higher education grew 17 percent overall, far outdistancing the 1.2 percent growth of traditional classes in the same period (Allen & Seaman, 2012).
The Babson Survey Research Group reports that enrollment in online programs rose by almost one million students from the prior year. Researchers surveyed more than 2,500 colleges and universities nationwide and found that approximately 6.1 million students were enrolled in at least one online course in fall 2010 (Allen et al., 2012). The number of students in online courses in 2011 represents a 10 percent increase over the previous year, compared with an increase of less than one percent in the overall number of higher-education students (p. 4). In the online learning process, there are continued improvements in the linkage of pedagogy, technology, and learner needs in an effort to satisfy the growing demands of varied students in the online classroom (Kim, Bonk & Zeng, 2005; Bonk & Kim, 2006). Distance-learning programs at colleges and universities are often compared to their counterparts at other institutions and to counterpart in-class courses within their own institution (Dell, Low & Wilker, 2010). Many institutions offer a full range of online degree programs with full organizational, infrastructure, design, and technical support for distance learners and their instructors (Herman & Banister, 2007).
The growth of online learning has brought the challenge of developing appropriate evaluation methods that are relevant specifically to the online environment. This focus has included the evaluation of pedagogy, course content, and facilitation. Unique distance learning course requirements demand a periodic assessment of faculty, students, courses, and institutional needs to meet the demands and expectations of modern online learning (Mandernach et al., 2005; Weschke & Canipe, 2010; Villaro & Alegre de la Rosa, 2007; Avery et al., 2006).
Park University was founded as a small, private liberal-arts college in 1875. Park University began offering Internet courses in 1996 to better meet the needs of undergraduate degree completion students being served at military bases around the U.S. Today, Park serves 42 nationwide campus centers, including 37 military bases in 21 states. From the beginning, the focus of the Park initiative was student service – making all services that would be available face-to-face also available online. To date, over 120,000 students have been enrolled in online courses at Park University through offerings of over 250 online degree credit courses, 12 online bachelor's degree completion programs (www.park.edu/online/degrees.aspx), and six complete graduate programs (www.park.edu/online/degrees.aspx). Like similar educational institutions in the United States, Park University has created and fostered a thriving online learning program (Eskey & Schulte, 2010). As a best practice, Park has focused on quality of instruction by requiring faculty teaching online to go through a six-week training program in which they are the student in the course they will be teaching, thus understanding the student perspective (Abel, 2005).
To meet the demand of students within its established campus center system, Park University relies heavily on an adjunct faculty pool. As the online course offerings grew, many of these adjuncts were trained to teach online courses as needed. Recognizing a need to properly assess the facilitation of online instructors, Park Distance Learning (PDL) created a proprietary instrument called the Online Instructor Evaluation System (OIES). The OIES developed out of a comprehensive review of the literature on benchmarks and best practices of online pedagogy (for more detailed information on these standards, see Berg, 2001; Graham et al., 2000; Finch & Montambeau, 2000; Reeves, 1997; and Tobin, 2004).
Rationale for Online Instructor Evaluation System
An initial online faculty evaluation system was first developed in 2001 at Park University. This system was modeled on the evaluation of face-to-face classroom instructors rather than on the unique characteristics and functioning of online instruction. As such, the original evaluation system did not encompass directed evaluation of learning outcomes, classroom management, facilitation, faculty presence, response rate, student accessibility, and course-related administrative tasks. To address the unique concerns associated with a growing number of online courses and online adjunct instructors, the Park University College for Distance Learning developed a formalized Online Instructor Evaluation System (OIES), consisting of a pre-term review, four formative reviews, and a summative review for evaluating online adjunct faculty (Mandernach et al., 2005). The areas addressed in the formative and summative reviews are shown in Table 1.
Table 1: Course Facilitation Addressed Within OIES Reviews and Evaluation

- Instructor Pre-Term Review: Course Set-Up and Organization; Online Classroom Set-Up; Supplementary Instructional Material (Instructor-Added)
- Formative Review #1: Facilitation and Grading; Discussion Thread Interaction; Ongoing Core Facilitation Criteria
- Formative Review #2: Assessments, Grading, and Feedback; Implementation of Assessments; Ongoing Core Facilitation Criteria
- Formative Review #3: Course Climate and Organization; Feedback and Grading; Final Exam Preparation
- Formative Review #4: 1) Overall Evaluation of Course; 2) Course Organization; 3) Building Community; 4) Discussion Facilitation; 5) Assessments, Grading, and Feedback; 6) Course Climate; 7) Professional Engagement
The OIES was first piloted in the Fall 2004 term and was utilized as the sole online adjunct instructor evaluation mechanism at the institution from 2004 through 2008. The strength of the OIES was its robust evaluation/mentoring process, which paired an online evaluator with an online adjunct for an entire term. Not having limitless resources and personnel, Park University's College for Distance Learning sought a more streamlined process that still adhered to institutional needs and research guidelines (Eskey & Schulte, 2010). The OIES more closely aligned faculty observation with the specific and unique characteristics of online courses.
Theoretical and Institutional Foundations
The OIES was founded on the related literature and research findings. Additionally, the development of the OIES included the adaptation of best practices in online education in ways that reflected Park University's institutional history, current context, and future goals, as synthesized in the University's "Online Course Standards and Principles" (Park University, 2009). Initially, newly-hired online instructors were trained utilizing a self-paced, individualized format (PDL758). This mandatory training was formalized to include an orientation process and an emphasis on course facilitation of Park-developed online courses. In order to be scheduled to teach an online course at Park University, faculty members must be screened for required professional credentials, approved by the individual academic department, and complete the Park University Online Instructional Qualification Seminar (PDL750). PDL750 was developed to ensure new online instructors were trained in institutional dynamics, current online pedagogy, and a foundation of generally-accepted practices tailored to the needs of Park University.
Currently, all new online faculty members are required to complete the Park Online Instructor Qualification Seminar (PDL750) prior to teaching their first online course. The course is six weeks in length and covers the basics of working in the eCollege environment. Park courses are developed by faculty members with the guidance and assistance of a Course Developer/Instructional Designer; thus, course development is not included in this training. There is, however, an emphasis placed on the standards of Park University courses, the importance of course objectives, core learning outcomes, communications, grading and gradebooks, building community, and the like.
The focus of PDL750 instruction is on theory/pedagogy, technology, and Park policies and procedures. A key to the PDL750 instructor training course is consistency. Park University has retained many of the same instructors for ten years or more, which has required an increase in support. Additionally, there is a continuous emphasis on improving the recruiting process, the addition of an instructional design team, and the ongoing professional development of instructors. It is important to note that utilizing this training model at other institutions would require individual institutional tailoring.
The OIES was established to provide both evaluation and mentoring of online faculty. Through the pre-term review, four formative reviews, and a summative evaluation, the online observer interacted with the instructor on a number of occasions, evaluated specific course areas, and provided a number of helpful and constructive points through e-mail, telephone, and digital evaluations. The OIES utilized by Park University from 2005 through 2008 was found to be a complete and functional, albeit tedious and time-consuming, method of evaluating online adjunct instructors; it became evident that it was very labor intensive (Mandernach et al., 2005). The university, and indeed North Central Association accreditation standards, required the formal observation of adjunct faculty on an annual basis (NACS, 2010). Beginning in Fall I, 2007, Park University instituted a formal adjunct faculty mentoring program. Upon completion of the Park Online Instructor Qualification Seminars (CDL 750 or 758), all new online instructors were assigned to the College for Distance Learning's (currently Park Distance Learning – PDL) new Online Instructor Mentoring Program (OIMP) for one-on-one mentoring throughout their first term teaching for Park's CDL-Online program. There is currently one full-time mentor and a number of experienced online adjunct faculty mentors, each generally assigned up to five "mentees" per term. The mentoring program provides one-on-one guidance to all instructors as they begin to teach online at Park University and offers continued support from peer mentors throughout their Park career. The mentoring program served a secondary function of allowing course evaluators more time to concentrate solely on course facilitation, eliminating their responsibility for mentoring online adjunct instructors. While mentoring is very important, especially for new instructors, the mentoring function was somewhat counterproductive to the function of evaluators; the mentoring program thus allowed for an important and necessary separation of the mentoring and evaluation functions.
Implementation of the Faculty Online Observation (FOO)
The Faculty Online Observation (FOO) model was developed as a follow-on to the Online Instructor Evaluation System (OIES) for online faculty at Park University. The FOO was created to meet the needs of the evolving online program, as well as to fulfill an annual requirement for online adjunct faculty parallel to the in-class observation of full-time faculty. The current FOO condenses the requirements of the OIES into fewer iterations and accomplishes this within a two-week portion of the online course. Both the OIES and the current FOO focus on four major categories of course facilitation:
- Building Community in the Classroom
- Discussion Facilitation and Instruction
- Assessment and Grading
- Online Course Learning Environment
This remains the focus, but the process was streamlined, the mentoring and evaluation functions were separated, the focus was kept on the four major categories of course facilitation, and an evaluation model was created that provides an annual evaluation of every online adjunct faculty member.
Implementation of the FOO was accomplished utilizing five full-time faculty members, each on half-time release, dedicated to the observation of facilitation by online adjunct faculty.
To accomplish this, a team of faculty members developed an observation form driven by university observations, university policies (Park University, 2008), and best practices (Boettcher & Conrad, 2004; Berg, 2001; Chickering et al., 1996; Reeves, 1997; Quality Matters (UMUC, 2008); Weiss et al., 2001; Burnett et al., 2007; and Tobin, 2004). Further, the observations are based upon the Park Distance Learning (PDL) Online Policies and Procedures, the Online Course Standards, and the PDL Online Instructor Participation Policy.
The Faculty Online Observation serves a number of purposes: to ensure the finest quality educational experience for our online students, to provide pragmatic instructional support for our online instructors, and to gauge the needs and strengths of our online instruction delivery methods. All online adjunct faculty members must complete PDL750. In addition, they have access to the Park online resource page (PDL751). Those online adjuncts hired after September 2007 have an assigned faculty mentor to assist and guide them through their first online course following training, as well as continued support while teaching for Park University. The FOO allows for an online observation of course facilitation by trained course observers and course content evaluation by program coordinators.
Overview of the Faculty Online Observation System
The Faculty Online Observation was developed out of concern for the academic success of our students, as well as the need to systematically evaluate online adjunct instructors on an annual basis. The benefits and impacts of the FOO are demonstrated through expected instructor performance, enhanced student satisfaction, a strengthened scholarship of teaching, and clarification of learning and professional development strengths and weaknesses. For the FOO, instructors, as well as their college dean, receive notification of the observation process and the names and courses taught of instructors in their college. The instructor is provided a copy of the observation form, an explanation of the observation, a number of informative documents related to the online teaching and observation process, access to the Park University online resource website, and the ability to communicate freely with the observer about any questions or concerns regarding the FOO process.
Each of the online course observers is a full-time, tenured or tenure-track faculty member at Park University, on half-time release to Park Distance Learning. Each observer has a minimum of five years of online teaching experience, has an earned terminal degree from a regionally-accredited university, and has developed online courses for Park University or other institutions. As faculty members, they, too, teach online courses, participate in curriculum and other university committees, and fulfill institutional requirements for scholarship, teaching, and community and university service. Utilizing full-time faculty adds to the individual departments a faculty member who contributes to the teaching load and unique department needs. Additionally, these individuals serve as valuable assets, contributing to the academic oversight of the online program and course development. Since the inception of the OIES and the transition to the FOO, the faculty/evaluator role has provided a stronger relationship between the academic departments and the online learning program.
During the eight-week term scheduled for the instructor's formal observation, instructors are observed over a specified two-week period across five course facilitation-related areas: course organization and facilitation; building community in the online classroom; discussion facilitation and instruction; assessment, grading, and feedback; and course climate and online classroom environment (Eskey & Schulte, 2010). These observation topics are in accordance with guiding principles outlined in the best practices for online education, the School of Online Learning (SOL) Principles and Standards, and the College for Distance Learning (CDL) Online Instructor Participation Policy. The criteria divisions are depicted in Table 2.
Table 2: Guiding Principles for the Faculty Online Observation (FOO) – Topics and Instructor Expectations

Course Organization and Facilitation
- Comply with online course policies, procedures, and standards
- Provide augmentations to course discussions in a manner that complements the course objectives
- Utilize online platform features to enhance the course delivery and online learning experience

Building Community in the Online Classroom
- Utilize staff to aid the development of online courses
- Create an open and inviting climate for communication
- Set the tone for interactions via course tools and provide feedback

Discussion Facilitation and Instruction
- Substantively interact in the course discussion threads a minimum of four (4) days per week
- Post in a manner that is professional, clear, precise, and supportive of student learning
- Augment course content and provide examples to facilitate the understanding and application of course concepts

Assessment, Grading, and Feedback
- Meet deadlines for grading and feedback so students can make timely adjustments and improvements
- Utilize course assignments' grading rubrics and apply them to grading
- Provide helpful, individualized, constructive feedback on all syllabus-identified component assessments

Course Climate and Online Classroom Environment
- Maintain a positive and interactive atmosphere in the online course
- Behave professionally in facilitating the course
- Use correct grammar, be respectful, and exhibit fairness
The Faculty Online Observation is implemented by the designated instructor observers.
It is important to note that the focus of the FOO is facilitation: the FOO addresses how the course is facilitated, not the course content or a specific discipline. The academic department program coordinator is responsible for evaluating the content of online courses. At Park University, online course content is created by a content expert with the support of a course development specialist and with the approval of the academic department. The online instructor evaluator then utilizes the FOO to observe instructors' online teaching facilitation for adherence to best practices and unique institutional distance learning policies. Together, the FOO observation and the program coordinator evaluation fulfill the instructor's annual observation requirement.
Since the implementation of the FOO in Fall I, 2008, 1,825 FOOs have been administered. The FOO online instructor evaluators are in constant communication, including a weekly conference call to address observer/observee concerns, programmatic needs, and considerations for needed revisions, interpretations, and modifications. The FOO completions and observer ratings by academic year are depicted below (Table 3).
Table 3: FOO Completions by Rating by Academic Year, AY2008–2009 through AY2012–2013
At the conclusion of the first term of the FOO pilot (Summer 2009), the team of observers reflected on the form. Extensive reflection on the observation form, and on reactions by instructors and university personnel, resulted in revisions of the form and in additional information provided to instructors. As part of the review process, the number of instructor observers on the team remained at five to meet the demands of the University's pool of approximately 450 active online instructors and to ensure sound and attainable observation loads. Based on pilot analysis, it was determined that five instructor observers could accommodate the observations of approximately 20 faculty members per eight-week term, five terms each academic year. This load would allow each instructor evaluator to complete approximately 100 FOO observations in one academic year, resulting in approximately 400 evaluations of online adjunct instructors completed annually. Note: during the second year of observations, a number of cross-listed inter-rater-reliability observations were performed to gauge the reliability of observations between observers. There are always budgetary concerns when faculty members are not utilized strictly in the roles of teaching, scholarship, and service. The release time afforded the FOO team from faculty duties has resulted in a valuable observation tool, accepted by the regional accreditors, accepted by both faculty and administration, and credited with the retention of qualified online instructors.
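The staffing arithmetic above can be sketched as a quick back-of-the-envelope calculation. This is an illustration only, using the figures reported in the text; note that the theoretical capacity (five observers at roughly 100 observations each) slightly exceeds the approximately 400 evaluations reported as completed annually, leaving slack for cross-listed inter-rater observations and term-to-term variation.

```python
# Back-of-the-envelope FOO observation capacity, using figures from the text.
observers = 5             # full-time faculty on half-time release
per_term = 20             # approx. observations per observer per eight-week term
terms_per_year = 5        # eight-week terms in an academic year
active_instructors = 450  # approximate active online instructor pool

per_observer_per_year = per_term * terms_per_year       # approx. 100
theoretical_capacity = per_observer_per_year * observers

print(per_observer_per_year)  # 100
print(theoretical_capacity)   # 500
# Capacity covers the annual-observation requirement for the instructor pool.
print(theoretical_capacity >= active_instructors)  # True
```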
Inter-rater Reliability Comparison
In order to validate the consistency of observations among the observers, the observers were compared directly by observing identical adjunct faculty each term over the course of five academic terms. Inter-rater reliability is the extent to which two or more individuals (coders or raters) agree; for the FOO, it addressed the consistency with which the FOO was implemented. The test of inter-rater reliability followed this process: five observers observed online adjunct faculty teaching eight-week online courses. The adjuncts were teaching classes following Park University policies, best practices, and Quality Matters. The observers utilized a rating scale of (0) "Needs Improvement," (1) "Meets Expectations," and (2) "Exceeds Expectations," with which they rated faculty course facilitation on 16 separate factors (and a summary) within the four major categories listed above. Inter-rater reliability assessed the consistency of how the FOO rating system was implemented: for example, if one observer rates a factor "0" while another rates the same factor "2," the inter-rater reliability would obviously be inconsistent. Inter-rater reliability depends on the ability of two or more individuals to be consistent; training, education, and monitoring skills can enhance it (Marques & McCall, 2005).1
1 Note: Online adjunct instructors receive an annual rating based on a 100-point scale. The rating score influences both future teaching assignments and placement in courses. The FOO provides up to 25 of the total points; likewise, the program coordinator evaluation provides up to 25 points. The remaining points are provided from a number of items, including the degree level of the instructor, instructor longevity (number of courses taught for PDL), student survey results, and course administrative needs.
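The annual-rating arithmetic described in this footnote can be sketched as follows. The 25/25/50 split reflects the text; the component values in the example, and the lumping of the remaining items into a single term, are illustrative assumptions.

```python
def annual_rating(foo_points, coordinator_points, other_points):
    """Combine annual-rating components on the 100-point scale described above.

    The FOO and the program coordinator evaluation each contribute up to
    25 points; the remaining (up to 50) points come from other items such as
    instructor degree level, longevity, student survey results, and course
    administrative needs. The internal split of those items is not specified
    in the text, so they are lumped together here.
    """
    if not 0 <= foo_points <= 25:
        raise ValueError("FOO contributes at most 25 points")
    if not 0 <= coordinator_points <= 25:
        raise ValueError("coordinator evaluation contributes at most 25 points")
    if not 0 <= other_points <= 50:
        raise ValueError("remaining items contribute at most 50 points")
    return foo_points + coordinator_points + other_points

# Hypothetical example: strong FOO and coordinator results, solid other items.
print(annual_rating(22, 24, 41))  # 87
```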
Each observer was assigned the observation of five identical instructors each term for five terms; only one of the observations counted as the official observation. The observation results were compared statistically, yielding an inter-rater reliability Kappa of .78 with p < .01. As a rule of thumb, Kappa values from 0.60 to 0.79 denote substantial agreement. For the small number of inconsistencies in the observations, the rationale was discussed, and in all cases the soundness of the decision and the rationale for any discrepancy were agreed upon by the observation team. While no official changes in instructor ratings resulted, the rationale and consistency of the observations were solidified. This is important in determining that all instructors will be fairly and consistently observed by any of the observers to whom they are assigned. While each of the 16 specific points of the FOOs compared between the observers was not compared statistically, it is important to note that the observation team has met weekly since January 2006 to discuss all issues related to both the OIES and the FOO.
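The agreement statistic reported above, Cohen's kappa, corrects raw percent agreement for the agreement expected by chance. A minimal, standard-library sketch for two raters on the FOO's 0/1/2 scale follows; the rating vectors in the example are illustrative, not the study's actual data (though they happen to yield a kappa near the reported .78).

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items.

    Ratings follow the FOO scale: 0 = Needs Improvement,
    1 = Meets Expectations, 2 = Exceeds Expectations.
    """
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("rating lists must be non-empty and of equal length")
    n = len(rater_a)
    # Observed agreement: proportion of items the raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each rater's marginal rating frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a.keys() & counts_b.keys()) / n**2
    if p_e == 1:  # degenerate case: both raters used a single category throughout
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings of the 16 FOO factors by two observers.
a = [2, 1, 1, 0, 2, 1, 1, 2, 1, 2, 1, 1, 0, 2, 1, 1]
b = [2, 1, 1, 1, 2, 1, 0, 2, 1, 2, 1, 1, 0, 2, 1, 1]
print(round(cohen_kappa(a, b), 2))  # 0.78
```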
Results and Reflections
Like all effective evaluation systems, including the Park University Online Instructor Evaluation System (OIES) that preceded the Faculty Online Observation, the FOO is continually monitored and adjusted to adapt to the dynamic nature of higher education, instructor needs, student needs, and the emergent quality of online education in particular. This is accomplished via constant communication among the observer team, departmental colleagues, administrative staff, and student-related support personnel. The FOO was developed by capitalizing on the strengths and weaknesses noted in the OIES. A dedicated online instructor evaluator team, working in conjunction with academic departments, the university administration, and distance learning support staff, has created and implemented a successful observation tool, as well as an instrument conforming to accreditation demands. A review of the initial concerns, coupled with experience with the more resource-consuming OIES and a review of initial results, has enabled continued refinement of the observation instrument and process. All proposed and implemented changes have gone through a thorough review by the observation team in concert with Quality Matters, best practices, and Park University Distance Learning policy. Further, all changes are finalized through the approval of the Distance Learning Advisory Committee (DLAC). The results of the observations allow for more refined and focused future professional development and training of online adjunct instructors.
Instructor feedback on the FOO has revealed interesting results that somewhat resemble the initial findings from the OIES: the perceptions and reactions of experienced, "seasoned" online faculty contrast with those of relatively new online instructors. These reactions are tracked and will be more thoroughly analyzed in an upcoming study of adjunct instructors' reactions to the FOO. While most new instructors (those with less than one year of teaching) were very receptive to and appreciative of the information provided, the observation process, and the access provided to additional resources, more experienced (seasoned) faculty members were more resistant to any indicators of "needs improvement," or to not receiving "exceeds expectations," for the courses observed (Eskey & Bunkowski, 2008).
As with the OIES, many new instructors indicated an appreciation for the completeness of the observation process in measuring their performance as facilitators of learning online. Satisfaction is normally related to improvement in, and attention to, course participation and the completeness of the process. It was important that the observation process yielded opportunities for instructor reflection and subsequent revision and improvement of teaching practices. At the various levels, instructor comments included an appreciation of the Park online process, the training, the mentoring, and the evaluation. Conversely, there are faculty who were not receptive to either the OIES or the FOO. While not all instructor comments are positive, they are helpful for focusing on areas of concern and on future professional development needs. Additionally, results are sent to department program coordinators, who may use them in future course scheduling decisions.
In addition to feedback from instructors, the observers' reflections on the administrative aspects of the observation process provided valuable insight for the continual enhancement of the FOO. A marked improvement over the OIES is the extensive time savings gained by replacing reports spread across the entire eight-week term with a single two-week observation period covering many of the same topics. This makes it possible to provide an annual observation of every online adjunct faculty member actively teaching for Park University. Further, the inter-rater reliability observations provided confidence in a higher rate of consistency in the observation of specific areas and in the end commentaries provided by observers to explain the criteria rankings. Current faculty observers have transitioned from the faculty evaluator role, allowing consistency in the interpretation of the observation criteria and in the explanation of the criteria utilized in the observation categories. The evaluators worked together to create a standardized approach to ensure consistency in criteria interpretation and, more importantly, to allow more time for adding custom, instructor-specific commentary.
Summary and Conclusions
Like many other colleges and universities, Park University has an online program that experienced rapid growth in online students and courses with limited administrative resources and models. Meeting the demands of students while offering quality online courses required an efficient and effective system. The development and implementation of such a system demanded that its benefits outweigh its costs, a real concern under challenging budget scenarios. Park University has supported the effort to create an evaluation system for faculty teaching in an online environment. Given this growth, the main objective of the faculty online observation process was to provide students with a quality learning experience in online delivery while assisting faculty in their professional development as online instructors.
Initially, the College for Distance Learning mirrored the established processes and paradigms used for faculty evaluation at the University's traditional, in-class campuses. The Online Instructor Evaluation System was the first evaluation system implemented at Park University to address evaluation and mentoring. The labor-intensive effort needed to run the OIES was instrumental in prompting a search for alternatives in the evaluation and mentoring of adjunct online faculty. As a result of these effectiveness concerns, the mentoring process was refined into one providing specialized support. The traditional model of evaluating online faculty at Park University had proven of limited relevance to the online environment, both in content and in implementation. Strategic planning, advancing technology, and an increased online focus led to the identification of program goals and key competencies for online instructors. These were subsequently woven into the recruiting, training, extensive evaluation (OIES), reduced and focused evaluation (FOO), and faculty development components of the online program. The FOO has reduced the one-on-one exchange between the evaluator and the instructor; instead, there is a more concerted effort that involves the trainer, mentors, course developers, instructional designers, program coordinators (focused on course content), and evaluators (focused on course facilitation).
The Faculty Online Observation can be adapted by other institutions contemplating a formal online faculty evaluation that bridges the gap often found between online and resident instructors, directors, and administrators. The FOO does not stand alone; it is built on the face-to-face evaluation model, Park online policies, online best practices, a team of evaluators, a team of faculty mentors, instructional designers, course developers, a distance learning advisory committee (DLAC), departmental buy-in, and support from the University administration. At times this coordination can seem as circular as the chicken-and-egg problem, but each entity plays an important part and allows for sound decisions in establishing and maintaining an effective online pedagogy. The coordinated effort reflects best practices and program expectations, reinforcing the academic quality goals of Park University's online learning program.
Abel, R. (2005). Achieving success in Internet-supported learning in higher education: Case studies illuminate success factors, challenges, and future directions. Alliance for Higher Education Competitiveness, November 16, 2005.
Allen, I. E., & Seaman, J. (2012). Conflicted: Faculty and online education, 2012. Babson Survey Research Group, Inside Higher Ed, and Quahog Research Group, LLC, June 2012.
Allen, I. E., & Seaman, J. (2011). Going the distance: Online education in the United States, 2011. Babson Survey Research Group, November 2011, 1–29.
Allen, I. E., & Seaman, J. (2008). Staying the course: Online education in the United States, 2008. The Sloan Consortium, November 2008. Retrieved March 18, 2011, from http://sloanconsortium.org/publications/survey/staying_course
Avery, R., Bryant, W., Mathios, A., Kang, H., & Bell, D. (2006). Electronic course evaluations: Does online delivery system influence student evaluations? Journal of Economic Education, 37(1), 21–37.
Berg, G. (2001). Distance learning best practices debate. WebNet Journal (April–June 2001), 5–6, 17. Retrieved March 19, 2011, from http://www.aace.org/pubs/webnet/v3no2/3=2DistanceLearning.pdf
Boettcher, J. V. and Conrad, R. M. (2004) Faculty guide for moving teaching and learning to the web. 2nd Edition. Phoenix, AZ, League for Innovation, 247.
Bonk, C., & Kim, K. (2006). The future of online teaching and learning in higher education. Educause Quarterly, 4, 22–30.
Burnett, K., Bonnici, L. J., Miksa, S. D., & Joonmin, K. (2007). Frequency, intensity and topicality in online learning: An exploration of the interaction dimensions that contribute to student satisfaction in online learning. Journal of Education for Library & Information Science, 48(1) (Winter 2007), 21–35.
Chickering, A. and Ehrmann, S. (1996) Implementing the seven principles: Technology as lever. American Association for Higher Education Bulletin, 49 (2) (October, 1996): 3-6. Retrieved March 18, 2011 from http://www.tltgroup.org/programs/seven.html
Dell, C., Low, C., & Wilker, J. (2010). Comparing student achievement in online and face-to-face class formats. Journal of Online Learning and Teaching, 6(1), March 2010.
Eskey, M. and L. Bunkowski, (2008), College of Distance Learning Characteristics and Satisfaction Report: Evaluation and Mentoring (Preliminary Findings), College of Distance Learning, Park University, August 13, 2008
Eskey, M. T., & Schulte, M. (2010). What online college students say about online instructors and what do online faculty members say about online instruction: A comparison of attitudes. Journal of Online Education, August 2010.
Finch, J., and Montambeau, E. Beyond Bells and Whistles: Affecting Student Learning Through Technology (2000). Retrieved March 10, 2011 from http://www.cofc.edu/bellsandwhistles/#
Graham, C., Cagiltay, K., Lim, B., Craner, J., and Duffy, T.M. (2000). Seven Principles of Effective Teaching: A Practical Lens for Evaluating Online Courses (2000). Retrieved March 10, 2011 from http://ts.mivu.org/default.asp?show=article&id=839
Herman, T. and S. Banister (2007). Face-to-face versus online coursework. A comparison of costs and learning outcomes, Contemporary Issues in Technology Education, 7(4), 318-326.
Holmberg, B. Distance education: A survey and bibliography. (1977) London: Kogan Page; New York: Nichols Pub. Co.
Instructional Technology Council (2013). Trends in eLearning: Tracking the impact of eLearning at community colleges, Washington, DC, April, 2013.
Kim, K., Bonk, C. and Zeng, T. (2005). Surveying the Future of Workplace E-Learning: the Rise of Blending, Interactivity, and Authentic Learning, E-Learn Magazine (June, 2005).
Lokken, F. (2009). Distance education survey results: Tracking the impact of eLearning at community colleges. Instructional Technology Council, Washington, DC.
Mandernach, B. J., Donnelli, E., Dailey, A., & Schulte, M. (2005). A faculty evaluation model for online instructors: Mentoring and evaluation in the online classroom. Online Journal of Distance Learning Administration, 8(3). Retrieved March 18, 2009, from https://www.westga.edu/~distance/ojdla/fall83/mandernach83.htm
Marques, J. F., & McCall, C. (2005). The application of inter-rater reliability as a solidification instrument in a phenomenological study. The Qualitative Report, 10(3), September 2005. Retrieved from http://www.nova.edu/ssss/QR/QR10-3/marques.pdf
Maryland Online, Inc. (2008). Quality Matters rubric standards 2008 – 2010 edition with assigned point values. Retrieved from http://www.qmprogram.org/files/RubricStandards2008-2010.pdf
Matthews, D. (1999). The origins of distance education and its use in the United States. The Journal, September 1999.
North Central Association of Colleges and Schools (2010). Retrieved from http://www.northcentralassociation.org/
Park University (2009). Online course principles and standards. Retrieved from http://www.park.edu/online/faculty/Best_Practices/principles_and_standards.html
Park University. (2008). SOL principles and standards. Retrieved from Park University School for Online Learning Website:
Reeves, T. (1997) Evaluating What Really Matters in Computer-Based Education.
Tobin, T. (2004). Best Practices for Administrative Evaluation of Online Faculty. Online Journal of Distance Learning Administration, 7 (2). Retrieved March 10, 2011, from: https://www.westga.edu/~distance/ojdla/summer72/tobin72.html
Villar A. & Alegre de la Rosa, O. (2007). Online faculty development and assessment systems (OFDAS): A study of academic learning. Journal of Personnel Evaluation in Education, 20, 21–41.
Weschke, B., & Canipe, S. (2010). The faculty evaluation process: The first step in fostering professional development in an online university. Journal of College Teaching and Learning, 7(1), 45–55.
Online Journal of Distance Learning Administration, Volume XVI, Number II, Summer 2013
University of West Georgia, Distance Education Center