Comparing Student Learning Outcomes in Face-To-Face and Online Course Delivery


Stephen Sussman
Barry University
ssussman@mail.barry.edu

Lee Dutter
Barry University
ldutter@mail.barry.edu

Abstract

Since the advent of fully online delivery of college-level coursework, a number of issues have preoccupied administrators, educators, and researchers with regard to student learning outcomes or performance vis-à-vis face-to-face delivery. The present study does not seek to demonstrate or to discover which mode of delivery is “superior” or “inferior” to the other. Rather, through a quantitative analysis of performance indicators, it seeks to highlight the differences and similarities in student learning outcomes between the two modes of delivery for an undergraduate social science course which focuses on public policy and administration. Thus, primarily through the analysis of real-time course data covering four academic years from 2005 to 2009, the study aims to provide a better understanding of the differences and similarities between these delivery modes with regard to the issues of concern to administrators, educators, and researchers.

Introduction

In a recent meta-analysis of studies on student learning outcomes in face-to-face versus online course delivery, Means, Toyama, Murphy, Bakia, and Jones (2009) identified more than 1,000 studies which address this issue. However, before conducting their analysis, they employed screening criteria which focused on experimental or quasi-experimental studies which reported “…a learning outcome that was measured for both treatment and control groups,” and student learning outcomes which “…included scores on standardized tests, scores on researcher-created assessments, grades/scores on teacher-created assessments (e.g., assignments, midterm/final exams), and grades or grade point averages” (Means et al., 2009, 12). Their criteria led to the identification of “…51 independent effect sizes…from the study corpus of 46 studies” (Means et al., 2009, 17). Although there is variability in the conclusions of these studies, their principal conclusion is that “…in recent applications, online learning has been modestly more effective, on average, than the traditional face-to-face instruction with which it has been compared” (Means et al., 2009, 51).

The building and refining of knowledge in any field of study, but especially in one as important in higher education as student learning outcomes, leans heavily upon replication; that is, the continuing examination of the same phenomenon via the reanalysis of the same data with different methods and/or the analysis of additional data with the same or different methods. The present, empirically based study adopts the view that, as the vast majority of students who take courses in online delivery formats are not participating in a structured experiment, or even a quasi-experiment, knowledge building will benefit from additional studies which engage in the real-time comparative analysis of student learning outcomes. More specifically, this study is an analysis of those outcomes in a course which has been taught by one of the authors in both face-to-face and fully online delivery formats over four academic years in the School of Adult and Continuing Education of Barry University. As embedded in the analysis which follows, the particular issues of interest include: student demand for courses in a fully online delivery format; student motivation to choose courses in this format; comparison of measures of student learning outcomes; comparison of student withdrawals from courses as a surrogate for attrition; and the timeframe of delivery.

The Frank J. Rooney School of Adult and Continuing Education of Barry University

The Frank J. Rooney School of Adult and Continuing Education (ACE) of Barry University was established in the early 1980s in order to offer courses and degrees to working professionals on the home campus in Miami Shores and throughout the state of Florida. A snapshot of the principal characteristics of ACE’s student population is provided in Table 1. Also, ACE was created to serve working students who begin their university education elsewhere and its admission requirements reflect this. That is, admitted students hold an associate degree or its equivalent and build on those earlier credits by finishing a professionally related bachelor’s degree in two or three years of additional study.   

POS 303 Public Policy and Administration, the course which is the focus of the present study, is one of the required courses in ACE’s Bachelor of Public Administration (BPA) degree program. In the fall 2008 semester, this degree program served a population of 365 students, who were also distributed throughout the state. The majority of these students work in professions related to the public or not-for-profit sectors of the state’s economy, more specifically, in law enforcement, firefighting, or other emergency services. As a consequence, they are on call 24/7/365 and are often subject to major, unexpected vagaries in their work schedules. For example, in 2005 significant numbers of them were sent to Louisiana and Mississippi to assist in relief efforts following Hurricane Katrina. Thus, ACE students, in general, and this sub-population, in particular, form “natural” constituencies of working professionals for the fully online delivery of courses, especially those which are directly related to their chosen degree program.

The Course

POS 303 is the first in a series of required courses which students must complete in order to receive Barry University’s BPA degree. However, non-majors may also take the course in order to satisfy their General Education requirement in social science, but majors are not allowed to use POS 303 for both their major and that requirement. Public Administration majors must take a different, approved course to satisfy their requirement in social science. Also, any student may take the course if in need of an elective in order to reach the minimum of 120 semester credit hours required for a Barry University bachelor’s degree. As a consequence, the pool of students who take the course is expanded beyond Public Administration majors and the students who do enroll are a mixture of majors and non-majors.

For purposes of this study, it is important to note that one of the graded course assignments which all students must complete is the research and writing of an “issue paper.” This teacher-created assignment is an analysis of a contemporary public policy issue, which is selected by the student with guidance from the instructor and approved no later than the third week of the eight-week session, at which time the student submits a brief, one-paragraph description of the issue selected. The principal purpose of the assignment is to give students practical experience in the critical analysis of policy issues and in the preparation of a concise, but thorough, summary of their analysis which, hypothetically, might then be provided to policymakers. Each student is required to select a unique issue for analysis, thus limiting the possibility of unacceptable student collaboration in the preparation of the submitted paper, and to use one of the analytic or process models or theories (e.g., systems, rational choice, elite, group) which are covered in the course. The issue may be federal, state, or local in focus, but must have broad appeal. In other words, while it can be related to their work, an issue which is idiosyncratic to the student’s employer or locale is not acceptable, thus limiting students’ ability to use some type of previously developed analysis from their workplaces. Possible topics or acceptable issue areas include, but are not limited to, criminal justice, health care, welfare, education, economic policy, taxation, international trade, immigration, environmental protection, civil rights, federalism, national defense, and social justice.

The issue-paper assignment, which is weighted as 24% of the final course grade, is assessed by the use of a “grading rubric” (see the APPENDIX), which is attached to the course syllabus and distributed to students at the start of the course and which contains five evaluation criteria, each of which is assigned a score of one (worst) to five (best), depending on the quality of the student’s paper on each criterion. An assessment “score,” which is a Likert-type index, for the assignment is calculated by summing the scores on each criterion and dividing the total by five. Sub-ranges on this five-point assessment scale correspond approximately to letter grades: that is, 0-.99 – F; 1.00-1.99 – D; 2.00-2.99 – C; 3.00-3.99 – B; and 4.00-5.00 – A. If a student does not submit the assignment, then they receive a zero. The resulting grade is then factored into the final course grade in conjunction with the other graded components (i.e., weekly essays, quizzes, discussion questions, discussion board, attendance and participation, and final exam), the exact mix of which can vary, depending upon face-to-face versus fully online delivery. In other words, the issue-paper assignment is the principal common feature of both face-to-face and fully online sections of the course. Then, following guidelines which are derived from the Southern Association of Colleges and Schools (SACS) accreditation requirements, these assessment scores are reported to the University administration as an assessment indicator of student learning outcomes in General Education courses.
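As an illustration of the scoring arithmetic just described, the following minimal sketch (written in Python for this discussion; it is not part of the study’s materials) computes the Likert-type index from five criterion scores and maps it onto the approximate letter-grade sub-ranges given above.

    # Illustrative only: compute the rubric-based assessment score described above.
    def assessment_score(criterion_scores):
        """Sum five criterion scores (1 = worst, 5 = best) and divide by five."""
        assert len(criterion_scores) == 5, "The rubric contains five criteria."
        return sum(criterion_scores) / 5

    def approximate_letter_grade(score):
        """Map the five-point index onto the sub-ranges given in the text."""
        if score < 1.00:
            return "F"
        elif score < 2.00:
            return "D"
        elif score < 3.00:
            return "C"
        elif score < 4.00:
            return "B"
        else:
            return "A"

    # Example: criterion scores of 4, 3, 4, 5, and 3 yield an index of 3.8 ("B").
    print(assessment_score([4, 3, 4, 5, 3]))     # 3.8
    print(approximate_letter_grade(3.8))         # B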

Finally, it should be noted that students who wished to register for an online section of the course were required to satisfy two specific prerequisites: (1) the University’s computer skills requirement, which is needed to graduate with a bachelor’s degree and which contains instruction in the use of Blackboard, the Learning Management System which is used by Barry University for online course delivery; and (2) the University’s college-level writing requirement. Students also had access to an online orientation, which included a diagnostic, “personal readiness” assessment, designed to assess their skills in personal discipline, task organization, and time management. However, regardless of the results of this diagnostic, no student was refused registration in an online section of the course.

Data and Findings

While not central to the analysis of student learning outcomes, the first indicator which is examined is mean enrollment in face-to-face versus online sections of the course. For academic years 2005-2006 through 2008-2009, a total of 85 sections of the course were offered. Of these, sixty-seven were delivered face-to-face and eighteen fully online. Mean enrollment in the face-to-face sections was 11.5 (range – 4 to 31; std. dev. = 5.64) and in online sections, 16.1 (range – 9 to 34; std. dev. = 7.27). Although 16.1 is obviously greater than 11.5, it can be asked whether this difference is statistically significant, even though the standard canons (e.g., random sampling or section assignment of students) of statistical analysis are not strictly met. First, using these “samples,” an F-test for equality of variances fails to reject the null hypothesis of equality at the .05 level. Second, examination of the distributions of enrollments suggests that both are approximately normal, though slightly skewed, which is not too surprising, given that the minimum value for enrollment is zero and the maximum enrollment is unbounded. However, the “samples” are less likely to satisfy the independence assumption, given the processes by which the data were generated. That is, again, the data were not generated by an experimental or quasi-experimental research design, as students were free to choose in which type of section to enroll, which, of course, raises the issue of selection (Creswell, 2009; O’Sullivan, Rassel, & Berner, 2008; Weiss, 1998). In any event, the relevant difference-of-means test yields a t-value of 2.88, which indicates rejection of the null hypothesis of equal means (p<.005) and acceptance of the hypothesis that mean enrollment in online sections was significantly greater than for face-to-face sections.
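For readers who wish to reproduce the two tests just reported, the following is a minimal sketch in Python using SciPy (neither of which is claimed to be the authors’ original tooling); it works directly from the summary statistics given above rather than from the per-section enrollment figures, which are not reproduced here.

    # Illustrative only: F-test and difference-of-means test on mean enrollment,
    # computed from the summary statistics reported in the text.
    from scipy import stats

    # Face-to-face sections: n = 67, mean = 11.5, s.d. = 5.64
    # Fully online sections: n = 18, mean = 16.1, s.d. = 7.27
    n_f2f, mean_f2f, sd_f2f = 67, 11.5, 5.64
    n_onl, mean_onl, sd_onl = 18, 16.1, 7.27

    # F-test for equality of variances: ratio of sample variances referred to an
    # F distribution; doubling the upper-tail area gives a two-sided p-value.
    f_stat = (sd_onl ** 2) / (sd_f2f ** 2)
    p_var = 2 * stats.f.sf(f_stat, n_onl - 1, n_f2f - 1)
    print(f"F = {f_stat:.2f}, two-sided p = {p_var:.2f}")   # fails to reject at .05

    # Two-sample (pooled-variance) t-test from summary statistics.
    t_stat, p_mean = stats.ttest_ind_from_stats(
        mean_onl, sd_onl, n_onl, mean_f2f, sd_f2f, n_f2f, equal_var=True)
    print(f"t = {t_stat:.2f}, p = {p_mean:.4f}")            # t is approximately 2.88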

This conclusion is bolstered by noting that face-to-face sections were more likely to be cancelled due to low enrollment. However, this point is somewhat counterbalanced by the fact that more students who initially registered for an online section dropped during the drop/add period, which is the first week of a session. The University does not systematically track students who register for a course but then drop at the start, which expunges the course from the student’s official record. On the other hand, after the first week, a student’s option is to withdraw from a course if they cannot or do not wish to continue for any reason. While they carry no academic weight in terms of grade point average, withdrawals are permanently noted in the student’s record, information which is used in the present analysis. Nevertheless, the finding here on mean enrollment in the two types of sections is consistent with the empirical findings of others on the demand for online delivery, especially among older, working students, and the growth trends in such enrollments in recent years (Allen & Seaman, 2008; Block, Udermann, Felix, Reineke, & Murray, 2008; Donavant, 2009; Hansen & Gladfelter, 1996; Rivera & Rowland, 2008; Roblyer, 1999; Yoon, 2003).

Turning next to data on student learning outcomes, the analysis focuses on the twenty-four sections of the course which were taught by one of the authors. As already indicated, it is not possible to follow the standard requirements of experimental, or even quasi-experimental, design (Creswell, 2009; Weiss, 1998), given the data and information which are available. Thus, a pre-experimental design with “post-test” comparison of non-equivalent (i.e., non-matched) groups is followed, which, of course, does not control for key issues such as selection, outside effects, and maturation (O’Sullivan et al., 2008). However, the fact that all sections were taught by the same instructor with the same course materials, comparable student assignments (especially the issue paper), and the same evaluation methods does reduce some sources of potential variation and related threats to internal validity. Also, due to the small number of face-to-face sections (n=6), the econometric technique of pooling cross-sectional and time-series data is adopted in order to boost the size of this baseline comparison group to 81 students.

Focusing first on assessment scores for the issue-paper assignment, Table 2.A displays comparative data on mean scores for face-to-face versus fully online sections. As can be seen, whether unweighted or weighted by the number of students per section, the differences in mean assessment scores for the face-to-face versus online sections are not large. Again, while the canons of statistical analysis are not strictly met, a difference-of-means test on the unweighted means, not surprisingly, fails to reject the null hypothesis of equal means. In other words, by this assessment indicator, the data suggest that student learning outcomes were essentially the same for face-to-face and fully online delivery. As assessment scores were based on only one course assignment (the issue paper), which was weighted as 24% of the final course grade, the second focus is on final course grades, as measured by grade point average (GPA) per section; these are displayed in Table 2.B. Again, whether unweighted or weighted by the number of students per section, the differences in mean GPA for face-to-face versus fully online sections are not large. The difference-of-means test again fails to reject the null hypothesis of equal means, which suggests the equality of outcomes by this second indicator.
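The distinction between the unweighted and weighted means reported in Table 2 is simply whether each section counts equally or in proportion to its enrollment. The sketch below illustrates the calculation; the section means and enrollments used here are hypothetical placeholders, not the actual course data.

    # Illustrative only: unweighted versus enrollment-weighted section means.
    section_means = [2.6, 3.1, 2.8, 3.3, 2.9, 3.0]   # hypothetical mean score per section
    section_sizes = [9, 14, 11, 17, 12, 16]          # hypothetical students per section

    unweighted = sum(section_means) / len(section_means)
    weighted = (sum(m * n for m, n in zip(section_means, section_sizes))
                / sum(section_sizes))

    print(f"Unweighted mean: {unweighted:.2f}")
    print(f"Weighted mean:   {weighted:.2f}")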

In sum, while somewhat at odds with the meta-analysis of Means et al. (2009), these two aggregate indicators suggest another finding which is consistent with other empirical studies of student learning outcomes or performance for face-to-face versus fully online delivery; namely, no statistically significant differences (Aragon, Johnson, & Shaik, 2002; Block, Udermann, Felix, Reineke, & Murray, 2008; Brown & Kulikowich, 2004; Dellana, Collins, & West, 2000; Donavant, 2009; Peterson & Bond, 2004; Shiratuddin, 2001; Thirunarayanan & Perez-Prado, 2002). However, it is possible that aggregate indicators are misleading, or at least incomplete, as a numerical or statistical portrait of the underlying phenomenon which generates the observable data. In other words, the calculation and use of aggregate indicators such as sample means loses potentially valuable information, which may increase the probability of coming to erroneous, misleading, or incomplete conclusions. Thus, a fuller, more robust portrait may be painted via an analytical strategy of disaggregation, if the data allow (Creswell, 2009; Weiss, 1998). Here, based on the data and information which are available, two forms of disaggregation are possible: breaking down assessment scores into their sub-ranges and breaking down final course grades into separate, mutually exclusive categories, then examining the patterns which emerge.

In Table 3, assessment scores on the issue paper are broken down into their sub-ranges. The expected frequencies are for the fully online sections and are derived from the percentage distribution of scores in the pooled face-to-face sections (n = 81 students). The observed frequencies are the pooled totals for the fully online sections (n = 272 students). Again, while the canons of statistical analysis are not strictly met, a chi-square goodness-of-fit test yields a sample value of 49.94 (d.f. = 4) and the null hypothesis of equal distributions is rejected (p<.005). This result suggests that the distributions of scores in face-to-face versus online sections are not the same, but does not reveal which differences between the observed and expected frequencies are driving the result, or how the differences might be explained. Examination of the differences between observed and expected frequencies can shed light on these issues: the “extreme” values (0-.99 and 4.00-5.00) are noticeably greater than expected and the “middle” values (2.00-2.99 and 3.00-3.99) are less than expected.
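The goodness-of-fit calculation for Table 3 can be reproduced directly from the expected and observed frequencies shown there. The following is a minimal sketch in Python with SciPy (used here only for the chi-square tail probability; it is not the authors’ original tooling).

    # Illustrative only: chi-square goodness-of-fit test for Table 3.
    from scipy.stats import chi2

    expected = [35.0, 7.9, 38.8, 73.4, 116.3]   # derived from the face-to-face distribution
    observed = [60, 9, 9, 55, 139]              # pooled fully online sections (n = 272)

    chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    df = len(observed) - 1
    p_value = chi2.sf(chi_sq, df)

    # chi-square is approximately 49.94 with 4 d.f., so the null hypothesis of
    # equal distributions is rejected (p < .005).
    print(f"chi-square = {chi_sq:.2f}, d.f. = {df}, p = {p_value:.2e}")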

This pattern suggests a bimodal distribution of assessment scores for students who took the course online, when compared to students who took it face-to-face. While speculative and a legitimate focus for future research, this bimodality suggests that a form of selection may be operating. Namely, ceteris paribus, among the students who completed the course and thus received an assessment score and a final course grade, those who were better prepared with regard to computer and writing skills, and especially those who were strong in personal discipline, task organization, and time management, may have been more likely to enroll in an online section in the first place. On the other hand, students who did not submit an issue paper and received an assessment score of zero, or who received a low score on what they did submit, may not have been as well prepared. As noted earlier, regardless of the result of the personal readiness assessment, no student was refused registration in an online section. In this context, anecdotally and qualitatively, ACE academic advisors have reported that some students have ignored their counsel about the demanding nature of online delivery and registered anyway.

These empirical findings and tentative speculations as to why they have been found are complemented by an examination of the distributions of final course grades. In Table 4, final course grades are classified by the specific grade reported for each student. Table 4.A examines the basic letter-grade categories, while Table 4.B further disaggregates them into plus/minus sub-categories, which are assigned different point values in the calculation of a student’s GPA and which were used in the calculation of the section GPAs examined earlier. Again, the expected frequencies are for the pooled online sections, as derived from the percentage distribution of final course grades in the pooled face-to-face sections, and the observed frequencies are pooled totals for the online sections.
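To make the construction of the expected frequencies concrete, the sketch below applies the percentage distribution of grades in the pooled face-to-face sections to the total number of online students; the face-to-face grade counts shown are hypothetical placeholders, since only the resulting expected frequencies appear in Table 4.

    # Illustrative only: deriving expected frequencies for the online sections
    # from the grade distribution of the pooled face-to-face sections.
    f2f_counts = {"W": 3, "F": 4, "D": 3, "C": 14, "B": 35, "A": 23}   # hypothetical counts
    online_total = 261   # sum of the observed frequencies in Table 4.A

    f2f_total = sum(f2f_counts.values())
    expected = {grade: count / f2f_total * online_total
                for grade, count in f2f_counts.items()}

    for grade, freq in expected.items():
        print(f"{grade}: expected frequency = {freq:.1f}")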

Data on withdrawals (W), which are available from the University’s records and which can be viewed as a surrogate for attrition, are also included. As noted earlier, after the first week of a session, if they do not wish to continue, a student’s option is to withdraw formally from a course. However, while some students withdraw for non-academic reasons (e.g., illness), it seems likely that, assuming approximate equality in the proportion of such withdrawals from face-to-face and online sections, withdrawals for these reasons would be subsumed in the calculation of expected frequencies for the online sections. Thus, again, while highly speculative and deserving of further study, any “excess” of observed over expected withdrawals might be attributed to academic reasons (e.g., the desire to avoid a low final course grade). As a consequence, students who may have discovered after the first week that they were, indeed, in over their heads could not drop the course, and the only option without an academic penalty (i.e., a low final grade) was withdrawal. Other students may have chosen to complete the course as best they could, while accepting a higher probability of a lower final course grade than they might have earned in a face-to-face section.

Whatever the specific explanation may ultimately be, employing the chi-square goodness-of-fit test, the sample values of the test statistic are 51.320 (d.f. = 5) for Table 4.A and 88.576 (d.f. = 9) for Table 4.B, which leads to rejection of the null hypothesis of equal distributions (p<.005) in both sub-tables. As with assessment scores, these results indicate that the distributions of grades in face-to-face versus online sections are not the same, and examination of the raw differences between observed and expected frequencies shows that “extreme” values (Ws, Fs, and As) are greater than expected and “middle” values (Cs and Bs) are less than expected. The fact that the findings for final course grades and assessment scores parallel each other is not too surprising, as there is clearly a positive correlation between low/high assessment scores and low/high final course grades.

As noted earlier, the issue-paper assignment was weighted at 24% of the final course grade, while the remaining 76% was based on other tasks and assignments, some of which (e.g., the discussion board) were tailored or customized to face-to-face or fully online delivery. In any event, if a student failed to submit this assignment and received a zero, then the maximum possible result in the course would have been 76%, which translated to a maximum final course grade of C+. Finally, again somewhat at odds with Means et al. (2009), and while not disaggregating in quite the way done here, these results are broadly consistent with the empirical findings of other studies which have compared face-to-face to fully online delivery in terms of differential attrition or retention rates and the impact of online learning on final course grades (Block, Udermann, Felix, Reineke, & Murray, 2008; Brown & Liedholm, 2002; Dellana, Collins, & West, 2000; Keefe, 2003; Peterson & Bond, 2004; Richards & Ridley, 1997; Richardson, 2003; Ridley & Husband, 1998; Roblyer, 1999; Rovai & Jordan, 2004; Wells, 2000).

Summary and Conclusion

The preceding analysis of real-time data found that fully online sections of the undergraduate social science course (POS 303) under examination had a significantly higher mean enrollment than face-to-face sections. More to the point, however, the analysis compared two aggregate indicators of student learning outcomes or performance in this course for face-to-face versus fully online course delivery. These indicators were numerical assessment scores for an issue-paper assignment and final course grades as measured by section GPA. For both indicators, essentially no difference was found between face-to-face and fully online course delivery. On the other hand, when these indicators were disaggregated, patterns emerged which can be linked to face-to-face versus fully online delivery. Specifically, assessment scores were disaggregated into sub-ranges on a five-point scale, and the actual grades which students received were examined, along with course withdrawals as a surrogate for attrition. The findings indicated a bimodal distribution of scores and grades for fully online versus face-to-face course sections. However, it should be emphasized, again, that the analysis did not take into account a number of issues which are relevant to empirically based research design, especially selection, outside effects, and maturation.

First, as far as selection is concerned, students were not randomly assigned to face-to-face versus fully online course sections and, as reported by ACE’s academic advisors, some students chose to register for an online section against their counsel, based on the results of the personal readiness assessment. Second, with regard to outside effects, previous experiences of students could not be taken into account, especially for students who had taken courses in online delivery formats before coming to Barry University. Moreover, students may have learned from each other, in the sense that students who had taken courses in online formats, especially with Barry University, were in a position to share their experiences with fellow students as to the challenging nature of fully online delivery. Third, with regard to maturation, the fundamental structure of Blackboard did not change in the fully online delivery of the course over the four-year period under examination. A variety of additional technologies (e.g., LiveMeeting) have recently become available and are now employed in the delivery of the course, but the instructor used essentially the same online delivery format across the four years (2005-2009) covered by this study and introduced these additional technologies only after the timeframe from which the data are drawn. In addition, the instructor made minor adjustments to some of the specifics of course delivery based on end-of-course surveys of students who completed the online sections, as well as the face-to-face sections. Also, the data were pooled for a four-year period; without pooling, the quantity of available data would not have been sufficient for any analysis, let alone for identifying how the systematic or stochastic processes which generated the observed data may have changed and how such changes may have affected the reported findings.

Two additional considerations, which were not under the control of the course instructor, should also be mentioned. First, POS 303 was one of the first courses to be offered by ACE in a fully online format. Since 2005, additional courses have been offered in this format and there has been some evolution in their administration based on cumulative experience. For example, the quantity and quality of the counsel given to students by ACE’s academic advisors has changed with regard to the demanding nature of online learning. Perhaps one lesson here is that academic advisors should continue to augment and to refine their counsel to students who are contemplating registering for an online section of a course in terms of the potential difficulties which they may face as compared to face-to-face sections. Second, during the four-year period which has been examined, the timeframe for the delivery of all ACE courses was shifted from nine to eight weeks.

In conclusion, the conditional nature of what has been learned should be reiterated. First of all, with regard to student learning outcomes or performance in one course, POS 303 Public Policy and Administration, this empirically oriented study has not sought to demonstrate or to discover which mode of delivery (face-to-face or fully online) is “superior” or “inferior” to the other. Rather, by the examination of multiple indicators with real-time comparative data, it has sought to highlight the differences and similarities in student learning outcomes between the two modes of delivery for this type of undergraduate, social science course, as well as the consistencies and differences with previous empirically based analyses of student learning outcomes. Thus, primarily through a quantitative analysis of course data, it has sought to contribute to the ongoing building and refining of knowledge about the differences and similarities between these delivery modes with regard to the issues which were raised at the outset of this study. Finally, perhaps one principal message of the present study is the desirability of examining multiple, disaggregated indicators of student performance, as far as is feasible.

TABLE 1

Profile of ACE Students*

Demographics**

  Age 25 to 54                                                               89.7
  Age 35 to 44 (mode)                                                        34.8
  Mean Age (years)                                                           39
  Male                                                                       33.9
  Female                                                                     66.2
  Resident in South Florida (Broward, Miami-Dade, and Palm Beach counties)   64.4
  African-American, Hispanic                                                  3.9
  African-American, Non-Hispanic                                             30.8
  Caucasian, Hispanic                                                        34.5
  Caucasian, Non-Hispanic                                                    30.8

Employment Status***

  Full-time                                                                  89.2
  Part-time                                                                   6.4
  Not Working                                                                 4.4

Marital Status***

  Single                                                                     43.8
  Married or With Domestic Partner                                           56.2

Family Members***

  No Dependents                                                              33.2
  One or More Dependents                                                     66.8

*Unless otherwise noted, all numbers are percentages.

**All enrolled students (n=2209) in the fall 2008 semester.

***Sample (n=654) of enrolled students in the fall 2007 semester.


TABLE 2

Aggregate Measures of Student Learning Outcomes

A. Mean Assessment Scores for Issue-Paper Assignment

   Section Scores       Face-to-Face Sections (n=6)     Online Sections (n=18)
   Unweighted           2.70 (1.52)*                    2.95 (0.18)*
   Weighted**           2.94                            3.04

B. Final Course Outcomes – Grade Point Averages (GPA)

   Section GPA          Face-to-Face Sections (n=6)     Online Sections (n=18)
   Unweighted           2.84 (0.60)*                    2.78 (0.36)*
   Weighted**           2.86                            2.79



*Standard deviation.

**By number of students per section.


TABLE 3

Assessment Scores by Numerical Category – Fully Online Sections – Pooled Data

                                                                                  Score

Frequencies                 0-.99          1.00-1.99          2.00-2.99          3.00-3.99          4.00-5.00          

Expected*                     35                  7.9                   38.8                 73.4                   116.3

Observed**                   60                    9                       9                    55                      139

   O – E                       +25                 +1.1                 -29.8                -18.4                 +22.7


 *These frequencies are for fully online sections and are derived from the percentage distribution of frequencies in all face-to-face sections.

**These are combined frequencies which were observed in all online sections.


TABLE 4

  A. Final Course Grades by Category – Fully Online Sections – Pooled Data

                                                                              Grade

    Frequencies                 W*               F               D               C               B               A

    Expected                     9.7              12.8          9.7             44.9           110.4         73.9

    Observed                     20                30            7                 33              79              92

    O – E                        +10.3           +17.2        -2.7             -11.9          -31.4         +18.1

  B. Final Course Grades by Category With Plus/Minus Option – Fully Online Sections – Pooled Data

                                                                                            Grade

Frequencies                 W*            F             D         C         C+        B-        B        B+       A-       A

Expected                     9.7         12.8          9.7      22.5      22.4     20.1     51.7    38.6     41.8    32.1

Observed                     20           30            7          32          1        17        41       21        36       56

O – E                       +10.3      +17.2        -2.7       +9.5      -21.4    -3.1     -10.7   -17.6    -5.8    +23.9


*Data on withdrawals were not included in the earlier analyses of assessment scores and GPA.


APPENDIX

POS 303 Public Policy and Administration: Assessment Rubric for Issue Paper Assignment

Each of the five criteria is scored from 5 (best) to 1 (worst); descriptors are provided for scores of 5, 3, and 1, with scores of 4 and 2 falling between the adjacent descriptors.

Written Communication
  5 – Language is clearly organized. Word usage, spelling and punctuation are excellent. Faultless use of grammar and language.
  3 – Writing is sufficient. Adequate use of wording, grammar and punctuation. 3-4 errors in use of grammar and language.
  1 – Writing is rambling and unfocused. Topic and supporting arguments are presented in a disorganized and unrelated way. More than 5 errors in use of grammar and language.

Resource/Citation Style
  5 – All sources are cited correctly and thoroughly (in text and on reference page); an acceptable citation system is used consistently and correctly.
  3 – All sources are cited, the majority cited correctly (in text and on reference page); an acceptable citation system is used correctly for the majority of citations.
  1 – Some sources are cited correctly (in text and on reference page); an acceptable format is not used or is used for only a minority of citations, or no reference page is present.

Oral Communication
  5 – Premise, supporting arguments, and conclusions are logically presented and analyzed systematically. Good command of Social Science vocabulary. Verbal precision is superior. Confidently answers questions.
  3 – Premise is competently presented; arguments are summarized, and a conclusion is provided. Social Science vocabulary is sometimes used imprecisely or incorrectly. Some hesitancy is evidenced in the choice of proper terms and in answering questions.
  1 – Premise is unclear; supporting evidence is absent or illogically presented. A conclusion is either lacking or does not follow from the evidence presented. Social Science vocabulary is often used incorrectly. Does not understand questions and is unable to provide explanation.

Content
  5 – Concepts/issues/facts of specific assignments are presented and analyzed in depth. Links to applicable Social Science theory, concepts, and methods show a clear understanding of their use and possible limitations. A clear idea of the relationship to issues of equity and social justice is evident.
  3 – Concepts/issues/facts of specific assignments are presented and analyzed adequately, with sufficient links to applicable Social Science theory, concepts, and methods to appreciate their utility. A general idea of the possible relationship to issues of equity and social justice is evident.
  1 – Concepts/issues/facts of specific assignments are presented and analyzed incompletely. Links to applicable Social Science theory, concepts, and methods are absent or misunderstood. Little or no idea of the relationship to issues of equity and social justice is evident.

Critical Thinking
  5 – Concepts, assumptions, inferences, and conclusions are clearly and thoroughly expressed. Analysis is logical and thorough.
  3 – Concepts, assumptions, inferences, and conclusions are expressed clearly in most cases, but are not expressed thoroughly. Analysis is mostly logical, but may be absent or flawed in some places.
  1 – Concepts, assumptions, inferences, and conclusions are unclear. Absent or flawed logic may be present. Analysis is minimal or absent, or the logic used in argument may not be discernable.

Total Score: ____________   Average Score (Total Score / 5 Criteria): _____________


References

Allen, I. E., & Seaman, J. (2008). Staying the Course: Online Education in the United States, 2008. Needham, MA: The Sloan Consortium.

Aragon, S. R., Johnson, S. D., & Shaik, N. (2002). The influence of learning style preferences on student success in online versus face-to-face environments. The American Journal of Distance Education, 16(4), 227-244.

Bee, R. H., & Usip, E. E. (1998). Differing attitudes of economics students about Web-based instruction. College Student Journal, 32(2), 258-269.

Block, A., Udermann, B., Felix, M., Reineke, D., & Murray, S. R. (2008). Achievement and satisfaction in an online versus a traditional health and wellness course. Journal of Online Learning and Teaching, 4(1), 57-66.

Brown, B. W., & Liedholm, C. E. (2002). Can Web courses replace the classroom in principles of microeconomics? American Economic Review, 92(2), 444-448.

Brown, S. W., & Kulikowich, J. M. (2004). Teaching statistics from a distance: What have we learned? International Journal of Instructional Media, 31(1), 19-35.

Creswell, J. W. (2009). Research Design, Qualitative, Quantitative, and Mixed Methods Approaches, Third Edition. Los Angeles: SAGE Publications.

Dellana, S. A., Collins, W. H., & West, D. (2000).  On-line education in a management science course: Effectiveness and performance factors. Journal of Education for Business, 76(1), 43-47.

Donavant, B. W. (2009). The new, modern practice of adult education: Online instruction in a continuing professional education setting. Adult Education Quarterly, 59(3), 227-245.

Gilliver, R. S., Randall, B., & Pok, Y. M. (1998). Learning in cyberspace:  Shaping the future. Journal of Computer Assisted Learning, 14, 212-222.

Hansen, N. E., & Gladfelter, J. (1996). Teaching graduate psychology seminars using electronic mail: Creative distance education. Teaching of Psychology, 23(4), 252-256.

Hutti, D. L. G. (2007). Online learning, quality, and Illinois community colleges. Journal of Online Learning and Teaching, 3(1), 18-29.

Kassop, M. (2003). Ten ways online education matches, or surpasses, face-to-face learning. The Technology Source, May/June.

Keefe, T. J. (2003). Using technology to enhance a course: The importance of interaction. EDUCAUSE Quarterly, 1, 24-34.

Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2009). Evaluation of Evidence-Based Practices in Online Learning Studies. Washington, DC: U.S. Department of Education, Office of Planning, Evaluation, and Policy Development.

O’Neal, K. (2009). The comparison between synchronous online discussion and traditional classroom discussion in an undergraduate education course. Journal of Online Learning and Teaching, 5(1), 88-96.

O’Sullivan, E., Rassel, G. R., & Berner, M. (2008). Research Methods for Public Administrators, Fifth Edition. New York: Pearson Longman.

Peterson, C. L., & Bond, N. (2004). Online compared to face-to-face teacher preparation for learning standards-based planning skills. Journal of Research on Technology in Education, 34(4), 345-360.

Richards, C. N., & Ridley, D. R. (1997). Factors affecting college students’ persistence in on-line computer-managed instruction. College Student Journal, 31(4), 490-495.

Richardson, J. T. E. (2003).  Approaches to studying and perceptions of academic quality in a short Web-based course. British Journal of Educational Technology, 34(4), 433-442.

Ridley, D. R., & Husband, J. E. (1998). Online education:  A study of academic rigor and integrity. Journal of Instructional Psychology, 25(3), 184-188.

Rivera, B., & Rowland, G. (2008).  Powerful E-learning:  A preliminary study of learner experiences. Journal of Online Learning and Teaching, 4(1), 14-23.

Roblyer, M. D. (1999).  Is choice important in distance learning? A study of student motives for taking Internet-based courses at the high school and community college levels. Journal of Research on Computing in Education, 32(1), 157-171.

Rovai, A. P., & Jordan, H. M. (2004). Blended learning and sense of community: A comparative analysis with traditional and fully online graduate courses. International Review of Research in Open and Distance Learning, 5(2), 1-13.

Shiratuddin, N. (2001). Internet instructional method:  Effects on student performance. Educational Technology & Society, 4(3), 72-76.

Tallent-Runnels, M. K., Thomas, J. A., Lan, W. Y., Cooper, T. C., Ahern, S. M., Shaw, S. M., & Liu, X. (2006). Teaching courses online: A review of the research. Review of Educational Research, 76(1), 93-135.

Thirunarayanan, M. O., & Perez-Prado, A. (2002). Comparing Web-based and classroom-based learning:  A quantitative study. Journal of Research on Technology in Education, 34(2), 131-137.

Weiss, C. H. (1998). Evaluation, Second Edition. Upper Saddle River, NJ: Prentice Hall.

Wells, J. G. (2000). Effects of an on-line computer-mediated communication course, prior computer experience and Internet knowledge, and learning styles on students’ Internet attitudes: Computer-mediated technologies and new educational challenges. Journal of Industrial Teacher Education, 37(3), 22-53.

Yoon, S. (2003). In search of meaningful online learning experiences. New Directions for Adult and Continuing Education, 100, Winter, 19-30.


Online Journal of Distance Learning Administration, Volume XIII, Number IV, Winter 2010
University of West Georgia, Distance Education Center