Cases of Quality: Case Studies of the Approval and Evaluation of K-12 Online and Blended Providers


Michael K. Barbour
Touro University California
mkbarbour@gmail.com

Tom Clark
Clark Consulting
tom@tomclarkconsulting.net

Jason Siko
Madonna University
sikojp@gmail.com

Kristen DeBruler
Michigan Virtual Learning Research Institute
kdebruler@michiganvirtual.org

Justin Bruno
Michigan Virtual Learning Research Institute
jbruno@michiganvirtual.org

E. Vaughn Wilson
University of the Cumberlands
vwilson003@gmail.com


Abstract

State-level departments of education vary in their mechanisms for monitoring online courses and programs. This study reviewed various state models for initial and ongoing evaluation of online courses. Five constructs were identified through this review, and examples from Georgia, Maryland, California, Washington, and Colorado were detailed. The report concludes with potential models and key guidelines for states to consider when developing policy to ensure quality online education for K-12 students.

Introduction

The use of online learning in K-12 education has expanded significantly throughout the United States and internationally (Barbour & LaBonte, 2017; Barbour, Brown, Hasler Waters, Hoey, Hunt, Kennedy, Ounsworth, Powell, & Trimm, 2011; Gemin, Pape, Vashaw, & Watson, 2015; LaBonte & Barbour, 2018). Recent estimates indicate that anywhere from two to six million U.S. K-12 students are engaged in some form of online or blended learning (Ambient Insight, 2012; Barbour, 2017; Gemin & Pape, 2017; Wicks, 2010). The purpose of this study was to examine existing policies and practices related to the evaluation and approval of K-12 online learning in select U.S. states and to use those examples to inform cyber and online provider evaluations.

States differ in their approval process for both online course providers and online courses themselves (Barbour, Clark, DeBruler, & Bruno, 2016). The most common type of approval granted was front-end approval of full-time online program providers; few states required both front-end approval and ongoing performance evaluations. Student performance growth has been shown to be weak in many fully online schools (Woodward et al., 2015). Blended schools have been identified as a potential solution to the weak student growth found in fully online schools; however, as blended programs have expanded, state policies and regulations have tended to lag behind.

In this article, we begin by reviewing current literature on the evaluation of online courses and online learning in general. This is followed by a description of our review of current state policies with respect to initial and ongoing evaluation of online courses and course providers. Next, we describe five constructs that emerged from our research, and we provide examples from a variety of states to describe each construct in greater detail. We conclude with a discussion of our findings, including potential future research directions and policy implications for states moving forward.

Literature Review

There are many ways to measure the quality of K-12 online learning. For example, Black, DiPietro, Ferdig, and Polling (2009) began the process of developing a validated survey to measure best practices of K-12 online instructors, using an instrument designed from an earlier study of K-12 online teachers (DiPietro, Ferdig, Black, & Preston, 2008). The most common approach used by states to assess quality in K-12 online learning is evaluation of course content (Barbour et al., 2016). This method of evaluation has a history almost as long as K-12 online learning itself. Early K-12 online learning initiatives, such as the Virtual High School (VHS) Collaborativei and the Electronic Classroom of Tomorrow, developed design standards that were used in the development of their online course content. In the case of the VHS Collaborative, these standards were used as the basis of an online professional development course that all potential teachers had to complete (Zucker & Kozma, 2003). Over the past fifteen years, organizations like the National Education Association (NEA) and the Southern Regional Education Board (SREB) have also released 'national standards' to measure the quality of online course content. Quality standards often draw upon or incorporate content from prior standards efforts. Comparisons of standards have been completed on some aspects of quality. For example, Kennedy and Archambault (2012) developed a crosswalk of standards related to online teaching. Further, Rozitis (2017) conducted a Delphi study examining online instructional design standards from seven organizations encompassing the K-12, higher education, and corporate domains. Over three rounds, participants identified 10 standards that should be applied to K-12 teachers modifying their own courses for online delivery. Again, these standards generally examine the initial design of the course itself.

In 2007, the International Association for K-12 Online Learning (iNACOL) released the first edition of their National Standards for Quality Online Courses. "As a result of the research review, [iNACOL chose] to fully endorse the work of the SREB Quality Online Course Standards as a comprehensive set of criteria…. with an additional rubric for the inclusion of 21st century skills" (North American Council for Online Learning, 2007, p. 2). In 2011, iNACOL (2011) released a second edition of their National Standards for Quality Online Courses, based on the work of the California Learning Resource Network (CLRN) and the Texas Education Agency's Texas Virtual School Network (TxVSN) (Smith, Bridges, & Lewis, 2013). The second edition includes a more refined rubric, developed by the TxVSN, which can be used to evaluate the quality of online course content. While there has been some research conducted using these standards (e.g., Huett, Huett, & Ringlaben, 2011), the standards themselves have never actually been validated through a systematic research process. Jackson (2017), in her review of these standards in relation to various learning theories, found strong cognitivist leanings. She argued for more attention to be paid to teacher presence and community building in the standards. Recent efforts to test the reliability and validity of the standards have not been successful (Adelstein & Barbour, 2016a, 2016b, 2017, 2018).

To date, the only research-based initiative examining the quality of online course content has been the Quality Matters (QM) program.ii Originally developed through a grant from the U.S. Department of Education's Fund for the Improvement of Postsecondary Education (Shattuck, 2007), this proprietary program provides a review process based on 40 specific standards grouped under eight general standards (Legon & Runyon, 2007). Since 2005, each specific QM standard has been supported by a full review of the published research literature in post-secondary education,iii where far more research is available. QM has also developed a separate K-12 focused rubric that is supported by the literature, aligned with the iNACOL standards,iv and verified through extensive application. QM offers training and certification programs for higher education, K-12 education, publishers, and continuing and professional education programs. Subscribers can use the QM process to certify online courses through research, best practices, and external peer review. Approximately 5% of the 800 current institutional subscribers are K-12 online learning programs (QM, 2013). However, with an annual fee of $500 to $3,000 – depending on the nature of services to which the organization subscribes – many K-12 programs are unable to afford the financial commitment to access this evidence-based course certification process.

It should also be noted that QM is working with the Virtual Learning Leadership Alliance (VLLA) to update the iNACOL National Standards for Quality Online Courses, Online Teaching and Online Programs (Virtual Learning Leadership Alliance, 2018), an effort that began with literature reviews summarizing the relevant literature correlated with each set of standards (Kennedy, Tomaselli, & Stimson, 2018; Shattuck & Burch, 2018a, 2018b). Unfortunately, each of the literature reviews implies that any reference to a particular standard in a piece of literature constitutes support for that standard – regardless of whether the literature was based on actual research. A close reading of the document reveals that the authors intended the matrix to indicate simply a relationship between the piece of literature and the standard. However, it calls into question whether a reader, particularly a policymaker or legislator, would understand that nuance. For example, with respect to the online course design standards, the authors described one piece of literature as follows:

Barbour (2007) explored teacher and developer perceptions of effective web-based content for secondary school students and found that the following guidelines were key for course developers to keep in mind and use for future design work: simple navigation but diverse content presentation; summarizing and personalizing content; clear instructions; and content targeted to average and below average students. (Kennedy et al., 2018, p. 3)

The authors later indicate a relationship between this study and iNACOL Standard A. The document does not mention that those guidelines were based on interviews with six individuals who had designed at least one online course for a single, supplemental virtual school. The researcher who generated those guidelines did not review the online courses those individuals created to determine whether the interviewees incorporated the guidelines into their own designs; did not compare student performance in online courses where the guidelines were employed or absent; and did not interview the online teachers of those students to determine whether their perceptions were consistent. Essentially, these guidelines were the opinions of six individuals from a single online program that relies primarily on a synchronous model of instruction. The fact that the iNACOL standards have not been found to be reliable or valid, together with these questionable stages of the refresh effort, calls the overall process into question.

How states measure the quality of their K-12 online programs is of particular importance for policymakers, especially considering that many state decision makers have outsourced their online education to corporations (Vadell, 2013). There are several other aspects of K-12 online learning, beyond online course design, that could be used to measure quality. For example, iNACOL released National Standards for Quality Online Programs (Pape, Wicks, Brown, & Dickson, 2008)v, as well as a report focused on Measuring Quality from Inputs to Outcomes: Creating Student Learning Performance Metrics and Quality Assurance for Online Schools (Patrick, Edwards, Wicks, & Watson, 2012). Interestingly, few of these measures of the quality of K-12 online learning have actually been supported by empirical research. In fact, the legislative activity related to K-12 online learning over the past five years shows that very few bills designed to ensure quality or accountability in K-12 online learning have been introduced in state legislatures (Molnar et al., 2013, 2014, 2015, 2017); and even fewer of these bills have actually been enacted into law.

In addition to the lack of political will, the dearth of research linking course design to student outcomes has other potential causes. Lokey-Vega, Jorrín-Abellán, and Pourreau (2018) referred to the relative youth of the field of K-12 online learning research. The authors noted how the broader field of distance education matured from media comparison studies (i.e., how well do students learn online when compared to traditional methods?) to studies examining specific online pedagogies (i.e., how can online learners learn better?). Barbour (2018) mirrored this sentiment. Both sets of authors acknowledged that, while there are similarities between adult online learners and those in K-12, important contextual differences exist in addition to the inherent developmental differences between K-12 and adult learners. They also noted that contextual differences exist within the K-12 domain itself; in particular, between students taking one or more supplemental online courses and students enrolled full-time in an online school.

In summary, the primary mechanism by which online courses and programs have been evaluated is evaluation of the initial course design, which is generally based on some set of design standards that have evolved over the past decade or so. The claim that these standards have strong empirical backing has been called into question. Finally, little evidence exists that this process results in improved student outcomes, or that there is a political desire in states to incorporate student outcomes into the evaluation and approval process. This is troubling given that common themes in K-12 online learning include poor student performance, high levels of students repeating courses, and high drop-out rates (Barbour, 2017).

Methodology

This study was a part of a larger initiative that examined policies and practices used to evaluate U.S.-based K-12 online learning programs (Barbour, Clark, DeBruler, & Bruno, 2016). Using case study methods (Stake, 1995), the study was guided by the following research questions:

  1. What are individual state policies and practices related to initial online learning approval?
  2. What are individual state policies and practices related to on-going online learning evaluation?

Yin's (2003) approach to case studies was chosen because it closely matched the needs of the study, in which the boundaries between the phenomenon being studied and its context are not clearly evident. Additionally useful is the opportunity for single cases – individual units of analysis within the collective case study – to help feed into, understand, and inform the larger phenomenon (Patton, 2002). For the current study, each state represents an individual unit within the larger case – the United States.

From March to August of 2013, data were collected from extant documents for qualitative analysis (Bowen, 2009; Hodder, 2000). Data sources included documents highlighting K-12 online learning policy such as Keeping Pace with K-12 Online and Blended Learning (Watson, Murin, Vashaw, Gemin, & Rapp, 2012, 2013) and Virtual Schools in the U.S. 2013: Politics, Performance, Policy, and Research Evidence (Molnar et al., 2013). These documents helped to provide a state-by-state view of current legislation, which was the focus of the study.

Following the document analysis, we distributed a web-based survey to Department of Education officials in order to make sure we had all available information to answer the research questions (Marshall & Rossman, 1999); nine states completed the survey during its initial administration from July to September 2013 (see Appendix A). States that did not respond were called, and five more states completed the survey. Although only 14 states completed the survey, the detailed review of the legislation and policy documents collected online from state education agencies and other official state websites helped to round out the data necessary to complete the 50-state analysis.

When all data were collected, one of the researchers on the team coded the data using open coding, in which, "to uncover, name, and develop concepts, we must open up the text and expose the thoughts, ideas, and meanings contained therein" (Strauss & Corbin, 1990, p. 102). Using the research questions as guides, codes evolved from the data. The focus was on coding data that helped the researchers understand the processes for approval and evaluation of online courses and online programs. The iterative coding process helped the team gain a better understanding of the existing phenomenon (Emerson, Fretz, & Shaw, 1995). Once open coding was complete, the team met to negotiate, agree upon, and confirm the codes.

The authors acknowledge that the original data collected for this study are dated. As such, we have attempted to conclude each of the cases with a comment on the currency of the information presented.

Results

In an earlier article that reported on the state of national policy (see Barbour, Clark, DeBruler, & Bruno, 2016), we found that state policy on online and blended learning appeared to be lagging behind school practice. State policies related to online learning quality were limited in nature. This was troubling in light of relatively weak student performance in fully online schools. However, we noted that blended schools were growing faster and had better student performance overall, which might help address the performance issue. We also found great variation among states in their policies. To address this issue, we recommended disseminating models of promising state practices in areas such as initial course provider approval or ongoing course evaluation.

In conducting the state policy analysis and attempting to understand what states must consider when looking to implement new approval measures or critically evaluate existing measures, five dimensions of consideration emerged: level of evaluation and approval, approval requirement, geographic reach, mode of instruction, and approval and evaluation procedures.

Level of Evaluation and Approval

Level of evaluation and approval refers to the unit under review. States typically focus either at the provider level, developing approval and evaluation criteria for entire programs, or at the course level, requiring each course to undergo approval regardless of provider approval status. Provider and course level evaluations are not mutually exclusive, and some states, such as Georgia, have developed distinct evaluation and approval criteria for each level.

Case Example: Georgia. Under SB 289 (2012),vi all local school systems must provide opportunities for participation in part-time and full-time virtual instruction program options to all public school students enrolled in grades three through 12 who reside within their attendance boundaries. In addition to mandates for access to online learning, Georgia also requires approval and ongoing evaluation at the provider and course levels.

Provider level. All virtual instruction programs in Georgia must be approved by the Department of Education, and the Department will annually provide local school systems with a list of approved providers. To be approved, providers must document an extensive list of requirements. Additionally, as part of the provider approval and contract process, providers must submit curriculum plans detailing how student services will be provided and how proficiency in state and national standards will be measured.

Recent laws have strengthened state-level approval mechanisms for providers in Georgia. In 2016, the State of Georgia prohibited the offering of virtual instruction to out-of-system students by local school systems with a College and Career Ready Performance Index below the state average in the prior year.vii This prohibition expires on June 30, 2019. In 2018, the State mandated that the State Department of Audits and Accounts develop an annual report to the State Board of Education, Governor, and Legislature on the performance and quality of state-chartered virtual schools.viii

Course level. HB 175 (2012)ix established a clearinghouse of distance learning courses through which local school systems and charter schools may offer their computer-based courses to students in other local school systems and charter schools. It also mandates that the Georgia Department of Education review the content of each course prior to including it in the clearinghouse to ensure that it meets state curriculum standards.

There have been no significant updates to the course level evaluation and approval since the collection of the original data.x

Case Example: Maryland. Maryland is an example of a state that focuses approval exclusively at the course level. The Maryland Virtual Learning Opportunities program, managed by the Maryland State Department of Education (MSDE), provides online courses to in-state students through the Maryland Virtual School (in collaboration with local districts) and oversees the state-legislated course approval process. Maryland does not have any multi-district or statewide online programs, and virtual charter schools are prohibited in the state. All courses in which more than 80% of content and instruction is delivered online must be approved by MSDE.

Online courses must be taught by teachers meeting highly qualified status under NCLB and certified in the content of the course. Courses can be reviewed either by a team of reviewers at the MSDE or at the local district; MSDE also recognizes courses reviewed and certified by QM. Reviews are conducted by panels of highly qualified teachers who examine the courses for alignment with Maryland content (including Common Core) and national content standards and for alignment with the MSDE instructional design standards. The reviews cover 30 criteria in three areas: curriculum, instructional design and student assessment; legal requirements; and accessibility. The rubric has recently been updated with revised scoring criteria, and the new tool includes additional criteria. The full rubric with standards for reviewing online student courses can be found on the Maryland Virtual Learning Opportunities website.xi

This program is still in place in Maryland. A 2018 meeting of the Maryland State Board of Education included a presentation entitled “Online Course Overview,” which outlined the processes to review and approve online courses based on the 2012 legislation described above (Salmon, 2018).xii

Approval Requirements

Approval requirement refers to whether the approval and evaluation procedures (at any level) are required by the state or are optional. Required approval and evaluation are often necessary to offer online programs or courses in a particular state and/or are tied to state funding. In cases of optional approval and evaluation, the procedures are typically neither mandated nor necessarily developed by the state; however, there typically exists some additional external pressure or motivation to undergo approval.

Case Example: California. California does not currently have in place a state-mandated approval process for online courses or providers, leaving discretion over course purchasing and credit-granting to individual schools and districts.

CLRN Review Process. The California Learning Resource Network (CLRN) was established in 1999 as part of the Statewide Education Technology Services Learning Resource contract, awarded by the California Department of Education. CLRN's primary focus is to provide online course evaluations with regard to their alignment with Common Core or state content standards and nationally recognized quality standards. CLRN also reviews open educational resources (OER) and supplemental electronic learning resources for their alignment to content standards.

CLRN's reviewers receive training on California's Standards for Evaluating Instructional Materials for Social Content,xiii which aim to ensure cultural and racial equitability in public school curricula. Additionally, reviewers become well-versed in iNACOL's National Standards for Quality Online Courses. Review teams, consisting of three members, conduct reviews to find evidence of instances within each course that demonstrate alignment to content and quality standards. Review findings are published in CLRN's course review repository and remain there for three years or until a course is discontinued, whichever comes first. CLRN also certifies those courses that meet fifteen select course quality standards – known as "Power Standards" – with the status of CLRN-Certified®. Approximately 50% of courses with current reviews are CLRN-Certified®.

Also of consideration is the process by which online courses are used to fulfill university admission requirements within the state. The University of California revised its online course policy for the 2013-2014 school year in an effort to uphold the University's Board of Admissions and Relations with Schools Statement on K-12 Online Learning. The statement outlines a number of requirements that online courses must meet in order to be used to fulfill admission requirements to the University of California, including an adherence to the iNACOL National Standards for Quality Online Courses. Courses published by public online schools or course publishers serving public institutions with a single set of content standards (California state content standards or Common Core State Standards) must first achieve CLRN-Certified® status to earn "a-g" approval, and must additionally satisfy 80% of the remaining iNACOL quality standards.

The CLRN process described above in California is still active.xiv However, it should be noted that recent research has examined the perceived reliability of the "Power Standards" – and, to a lesser extent, the iNACOL standards in general – with quite mixed results (Heller, 2018).

Geographic Reach

Geographic reach refers to the differentiation of approval and evaluation processes based on the reach of the online course or provider (multi-district, single-district). Instead of a singular approval process, states may develop specific approval and evaluation requirements and criteria for providers who wish to offer their courses to students outside of their resident district.

Case Example: Washington. Washington is a unique example in that the state originally developed approval and evaluation criteria in response to multi-district providers and has adapted (with some changes) the multi-district process to single-district providers with limited success. According to RCW 28A.250xv and WAC 392-502-020,xvi online providers must be approved by the State of Washington's Superintendent of Public Instruction in order for districts to collect state funding, to the extent otherwise allowed by state law, for courses offered by those providers in accordance with Washington law. This approval process makes a distinction between multi-district and single-district online course providers, as well as a third path known as the "affiliate option." Single-district providers may not exceed 10% of the total program headcount in students who reside outside of the geographic boundaries of the district. If single-district programs exceed this 10% threshold, they must apply for multi-district online provider approval. Affiliate programs may serve a population whose out-of-district contingent is 10% or more without submitting to a full review if the program is completely outsourced to a previously approved provider that administers the LMS, curriculum, and instruction. This option essentially attaches the program's approved status to that of its contracted program provider, and was intended to avoid subjecting identical programs to separate full reviews.

Online course provider approval was developed in response to a perceived need to assure the on-going quality of multi-district providers, and was originally targeted toward those providers. In 2011, the state legislature expanded the approval legislation to cover all online course providers, producing two alternative paths to approval in addition to the multi-district provider path: the single-district and affiliate options.

The overall process in Washington remains in place, but significant changes have been made. In 2014, rules regarding continuing approval were changed to include student outcomes when considering re-approval of courses (Hunter & St. Pierre, 2016). However, in 2015, the Office of Superintendent of Public Instruction asked for a one-year delay in implementation. According to the 2018 report to the legislature, student outcome data are still not being used in the approval process (Nelson, 2018).xvii

Mode of Instruction

Mode of instruction refers to differential approval and evaluation procedures based on how course content and instruction are delivered. A necessary pre-condition for this dimension is a clear definition of online course/learning and blended course/learning, which specifies the delivery, communication, and contact expectations and sets a threshold for the distinction between online and blended. The Maryland online course review is required for all courses in which more than 80% of the content and instruction is delivered online, and Minnesota requires provider approval in cases where more than 50% of instruction is delivered online (Watson, Murin, Vashaw, Gemin, & Rapp, 2013). States may set their own thresholds, but it is worth considering where that line rests and the impact it may have on blended and online learning programs in each state.

Evaluation and Approval Procedures

This dimension refers to the nature of approval and evaluation procedures as a one-time requirement or an on-going (typically annual) requirement. There is considerable variability in this dimension across states, even in those with similar models.

Case Example: Colorado. Colorado is an example of a state that currently requires only front-end approval, having removed many of its on-going monitoring and reporting requirements. According to the Rules for the Administration, Certification and Oversight of Colorado Online Programsxviii released by the Colorado State Board of Education, multi-district online programs must be certified by the Department and approved for operation by the Authorizer. Authorizers must submit an application detailing evidence of adequate resources and capacity to oversee the online program. Colorado HB 11-1277 (2011)xix repealed mandates around multi-district online programs enacted in 2007. These mandates included the creation of a Division of Online Learning, established to develop a review process whereby the division would review multi-district programs two years after initial certification, establish annual reporting requirements, evaluate reports from online programs, and publish annual reports concerning online programs. These on-going reporting requirements were removed, and HB 11-1277 introduced new, significantly reduced, reporting requirements, mandating that each online program submit data annually to its authorizers and the Department of Education regarding financial and accounting practices and any proposed changes to multi-district program offerings.

At present, the overall process in Colorado remains largely unchanged. However, the specific requirements and criteria were amended several times in 2012 (HB 12-1212, HB 12-1240, and HB 12-1124), and most recently in 2014 (HB 14-1382). The 2014 amendment clarified language around what constituted an online program, adding a provision requiring online providers to document student attendance.xx The amendment also advised a shift from certifying individual programs to certifying the authorizers of such programs, as well as the creation of a task force to oversee this transition and develop best practices for certification and authorization. Finally, the amendment advised the creation of pilot programs by authorizers and providers to test multiple aspects of online learning, including pupil accounting, best practices, and teacher effectiveness.

Summary of the Results

The evaluation and approval processes in Georgia, Maryland, California, Washington, and Colorado in 2013-14 were presented as case examples. Providers in Georgia sought approval at the state level; individual courses in Georgia were also approved at the state level. Maryland banned virtual charter schools, but allowed individual school districts to offer online courses that were approved by the Maryland State Department of Education (MSDE). Maryland's apprehension in embracing fully online charter schools could be related to documented weak student performance in schools that are fully online (Barbour et al., 2016).

California, which had one of the most developed approval processes, allowed individual school districts to approve online courses. However, as it pertained to university admission, the University of California system required that online courses be approved via the California Learning Resource Network (CLRN) to ensure that accepted online courses were "a-g" approved for admissions purposes. Washington had implemented single-district and multi-district approval processes for providers of online courses; providers in Washington might also seek approval via an affiliate option. Colorado required only front-end approval, as the state had repealed many prior mandates surrounding multi-district providers; ongoing provider reporting requirements had also been reduced in the state.

Discussion

In the previous section, we outlined the guidelines and policies of several states that were either notable or typical when it came to their mechanisms for approving and evaluating online courses. They range in scope from a process for evaluating course content and design only, to a process that involves both courses and providers, with differing levels of approval based on whether the course is primarily for students in a single district or whether it serves a wider geographic range. It is difficult to state whether one method or process of approval is superior to another. What seems to be lacking in most of the case studies is a process for ongoing review, with only Washington and Colorado adding elements of student performance and attendance to the re-approval process. With that said, Washington has delayed the implementation of student outcome data for continuing approval, and the state estimates that inclusion of the data for this purpose will not begin until the 2019-2020 school year (Nelson, 2018). Further, while Colorado's 2014 law suggested the creation of pilot programs that examined the use of attendance, no data have been presented to date, and the next official report on online learning from the state is expected to be published in 2019.xxi While Georgia does have ongoing approval, it is limited to reviewing whether the course contains the required content for that subject.

Further, there seems to be little consideration of the success of these courses. While student performance should not be the sole indicator of whether a course is well designed, there is a lack of research on whether the online approval process has a significant effect on student achievement. Given some of the dismal results for students enrolled in online courses (Barbour, 2017), one should seriously question whether any approval process has merit. The link between online standards and course outcomes has weak support at the post-secondary level as well. First, many studies only examine student and faculty satisfaction with QM-designed courses (Ralston-Berg & Nath, 2008). While some studies show significant improvements in grades when online courses are redesigned using the QM rubric, other factors, such as the instructor's experience with teaching online, make direct comparisons difficult (Adair & Shattuck, 2015).

Further, while 2001's No Child Left Behind Act required that teachers be 'highly qualified,' federal mandates and incentives (e.g., Race to the Top, the reauthorization of the Elementary and Secondary Education Act as the Every Student Succeeds Act, etc.) have reshaped the factors that determine the effectiveness of a teacher. Most notably, these legislative acts have pushed for "student growth" calculations. Although the American Statistical Association argued against value-added modeling in teacher evaluations (Morganstein & Wasserstein, 2014), such measures persist; given the poor performance of students in online courses, K-12 instructors may become reluctant to teach courses online regardless of the approval mechanism. Further, the effectiveness rating is centered on success in a face-to-face classroom. In other words, teaching online requires a different – yet overlapping – skill set than teaching in a traditional classroom. Someone who is rated effective in a traditional classroom is not guaranteed to be effective in an online classroom. While Adair and Shattuck (2015) noted how factors other than design can influence outcomes, and that experience with the medium may matter, K-12 teachers are often underprepared – if prepared at all – to teach in the online medium (Barbour, Siko, Gross, & Waddell, 2013). This could have multiple ramifications for the approval process, and should be approached with caution. For example, if a course is known to have a high attrition or failure rate, teachers could be reluctant to teach that course knowing that their ability to maintain an effective rating may be impacted.

What was not clear in this study is what happens in circumstances where both providers and individual courses are subject to approval. If the provider meets all of the requirements, but the courses are not approved, are there consequences for the provider? This would have different effects on providers that primarily offer supplemental online courses than on providers that specialize in full-time online programs, such as cyber charter schools. Again, ongoing state evaluation that includes student performance is lacking, leaving word-of-mouth and market forces as the only check on continued enrollment in poorly performing courses in many cases.

Conclusions and Implications

In our overall study, we found a wide range of policies and practices related to the evaluation of online learning in the 50 U.S. states. Each state has a unique policy and practice environment, which those recommending potential models and guidelines to a given state must keep in mind. Ensuring quality in online courses is the most common evaluation approach. As illustrated through in-depth profiles, the course approval processes of the CLRN and Maryland Virtual Learning Opportunities Program are good examples to consider. QM, the most validated approach to certifying online course quality of which we are aware, has seen limited adoption at the K-12 level. A more extensive validation process for widely accepted course standards would help states justify their use in the evaluation of the quality of K-12 online courses.

Some states have sought to limit access to full-time online learning programs, and the research evidence suggests that there is some merit to this approach (Molnar et al., 2017). However, rigorous monitoring and performance requirements should allow states, over time, to ensure that full-time programs are of high quality. The state of Washington serves as a good example of ways to provide an additional focus on the quality of full-time online learning programs when compared to part-time programs, making the most effective use of scarce state evaluation resources. Rigorous state requirements may also provide an incentive for full-time program providers to move to blended learning models, where it is easier to achieve quality learning results.

The education of students participating in full-time online learning programs, as opposed to taking one or two courses online, should be of special concern to states. Effective processes for evaluating the quality of online programs are needed. Finally, we believe that the use of periodic external program audits by dedicated teams of experts can play a valuable role in ensuring program quality, and can provide a mechanism for initiating program shutdown when absolutely necessary. It can also provide an avenue for helping programs remediate quality problems.


 

References

Adair, D., & Shattuck, K. (2015). Quality Matters™: An educational input in an ongoing design-based research project. American Journal of Distance Education, 29(3), 159-165.

Adelstein, D., & Barbour, M. K. (2016a). Redesigning design: Field testing a revised design rubric based on iNACOL quality course standards. International Journal of E-Learning & Distance Education, 31(2). Retrieved from http://www.ijede.ca/index.php/jde/article/view/976

Adelstein, D., & Barbour, M. K. (2016b). Building better courses: Examining the content validity of the iNACOL national standards for quality online courses. Journal of Online Learning Research, 2(1), 41-73. Retrieved from http://www.learntechlib.org/d/171515

Adelstein, D., & Barbour, M. K. (2017). Improving the K-12 online course design review process: Experts weigh in on iNACOL national standards for quality online courses. International Review of Research in Open and Distance Learning, 18(3). Retrieved from http://www.irrodl.org/index.php/irrodl/article/view/2800

Adelstein, D., & Barbour, M. K. (2018). Redesigning the iNACOL standards for K-12 online course design. Journal of Online Learning Research, 4(3), 231-260. Retrieved from https://www.learntechlib.org/primary/p/178229/

Ambient Insight. (2012). 2012 Learning technology research taxonomy: Research methodology, buyer segmentation, product definitions, and licensing model. Monroe, WA: Author. Retrieved from http://www.ambientinsight.com/Resources/Documents/AmbientInsight_Learning_Technology_Taxonomy.pdf

Barbour, M. K. (2007). Principles of effective web-based content for secondary school students: Teacher and developer perceptions. Journal of Distance Education, 21(3), 93-114. Retrieved from http://www.ijede.ca/index.php/jde/article/view/30

Barbour, M. K. (2017). K-12 online learning and school choice: Growth and expansion in the absence of evidence. In R. A. Fox & N. K. Buchanan (Eds.), School Choice: A Handbook for Researchers, Practitioners, Policy-Makers and Journalists (pp. 421-440). New York: John Wiley & Sons Ltd.

Barbour, M. K. (2018). Lessons for K-12 distance, online and blended learning from research in higher education. Lansing, MI: Michigan Virtual Learning Research Institute at Michigan Virtual University. Retrieved from https://mvlri.org/research/publications/examining-online-research-in-higher-education-what-can-we-replicate-in-k-12/

Barbour, M. K., Brown, R., Hasler Waters, L., Hoey, R., Hunt, J., Kennedy, K., Ounsworth, C., Powell, A., & Trimm, T. (2011). Online and blended learning: A survey of policy and practice from K-12 schools around the world. Vienna, VA: International Association for K-12 Online Learning. Retrieved from http://www.inacol.org/cms/wp-content/uploads/2012/11/iNACOL_IntnlReport2011.pdf

Barbour, M. K., Clark, T., DeBruler, K., & Bruno, J. A. (2016). Evaluation and approval constructs for online and blended courses and providers: A national overview. Journal of Applied Educational and Policy Research, 2(1). Retrieved from https://journals.uncc.edu/jaepr/article/view/469

Barbour, M. K., & LaBonte, R. (2017). State of the nation: K-12 e-learning in Canada. Cobble Hill, BC: Canadian E-Learning Network. Retrieved from http://k12sotn.ca

Barbour, M.K., Siko, J., Gross, E., & Waddell, K. (2013). Virtually unprepared: Examining the preparation of K-12 online teachers. In R. Hartshorne, T. Heafner, & T. Petty (Eds.), Teacher education programs and online learning tools: Innovations in teacher preparation (pp. 60-81). Hershey, PA: IGI Global.

Black, E., DiPietro, M., Ferdig, R., & Polling, N. (2009). Developing a survey to measure best practices of K-12 online instructors. Online Journal of Distance Learning Administration, 12(1). Retrieved from http://www.westga.edu/~distance/ojdla/spring121/black121.html

Bowen, G. A. (2009). Document analysis as a qualitative research method. Qualitative Research Journal, 9(2), 27-40.

DiPietro, M., Ferdig, R. E., Black, E. W., & Preston, M. (2008). Best practices in teaching K-12 online: Lessons learned from Michigan Virtual School teachers. Journal of Interactive Online Learning, 7(1), 10-35. Retrieved from http://www.ncolr.org/jiol/issues/pdf/7.1.2.pdf

Emerson, R. M., Fretz, R. I., & Shaw, L. L. (1995). Writing ethnographic fieldnotes. Chicago, IL: University of Chicago Press.

Gemin, B., & Pape, L. (2017). Keeping pace with K-12 online learning, 2016. Durango, CO: Evergreen Education Group. Retrieved from https://www.evergreenedgroup.com/keeping-pace-reports/

Gemin, B., Pape, L., Vashaw, L., & Watson, J. (2015). Keeping pace with K-12 digital learning: An annual review of policy and practice. Durango, CO: Evergreen Education Group. Retrieved from https://www.evergreenedgroup.com/keeping-pace-reports/

Heller, K. (2018). The University of California’s use of the iNACOL standards for online classes. Journal of Online Learning Research, 4(1), 5-31. Retrieved from https://www.learntechlib.org/primary/p/180972/

Hodder, I. (2000). The interpretation of documents and material culture. In N. K. Denzin, & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed.) (pp. 703-715). Thousand Oaks, CA: Sage Publications.

Huett, K. C., Huett, J. B., & Ringlaben, R. (2011). From bricks to clicks: Building quality K–12 online classes through an innovative course review project. Online Journal of Distance Learning Administration, 14(4). Retrieved from http://www.westga.edu/~distance/ojdla/winter144/huett_huett_ringlaben.html

Hunter, L., & St. Pierre, L. (2016). Report to the legislature: Online learning. Olympia, WA: Office of the Superintendent of Public Instruction. Retrieved from http://www.k12.wa.us/LegisGov/2016documents/2016-01-OnlineLearning.pdf

International Association for K-12 Online Learning. (2011). National standards for quality online courses version 2. Vienna, VA: Author. Retrieved from http://www.inacol.org/wp-content/uploads/2015/02/national-standards-for-quality-online-courses-v2.pdf

Jackson, B. (2017). Evaluating the online teacher: An analysis of the iNACOL Quality Standards for Online Teaching. In J. Johnston (Ed.), Proceedings of EdMedia + Innovate Learning 2017 (pp. 558-565). Waynesville, NC: Association for the Advancement of Computing in Education.

Kennedy, K., & Archambault, L. (2012). Design and development of field experiences in K-12 online learning environments. Journal of Applied Instructional Design, 2(1), 35-49.

Kennedy, K., Tomaselli, K., & Stimson, R. (2018). National standards for quality online courses (K-12) and QM K-12 secondary and K-12 publisher rubric revision: Literature review. Annapolis, MD: MarylandOnline. Retrieved from https://www.qualitymatters.org/sites/default/files/research-docs-pdfs/National-Standards-for-Quality-Online-Courses-Lit-Review-122818.pdf

LaBonte, R., & Barbour, M. K. (2018). An overview of elearning organizations and practices in Canada. In K. Kennedy & R. E. Ferdig (Eds.), Handbook of research on K-12 online and blended learning (2nd ed., pp. 601-616). Pittsburgh, PA: ETC Press. Retrieved from https://figshare.com/articles/Handbook_of_Research_on_K-12_Online_and_Blended_Learning_Second_Edition_/6686813

Legon, R., & Runyon, J. (2007). Research on the impact of the quality matters course review process. In 23rd Annual Conference on Distance Teaching & Learning (pp. 8-10). Madison, WI: University of Wisconsin Extension. Retrieved from http://www.uwex.edu/disted/conference/resource_library/proceedings/07_5284.pdf

Lokey-Vega, A., Jorrín-Abellán, I. M., & Pourreau, L. (2018). Theoretical perspectives in K-12 online learning. In K. Kennedy & R. Ferdig (Eds.), Handbook of research on K-12 online and blended learning (2nd ed.) (pp. 65-90). Pittsburgh, PA: ETC Press. Retrieved from https://figshare.com/articles/Handbook_of_Research_on_K-12_Online_and_Blended_Learning_Second_Edition_/6686813

Marshall, C., & Rossman, G. B. (1999). Designing qualitative research (3rd ed.). Thousand Oaks, CA: Sage Publications.

Molnar, A. (Ed.); Huerta, L., Shafer, S. R., Barbour, M. K., Miron, G., & Gulosino, C. (2015). Virtual schools in the U.S. 2015: Politics, performance, policy, and research evidence. Boulder, CO: National Education Policy Center. Retrieved from http://nepc.colorado.edu/publication/virtual-schools-annual-2015

Molnar, A., Miron, G., Gulosino, C., Shank, C., Davidson, C., Barbour, M. K., Huerta, L., Shafer, S. R., Rice, J. K., & Nitkin, D. (2017). Virtual schools report 2017. Boulder, CO: National Education Policy Center. Retrieved from http://nepc.colorado.edu/publication/virtual-schoolsannual-2017

Molnar, A. (Ed.); Miron, G., Huerta, L., Cuban, L., Horvitz, B., Gulosino, C., Rice, J. K., & Shafer, S. R. (2013). Virtual schools in the U.S. 2013: Politics, performance, policy, and research evidence. Boulder, CO: National Education Policy Center. Retrieved from http://nepc.colorado.edu/publication/virtual-schools-annual-2013/

Molnar, A. (Ed.); Rice, J. K., Huerta, L., Shafer, S. R., Barbour, M. K., Miron, G., Gulosino, C., & Horvitz, B. (2014). Virtual schools in the U.S. 2014: Politics, performance, policy, and research evidence. Boulder, CO: National Education Policy Center. Retrieved from http://nepc.colorado.edu/publication/virtual-schools-annual-2014

Morganstein, D., & Wasserstein, R. (2014). ASA statement on value-added models. Statistics and Public Policy, 1(1), 108-110.

Nelson, R. (2018). Report to the legislature: Online learning. Olympia, WA: Office of the Superintendent of Public Instruction. Retrieved from http://www.k12.wa.us/LegisGov/2018documents/2018-01OnlineLearning.pdf

North American Council for Online Learning. (2007). National standards of quality for online courses (1st ed.). Vienna, VA: Author. Retrieved from http://www.charterschooltools.org/tools/StandardsQualityOnlineCourses.pdf

Pape, L., Wicks, M., Brown, C., & Dickson, W.P. (2008). Evaluation in online learning. In J. Watson, B. Gemin, & J. Ryan (Eds.), Keeping pace with K-12 online learning: A review of state-level policy and practice (pp. 26-28). Evergreen, CO: Evergreen Consulting Associates. Retrieved from https://www.evergreenedgroup.com/keeping-pace-reports/

Patrick, S., Edwards, D., Wicks, M., & Watson, J. (2012). Measuring quality from inputs to outcomes: Creating student learning performance metrics and quality assurance for online schools. Vienna, VA: International Association for K-12 Online Learning. Retrieved from http://www.inacol.org/wp-content/uploads/2015/02/iNACOL_Quality_Metrics.pdf

Patton, M. Q. (2002). Qualitative research and evaluation methods (3rd ed.). Thousand Oaks, CA: Sage Publications.

Quality Matters. (2013). Subscriber institutions by country, K-12. Annapolis, MD: Author. Retrieved from https://www.qmprogram.org/qmresources/subscriptions/subscribers.cfm?program=3

Ralston-Berg, P., & Nath, L. (2008). What makes a quality online course? The student perspective. In Proceedings of the 24th Annual Conference on Distance Teaching and Learning, Madison, WI. Retrieved from http://www.uwex.edu/disted/conference/Resource_library/proceedings/08_12876.pdf

Rozitis, C. P. (2017). Instructional design competencies for online high school teachers modifying their own courses. TechTrends, 61(5), 428-437.

Salmon, K. B. (2018). Online courses overview. Baltimore, MD: Maryland State Department of Education. Retrieved from http://marylandpublicschools.org/stateboard/Documents/07242018/TabO-OnlineLearningPolicies.pdf

Shattuck, K. (2007). Quality matters: Collaborative program planning at a state level. Online Journal of Distance Learning Administration, 10(3). Retrieved from http://www.westga.edu/~distance/ojdla/fall103/shattuck103.htm

Shattuck, K., & Burch, B. (2018a). National standards for quality online teaching (K-12): Literature review. Annapolis, MD: MarylandOnline. Retrieved from https://www.qualitymatters.org/sites/default/files/research-docs-pdfs/National-Standards-for-Quality-Online-Teaching-Lit-Review-050418.pdf

Shattuck, K., & Burch, B. (2018b). National standards for quality online programs (K-12): Literature review. Annapolis, MD: MarylandOnline. Retrieved from https://www.qualitymatters.org/sites/default/files/research-docs-pdfs/National-Standards-for-Quality-Online-Programs-Lit-Review-050418.pdf

Smith, B., Bridges, B., & Lewis, R. (2013). State review of online courses. A webinar for the International Association for K-12 Online Learning. Retrieved from http://www.inacol.org/resource/state-review-of-online-courses/

Stake, R. (1995). The art of case study research: Perspectives on practice. Thousand Oaks, CA: Sage Publications.

Strauss, A. L., & Corbin, J. (1990). Basics of qualitative research: Grounded theory procedures and techniques (2nd ed.). Newbury Park, CA: Sage.

Vadell, K. (2013). Approaching K-12 online education in Pennsylvania. Online Journal of Distance Learning Administration, 16(2). Retrieved from http://www.westga.edu/~distance/ojdla/winter164/vadell164.html

Virtual Learning Leadership Alliance. (2018). Press releases: K-12 national standards for quality online courses, teaching and programs set to be revised. Lansing, MI: Author. Retrieved from https://www.virtuallearningalliance.org/about/news/

Watson, J., Murin, A., Vashaw, L., Gemin, B., & Rapp, C. (2012). Keeping pace with K-12 online learning: An annual review of state-level policy and practice. Evergreen, CO: Evergreen Education Group. Retrieved from https://www.evergreenedgroup.com/keeping-pace-reports/

Watson, J., Murin, A., Vashaw, L., Gemin, B., & Rapp, C. (2013). Keeping pace with K-12 online learning: An annual review of state-level policy and practice. Evergreen, CO: Evergreen Education Group. Retrieved from https://www.evergreenedgroup.com/keeping-pace-reports/

Wicks, M. (2010). A national primer on K-12 online learning, version 2. Vienna, VA: International Association for K-12 Online Learning. Retrieved from http://www.inacol.org/cms/wp-content/uploads/2012/11/iNCL_NationalPrimerv22010-web1.pdf

Yin, R. K. (2003). Case study research: Design and methods (3rd ed.). Thousand Oaks, CA: Sage Publications.

Zucker, A., & Kozma, R. (2003). The virtual high school: Teaching generation V. New York, NY: Teachers College Press.

Appendix A

Web-Based & Telephone Survey Instrument

Interviewee's name:
Interviewee's agency/organization:

1. Does your state have an evaluation/approval process for individual K-12 online COURSES?

2. What kind of evaluation/approval processes does your state have for COURSES?
a. A ‘front end’ evaluation/approval process BEFORE the course is offered?
b. Ongoing evaluation of performance or quality checks WHILE the course is being offered?
c. An optional in-depth review process that results in a higher level of course approval, IN ADDITION TO the ‘front end’ course approval?

ADDITIONAL COMMENTS:

3. Does your state have an evaluation/approval process for full-time K-12 online learning PROGRAMS?

4. What kind of evaluation/approval processes does your state have for full-time online learning PROGRAMS?
a. A ‘front end’ evaluation/approval process BEFORE the course is offered?
b. Ongoing evaluation of performance or quality checks WHILE the course is being offered?
c. An optional in-depth review process that results in a higher level of course approval, IN ADDITION TO the ‘front end’ course approval?

ADDITIONAL COMMENTS:

5. Does your state have an evaluation/approval process for online learning PROVIDERS?

6. What kind of evaluation/approval processes does your state have for online learning PROVIDERS (other than the processes for courses and programs cited above)?
a. A ‘front end’ evaluation/approval process BEFORE the course is offered?
b. Ongoing evaluation of performance or quality checks WHILE the course is being offered?
c. An optional in-depth review process that results in a higher level of course approval, IN ADDITION TO the ‘front end’ course approval?

ADDITIONAL COMMENTS:

7. What state agencies or state-recognized entities are involved in the state’s evaluation/approval process? (CHECK ALL THAT APPLY)
a. State education agency
b. Regional education agency
c. University
d. Charter School Commission
e. Other (please specify in COMMENTS)

COMMENTS:

8. IN ADDITION to the evaluation/approval processes you have in place now, is your state CONSIDERING adding evaluation/approval processes for any of the following in the near future?
a. Individual K-12 online courses
b. Full-time K-12 online learning programs
c. K-12 online learning providers (separate from the approval processes for courses & programs above)

COMMENTS:

Appendix B

Internal citations:

i See https://vhslearning.org/
ii See http://www.qualitymatters.org
iii See https://www.qualitymatters.org/research/curated-research-resources
iv See https://www.qualitymatters.org/qm-membership/faqs/qm-inacol-derived-standards-report-background and https://www.qualitymatters.org/qm-membership/faqs/qm-k-12-rubric-inacol-comparison
v The iNACOL National Standards for Quality Online Programs are being updated as a part of the VLLA efforts (Shattuck & Burch, 2018b).
vi See http://www.legis.ga.gov/Legislation/20112012/127888.pdf
vii Official Code of Georgia. (2016) § 20-2-167.2. Virtual instruction through virtual schools; no waivers
viii Official Code of Georgia. (2018) § 20-2-2076. The Department of Audits and Accounts shall develop an annual report on state chartered special schools that offer virtual instruction
ix See http://www.legis.ga.gov/Legislation/20112012/127714.pdf
x See http://www.gadoe.org/_layouts/GADOEPublic.SPApp/ClearingHouse.aspx
xi See https://web.archive.org/web/20150726065010/http://www.mdk12online.org/docs/StandardsforReviewingOnlineStudentCourses.pdf
xii The current process is available at http://bcpsdci.ss3.sharpschool.com/department/innovative_learning/educational_options/maryland_virtual_learning_opportunities
xiii See https://web.archive.org/web/20140209090548/http://www.cde.ca.gov/ci/cr/cf/documents/socialcontent.pdf
xiv The CLRN is no longer available online, but you can access the last captured archive of the page at https://web.archive.org/web/20170510004940/http://clrn.org/
xv See http://apps.leg.wa.gov/rcw/default.aspx?cite=28A.250
xvi See http://apps.leg.wa.gov/wac/default.aspx?cite=392-502-020
xvii The current process, along with the supporting materials, is available at http://www.k12.wa.us/ALD/Providers/default.aspx
xviii See http://www.cde.state.co.us/sites/default/files/documents/onlinelearning/download/adoptedrules02011-00751.pdf
xix See http://www.cde.state.co.us/sites/default/files/documents/onlinelearning/download/hb1277_online.pdf
xx The current criteria are available at https://www.cde.state.co.us/onlinelearning/resources
xxi See https://www.cde.state.co.us/onlinelearning/reports


Online Journal of Distance Learning Administration, Volume XXII, Number 1, Spring 2019
University of West Georgia, Distance Education Center