A Comparative Study of Competency-Based Courses Demonstrating a Potential Measure of Course Quality and Student Success


Jackie Krause
Central Washington University
krausej@cwu.edu


Laura Portolese Dias
Central Washington University
diasl@cwu.edu


Chris Schedler
Central Washington University
schedlerc@cwu.edu

Abstract

While competency-based education is growing, standardized tools for evaluating the unique characteristics of course design in this domain are still under development. This preliminary research study evaluated the effectiveness of a rubric developed for assessing course design of competency-based courses in an undergraduate Information Technology and Administrative Management program. The rubric, which consisted of twenty-six individual measures, was used to evaluate twelve new courses. Additionally, the final assessment scores of nine students across nine courses in the program were evaluated to determine whether a correlation exists between student success and specific indicators of quality in the course design. The results indicate that measures rated high and low on the evaluation rubric correspond with the final assessment scores of students completing courses in the program. Recommendations from this study suggest that quality competency-based courses need to evaluate the importance and relevance of resources for active student learning, provide increased support and ongoing feedback from mentors, and offer opportunities for students to practice what they have learned.

Introduction

The effectiveness of a competency-based education program depends on the overall course design to enhance student learning. As noted in the literature review, several design factors contribute to student success in a competency-based course and program and are necessary to enhance the overall learning experience of the student. Measurement of effective competency-based courses should not be based on a traditional online course evaluation rubric, because the unique needs of a competency-based course require a different design. This paper addresses a previously proposed rubric for competency-based education (Krause, Portolese Dias & Schedler, 2015) and assesses the effectiveness of the rubric based on unbiased course review and student success (competency mastery) in the course. After the data analysis, the authors provide recommendations on possible improvements to the course review rubric and overall course design considerations. Finally, the authors offer suggestions for future research in competency-based course design.

Literature Review

Competency-based education (CBE) is one of the fastest growing sectors of online education. Fleming (2015) suggests that as many as 200,000 students currently participate in approximately 150 different CBE programs, while as many as 400 programs are in the development stage. In CBE programs, students demonstrate their understanding of topics through various activities that prove mastery of the subjects (U. S. Department of Education, n.d.). Skills or learning outcomes are evaluated against given competency requirements. Students proceed at their own pace and typically work one-on-one with mentors or evaluators as they progress through their program. As a result, CBE courses are designed to help students understand the competencies by which they are evaluated and to provide resources that will help students successfully demonstrate their mastery of those competencies.

Effective online course design is one of many important factors in student success in online courses and competency-based programs (Yukselturk & Bulut, 2007). Student success factors related to this study include (but are not limited to): student interaction with the instructor/mentor, student self-regulation, active learning, and quality course activities.

Students' interaction with instructors is an important factor in online learning. In fact, students have more success in instructor-led online courses than in independent study online courses, which are similar to competency-based education models (Jiang, Parent & Eastmond, 2006). However, despite students' expectations for interaction with an instructor in online courses, the same learning outcomes can be accomplished through clear guidelines for interaction between students and mentors (Graham, Cagiltay, Craner, Lim, & Duffy, 2000). Social presence refers to the degree of awareness of the other person in any given communication (Sallnas, Rassmus-Grohn, & Sjostrom, 2000) and has been found to be a critical link in learning and an element of student success in an online course environment (Pollard, Minor & Swanson, 2014). Social presence is a necessary component in any online course, but it is especially important in self-paced competency-based courses, in which the mentor is responsible for providing the student with motivational factors contributing to student success (Robb & Sutton, 2014).

The next factor contributing to effective online course design is high self-regulation by the student. Self-regulation requires students to take primary responsibility for their learning. It is especially important to the success of students in self-paced competency-based education courses, as much of the work is done on their own. It is imperative that mentors and completion coaches have an understanding of how to assist low self-regulated learners in a competency-based learning environment (Dabbagh & Kitsantas, 2004). Besides the focus on the student's social and motivational needs in online courses, another factor contributing to student success in competency-based education is the quality of learning activities.

Involving students in active learning is crucial to online course success; active learning involves providing many types of learning resources, such as videos, textbooks, and articles, and having students engage in real-world activities related to those resources (Graham et al., 2000). Lee and Choi (2011) identified factors related to student success, including quality course design and learning activities, which supports earlier research by Graham et al. (2000). Sixty-nine total factors were identified in three categories: student factors, environmental factors, and program quality factors (Lee & Choi, 2011). Student factors accounted for the largest share of dropout factors (55%) and included such categories as academic background, relevant experiences, skills, and psychological attributes. Environmental factors made up 25% of the total dropout factors and included the categories of work commitments and supportive environments, addressing issues related both to college/university support and services and to support from family, work, and friends. Program quality made up 20% of the total dropout factors and included course design and interactions, specifically course activities such as team-building, well-structured and relevant course content, course orientation, student-to-student and student-to-faculty interactions, and student participation. Ineffective or low-quality courses were identified as a significant barrier to student success.

All of the above factors, and many more beyond the scope of this paper, impact student success; however, we will focus on factors related to program quality and successful design of competency-based courses. By utilizing a comprehensive evaluation rubric, such factors can be taken into consideration for quality course design.

Purpose Statement and Research Question

The purpose of this study is to evaluate the overall effectiveness of the competency-based evaluation rubric in terms of defining course quality and student success. Therefore, the research question is: How effective is the proposed rubric in evaluating competency-based courses in terms of design quality and predicting student success?

Methodology and Results

Study Design

Because no standardized rubric for evaluating competency-based courses existed, the Multimodal Learning department at Central Washington University developed an instrument (Krause et al., 2015) to evaluate twelve newly developed courses in support of the competency-based FLEX-IT program for the Information Technology and Administrative Management (ITAM) Bachelor of Science program at Central Washington University (CWU). These twelve courses comprised core classes within the Retail Management and Technology specialization and the Administrative Management specialization. Each course was evaluated by peer reviewers within the CWU Multimodal Learning department using the new rubric. Peer reviewers included experienced online teaching faculty and instructional designers, who had been given instruction in the use of the rubric prior to assessment. Each course was evaluated by two peer reviewers to provide multiple perspectives on course design, with the evaluations submitted to the director for final review. While the competency-based course evaluation rubric was a new evaluation instrument, Multimodal Learning has provided online course reviews for more than three years using similar quality assurance rubrics.

Courses were developed by a variety of faculty within the ITAM department. No specific course development experience was expected of these faculty; they were selected for course development based on subject matter expertise. Twelve different faculty members, ranging from full professors to adjunct faculty, developed the courses between the two specializations. The courses were designed using a master-course model, which implements best practices and ensures that the menu options for each course are the same for the student. Each competency was divided into topical areas, and modules were built around the topical areas for each of the courses. The master-course model was shared with faculty, and then each completed course was reviewed by Multimodal Learning using the competency-based course evaluation rubric.

The evaluation rubric includes 26 individual measures, which are grouped into seven categories (see Table 2). A three-point scale was used to assess quality: Improvement Needed, Effective, or Exemplary. Reviewers rated each course on all 26 measures and included comments with additional feedback for improvement. Rating forms were captured electronically as Microsoft Word documents.

Once the ratings were complete, any identifying information was removed from the forms, and the completed rubrics were assigned a number from 1 to 12. Researchers then compiled the information from each rubric into a single Excel worksheet. To create quantitative data for analysis, quality assessments were coded as 1 = Improvement Needed, 2 = Effective, and 3 = Exemplary. Once the data were coded, the ratings for each measure were averaged to develop an overall assessment of the individual measures. Descriptive statistics, specifically frequencies and percentages, were used to summarize characteristics of the data and were deemed appropriate for this study as there was no attempt to associate variables (Park, 2001). Finally, measures were grouped together and averaged to form a view of each category. Table 1 displays the descriptive statistics for all 26 measures evaluated.
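The following minimal sketch illustrates this coding and averaging step. The rating values shown are excerpted from Table 2, but the data layout and the partial category grouping are assumptions for illustration rather than the structure of the actual Excel worksheet.

```python
# Minimal sketch of the coding and averaging described above (illustrative data layout).
# Ratings are coded 1 = Improvement Needed, 2 = Effective, 3 = Exemplary, then averaged
# per measure across the twelve reviewed courses and grouped into category averages.
from statistics import mean

CODES = {"Improvement Needed": 1, "Effective": 2, "Exemplary": 3}

# ratings[measure_id] = one final rating per reviewed course (values excerpted from Table 2)
ratings = {
    8: ["Improvement Needed"] * 12,                                      # averages to 1.00
    10: ["Improvement Needed"] + ["Effective"] * 2 + ["Exemplary"] * 9,  # averages to 2.67
}

measure_averages = {m: mean(CODES[r] for r in rs) for m, rs in ratings.items()}

# A category average is the mean of the coded ratings of all measures in that category
# (only a partial, illustrative grouping is shown here).
categories = {"Assessment & Evaluation": [8], "Learning Resources": [10]}
category_averages = {
    cat: mean(CODES[r] for m in ms for r in ratings[m]) for cat, ms in categories.items()
}

print(measure_averages)    # {8: 1.0, 10: 2.666...}
print(category_averages)
```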

Table 1. Descriptive Statistics

Results

Table 2 provides a detailed look at the averages for each of the 26 measures. Of the 26 measure averages, seven scored between 1.00 and 1.50, with three measures receiving the lowest possible score of 1.00 (Improvement Needed). Fourteen measures scored between 1.51 and 2.00 (between Improvement Needed and Effective), four measures scored between 2.01 and 2.50 (between Effective and Exemplary), and one measure scored above 2.50. No single measure was rated consistently Exemplary (3.00).

The individual measure that received the highest rating was 10) Learning resources support achievement of competencies and learning objectives at 2.67. There were three individual measures that received the lowest ratings: 8) Learners have opportunities for ongoing assessment and practice with mentor feedback, 9) Expectations for evaluator’s response time and feedback on assessments are clearly stated, and 20) Instructions are provided on how and when to contact mentor for instructional support, all with scores of 1.00.
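Each average in Table 2 is the mean of the coded ratings across the twelve reviewed courses. For example, measure 10 was rated Improvement Needed in one course review, Effective in two, and Exemplary in nine, so its average is

\[
\frac{(1 \times 1) + (2 \times 2) + (9 \times 3)}{12} = \frac{32}{12} \approx 2.67.
\]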

The category of measures that received the highest overall rating was 7) Policy Compliance, with a score of 1.86, and the category that received the lowest rating was 2) Assessment and Evaluation, with a rating of 1.50. Two of the three lowest individual measures were part of this category. No single measure received an average rating above 2.67 (between Effective and Exemplary), and no category of measures averaged above 1.86 (between Improvement Needed and Effective).

Table 2. Averages by Measure

1.   Competencies & Learning Activities

Competencies and learning objectives are measurable and aligned with learning activities.

IM = Improvement Needed
EF = Effective,
EX = Exemplary

IM

EF

EX

Average

1. Competencies and learning objectives identify measurable knowledge, skills, and abilities to be demonstrated by learners

11

1

0

1.08

2. Learning activities support achievement of competencies and learning objectives

0

11

1

2.08

3. Instructions on how to complete learning activities and meet competencies are clear

2

6

4

2.17

4. Learning activities provide opportunities for interaction with content for active learning

1

10

1

2.00

Category Totals

14

28

6

1.83


2.   Assessment & Evaluation

Assessments measure mastery of competencies with specific evaluation criteria.

IM = Improvement Needed
EF = Effective,
EX = Exemplary

IM

EF

EX

Average

5. Assessments are rigorous and valid measures of learners’ mastery of competencies

3

5

4

2.08

6. Requirements are clearly stated for achieving mastery-level on competencies

1

11

0

1.92

7. Assessment rubrics provide detailed and specific guidelines and criteria for evaluation

6

6

0

1.50

8. Learners have opportunities for ongoing assessment and practice with mentor feedback

12

0

0

1.00

9. Expectations for evaluator’s response time and feedback on assessments are clearly stated

12

0

0

1.00

Category Totals

34

22

4

1.50

















3.   Learning Resources

Learning resources support achievement of competencies and learning activities.

IM = Improvement Needed
EF = Effective,
EX = Exemplary

IM

EF

EX

Average

10. Learning resources support achievement of competencies and learning objectives

1

2

9

2.67

11. Use of learning resources (required and optional) for learning activities is clearly explained

4

8

0

1.67

12. Learning resources are current, flexibly available, and appropriately cited

10

2

0

1.17

Category Totals

15

12

9

1.83


4.   Technology & Navigation

Course technology and navigation support personalized learning pathways.

IM = Improvement Needed
EF = Effective,
EX = Exemplary

IM

EF

EX

Average

 

 

13. Tools and media support personalized learning pathways to attain required knowledge, skills, and abilities

0

12

0

2.00

14. Navigational structure of course is explained, logical, consistent, and efficient

2

4

6

2.33

15. Students can readily access technologies required in the course with instructions provided

4

8

0

1.67

16. Minimum technology requirements and technical skills are clearly stated

7

5

0

1.42

Category Totals

13

29

6

1.85


5.   Learner Support

Course facilitates access to support services essential to student success.

IM = Improvement Needed
EF = Effective,
EX = Exemplary

IM

EF

EX

Average

17. Instructions are provided on how to access technical support services

3

9

0

1.75

18.  Instructions are provided on how to obtain accessibility support services

0

12

0

2.00

19. Instructions are provided on how to access academic support services (e.g., Library, Writing Center, Tutoring)

1

11

0

1.92

20. Instructions  are provided on how and when to contact mentor for instructional support

12

0

0

1.00

Category Totals

16

32

0

1.67

 

 

 










6.   Accessibility 

Course demonstrates a commitment to accessibility and usability for all students.

IM = Improvement Needed
EF = Effective,
EX = Exemplary

IM

EF

EX

Average

21. The course provides learning resources in alternative formats for diverse learners

4

7

1

1.75

22. The course follows universal design principles for usability

1

11

0

1.92

23. The course design accommodates the use of assistive technologies

2

10

0

1.83

Category Totals

7

28

1

1.83


7.   Policy Compliance

Course complies with institutional policies.

IM = Improvement Needed
EF = Effective,
EX = Exemplary

IM

EF

EX

Average

24. The course materials comply with Copyright Policy

5

7

0

1.58

25. The course complies with Intellectual Property Policy

0

12

0

2.00

26. The course complies with FERPA Policy

0

12

0

2.00

Category Totals

5

31

0

1.86


To examine the overall effectiveness of the rubric for evaluating quality competency-based course design, researchers examined the final assessment scores of students who have completed courses in the program. Because the program is new, the sample size is small, but it provides a preliminary basis to support or reject components of the rubric created for competency-based programs. For the purposes of this study, final assessment scores were recorded only for courses completed by more than one student. A total of twenty-two courses have been completed in the program, nine of which have had more than one student complete the course. Table 3 shows these courses, the number of students to successfully complete the final assessment, the number of students who attempted the final assessment more than once, and the average score for all final assessment attempts. In five of the nine courses, students submitted more than one attempt. Table 4 examines individual student performance and includes the number of courses completed, the number of courses that required multiple attempts, and the maximum number of attempts for the nine students in the program. Of these nine students, four required multiple attempts, and no student required more than two attempts to complete the final assessment.
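As a minimal sketch of how the Table 3 summary can be derived from raw attempt records (the course IDs, student IDs, and scores below are placeholders for illustration, not study data):

```python
# Sketch of the Table 3 summary: keep only courses completed by more than one student,
# then report the number of completers, the number of students who needed multiple
# attempts, and the mean score over every recorded final-assessment attempt.
from collections import defaultdict
from statistics import mean

# (course, student, scores for each final-assessment attempt) -- placeholder data only
records = [
    ("CRS1", "S1", [100.0]),
    ("CRS1", "S2", [100.0]),
    ("CRS2", "S1", [72.0, 89.0]),
    ("CRS2", "S3", [95.0]),
    ("CRS3", "S6", [96.5]),          # only one completer, so excluded from the summary
]

by_course = defaultdict(list)
for course, student, scores in records:
    by_course[course].append(scores)

for course, attempts in sorted(by_course.items()):
    if len(attempts) < 2:            # report only courses with more than one completer
        continue
    completers = len(attempts)
    multi_attempt = sum(1 for scores in attempts if len(scores) > 1)
    avg_all = mean(s for scores in attempts for s in scores)
    print(f"{course}: completed={completers}, multiple attempts={multi_attempt}, average={avg_all:.2f}")
```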

Table 3. Course Completions and Average Assessment Scores

Course | Students completed | Students with more than one attempt | Average assessment score (all attempts)
ADMG201 | 3 | 1 | 89.25
ADMG271 | 4 | 1 | 90.4
ADMG302 | 4 | 0 | 92.9
ADMG371 | 2 | 1 | 96.3
ADMG372 | 2 | 0 | 97.5
ADMG385 | 4 | 2 | 91.5
IT101 | 5 | 0 | 97.12
IT260 | 2 | 0 | 100
RMT330 | 3 | 1 | 96.5



Table 4. Individual Student Course Completions and Attempts

Student | Courses completed | Courses completed with more than one attempt | Maximum number of attempts
One | 5 | 2 | 2
Two | 6 | 2 | 2
Three | 2 | 1 | 2
Four | 1 | 0 | 0
Five | 2 | 0 | 0
Six | 20 | 0 | 0
Seven | 3 | 1 | 2
Eight | 2 | 0 | 0
Nine | 1 | 0 | 0


Students are required to score at least 80% on the final assessment for the course to achieve competency mastery. Students may take the assessment two additional times to improve their final score. Should a student fail to achieve mastery after three attempts, the student is asked to withdraw from the program and attend a more traditional program. It should be noted that such a withdrawal has not yet occurred in the program.
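As a minimal sketch of this mastery rule (the function name and status labels are illustrative assumptions, not part of the program's actual systems):

```python
# Sketch of the mastery rule described above: a final assessment score of at least 80%
# demonstrates competency mastery, and a student may attempt the assessment up to three
# times before being asked to withdraw. Names and labels are illustrative only.
MASTERY_THRESHOLD = 80.0
MAX_ATTEMPTS = 3

def mastery_status(attempt_scores):
    """Return a student's standing given the final-assessment scores recorded so far."""
    if any(score >= MASTERY_THRESHOLD for score in attempt_scores):
        return "mastery achieved"
    if len(attempt_scores) >= MAX_ATTEMPTS:
        return "withdrawal requested"   # student is asked to move to a traditional program
    return "attempts remaining"

print(mastery_status([72.0, 89.25]))    # mastery achieved on the second attempt
```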


Discussion and Recommendations

Based on the course evaluation rubrics, the individual measure that received the highest rating was 10) Learning resources support achievement of competencies and learning objectives, at 2.67. This high rating corresponds with strong student performance: the average final assessment score across all courses was 94.6% after all attempts had been taken. This suggests that learning resources supporting achievement of competencies and learning objectives, as rated in the Multimodal Learning course reviews, are related to student achievement (competency mastery).
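The relationship described above is reported descriptively in this study. With per-course data, a future analysis could quantify it; the sketch below assumes hypothetical course-level rubric averages (only the assessment averages come from Table 3) and is illustrative rather than a result of this study.

```python
# Hedged sketch of how the rubric/assessment relationship could be quantified.
# The per-course rubric averages below are hypothetical placeholders, NOT study data;
# the average assessment scores are taken from Table 3.
from statistics import correlation  # Pearson's r; available in Python 3.10+

course_rubric_avg = {      # hypothetical course-level averages of the 26 coded measures
    "ADMG201": 1.7, "ADMG271": 1.8, "ADMG302": 1.9, "ADMG371": 2.0, "ADMG372": 2.1,
    "ADMG385": 1.8, "IT101": 2.2, "IT260": 2.3, "RMT330": 2.0,
}
course_assessment_avg = {  # average final assessment score for all attempts (Table 3)
    "ADMG201": 89.25, "ADMG271": 90.4, "ADMG302": 92.9, "ADMG371": 96.3, "ADMG372": 97.5,
    "ADMG385": 91.5, "IT101": 97.12, "IT260": 100.0, "RMT330": 96.5,
}

courses = sorted(course_rubric_avg)
r = correlation([course_rubric_avg[c] for c in courses],
                [course_assessment_avg[c] for c in courses])
print(f"Pearson r between rubric quality and average assessment score: {r:.2f}")
```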

One of the lowest-rated measures on the course evaluations by Multimodal Learning was 8) Learners have opportunities for ongoing assessment and practice with mentor feedback, and four of the nine students (44%) required more than one attempt on a final assessment. This supports the course reviewers’ evaluation that ongoing assessment and practice with mentor feedback would improve some of these courses so that students do not require multiple attempts on their final assessment.

The other measures with low ratings on the course reviews, 9) Expectations for evaluator’s response time and feedback on assessments are clearly stated, and 20) Instructions are provided on how and when to contact mentor for instructional support, also had scores of 1.00. These lower ratings can be resolved by simple operational changes and by information provided in the introductory module of each course. It is possible these lower ratings affected the four of the nine students who redid and resubmitted assessments, but the authors feel there may be little correlation between competency mastery and these operational aspects. One last area of concern was 1) Competencies and learning objectives identify measurable knowledge, skills, and abilities to be demonstrated by learners, which scored 1.08, just slightly above the three lowest measures. Course learning objectives are being revised as part of an ongoing program development project to define measurable and quantifiable outcomes.

Overall recommendations for improving the quality of competency-based courses include:

- Evaluate the importance and relevance of learning resources so that they support active student learning and achievement of competencies.
- Provide increased support and ongoing feedback from mentors, including clearly stated expectations for evaluator response time and instructions on how and when to contact the mentor.
- Offer opportunities for students to practice what they have learned through ongoing assessment before attempting the final assessment.
- Ensure that competencies and learning objectives identify measurable knowledge, skills, and abilities to be demonstrated by learners.


Conclusion

Based on the data analyzed, it appears that the proposed rubric as outlined in this paper is appropriate for evaluating competency-based course design and includes measures that correlate with student success. Because competency-based courses are different from traditional online courses, use of such a specific evaluation rubric is beneficial when measuring the quality of competency-based courses.


References

Dabbagh, N. & Kitsantas, A. (2004). Supporting self-regulation in student-centered web-based learning eEnvironments. International Journal on E-Learning, 3(1), 40-47. Norfolk, VA: Association for the Advancement of Computing in Education (AACE).

Fleming, B. (2015, February 17). Mapping the Competency-Based Education Universe - Eduventures. Retrieved from http://www.eduventures.com/2015/02/mapping-the-competency-based-education-universe/

Graham, C., Cagiltay, K., Craner, J., Lim, B., & Duffy, T. M. (2000). Teaching in a Web-based distance learning environment: An evaluation summary based on four courses. Center for Research on Learning and Technology Technical Report No. 13-00. Indiana University Bloomington. Retrieved from http://cmapspublic2.ihmc.us/rid=1FWJQH04B-1GZ7Y50-G85/crlt00-13.pdf

Gunawardena N., & Zittle, F. (1997). Social presence as a predictor of satisfaction within a computer mediated conferencing environment. American Journal of Distance Education, 11(3), 8-26. Retrieved from ProQuest database.

Jiang, M., Parent, S. & Eastmond, D. (2006). Effectiveness of Web-based learning opportunities in a competency-based program. International Journal on E-Learning, 5(3), 353-360. Chesapeake, VA: Association for the Advancement of Computing in Education (AACE).

Krause, J., Portolese Dias, L., & Schedler, C. (2015). Competency-based education: A framework for measuring quality courses. Online Journal of Distance Learning Administration, 18(1).

Lee, Y. & Choi, J. (2011, October). A Review of online course dropout research: implications for practice and future research. Educational Technology Research and Development, Volume 59, Issue 5, pp 593-618

Park, K. (2001). Statistics without complex formulas: A conceptual approach. La Verne, CA: University of La Verne Press.

Pollard, H., Minor, M., & Swanson, A. (2014). Instructor social presence within the Community of Inquiry framework and its impact on classroom community and the learning environment. Online Journal of Distance Learning Administration, 17(2).

Robb, C. & Sutton, J. (2014, April). The importance of social presence and motivation in distance learning. Journal of Technology, Management, and Applied Engineering, 31(2). Retrieved from https://c.ymcdn.com/sites/atmae.site-ym.com/resource/resmgr/articles/robb___sutton-the_importance.pdf

Sallnas, E. L., Rassmus-Grohn, K., & Sjostrom, C. (2000). Supporting presence in collaborative environments by haptic force feedback. ACM Transactions on Computer-Human Interaction, 7(4), 461-476.

U. S. Department of Education. (n.d.). Competency-Based Learning or Personalized Learning. Retrieved from http://www.ed.gov/oii-news/competency-based-learning-or-personalized-learning

Yukselturk, E., & Bulut, S. (2007). Predictors for student success in an online course. Educational Technology & Society, 10(2), 71-83. Retrieved from http://anitacrawley.net/Articles/Yukselturk.pdf

 


Online Journal of Distance Learning Administration, Volume XVIII, Number 4, Winter 2015
University of West Georgia, Distance Education Center