Standard One: Report
B.1 What do candidate assessment data tell the unit about candidates meeting professional, state, and institutional standards?
Initial Teacher Preparation Programs
In order for initial candidates to meet professional, state, and institutional standards, they must demonstrate they have content knowledge, pedagogical content knowledge and skills, and professional dispositions necessary to help all students learn. The unit’s comprehensive assessment is designed to give the unit information on how well we are preparing candidates in these areas, capturing information about their progress at six key points of assessment. These data can then be disaggregated and used to assess specific program standards. The six points of assessment are described briefly below.
1) Program Entry – Candidates must demonstrate adequate content knowledge prior to their admission to Teacher Education. Content knowledge is developed through the university’s Core Curriculum, virtually all of which is completed prior to admission to Teacher Education. Candidates must have earned a minimum GPA of 2.7 for admission to all programs except Physical Education, which accepts a 2.5 GPA. Candidates must also have passed the Georgia Assessments for the Certification of Educators (GACE) Basic Skills Exam or have exempted it with a combined SAT score of 1000, a combined ACT score of 43, or a combined GRE score of 1030. Once in Teacher Education, candidates must maintain a 2.7 GPA and earn a C or better in all professional education courses and courses in their teaching field. Data are provided in Standard 2, Exhibit 2 and summarized in Table 1.
Table 1: Candidate Data at Entrance to Teacher Education
| Assessment | Academic Year | Pass Rate/GPA |
| --- | --- | --- |
| GACE Basic Skills | 2009-10 | 100% |
| Mean GPA at Entrance | 2009-10 | 3.32 |
2) Mid-point Assessment – As candidates progress through courses and field experiences, they are expected to develop meaningful learning opportunities that facilitate learning for all of their students. They engage in ongoing reflection, making instructional adjustments as necessary to enhance student learning. They apply their content knowledge, along with their understanding of human growth and development, diversity, and exceptional education, as they develop lessons to make ideas accessible to all students. At approximately the mid-point of their programs, candidates are assessed on their knowledge, skills, and dispositions. Typically, this assessment draws on multiple sources of evidence. For example, faculty rate candidates’ lesson plans or another key artifact, looking for knowledge of key content, pedagogical knowledge, and the ability to organize the lesson and differentiate instruction. Candidates might also be scored on a “peer teaching” practice of this same lesson. A mid-point conference is held in some programs to discuss progress and the candidates’ dispositions. Mean scores on key mid-point assessments in the majority of the programs fall within the Acceptable (2) and Proficient (3) categories on the four-point scale. Mid-point data are provided in Exhibit 9, Table 3.
3) Clinical Practice – In order to enter clinical practice, candidates must have completed course work in both content and education courses, have a cumulative GPA of 2.7 or higher, have cleared the criminal background check, and have purchased liability insurance. They are then placed in schools, and we make every effort to give them experience with diverse populations. As candidates develop, deliver, and assess instruction during their internship, they are expected to consider school, family, and community contexts so that they can connect new concepts to students’ prior experiences. They are expected to demonstrate a variety of teaching strategies that differentiate instruction, integrate technology into their lessons, and demonstrate solid knowledge of content. To assess the candidate during this experience, both the university faculty member and the field-based supervising teachers use the Teacher Education Field Experience Evaluation (TEFEE).
The TEFEE is designed to assess candidates’ content knowledge, their professional and pedagogical knowledge and skills, and the dispositions identified in our Conceptual Framework. Data from the TEFEE show that candidates receive scores of 4 on nearly all standards, indicating that they are well prepared for clinical practice. Nearly half of the programs, however, had mean scores of 3 on standards related to knowledge of children’s growth and development and the use of data to inform future lessons and monitoring. Although this is still a strong score, it gives us guidance on areas that may need attention across the unit. (See Exhibit 9, Tables 5 and 6.)
4) Effect on Student Learning – Throughout all instructional Blocks, candidates learn to plan units and lessons by first selecting standards; second, selecting appropriate authentic assessments and evaluative methods; and third, planning instruction that will enable all students to demonstrate mastery of the standards. During the internship semester, all initial preparation programs require candidates to design a unit (generally two weeks) that they will teach while in clinical practice. The candidate also designs an appropriate pretest to administer before the unit begins (allowing time to review the results and modify plans as needed) and a post-test to administer after the unit. The plans and the results of teaching the unit demonstrate candidates’ abilities not only to design and administer assessments, but also to use basic research methods to gather data about their effect on students’ learning, to analyze the pre- and post-test data, and to use that information to modify subsequent lessons. These unit plans and lessons, faculty feedback, and the candidates’ reflections are archived in FolioTek, an electronic portfolio system. A rubric is used to evaluate candidates’ abilities in constructing and implementing the unit lessons and in analyzing and reporting pre- and post-test results. (See Exhibit 9, Table 3.)
5) Program Exit – There are three types of assessment data that measure candidates’ knowledge, skills, and dispositions at program exit:
Program Exit Assessment – Initial preparation programs have identified a comprehensive artifact (e.g., a reflective portfolio or a unit lesson plan) that is scored on a four-point scale, a process that parallels the mid-point assessment. Faculty members score the candidate on content knowledge, knowledge of pedagogy, and dispositions based on the content of the assignment. They rate candidates on their ability to link content to pedagogy, to differentiate instruction for all students, and to integrate technology into their lessons. Data from program exit allow programs to assess program standards and, with the new assessment system, are enabling us to compare candidates’ mid-point and exit assessments, establishing the value our programs add to candidates’ readiness to teach. Exit data for initial programs are provided in Exhibit 9, Table 3, and generally fall within the Acceptable (2) and Proficient (3) categories. Comparisons of mid-point data to exit data are provided in Exhibit 9, Table 4.
Exit GPA - Professional and pedagogical knowledge and skills of candidates in initial preparation programs are also evaluated based on their overall GPA at program completion. Although a 2.7 overall GPA is required for professional certification in Georgia, the College strives to significantly exceed this. The mean grade point average (GPA) of initial candidates at the point of graduation for the academic year 2008-2009, for example, was 3.14. For the academic year 2009-2010 it was 3.17.
Georgia Assessments for the Certification of Educators (GACE) Content Exam – At exit from the program, all candidates seeking certification must pass the GACE Content Exam in their field. This assessment measures the content knowledge and pedagogical knowledge needed to perform successfully as an educator in the state. Pass rates for initial certification candidates from the University of West Georgia show that our candidates know their content areas. For the 2008-2009 academic year (the most recent year of available data), 19 of the 24 teacher education programs in the unit had a 100% pass rate. With the exception of one year in Art Education (in which only four candidates took the test), the pass rates for all programs exceeded 85% over the past three years. (GACE Content scores by program are provided in Exhibit 9, Table 7.)
The university also receives data on the percentage of candidates correctly answering questions related to each standard addressed in a given exam. This has proven to be very useful information for program improvement. Although our candidates do very well, we examine closely any standard on which fewer than 70% of candidates answered questions correctly. Viewing these data over multiple years reveals strengths and a few areas of weakness that the programs work to improve through changes in curricula, sequencing of courses, instructional approaches, and/or the mode of delivery.
6) Post Graduation – Three post-graduation instruments provide data on candidates’ readiness for their professional roles:
Undergraduate Program Evaluation (UPE) – At the end of their student teaching internship, candidates are asked to evaluate how well their programs prepared them. These data are then returned to the programs for review. With a mean score of 4.5 on this instrument’s 0-5 scale (which has not yet been aligned with the 1-4 scale of the unit’s other assessment instruments), candidates clearly agree that their courses contributed positively to their preparation for teaching. (See Exhibit 9, Table 8.)
Board of Regents’ (BOR) Survey of Graduates – The BOR surveys graduates from all institutions every few years and distributes these data to universities in the system. The data are not disaggregated by program and therefore can only be interpreted to reflect opinions across all programs that prepare candidates for initial certification. The survey includes responses to positive statements in the categories of Content and Curriculum; Knowledge of Students, Teaching, and Learning; Learning Environment; Classroom Program; School-wide Assessment; Planning and Instruction; and Professionalism. Data show that UWG graduates feel extremely well prepared for their professional roles. (See Exhibit 6.)
BOR Employer Surveys – The BOR also routinely surveys employers about their satisfaction with graduates from institutions in the University System of Georgia. In the most recent data we have received, approximately 96 of our known employers completed surveys, and 100% said they would encourage candidates to attend initial preparation programs at the University of West Georgia. Data indicate that at least 90% of employers agree or strongly agree that UWG prepares candidates well for teaching. (See Exhibit 7.)
Advanced Preparation Programs
Advanced candidates are expected to have a more integrated understanding of the components of good teaching, including the abilities to: 1) reflect on their own practice; 2) engage in professional activities; 3) understand thoroughly the school, family, and community context; 4) incorporate technology to enhance instruction; 5) collaborate with the professional community; 6) remain familiar with current research and policies related to schooling; 7) analyze educational research and policies and explain their implications for their own practice and profession; and 8) use all of these skills to enhance their teaching. The assessment system for advanced candidates closely follows that for initial preparation candidates, taking into account that these candidates are typically already working in schools. The assessment system for advanced programs is outlined below:
1) Program Entry – Advanced programs have slightly varied entrance requirements, but typically require candidates to have a bachelor’s degree in the field of study, a minimum GPA of 2.7, and a combined GRE score of 800 or higher. Most candidates for entry to advanced programs already hold initial certification, which also means they have passed the GACE content exam, and so 100% of our advanced graduates meet this competency. Letters of recommendation and interviews are also frequently part of the program entry assessment of candidates.
2) Mid-point Assessment – Due to the variety of programs and degree/certification options, the types of mid-point assessments vary greatly. Examples include unit plans, papers, case studies, electronic portfolios, projects, logs, and reflection papers. Some programs have limited data due to small numbers of completers in any one year and to recent changes in the assessment and evaluation systems. Across programs reporting mid-point data, the mean scores fall primarily in the Acceptable (2) and Proficient (3) ranges. (See Exhibit 9, Tables 9 and 10.)
3) Effect on Student Learning – Advanced programs report data on NCATE Standard 1D, Effect on Student Learning for Teacher Candidates. Mean scores reported across all programs for these standards are at the Acceptable (2) and Proficient (3) levels, except for one program at the mid-point; mean performance of candidates in that program reached the Acceptable range on the exit assessment. (See Exhibit 9, Table 9.)
4) Exit Assessment – The most common endpoint assessment for the advanced programs is a comprehensive written examination, sometimes accompanied by an interview or a specialized rubric. The exams generally consist of questions developed by members of the candidate’s graduate committee and provided to the candidate prior to the exam date. On the exam date the candidate writes responses under strict exam conditions, and these responses are evaluated by the appropriate committee member. The results are generally Pass/Fail, but candidates may retake the exam; those who do generally prepare further and pass on the second attempt. Consequently, the pass rate is very close to 100%. In some cases a formal presentation of the candidate’s work in a seminar session is also required. (See Exhibit 9, Tables 9 and 11.)
5) Post Graduation – Early Childhood advanced candidates are surveyed after graduation to assess their perspectives on their preparation. The majority (89.5%) agree or strongly agree with a series of statements about the extent to which the program prepared them for their professional work. (See Exhibit 9, Table 12.)
Other School Professionals
Two of our programs that prepare other school professionals hold national accreditation: School Counseling (which holds CACREP accreditation) and Speech-Language Pathology (which holds ASHA accreditation). Three other programs – Instructional Technology and the doctoral programs in School Improvement and Professional Counseling and Supervision – do not lead to professional certification and are not discussed at length in this report. All of these programs, along with our two certification programs – Educational Leadership and School Library Media – have robust assessment systems, with rigorous entrance requirements that go beyond minimum GPAs and GRE scores. For example, Educational Leadership candidates must currently be in a leadership role and be supported by their district to enroll in the program, which is offered on site as an embedded, performance-based model. School Library Media candidates are interviewed and must have outstanding letters of recommendation. The pass rate on the GACE Content Exam for Educational Leadership and School Library Media candidates is 100%.
All programs that prepare other school professionals assess candidates on SPA standards at multiple points in the program. Candidates’ mean scores fall within the Acceptable (2) range, and endpoint scores for all standards fall within the Proficient (3) range. (See Exhibit 9, Tables 14 and 16.)
In addition to content, candidates are assessed on their knowledge of theory and their ability to apply theory to practice. They are expected both to conduct action research in their respective settings and to know current research in the field. Candidates are also assessed on professional dispositions and on the effect their role has on student learning. The following excerpt about UWG candidates’ effect on student learning is from one of the Educational Leadership reports:
“The percentage of candidates (24%) reaching exemplary status is also impressive given the importance of school leaders making an impact on student learning and the rigor of the [Georgia] Leader Key Rubrics. Producing leaders who can impact student learning is an important objective of the program, as is the belief that school leaders have a responsibility to model the concept that all students can learn. Candidates are specifically required to report how their work has impacted student learning during the time in the UWG program.”
B.2b Briefly summarize the most significant changes related to Standard 1 that have led to continuous improvement.
Over the past few years, the College of Education has made several changes related to Standard 1 that fall into two categories: Changes to Program Assessment and Changes to Programs as a Result of Assessment. These are discussed below.
Changes to Program Assessment
The biggest change has been the move away from FolioTek, which has served primarily as a system to archive candidate work, to a more consistent, comprehensive series of assessments using relational databases. The reasons for the change were twofold. First, FolioTek is not a data analysis program, so it was difficult to retrieve data in meaningful ways. Second, FolioTek is built on a dichotomous rating system, using only “met” or “not met,” and because virtually all candidates meet standards by the time they graduate, the resulting data discriminated too little among indicators to guide specific program changes. The system currently in development will still allow for data storage and will also promote sharing of data among centers (e.g., the Office of Field Experience), academic programs, the unit, and the University. It will also provide a means for aggregating, disaggregating, analyzing, and reporting candidate data to inform decision making.
Improvements to the assessment system, which began in 2009 and are ongoing, are designed to accomplish three primary goals. First, assessment changes were approached from a unit perspective with six points of assessment established and broad categories that work across all programs. From there, individual programs can expand to add assessments that will provide data that can answer specific program questions and inform improvement. Second, all assessments were shifted to a four-point scoring system (unacceptable, acceptable, proficient, exemplary), providing programs more discerning data and giving the unit the ability to summarize data across programs. Third, multiple points of responsibility for the assessment system have been created to ensure data are collected, analyzed, and used for improvement on regular cycles. We also have worked to create a culture of assessment in the College, yet we know there is more to do in terms of establishing a fully integrated assessment system. Accomplishing our goal of making all databases relational will ensure that we can disaggregate data in multiple ways and can follow any student or group of students from admission through certification.
Institutional instruments have also been revised. The TEFEE has undergone several modifications over the past few years. Different versions have been designed to align with the capabilities of candidates at different stages (Blocks) in the programs. It has been re-worded for clarity and converted from a descriptive score to a quantitative (four-point) system that can be analyzed more easily. This has allowed us to make comparisons from year to year, among campuses, and between online and face-to-face environments. We know there is more to do, however, to create a stronger instrument for the assessment of field and clinical experiences. First, more professional development is needed on the rating scale to eliminate the preponderance of very high scores. Second, the Georgia Department of Education is piloting a new assessment instrument for clinical experience (Class Keys), and the UWG College of Education has become part of the pilot program. Data are being analyzed in spring 2010, and if the state adopts this instrument for teacher evaluations, we plan to adapt it for the evaluation of interns.
Changes to Programs as a Result of Assessment
Programs are moving from an informal, somewhat anecdotal approach to program improvement toward one that relies increasingly on assessment data. At department meetings, chairs lead discussions of data emerging from the comprehensive assessment system. This has led to several program improvements across the College. For example, data from candidate surveys have led several programs to move their course delivery online. GACE Content data reveal standards where programs may need to shore up content or thread it more systematically throughout the program. Other assessments have pointed to the need for more research courses or for changes to program assessment. Programs have responded by adding or modifying courses, changing course sequences, or modifying required assignments. Examples of changes to programs as a result of assessment are provided in Standard 2, Exhibit 8.