The Fable of the "B"s
An Analysis of Variation in "B" Averages Across Georgia’s School Systems
by Noel D. Campbell and Kim I. Melton
Noel D. Campbell (Ndcampbell@ngcsu.edu) is an Assistant Professor of Business Administration at North Georgia College and State University. Kim I. Melton is an Associate Professor of Business Administration at North Georgia College and State University.
Georgia’s merit-based HOPE scholarship program awards scholarships to Georgia high school students who graduate with a "B" grade point average and attend college in-state. To retain the scholarship, students must maintain a "B" grade point average while completing a minimum number of credit hours. Using HOPE eligibility and SAT data from Georgia public school systems, this study examines whether there are systematic differences in how "B" averages are awarded to Georgia’s students across school systems. The study finds evidence that not all "B" averages are created equal: for similar average SAT achievement, there are systematic differences in HOPE eligibility across Georgia’s public school systems. After calculating the differences between predicted and actual HOPE eligibility, an F-test rejects the hypothesis that all school systems award similar grade point averages for similar SAT achievement. Furthermore, Tukey’s HSD method of multiple comparisons reveals systematic differences between school systems. Based on similar SAT scores, students who fail to qualify for HOPE in some counties may qualify in others.
Introduction
Georgia’s meritbased "Helping Outstanding Pupils Educationally" (HOPE) scholarship program awards scholarships to Georgia high school students who graduate with a "B" grade point average and attend college instate. This study examines whether there are systematic differences in how "B" averages are awarded to Georgia’s students across school systems; whether it is easier to earn a "B" average, and become eligible for a HOPE scholarship, in some of Georgia’s school systems than in others. This becomes an interesting question considering the dual political purposes behind the HOPE scholarship: firstly, to expand educational access to good students regardless of social or economic status, and, secondly, to encourage those students to remain in state for their education. A meritbased scholarship designed to expand educational access should not discriminate according to the county of a student’s origin. Thus, this study seeks to establish whether such differences in HOPE eligibility beyond that explained by educational performance exist as an indicator of a need for further research in this area.
This study employs a three-year panel of HOPE eligibility and SAT data with Georgia’s public school systems (i.e., Georgia county and city school systems) as the cross-sectional unit. By comparing predicted and actual values for HOPE eligibility, primarily using an F-test and Tukey’s HSD method of multiple comparisons, the hypothesis that all high school "B" averages are created equal in Georgia is rejected. Given the same average educational aptitude or achievement, it is easier to earn a "B" GPA in high school, and hence HOPE eligibility, in some school systems than in others.
Background and Literature
For qualified graduates of Georgia’s high schools, the HOPE scholarship pays tuition and fees plus a book stipend to attend Georgia’s public colleges and universities. For qualified graduates attending in-state private colleges and universities on a full-time basis, HOPE pays a cash award similar in size to the benefit paid to students attending public colleges. To qualify for HOPE as a rising college freshman, a Georgia high school student must graduate with a "B" or better cumulative grade point average. Georgia phased out all income caps on eligibility by 1996, but beginning in 2000 the state considers only the GPA in core academic classes when determining eligibility. To retain the HOPE scholarship, students must maintain a "B" grade point average while in college.
Based on its perceived success, Georgia’s HOPE program has served as the model for numerous states enacting less comprehensive merit-based scholarship programs. Given this national interest and a growing wealth of data, scholars are examining the effects of the HOPE program. Rubenstein and Scafidi (2000) consider the incidence of the implicit lottery tax combined with the distribution of benefits from all lottery-funded educational programs, including HOPE. They conclude that nonwhite, lower-income households tend to purchase more lottery products, hence bearing more of the tax burden, and receive fewer benefits than white and higher-income households. They further find that HOPE benefits particularly accrue to higher-income, better-educated households.
Dee (1998) examines whether HOPE has resulted in immigration to Georgia. He finds that on the Georgia side of Metropolitan Statistical Areas (MSAs) that straddle Georgia’s borders, residential construction in Georgia increased by 30 percent relative to the non-Georgia areas, and the real value of this construction increased by 40 percent. Furthermore, using similarly specified models, Dee also finds that Georgia public elementary school enrollments were rising in these MSAs.
Several studies examine the HOPE scholarship in higher education. Dee and Jackson (1999) examine a cohort of Georgia Tech students to determine the individual probabilities of college students losing HOPE eligibility. They find a strong relationship between measures of student ability in high school and retention of HOPE scholarships, with better students more likely to retain HOPE. After controlling for choice of academic discipline and student ability, they find the probability of retaining HOPE does not differ dramatically by race or ethnicity. Of particular interest to this study, they state, "the empirical relevance of unobserved student attributes is underscored by the joint significance of fixed effects for each student’s county of origin…." (p. 381). Thus, when discussing HOPE retention among college students, the county of a student’s origin is of particular importance. Their main finding is the dramatic differential in the probability of retaining HOPE across academic majors: their models report that students majoring in engineering, computing, or the natural sciences are 21 percent to 51 percent more likely to lose HOPE than students majoring in other disciplines.
Dynarski (2000) and Cornwell, Mustard, and Sridhar (2000) investigate HOPE’s impact on college enrollment. Dynarski estimates the HOPE scholarship increased college attendance rates in Georgia by seven to eight percent among 18- to 19-year-olds. Furthermore, white, higher-income families account for the largest portion of this increase. Cornwell, Mustard, and Sridhar discover HOPE has differing effects, depending on whether the school in question is public or private, and whether it is a two-year or four-year institution. They estimate HOPE has increased Georgia first-time freshman enrollment by eight percentage points, with most of the increase accounted for by four-year schools. They conclude that HOPE has served primarily to influence college choice (that Georgia retains her high school graduates) rather than expand access to higher education (that more of Georgia’s graduates go on to colleges and universities).
HOPE has been less examined at the high school level. Bugler, Henry, and Rubenstein (1999) examine whether HOPE has caused high school grade inflation. Bugler et al. find that HOPE eligibility grew rapidly, from below 50 percent of Georgia’s high school graduates in 1993 to nearly 60 percent in 1999. Over the same period, Georgia’s average SAT scores and cumulative grade point averages also rose. Using graduates’ cumulative grade point averages and average SAT scores as their measures, Bugler et al. find no evidence that HOPE has caused or accelerated grade inflation. While grade inflation may be occurring, they find it to be part of a long-term, national trend that predates HOPE.
The Empirical Methodology and Data
This study assumes well-educated, well-prepared students will tend to do well on standardized tests of student preparation, ability, and achievement. Additionally, they will earn "B" averages or better, thereby qualifying for HOPE. In this fashion, school systems whose students are better prepared on average, as measured by nationally standardized tests, should also have higher-than-average percentages of HOPE qualifiers. The converse would also be true. If "B" GPAs are awarded on the same basis across school systems, school system average scores on standardized measures should be predictably correlated with HOPE eligibility. Therefore, comparing predicted HOPE eligibility to actual HOPE eligibility allows researchers to examine whether "B" GPAs are created equal across the state.
This study does not seek to conduct a comprehensive survey of why HOPE eligibility differs from school system to school system. Rather, it seeks to determine whether such differences exist, which would indicate a need for further research. One may make many different arguments as to why predicted eligibility and actual eligibility vary systematically across school systems. One may appeal to customary economic and demographic factors. For example, parents in lower-income counties may be more strongly motivated to see their children qualify for HOPE. If this strong motivation manifests itself as active parental involvement in schoolwork, it should correlate with better academic performance and better SAT scores, this study’s independent variable. However, because the HOPE eligibility standard is based on subjectively assigned high school grades, the possibility exists that citizens may influence HOPE eligibility within their school system by applying pressure at a local level. Parents and students can directly pressure teachers and principals at low personal cost yet with relatively high personal benefit. Additionally, teachers and principals can allocate state government resources (HOPE scholarships) at almost no cost to themselves by inflating grades to increase HOPE eligibility. School systems that are successful at applying local pressure on high school GPAs will have higher rates of HOPE eligibility than the rate predicted by their students’ average academic achievement. Of course, other factors may exert powerful influences as well. For further investigation into some of these factors, please see Bradbury and Campbell (2001).
Each of Georgia’s counties has a single independent school district, often comprising several high schools. Therefore, the HOPE eligibility of a particular county is the total number or percentage of qualifying students from the various high schools within the county school district. In addition, there are twenty-one city school districts located within various counties but operated independently of the county systems; when necessary, city system data were incorporated into county data. Each school system’s percentage of students qualifying for HOPE is regressed on its average standardized test scores to generate residuals. Analysis of these residuals indicates whether certain school systems award more or less HOPE eligibility than average for similar average scholastic preparation and ability as measured by the standardized tests. For interested readers, the data are described in the Appendix.
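The regress-and-take-residuals step can be sketched in a few lines of Python. This is a minimal illustration with made-up systems and figures, not the study’s data or code: a closed-form simple-OLS fit predicts HOPE eligibility from average SAT, and the residual is read as over- or under-award.

```python
# Sketch of the study's approach: regress each system's percentage of
# HOPE-eligible graduates on its average SAT score, then inspect residuals.
# The systems and numbers below are illustrative, not the Georgia data.

def fit_simple_ols(x, y):
    """Return (intercept, slope) of the least-squares line y = a + b*x."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Hypothetical school systems: (average SAT, percent HOPE-eligible).
systems = {"A": (980, 62.0), "B": (920, 55.0), "C": (860, 40.0), "D": (900, 58.0)}
sat = [v[0] for v in systems.values()]
hope = [v[1] for v in systems.values()]

a, b = fit_simple_ols(sat, hope)

# Residual = actual minus predicted eligibility: positive means the system
# "over-awards" HOPE relative to its SAT score, negative means it "under-awards."
residuals = {name: pct - (a + b * s) for name, (s, pct) in systems.items()}
```

By construction the OLS residuals sum to zero across systems, so over-awarding in one place is always measured relative to under-awarding elsewhere.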
Empirical and Statistical Results
This study’s empirical approach is discussed in greater detail in the Appendix. School system average SAT scores are used to predict an expected level of HOPE eligibility—i.e., the expected percentage of HOPE eligible students in the school system. Subtracting the predicted value of HOPE eligibility from each school system’s actual value generates a set of "differences." So, a school system can "under-award" HOPE, where actual HOPE eligibility is less than that predicted by the system’s average SAT score, or a school system can "over-award" HOPE, where actual eligibility is greater than that predicted by the system’s average SAT score. As examples, the most extreme values uncovered by this procedure are as follows: Towns County awarded HOPE eligibility to 26 percent more students than expected in 1998, while Glascock County awarded HOPE eligibility to nearly 32 percent fewer students than expected in 1999. Using various statistical tools, the authors analyze these "differences" within and across school systems to determine whether there are systematic differences in the way HOPE eligibility is earned: do certain school systems consistently "over-award" or "under-award" HOPE eligibility relative to the state at large? Are "B" averages unequal as one goes from system to system?
For a first pass, a visual analysis was conducted. The "differences" were ordered by magnitude for each year and then split into groups of 17 systems to approximate deciles (each group of 17 represents 9.82 percent of the total). The top two groupings, that is, the groups with the greatest "over-award" of HOPE per year, are reported in Table 4. The bottom two groupings, that is, the groups with the greatest "under-award" of HOPE per year, are reported in Table 5.
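This ranking-and-grouping pass can be sketched as follows, assuming residuals are already in hand; the residual values here are randomly generated placeholders.

```python
# Sketch of the visual "first pass": rank one year's residuals and split
# them into consecutive groups of 17 systems (about 9.8 percent of 173,
# approximating deciles). The residuals are random placeholders, not the
# study's actual values.
import random

random.seed(42)
residuals = {f"system_{i:03d}": random.gauss(0, 8) for i in range(173)}

# Rank from largest "over-award" (most positive residual) downward.
ordered = sorted(residuals.items(), key=lambda kv: kv[1], reverse=True)

# Consecutive groups of 17; 173 = 10 groups of 17 plus a remainder of 3.
groups = [ordered[i:i + 17] for i in range(0, len(ordered), 17)]

top_group = [name for name, _ in groups[0]]         # greatest over-award
bottom_group = [name for name, _ in ordered[-17:]]  # greatest under-award
```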
In the top "overaward" grouping (Table 4), fifteen systems appear twice during the threeyear sample. The likelihood of at least this many systems appearing two or more times simply by chance in the top "overaward" grouping is only 0.000084. Twentysix systems appear at least twice in the top two groupings combined. The likelihood of such an extreme concentration occurring in the top two groupings at least twice by chance is 0.011. Eight systems appear in all three years. The likelihood of such an extreme concentration occurring in the top two groupings at least by chance is 0.000053. These findings imply that systems that "over award" HOPE eligibility in one year tend to do so in following years.
In the top "underaward" grouping (Table 5), fourteen systems appear at least twice during the three year period, and three appear all three years. The likelihood of observing at least this many systems in the worst "under award" grouping by chance is 0.00029. Looking at the top two "underaward" groupings together, twentysix systems occupy 60.8% of all possible entries. The likelihood of such an extreme concentration is 0.025. Ten systems appear all three years. The likelihood of such an extreme concentration is 0.00000093. These findings imply that systems that "under award" HOPE eligibility in one year tend to do so in following years.
The "differences" are then examined for systems that make wide swings during the three years—systems appearing in the top group and in the bottom group at least once during the period. Only two such systems were detected. The likelihood of seeing this few systems with such wide swings from bottom "under award" to top "over award" and vice versa by coincidence is only 0.0052. Again, systems that "over award" HOPE eligibility in one year tend to do so in following years, and vice versa. Thus there exists evidence of more systems ranking similarly over the three year period than one would expect, and fewer systems making wide swings than one would expect.
Statistical tests were conducted to determine the expected percentage of HOPE qualifiers for each school system each year, to determine the differences between the actual and expected percentages of HOPE qualifiers (residuals), to test for system-to-system differences, and to explain the differences found. As indicated earlier, the expected percentage of HOPE qualifiers is determined from the regression equation obtained from each year’s data. These predicted values are then compared to the actual percentage of HOPE qualifiers, producing a "difference" (residual) for each system for each year. If GPAs are awarded equally, the average "difference" (residual) for each county should be zero. Analysis of variance (ANOVA) was used to test whether the average residuals were the same from school system to school system. The F-test associated with this analysis supports the conclusion that school systems differ.
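The ANOVA step can be illustrated from first principles. The sketch below computes the one-way F statistic by hand for three hypothetical systems with three yearly residuals each; a large F relative to the appropriate F(k-1, n-k) critical value is what drives the rejection of equal means.

```python
# Sketch of the ANOVA step: a one-way F-test of whether mean residuals
# differ across school systems, computed from scratch. Three observations
# per system mirror the study's three years; the data are illustrative.

def one_way_anova_f(groups):
    """Return the F statistic for a one-way ANOVA over a list of samples."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (k - 1 degrees of freedom).
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (n - k degrees of freedom).
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Residuals for three hypothetical systems over three years: a persistent
# over-awarder, a near-zero system, and a persistent under-awarder.
residuals_by_system = [
    [12.1, 14.0, 13.2],
    [0.5, -1.1, 0.8],
    [-11.4, -12.9, -13.8],
]
f_stat = one_way_anova_f(residuals_by_system)
# A large F relative to the F(2, 6) critical value rejects equal means.
```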
Once differences were confirmed, attention shifted to categorizing the differences. Tukey’s HSD method of multiple comparisons was used. This approach allows researchers to group school systems together and talk about differences between groups. As shown in Table 6, Parts A through C, Tukey’s HSD method indicates that the school systems should be separated into twenty-nine groups. Some groups contain a single school system while others contain multiple school systems. For example, Group 10 includes Valdosta City, Burke County, and Vidalia City. Rather than talking about each of the systems in this group separately, any result that applies to one of these systems will also apply to the other two systems in Group 10. Based on this analysis, all but the 49 school systems in Group 15 provide HOPE eligibility that differs significantly, after accounting for student academic aptitude, from that of at least one other school system in the state.
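The mechanics of a Tukey HSD comparison can be sketched as follows. The mean residuals, MSE, and critical value `q_crit` below are all assumed placeholders (real critical values come from studentized-range tables for the actual number of groups and error degrees of freedom); the point is how two systems are declared different when their mean residuals are farther apart than the HSD threshold.

```python
# Sketch of Tukey's HSD step: given mean residuals per system and a pooled
# within-system mean square error (MSE) from the ANOVA, two systems differ
# significantly when their means differ by more than the HSD threshold.
# q_crit comes from studentized-range tables and depends on the number of
# groups and error degrees of freedom; every number here is an assumed
# placeholder, not a value from the study.
import math
from itertools import combinations

mean_residual = {"Alpha": 13.1, "Beta": 0.1, "Gamma": -12.7}
mse = 1.14          # pooled within-group mean square from the ANOVA (assumed)
n_per_group = 3     # three years of data per system
q_crit = 3.95       # studentized-range critical value (assumed placeholder)

hsd = q_crit * math.sqrt(mse / n_per_group)

# Pairs whose mean residuals differ by more than the HSD are significantly
# different; systems never separated this way can be grouped together.
different = {
    (a, b)
    for a, b in combinations(mean_residual, 2)
    if abs(mean_residual[a] - mean_residual[b]) > hsd
}
```

Systems that are never declared different from one another end up in the same Tukey group, which is how Table 6’s twenty-nine groupings arise.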
These findings allow the authors to reject the hypothesis that all "B" averages are created equal: for similar average SAT achievement, there are systematic differences in HOPE eligibility across Georgia’s public school systems. Therefore, based on similar SAT scores, students who fail to qualify for HOPE in one school system may have qualified in another, and vice versa.
Conclusions
This study finds evidence that not all Georgia high school "B" averages are created equal. It finds systematic differences in the way Georgia’s public school systems award "B" averages and hence eligibility for HOPE scholarships. Based on average student achievement measured by average SAT scores, some school systems systematically award more HOPE eligibility relative to the statewide average. Conversely, some school systems systematically award less HOPE eligibility relative to the statewide average. Dee and Jackson (1999) find the county of a student’s origin is important in predicting HOPE retention among college students. Similarly, this study finds the county of a student’s origin is important in predicting initial HOPE eligibility.
These results seem contrary to the spirit and political appeal of merit-based scholarships. There may be many reasons for these results. However, given the correlation between economic and demographic factors and SAT scores, the authors are inclined to believe the explanation lies outside such factors. Rather, because the HOPE eligibility standard is based on subjectively assigned high school grades, citizen influence at the local level may be responsible. This study does not conclusively demonstrate this, but its current results indicate a need for further research.
Sources
Bradbury, John Charles and Noel D. Campbell, "Who Gets HOPE? A Political Economy Analysis of the Determinants of HOPE Eligibility," manuscript under review, North Georgia College and State University, 2001.
Bugler, Daniel T., Gary T. Henry and Ross Rubenstein, "An Evaluation of Georgia’s HOPE Scholarship Program: Effects of HOPE on Grade Inflation, Academic Performance and College Enrollment," Council for School Performance, 1999.
Cornwell-Mustard HOPE Scholarship page, www.terry.uga.edu/hope/home.html
Cornwell, Christopher M., David B. Mustard, and Deepa J. Sridhar (2000) "The Enrollment Effects of Merit-Based Financial Aid: Evidence from Georgia’s HOPE Scholarship." University of Georgia Working Paper, Athens, GA.
Dee, Thomas S., and Linda Jackson (1999) "Who Loses HOPE? Attrition from Georgia’s College Scholarship Program." Southern Economic Journal, V. 66, no. 2: 379-390.
Dee, Thomas S. (1998) "Tiebout Goes to College: Evidence from the HOPE Scholarship Program." Georgia Institute of Technology Working Paper, Atlanta, GA.
Dynarski, Susan (2000) "Hope for Whom? Financial Aid for the Middle Class and Its Impact on College Attendance." NBER Working Paper 7756, Cambridge, MA.
Education Commission of the States, www.ecs.org
Georgia Public Education Report Card, 1995-2000, Georgia Department of Education, 205 Butler Street, Atlanta, GA 30334.
Rubenstein, Ross, and Benjamin P. Scafidi (2000) "Who Pays and Who Benefits? Examining the Distributional Consequences of the Georgia Lottery for Education." Andrew Young School for Policy Studies, Georgia State University, Atlanta, GA.
Table 1
System Average SAT Scores

| | Pooled Set | 1996-1997 | 1997-1998 | 1998-1999 |
| --- | --- | --- | --- | --- |
| Maximum | 1082 | 1079 | 1082 | 1048 |
| Minimum | 703 | 741 | 703 | 732 |
| Mean | 922.87 | 923.42 | 922.82 | 922.38 |
| Median | 928 | 928 | 926 | 928 |
| Std. Deviation | 62.00 | 59.71 | 63.82 | 62.76 |

Note: Reported mean, median, and standard deviation are for school systems, and are not weighted by the number of students in each system.
Table 2
Percentage of Students Eligible for HOPE

| | Pooled Set | 1996-1997 | 1997-1998 | 1998-1999 |
| --- | --- | --- | --- | --- |
| Maximum | 87.86 | 87.86 | 78.05 | 81.36 |
| Minimum | 17.76 | 17.76 | 21.19 | 28.57 |
| Mean | 53.09 | 50.99 | 53.36 | 54.92 |
| Median | 53.70 | 50.53 | 53.75 | 54.93 |
| Std. Deviation | 11.07 | 11.70 | 10.52 | 10.67 |

Note: Reported mean, median, and standard deviation are for school systems, and are not weighted by the number of students in each system.
Table 3
OLS Regression Results

| | Pooled Data Set | 1996-1997 | 1997-1998 | 1998-1999 |
| --- | --- | --- | --- | --- |
| Intercept (t-statistic) | -55.88 (5.61) | -71.95 (7.08) | -42.64 (4.71) | -42.93 (4.57) |
| SAT (t-statistic) | 0.11 (19.09) | 0.13 (12.12) | 0.10 (10.64) | 0.11 (10.44) |
| Year (t-statistic) | 2.02 (4.48) | x | x | x |
| R-squared | 0.43 | 0.46 | 0.40 | 0.39 |
| F-statistic | 191.80 | 146.80 | 113.22 | 109.09 |

Note: The dependent variable is the percentage of the graduating class eligible for HOPE. p < 0.00001 for all tests.
Table 4
Largest Positive Residuals

Top Grouping

| 1997 System | Residual | 1998 System | Residual | 1999 System | Residual |
| --- | --- | --- | --- | --- | --- |
| Pike | 19.04 | Towns | 26.018 | Harris | 16.81 |
| Dalton City | 18.29 | Bremen City | 19.686 | Dooly | 16.54 |
| Crawford | 17.83 | Terrell | 18.266 | Bremen City | 16.23 |
| Gwinnett | 16.82 | Mitchell | 14.148 | Dalton City | 15.34 |
| Decatur City | 15.24 | Habersham | 13.730 | Buford City | 15.25 |
| Trion City | 13.82 | Miller | 13.703 | Evans | 14.99 |
| Madison | 13.82 | Banks | 13.555 | Mitchell | 14.20 |
| Candler | 13.73 | Gwinnett | 13.362 | Hancock | 13.44 |
| Bacon | 13.72 | Cherokee | 12.901 | Catoosa | 13.41 |
| Wilkinson | 13.50 | Crawford | 11.816 | Fulton | 13.35 |
| Cherokee | 13.15 | Wayne | 11.522 | Pike | 13.33 |
| Carrollton City | 13.06 | Rome City | 10.829 | Stephens | 12.72 |
| Wayne | 12.83 | Talbot | 10.496 | Lincoln | 12.66 |
| Hall | 12.60 | Lincoln | 10.223 | Carrollton City | 12.62 |
| Catoosa | 12.37 | Dodge | 10.016 | Habersham | 12.36 |
| Talbot | 11.59 | Macon | 9.981 | McDuffie | 12.20 |
| Dodge | 11.43 | Cobb | 9.566 | Cobb | 10.64 |

Second Grouping

| 1997 System | Residual | 1998 System | Residual | 1999 System | Residual |
| --- | --- | --- | --- | --- | --- |
| Toombs | 7.45 | Fayette | 6.67 | Coweta | 7.11 |
| Floyd | 7.37 | Lumpkin | 6.61 | Fayette | 6.81 |
| Calhoun City | 7.20 | Dalton City | 6.58 | Paulding | 6.75 |
| Lee | 6.96 | Murray | 6.49 | DeKalb | 6.63 |
| Gilmer | 6.83 | Pulaski | 6.19 | Terrell | 6.38 |
| Paulding | 6.74 | Stephens | 6.09 | Henry | 6.34 |
| Lincoln | 6.67 | Chickamauga City | 5.63 | Meriwether | 6.31 |
| Cobb | 6.56 | Carroll | 5.60 | Jenkins | 6.21 |
| Murray | 6.35 | Walker | 5.55 | Seminole | 5.80 |
| Pierce | 6.28 | Harris | 5.49 | Haralson | 5.59 |
| Atlanta City | 6.27 | Washington | 5.26 | Banks | 5.46 |
| Banks | 6.18 | Warren | 5.22 | Atlanta City | 5.43 |
| Buford City | 5.73 | Hancock | 5.16 | Dodge | 5.30 |
| Fannin | 5.73 | Bibb | 5.15 | Gordon | 5.28 |
| Spalding | 5.69 | Carrollton City | 5.14 | Barrow | 5.28 |
| Pulaski | 5.50 | Jefferson City | 5.05 | Murray | 5.18 |
| Montgomery | 5.05 | Atlanta City | 4.98 | White | 4.92 |
Table 5
Largest Negative Residuals

Lowest Grouping

| 1997 System | Residual | 1998 System | Residual | 1999 System | Residual |
| --- | --- | --- | --- | --- | --- |
| Jefferson | 27.77 | Putnam | 23.77 | Glascock | 31.87 |
| Jenkins | 19.04 | Wilkes | 22.41 | Putnam | 27.37 |
| Bartow | 18.74 | Jefferson | 19.43 | Baldwin | 19.21 |
| Wilkes | 17.08 | Glascock | 19.01 | Jones | 16.56 |
| Vidalia City | 16.07 | Burke | 17.33 | Jefferson | 15.82 |
| Terrell | 16.00 | Randolph | 16.49 | Jasper | 15.30 |
| Baldwin | 15.48 | Monroe | 16.39 | Screven | 15.26 |
| Heard | 14.58 | Jasper | 15.21 | Burke | 14.16 |
| Franklin | 14.02 | Rabun | 14.00 | Valdosta City | 13.70 |
| Turner | 13.31 | Jones | 13.13 | Heard | 13.46 |
| Grady | 13.09 | Atkinson | 13.02 | Wilkes | 12.86 |
| Sumter | 12.83 | Baldwin | 12.89 | Vidalia City | 12.76 |
| Wheeler | 12.60 | Valdosta City | 12.05 | Stewart | 11.82 |
| Screven | 12.22 | Twiggs | 11.47 | Atkinson | 11.52 |
| Jackson | 12.11 | Bacon | 11.18 | Thomaston-Upson | 11.09 |
| Union | 12.11 | Bulloch | 10.82 | Brantley | 10.84 |
| Marion | 10.89 | Thomaston-Upson | 10.64 | Treutlen | 10.53 |

Second Lowest Grouping

| 1997 System | Residual | 1998 System | Residual | 1999 System | Residual |
| --- | --- | --- | --- | --- | --- |
| Chattooga | 10.76 | Laurens | 10.47 | Marion | 9.63 |
| Oglethorpe | 10.52 | Evans | 10.47 | Gilmer | 8.89 |
| Valdosta City | 10.15 | Screven | 10.23 | Pulaski | 8.68 |
| Laurens | 10.03 | Hart | 10.08 | Lanier | 8.65 |
| Dooly | 9.62 | Sumter | 9.55 | Oglethorpe | 8.65 |
| Gainesville City | 9.59 | Lowndes | 8.98 | Long | 8.63 |
| Telfair | 9.27 | Bryan | 8.85 | Dougherty | 8.09 |
| Bulloch | 8.94 | Tattnall | 8.65 | Turner | 7.97 |
| Putnam | 8.93 | Heard | 8.58 | Troup | 7.82 |
| Jeff Davis | 8.85 | Meriwether | 8.54 | Bulloch | 7.77 |
| Monroe | 8.80 | Chatham | 8.53 | Sumter | 7.69 |
| Lowndes | 8.75 | Appling | 8.48 | Jeff Davis | 7.50 |
| Jasper | 8.69 | Dade | 7.92 | Greene | 7.37 |
| Cook | 8.63 | Troup | 7.63 | Elbert | 7.10 |
| Seminole | 8.11 | Wheeler | 7.22 | Fannin | 6.92 |
| Warren | 8.05 | Commerce City | 7.16 | Tattnall | 6.82 |
| Pelham City | 7.70 | Newton | 7.10 | Worth | 6.67 |
Table 6, Part A
Tukey’s HSD Result
With important caveats (see Appendix), some systems in Part A may be distinguished from some systems in Part C.

Group 1: Jefferson (21.0039)
Group 2: Putnam (20.0228)
Group 3: Wilkes (17.4487)
Group 4: Baldwin (15.8617)
Group 5: Glascock (15.3711)
Group 6: Jasper (13.0685)
Group 7: Screven (12.5713)
Group 8: Heard (12.2039)
Group 9: Jones (12.0138)
Group 10: Valdosta City (11.9671), Burke (11.828), Vidalia City (11.6677)
Group 11: Sumter (10.0229), Thomaston-Upson (9.48451)
Group 12: Bulloch (9.17546), Monroe (8.5861), Laurens (8.12525), Atkinson (8.04607)
Group 13: Troup (7.6555), Randolph (7.5892)
Group 14: Turner (7.55637), Chattooga (7.36347), Oglethorpe (6.87586), Jeff Davis (6.8519), Wheeler (6.79668), Bartow (6.68799), Lanier (6.55858), Jenkins (6.47972), Chatham (6.34665), Marion (6.2659), Jackson (6.2401), Long (6.04171), Lowndes (6.02642)
Table 6, Part B
Tukey’s HSD Result
With important caveats (see Appendix), systems in Part B may NOT be distinguished from systems in Parts A or C.

Group 15: Tattnall (5.66417), McIntosh (5.51659), Twiggs (5.48519), Newton (5.44394), Treutlen (5.41537), Grady (5.31269), Worth (4.93788), Pelham City (4.65428), Liberty (4.5973), Appling (4.52324), Union (4.4338), Hart (4.27377), Glynn (4.08492), Greene (3.92807), Franklin (3.62229), Coffee (3.3073), Brantley (3.29743), Gainesville City (3.04848), Crisp (2.95638), Stewart (2.90332), Dougherty (2.73226), Colquitt (2.66805), Polk (2.63899), Whitfield (2.48132), Commerce City (2.44001), Bleckley (2.37619), Cook (2.33896), Butts (2.15777), Brooks (2.14134), Ben Hill (2.01962), Warren (1.98765), Bryan (1.97237), Thomasville City (1.75379), Johnson (1.74564), Rabun (1.7097), Berrien (1.66852), Meriwether (1.65384), Columbia (1.64688), Pickens (1.63953), Dade (1.47033), Elbert (1.27361), Decatur (1.05114), Houston (0.93721), Seminole (0.43325), Marietta City (0.40908), Early (0.30507), Ware (0.25493), Fannin (0.24686), Gilmer (0.23496)
Table 6, Part C
Tukey’s HSD Result
With important caveats (see Appendix), some systems in Part C may be distinguished from some systems in Part A.

Group 16: Barrow (0.151359), Telfair (0.37462), Pierce (0.462123), Social Circle City (0.518332), Lamar (0.596201), Walton (0.644797), Richmond (0.753554), Thomas (0.877047), Candler (0.943614)
Group 17: Pulaski (1.005377), Macon (1.069153), Montgomery (1.10455), Wilcox (1.288948), Taylor (1.320003), Henry (1.363132), Bacon (1.365827), White (1.38775), Clarke (1.402195), Clinch (1.44157), Echols (1.44833), Peach (1.616493), Spalding (1.702298), Dawson (1.825035), Camden (1.963456), Washington (2.075638), Hall (2.125922), Morgan (2.147279), Walker (2.252347), Clayton (2.586992), Muscogee (2.827473), Gordon (2.863061), Terrell (2.882001), Madison (2.908186), Cartersville City (2.920966), Haralson (3.065039), Calhoun City (3.307131), Decatur City (3.331711), Trion City (3.480058), Calhoun (3.480123)
Group 18: Coweta (3.742263), Irwin (3.751327), Toombs (3.819466), Lumpkin (3.978274), Dublin City (4.041438), Miller (4.042061), Lee (4.345638), Jefferson City (4.470975), Dooly (4.910781), Wilkinson (4.951762), Evans (4.978837), Effingham (5.034477)
Group 19: Carroll (5.340349), Fayette (5.359358), Charlton (5.379706), Tift (5.419144), Forsyth (5.439704), Atlanta City (5.561309)
Group 20: Talbot (5.690117), Emanuel (5.788564), Stephens (5.867415), Murray (6.004947), Bibb (6.050232), Rockdale (6.05641), Douglas (6.388988), Buford City (6.961348), Paulding (7.19368), Chickamauga City (7.285924)
Group 21: Rome City (7.970485), DeKalb (8.033867), Floyd (8.316361), Banks (8.399112)
Group 22: Wayne (8.498836), Oconee (8.710668)
Group 23: Dodge (8.916228), Cobb (8.922988)
Group 24: Hancock (9.006222)
Group 25: McDuffie (9.707582), Lincoln (9.852597), Pike (9.88298), Towns (9.959312), Catoosa (10.22443), Carrollton City (10.27454), Mitchell (10.4164), Harris (10.67324), Fulton (10.72224)
Group 26: Habersham (11.58778), Cherokee (11.75663)
Group 27: Gwinnett (13.21387), Crawford (13.32704)
Group 28: Dalton City (13.40449)
Group 29: Bremen City (15.08814)
Statistical Appendix
Data Source: Data were drawn from the Georgia Department of Education’s Public Education Report Card. This study considers only public school systems, as comparable educational data are unavailable from private schools. The study examines 173 of Georgia’s county and city school systems, discarding seven systems for insufficient data from 1996-1997 to 1998-1999: Baker, Chattahoochee, Clay, Quitman, Schley, Taliaferro, and Webster counties. These years were chosen because there was no "structural" change in the HOPE program: all income caps were phased out by 1996, and the phase-in of "core" GPA requirements did not begin until 2000. Average SAT scores by school system by year were chosen as the independent measure of student achievement. Other moments and measures of variation of this variable are not available. While average SAT scores may not be ideal, ease of access and general acceptance may warrant their use. Furthermore, such standardized scores provide a consistent measure of academic achievement that does not differ across school systems, as internal grading standards (leading to an assigned GPA) may. The number of students eligible for HOPE and the total graduating class are reported directly; from these, the authors calculate the percentage of HOPE-eligible students. Selected summary statistics are presented in Tables 1 and 2. Potential complications could arise because many students take the SAT in their junior year of high school, and many students take the test more than once. The data do not allow the authors to discriminate along such margins.
Regression Analysis and Generation of Residuals: The percentage of graduating students eligible for HOPE is regressed on system average SAT scores (and, in the pooled model, the year) using a first-order linear regression model:

1. HOPE%_it = b0 + b1(SAT_it) + b2(YEAR_t) + e_it

where i indexes school systems and t indexes years, to generate predicted values. Additionally, models including the percentage of students taking the SAT as a regressor were estimated. Different counties have different rates of students taking the SAT; students with little or no interest in post-secondary education will forgo taking it. One can imagine a county in which relatively few students take the SAT, but those who do have the strongest interest in and best prospects for attending college. In this instance the study’s measure of system-wide average SAT scores would be biased, "skimming the cream" of the county’s students. Therefore, these models were re-estimated including the percentage of students taking the SAT. Quantitatively, the results did not differ markedly; qualitatively, the analysis and conclusions were unchanged.
Subtracting predicted values of HOPE eligibility from actual values of HOPE eligibility generated the residuals; thus, residuals are measured in percentage of students. A positive residual indicates that school system awarded more HOPE eligibility (more "B" averages) than the state at large for similar average SAT scores. A negative residual indicates that school system awarded less HOPE eligibility (fewer "B" averages) than the state at large for similar average SAT scores. Regression results are presented in Table 3. Since differences were noted from year to year, individual models were fit for each year. Using the data for each school system for three years, homogeneity of variance across school systems was considered as well as homogeneity of variance across SAT scores. Less than 3.5% of the school systems show more variation from year to year than would be expected based on the data available. Homogeneity of variance across SAT scores was analyzed through traditional regression residual analysis, and homogeneity of variance across school systems was analyzed through quality control methods that address stability of variation from small samples.
Estimating Likelihood of Empirical Results: For each statement about the likelihood of observing a certain grouping, the same approach was followed. The authors order the residuals by magnitude for each year, divide the residuals into approximate deciles (17 observations per group, 9.82% of the observations), and count the number of times the stated outcome is observed. The top two groupings, that is, the groups with the greatest "over-award" of HOPE per year, are reported in Table 4. The bottom two groupings, that is, the groups with the greatest "under-award" of HOPE per year, are reported in Table 5. The probability of such extreme groupings occurring when the residuals are rank ordered is calculated based on the binomial distribution:
2. P(X = x) = _{n}C_{x} p^{x}(1 - p)^{n-x}, x = 0, 1, 2, …, n
where n = the number of trials, X = the number of successes observed, x = the number of successes of interest, p = the probability of success on a single trial, and _{n}C_{x} = n!/[x!(n - x)!]
and
3. P(X ≥ x) = P(X = x) + P(X = x+1) + … + P(X = n)
For example, suppose one wants to determine the probability that at least fifteen school systems will appear in the top "over-award" group at least twice in the three-year period.
P(X ≥ 15) = P(X=15) + P(X=16) + … + P(X=173) = 1 – [P(X=0) + P(X=1) + … + P(X=14)]
X = the number of systems appearing in the top tier two or more years in three years
n = the number of trials; i.e., the number of systems included = 173
p = the probability of success for any system; i.e., the probability a system is in the top tier two or more years, where p is calculated by another binomial:
4. p = P(Y ≥ 2) = Σ_{y=2}^{3} _{3}C_{y} (17/173)^{y} (1 - 17/173)^{3-y}
Y = the number of times in the top tier
n_{y} = 3
p_{y} = 17 / 173; the likelihood of being in the top 17 of 173 by random assignment
In the top "over-award" grouping, fifteen systems appear twice during the three-year sample. These fifteen systems account for 58.8% of the entries in this table. Based on calculations from the above formulas, the likelihood of at least this many systems appearing two or more times by chance is only 0.000084.
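The calculation in equations 2 through 4 can be reproduced directly. The sketch below uses the quantities from the worked example (n = 173 systems, deciles of 17, three years); the probability it computes is the chance that at least fifteen systems land in the top tier in two or more of the three years by random assignment alone.

```python
from math import comb

def binom_pmf(k, n, p):
    """Equation 2: binomial probability of exactly k successes in n trials."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

p_tier = 17 / 173  # chance of landing in the top 17 of 173 in one year

# Equation 4: probability a given system is in the top tier in 2+ of 3 years.
p = sum(binom_pmf(y, 3, p_tier) for y in (2, 3))

# Equation 3 with n = 173: P(X >= 15) computed via the complement.
prob = 1 - sum(binom_pmf(k, 173, p) for k in range(15))
# prob is a small tail probability, in the neighborhood of the paper's
# reported 0.000084.
```

The same three-line pattern (per-system binomial for p, then an outer binomial tail over the 173 systems) yields each of the likelihoods quoted in this section.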
Similar calculations produce the additional results. In the second "over-award" grouping, six systems appear at least twice, accounting for 27.4% of the 51 possible entries; two of these systems appeared in this grouping all three years. Looking at the top two "over-award" groupings together, twenty-six systems appear at least twice in three years (with eight of these systems appearing all three years). The likelihoods of groupings at least this extreme occurring by chance are 0.011 and 0.000053, respectively.
In the top "under-award" (bottom) grouping, fourteen systems appear at least twice during the three-year period (with three of these appearing all three years). The chances of observing at least this many systems by coincidence are 0.00029 and 0.00064, respectively. In the second-to-bottom grouping, eight systems appear twice in three years. Looking at the top two "under-award" (bottom) groupings together, twenty-six systems occupy 60.8% of the possible cells in the table (with ten of these systems appearing all three years). The likelihoods of such extreme groupings are 0.025 and 0.00000093, respectively.
Seeking out systems that make wide swings during the three years (appearing in the top group and in the bottom group at least once during the period), only two systems are detected. The chance of seeing so few systems with such swings, by coincidence, is only 0.0052. If one considers school systems that appear in both the top thirty-four and the bottom thirty-four during the three-year period, one finds eleven systems. The likelihood of this few systems making such a swing, by chance alone, is only 0.0000031.
Analyzing School System-to-School System Differences: The regression models developed for each year were used to generate residuals. In turn, these residuals were used as input to a one-way Analysis of Variance (with three observations per school system). The ANOVA tests whether effects are the same from school system to school system. The analysis supports the conclusion of system-to-system differences (F = 3.89; p-value = 3.95x10^{-27}). Since differences were noted, Tukey's HSD method of multiple comparisons was used for post hoc comparison of school systems. The average residual was computed for each school system; these averages were ranked, and differences were analyzed. Differences of at least 20.99 were statistically significant (α = .05). Using Tukey's procedure, the authors are able to cluster the school systems into 29 groups. Conclusions for any school system within a group will apply to all school systems in that group. The average residuals and the groups are shown in Table 6. Results show statistically significant differences between Group 1 (Jefferson County) and any system in Groups 16 or above; i.e., the deviation in percent of students eligible for HOPE in Jefferson County is significantly lower than the deviations for 91 systems (almost 53% of the systems). Similarly, the following statements can be made in terms of statistically significant differences: systems in Group 2 can be distinguished from systems in Groups 17 or above; Group 3 from Groups 18 or above; Group 4 from Groups 19 or above; Group 5 from Groups 20 or above; and so on through Group 14 from Group 29. Therefore, on the high end, the deviation for Bremen City is significantly higher than the deviations for 33 school systems (approximately 19%). The systems in Group 15 cannot be distinguished from any of the other groups. Therefore, all but the 49 school systems in Group 15 are providing significantly different HOPE eligibility to students from at least one other school system in the state.
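The one-way ANOVA step can be illustrated compactly. The residuals below are hypothetical stand-ins (three yearly values per system, as in the study); the function computes the standard F statistic as the ratio of between-system to within-system mean squares.

```python
# Minimal sketch of the one-way ANOVA on residuals, with invented
# yearly residuals (% of students) standing in for the real data.

def one_way_anova_f(groups):
    """F statistic for equality of group means: MS(between) / MS(within)."""
    k = len(groups)                              # number of school systems
    n = sum(len(g) for g in groups)              # total observations
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(
        sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups
    )
    return (ss_between / (k - 1)) / (ss_within / (n - k))

systems = {                       # hypothetical residuals, three years each
    "A": [12.0, 10.5, 11.8],      # persistently over-awards "B" averages
    "B": [0.4, -1.2, 0.9],        # tracks the statewide relationship
    "C": [-9.7, -11.3, -10.1],    # persistently under-awards
}
f_stat = one_way_anova_f(list(systems.values()))
# A large F (judged against its p-value) argues against equal system
# effects and motivates a post hoc procedure such as Tukey's HSD.
```

With systems this far apart relative to their year-to-year variation, the F statistic is very large, which is the pattern the study's reported F = 3.89 (over 173 systems) reflects in diluted form.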
Note: The authors wish to thank Edward Lopez, John Charles Bradbury, and Carole Scott for helpful comments. All remaining errors are the authors’ responsibility.