School Poverty Data and Analysis

Relevant indicators:

  • School poverty

The school poverty data presented in the Atlas are derived from the National Center for Education Statistics (NCES) Common Core of Data (CCD) Public Elementary/Secondary School Universe Survey. Survey responses are submitted annually to NCES by state education agencies in the 50 states, the District of Columbia, and other U.S. territories and outlying areas. The data are then cleaned and standardized by CCD survey staff and made available to the public. All public elementary and secondary schools from pre-kindergarten through 12th grade with a positive total student count (based on the NCES variable MEMBER) in each year were included in our analysis of school poverty. This includes regular schools as well as special education, vocational education, alternative, charter, magnet, and Title 1-eligible schools.

The share of students eligible for free or reduced price lunch (FRPL) was calculated at the school level by dividing the count of students eligible for FRPL (NCES variable TOTFRL) by the total student count (NCES variable MEMBER). Schools were then classified into four groups—school poverty level categories—based on this share (low, mid-low, mid-high, and high), and the numbers and shares of students by school poverty level category were aggregated to the various Atlas geographies for each racial/ethnic group. For the vast majority of schools, the total student count is consistent with the sum of the counts by race/ethnicity. For a small number of schools, however, it is slightly higher because the latter excludes any students belonging to an unknown or non-CCD race category. For this reason, data for all racial/ethnic groups combined (the "All" category) reported in the Atlas are based on the sum of student counts by race/ethnicity. School classification by type (e.g., “Primary schools,” “Middle schools,” and “High schools”) is based on the highest and lowest grades offered at each school, following the categorization found in NCES variable LEVEL:

1 = Primary (low grade: PK through 03; high grade: PK through 08)
2 = Middle (low grade: 04 through 07; high grade: 04 through 11)
3 = High (low grade: 07 through 12; high grade: 12 only)
4 = Other (any other configuration not falling within the above three categories, including ungraded and operational schools with nonapplicable grade spans)

While data for “Other” schools are not broken out separately, they are included in the data reported for “All public schools.”
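As a concrete illustration of the classification logic described above, the following sketch computes the school-level FRPL share and assigns poverty-level and school-type categories using pandas. The cutoff values for the four poverty categories are placeholders (the Atlas's actual thresholds are not stated in this section), and the sample data are made up.

```python
import pandas as pd

# Hypothetical extract of the CCD school universe file; column names follow the
# NCES variables referenced above (values are made up for illustration).
schools = pd.DataFrame({
    "NCESSCH": ["010000500870", "010000600871", "010000600872"],
    "MEMBER":  [450, 620, 0],    # total student count
    "TOTFRL":  [180, 555, 0],    # students eligible for free or reduced-price lunch
    "LEVEL":   [1, 2, 3],        # NCES school level code
})

# Keep only schools with a positive total student count, as described above.
schools = schools[schools["MEMBER"] > 0].copy()

# School-level share of students eligible for FRPL.
schools["frpl_share"] = schools["TOTFRL"] / schools["MEMBER"]

# Classify schools into the four poverty-level categories. The cutoffs below are
# placeholders; the Atlas's actual thresholds are not stated in this section.
schools["poverty_level"] = pd.cut(
    schools["frpl_share"],
    bins=[0.0, 0.25, 0.50, 0.75, 1.0],
    labels=["low", "mid-low", "mid-high", "high"],
    include_lowest=True,
)

# Map the NCES LEVEL code to the school-type labels used in the Atlas.
schools["school_type"] = schools["LEVEL"].map(
    {1: "Primary schools", 2: "Middle schools", 3: "High schools", 4: "Other"}
)

print(schools[["NCESSCH", "frpl_share", "poverty_level", "school_type"]])
```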

We took great care in attaching accurate county and city tags (the latter for the 100 largest cities) to each school. Our approach was based on the general observation that geographic information for schools in the NCES data appeared to be more accurate in later years than in earlier years, and that schools could be consistently linked across survey years by the NCES school ID (NCES variable NCESSCH).

The first step was to apply county, city name, state abbreviation, and zip code information (based on school location, not mailing address) from the latest survey year available backward to fill in missing or potentially erroneous information in all earlier years for each school. Given that county tags are available in the NCES data beginning with the 2009–2010 school year (at least among the years included in our analysis), this “backward casting” of geographic information filled in county tags for the majority of schools in each year; the primary exception was schools that appeared in the 1999–2000 survey but not in any of the later surveys in which county information was collected. To fill in county tags for remaining schools, we used a variety of geographic crosswalks (e.g., between census places and counties, and between Zip Code Tabulation Areas [ZCTAs] and counties).
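A minimal sketch of this “backward casting” step, assuming the survey years have been stacked into a single long table keyed by NCESSCH (the column names other than NCESSCH and the sample values are illustrative):

```python
import pandas as pd

# Illustrative long file: one row per school per survey year, keyed by NCESSCH.
# The geographic fields refer to school location, not mailing address.
geo_cols = ["county", "city", "state_abbr", "zip"]

panel = pd.DataFrame({
    "NCESSCH":    ["0100005", "0100005", "0100006", "0100006"],
    "year":       [1999, 2013, 1999, 2013],
    "county":     [None, "01073", None, "01089"],
    "city":       ["BHAM", "BIRMINGHAM", None, "HUNTSVILLE"],
    "state_abbr": ["AL", "AL", "AL", "AL"],
    "zip":        ["35203", "35203", None, "35801"],
})

# Take each school's geographic fields from its latest available survey year...
latest = (
    panel.sort_values("year")
         .drop_duplicates("NCESSCH", keep="last")[["NCESSCH"] + geo_cols]
)

# ...and apply them backward to every earlier year of that school, replacing
# missing or potentially erroneous values.
panel = panel.drop(columns=geo_cols).merge(latest, on="NCESSCH", how="left")

print(panel.sort_values(["NCESSCH", "year"]))
```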

Matches based on the census place-to-county crosswalks were prioritized over matches based on the ZCTA-to-county crosswalks given that places are generally more likely to be fully contained within a single county. While ZCTAs (defined by areal polygons) do not correspond perfectly with the postal zip codes provided in the NCES data (which are defined by a set of mail delivery routes), they correspond well enough to use for our matching purposes. 

For about 100 schools for which county tags were still missing, or for which there was any significant doubt around the accuracy of a county code that was attached using one of the aforementioned geographic crosswalks, we conducted internet searches and used GIS software and Google Earth to confirm the correct county. Uncertainty around the county code attached using one of the aforementioned geographic crosswalks can arise in cases where a census place or ZCTA intersects more than one county. In such instances, the census place or zip code was assigned to the county containing the largest share of its population, and that population share was preserved in the crosswalk. “Significant doubt” around the accuracy of crosswalk-based matches is defined as a population share associated with a match that falls below 80 percent.
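The following sketch illustrates how the crosswalk-based county assignment and the 80 percent “significant doubt” check might be implemented, assuming place-to-county and ZCTA-to-county crosswalks that carry the population share behind each match. The join keys, column names, and sample values are assumptions, not the actual crosswalk files.

```python
import pandas as pd

# Schools still missing a county tag after backward casting (illustrative values).
missing = pd.DataFrame({
    "NCESSCH":    ["0200001", "0200002"],
    "place_fips": ["0203000", None],
    "zip":        ["99501", "99577"],
})

# Crosswalks carrying the share of each place's (or ZCTA's) population that lives
# in the assigned county.
place_to_county = pd.DataFrame({
    "place_fips": ["0203000"], "county": ["02020"], "pop_share": [1.00],
})
zcta_to_county = pd.DataFrame({
    "zip": ["99501", "99577"], "county": ["02020", "02020"], "pop_share": [1.00, 0.72],
})

# Prioritize place-based matches over ZCTA-based matches.
out = missing.merge(place_to_county, on="place_fips", how="left")
zcta_match = missing.merge(zcta_to_county, on="zip", how="left")
out["county"] = out["county"].fillna(zcta_match["county"])
out["pop_share"] = out["pop_share"].fillna(zcta_match["pop_share"])

# Flag matches with "significant doubt" (population share below 80 percent) or no
# match at all; these were resolved by hand with internet searches, GIS software,
# and Google Earth.
out["needs_review"] = out["county"].isna() | (out["pop_share"] < 0.80)

print(out)
```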

The procedure for attaching accurate city tags (for the 100 largest cities included in the Atlas, which are based on 2010 jurisdictional boundaries) to schools was far more complicated. The primary challenge was the fact that we only had the city name to match by (rather than a unique ID), and for large cities, the city name indicated in the NCES data is often not a real city but rather a “neighborhood” or other area within a city that has a colloquial name. For example, a total of 41 unique city names were reported by schools located within the City of Los Angeles, including names from Arleta to Woodland Hills, while 51 unique city names were reported by schools located within New York City. In some cases, city names reported by schools reflected former independent cities or CDPs that had been annexed by one of the 100 largest cities prior to the survey year, but the outdated city names were reported nonetheless. Another challenge was that the city names in the NCES data were not always consistently reported across schools, with inconsistencies related to abbreviations, hyphens, and misspellings/typos often found in the reported city names. For example, a total of six unique city names were reported for schools located in Oklahoma City, including “OK CITY,” “OKC,” “OKLAHOMA CITY,” “OKLAHOMA CIT,” and “OKLA CITY.”

To address these challenges, we mapped all schools in the 2013–2014 survey (which included latitude and longitude coordinates) using GIS software. We then intersected the resulting set of points with the polygon shapefile for the 100 largest cities used in the Atlas. This allowed us to create a crosswalk between all of the various city names reported by schools and our internal codes for the 100 largest cities. In cases where only a portion of the schools reporting a particular city name fell within one of the 100 largest cities, all schools reporting that city name were considered either inside or outside that city based on where the largest share of all students was located, and this student share was preserved in the crosswalk. In the end, the crosswalk included a set of city names reported by schools (including both real “neighborhood” and other colloquial names as well as misspellings) that corresponded with each of the 100 largest cities.
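A sketch of this spatial step using geopandas, assuming a point layer built from the 2013–2014 coordinates and the 100-cities polygon shapefile; the file paths, column names (e.g., reported_city_name, city_id), and the exact implementation of the student-share rule are assumptions.

```python
import geopandas as gpd
import pandas as pd

# 2013-2014 schools with latitude/longitude, and polygons for the 100 largest
# cities (file paths and column names are illustrative).
schools = gpd.read_file("schools_2013_14.shp")    # includes reported_city_name, MEMBER
cities = gpd.read_file("largest_100_cities.shp")  # includes city_id (internal Atlas code)
schools = schools.to_crs(cities.crs)

# Point-in-polygon intersection: tag each school with the large city that contains it, if any.
schools = gpd.sjoin(schools, cities[["city_id", "geometry"]], how="left", predicate="within")

def resolve(group):
    """Treat a reported city name as inside or outside a large city based on where
    the largest share of its students is located, keeping that share."""
    shares = group.groupby(group["city_id"].fillna("outside"))["MEMBER"].sum()
    shares = shares / shares.sum()
    return pd.Series({"city_id": shares.idxmax(), "student_share": shares.max()})

# Crosswalk from every reported city name (neighborhood names, misspellings, etc.)
# to one of the 100 largest cities or to "outside".
name_xwalk = schools.groupby("reported_city_name").apply(resolve).reset_index()
print(name_xwalk.head())
```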

For two of the 100 largest cities (Las Vegas, NV, and Miami, FL), a majority of students attending schools that reported an address in these two cities actually attended schools located outside the official 2010 city boundaries. However, given that we had no viable way of distinguishing schools falling inside or outside the boundaries of the 100 largest cities in earlier years of the survey (given that no geographic coordinates are reported), data for all schools reporting location in Las Vegas or Miami are included in the data found in the Atlas for these two cities, for the sake of consistency over time.

Given that most schools reported the same city name in earlier years of the NCES survey (even if it differs from how other schools in the same city report it), application of this city names–to–100 largest cities crosswalk to schools in earlier years of the survey captured the vast majority of schools in each of the 100 largest cities in each year. Prior to application, however, the crosswalk was augmented by comparing the list of 297 city names found in an initial draft of the crosswalk to a list of the full set of unique city names reported in any year of the NCES data included in our analysis. The latter list was scanned for both apparent misspellings of the city names found on the first list and the names of cities that we knew had been annexed by one of the 100 largest cities prior to 2010. This final scan resulted in an additional 21 city names being added to the crosswalk, for a total of 318 unique city names associated with the 100 largest cities found in the Atlas. This final crosswalk, based on analysis of the 2013–2014 data, was also applied to attach city tags to schools in subsequent years based on the city name reported in the school-level data, after augmenting it with any new unique city names found in the new years of data.
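A sketch of applying the final crosswalk to another survey year, assuming reported names are lightly normalized before lookup and that any names not yet in the crosswalk are flagged for review; the normalization rule, column names, and sample entries are assumptions.

```python
import pandas as pd

def normalize(name):
    """Light normalization of a reported city name: uppercase, collapse whitespace."""
    return " ".join(str(name).upper().split())

# Final crosswalk from reported city names to internal codes for the 100 largest
# cities (a few illustrative Oklahoma City entries).
name_to_city = {
    "OKLAHOMA CITY": "OKC", "OK CITY": "OKC", "OKC": "OKC",
    "OKLAHOMA CIT": "OKC", "OKLA CITY": "OKC",
}

def tag_cities(year_df, crosswalk):
    """Attach city tags for one survey year and list names not yet in the crosswalk."""
    names = year_df["reported_city_name"].map(normalize)
    tagged = year_df.assign(city_id=names.map(crosswalk))
    new_names = sorted(set(names) - set(crosswalk))  # reviewed and added if appropriate
    return tagged, new_names

# Example usage with a tiny illustrative frame for a later survey year.
df_later = pd.DataFrame({
    "NCESSCH": ["400001", "400002"],
    "reported_city_name": ["Okla  City", "OKLAHOMA CITY PUBLIC"],
})
tagged, to_review = tag_cities(df_later, name_to_city)
print(tagged)
print(to_review)
```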

It is important to note that the measure of school poverty used, the share of students eligible for FRPL, is not always reported and is subject to some degree of error at the school level. The reasons for this include the fact that the count of students deemed FRPL-eligible may be taken at a different time than the total student count, and that in some states a single school may administer the free lunch program for a group of schools (in which case its count and share of FRPL-eligible students would be overstated). However, it is likely that any bias caused by these inconsistencies in reporting at the school level is largely mitigated once the data are aggregated across the many schools in a given Atlas geography. It is also important to note that the Healthy, Hunger-Free Kids Act of 2010 changed eligibility requirements, which can affect the comparability of the school poverty data over time. In particular, the Act introduced the Community Eligibility Option (CEO), available in 11 states (including the District of Columbia) by the 2013–14 school year and in all states in the 2014–15 school year, which allows more children to be eligible for FRPL. See this NCES blog post for more information.

Given the prevalence of missing data for some schools and changes to eligibility requirements in recent years, we took precautions to avoid reporting data that are inaccurate or misleading. First, we do not report school poverty information if ten percent or more of the relevant student population attends schools that do not report valid (non-missing) FRPL eligibility data. Second, after making an initial calculation of the overall share of students eligible for FRPL based on available data from the 2009–10 school year through the latest year available, we examined changes in this measure over time for all 731 Atlas geographies and noted any dramatic year-to-year changes. School poverty data for some Atlas geographies in certain years were set to missing based on this examination.
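A sketch of these precautions, combining the ten percent missing-data suppression rule stated above with a simple year-over-year change flag of the kind used in the manual review; the 15-point change threshold is a placeholder, not the Atlas's actual review criterion, and the sample data are made up.

```python
import pandas as pd

# Illustrative school-level records for one Atlas geography across two years.
schools = pd.DataFrame({
    "geo":    ["G1"] * 4 + ["G1"] * 3,
    "year":   [2013] * 4 + [2014] * 3,
    "MEMBER": [300, 500, 200, 400, 310, 505, 410],
    "TOTFRL": [150, 260, None, 180, 160, 270, 200],  # None = FRPL data not reported
})

def geo_year_summary(df):
    total = df["MEMBER"].sum()
    missing = df.loc[df["TOTFRL"].isna(), "MEMBER"].sum()
    valid = df.dropna(subset=["TOTFRL"])
    share = valid["TOTFRL"].sum() / valid["MEMBER"].sum()
    # Suppress the estimate when 10 percent or more of students attend schools
    # without valid FRPL eligibility data.
    if missing / total >= 0.10:
        share = None
    return pd.Series({"frpl_share": share, "missing_student_share": missing / total})

summary = schools.groupby(["geo", "year"]).apply(geo_year_summary).reset_index()

# Flag dramatic year-to-year changes for manual review; the 15-point threshold is
# a placeholder, not the Atlas's actual review criterion.
summary = summary.sort_values(["geo", "year"])
summary["flag_for_review"] = summary.groupby("geo")["frpl_share"].diff().abs() > 0.15
print(summary)
```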