Frequently asked questions about our data
Key measures are indicators that help judge how we are doing in a given area. For example, the high school graduation rate is a measure of how well we are preparing our students for post-secondary education and the workforce. When indicators are measured over time, we can trace trends: are we improving, staying the same, or getting worse? By measuring these kinds of data, we can see where our region stands and how it is changing.
In each of the topic areas, advisory groups composed of subject-matter experts and academic and business leaders convened to choose the indicators, using a set of criteria and guidelines set forth by a Technical Advisory Committee. A good indicator is:
Relevant – relates to stated topic goals.
Valid – truly measures what it is intended to measure.
Consistent over time – regularly collected the same way.
Leading – signals broader changes to come, allowing the community to respond proactively.
Policy-responsive – can be impacted by policy changes within a relatively short time period.
Affordable – can be easily collected within project budget.
Understandable – easy for our target audience to understand.
Comparable – allows for comparison within the region and by different groups.
Standardized – allows for comparison with other regions, metro areas, states, or countries.
Outcome-oriented – reflects changes “on the ground” or actual impacts on the community, rather than change to inputs, such as funding or policies that could eventually lead to community change.
The advisory groups were asked to choose two to four indicators for each topic that best met the selection criteria. However, for each topic area, there may be more useful information to foster deeper understanding of the issue. The More Measures section provides a more thorough look at each topic, with a variety of data that lets readers dig deeper.
Typically we use the Minnesota Initiative Foundation (MIF) regions to construct the regions from county-level data, as shown on our homepage map. When data are not available for the MIF regions, we create regions that are close to, but not exactly the same as, the MIF regions.
For measures that use American Community Survey data, we create alternative regions from the Census Bureau’s Public Use Microdata Areas (PUMAs). These are the smallest geographies available that give full coverage across Minnesota. However, if this doesn’t meet your needs, you can also create your own regions by adding together data from the available counties in the “By county” breakdown.
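The "add together data from the available counties" step above amounts to summing counts (not percentages) across counties and then recomputing the rate for the combined area. The sketch below illustrates the idea; the county names and figures are hypothetical placeholders, not actual Compass data.

```python
# Building a custom region from county-level data: sum the numerator
# and denominator counts across counties, then compute the combined rate.
# All names and numbers here are illustrative, not real estimates.

county_data = {
    # county: (people in poverty, total population)
    "County A": (2_500, 20_000),
    "County B": (1_200, 15_000),
    "County C": (900, 10_000),
}

def region_rate(counties, data):
    """Combine county counts into one custom region and return its rate (%)."""
    in_poverty = sum(data[c][0] for c in counties)
    population = sum(data[c][1] for c in counties)
    return 100 * in_poverty / population

rate = region_rate(["County A", "County B"], county_data)
print(f"Custom region poverty rate: {rate:.1f}%")  # prints "Custom region poverty rate: 10.6%"
```

Note that averaging the counties' percentages directly would weight small and large counties equally; summing the underlying counts first gives each county its proper weight.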
When data are only available by metropolitan area, we include the area that is contained partially or fully within the MIF region. Click here for a list of the counties found in each metropolitan area. Metropolitan areas are also referenced in other tables and graphs throughout the site, most commonly those that rank the Twin Cities relative to other areas.
Most of the key measure data have been compiled by Wilder Research staff using data from one or more credible sources. The source citation for each key measure is provided below each data graph. To find more detailed information, use the drop-down menu found above each key measure graph, go to View, and choose "Data & notes."
Additional source citations:
Whenever possible, Minnesota Compass tries to rely on a single data source within a key measure to maintain consistency across charts. Occasionally, we have to use data from several data sources, usually so that we can provide data for smaller geographies (e.g., counties, small cities) or for smaller sub-groups in the population (e.g., foreign born, older adults).
For example, our Voter Turnout key measure uses data from the Minnesota Secretary of State for the overall turnout figure. But we use data from the Current Population Survey in order to look at turnout by race and other demographic characteristics. Another example is our Poverty measure, where we use estimates from the American Community Survey for many population breakdowns, but data from the Small Area Income & Poverty Estimates (SAIPE) for our county-level poverty figures.
In rare instances, this means that an estimate for the same geography (e.g., Minnesota) in the same year is slightly different across charts. One of the primary reasons for this difference is that the estimates are from different data sources (the other primary reason for this difference is described in the next FAQ). You can find additional information and guidance on data sources used for each chart by hovering over the VIEW tab and selecting “Data & notes.”
One of our main data sources for key measures on Minnesota Compass is the American Community Survey (ACS), an ongoing, year-round survey of households in the United States conducted by the U.S. Census Bureau. The ACS produces estimates that describe average characteristics of an area over a specific time period. Depending on the size of the population, we need to use 1, 3, or 5 years of pooled data to publish reliable estimates. ACS 1-year estimates describe average characteristics over a single year, while ACS 3- or 5-year estimates describe average characteristics over that three- or five-year time period. Additional information on interpreting single- and multi-year estimates can be found here.
In general, our preference is to use 1-year estimates to track trends whenever possible, but we use multi-year estimates (3-year or 5-year) for smaller geographies or for smaller sub-groups in the population to adhere to our standards of data reliability. This means that estimates for the same geography (e.g., Minnesota) may be slightly different across charts – even when the estimates come from the same data source – because the estimates refer to slightly different time periods.
Sampling error occurs when estimates are based on a sample of the population. The margin of error (+/-) represents the estimated size of the sampling error associated with an estimate. When possible, Compass presents the margins of error in its tables at a 90% or 95% level of confidence (the range in which the true value is expected to fall), depending upon the standard used by the original data source.
For example, the American Community Survey, which is used extensively throughout Minnesota Compass, publishes margins of error at a 90% confidence level. This is the range in which the true value will fall 90% of the time, i.e., nine times out of 10. Given a data estimate of 49% with a margin of error of +/-1%, one can be 90% confident that the true percentage lies between 48% and 50%.
Please note: if the estimates from two years are different, but their margins of error overlap, one cannot be certain that there has been a true change. For example, if the uninsured rate in your community was 10% (+/-3%) in the year 2000 and then 11% (+/-3%) in 2001, the ranges around the estimates overlap (an estimated 7-13% uninsured in 2000, and an estimated 8-14% uninsured in 2001). Therefore, we cannot know whether the "increase" from 10% to 11% in one year was due to chance or was an actual increase in the uninsured rate in the community. Multiple data points over time can help determine whether there is an underlying trend.
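The overlap check described above can be sketched in a few lines, using the uninsured-rate figures from the example (10% +/-3% vs. 11% +/-3%):

```python
# Sketch of the margin-of-error arithmetic described above.
# The numbers come from the text's uninsured-rate example.

def interval(estimate, moe):
    """Return the (low, high) range implied by an estimate and its margin of error."""
    return estimate - moe, estimate + moe

def intervals_overlap(a, b):
    """True when two (low, high) ranges overlap, meaning the apparent
    difference between the estimates could be due to chance."""
    return a[0] <= b[1] and b[0] <= a[1]

year_2000 = interval(10, 3)   # (7, 13)
year_2001 = interval(11, 3)   # (8, 14)
print(intervals_overlap(year_2000, year_2001))  # prints "True"
```

Worth noting: comparing overlapping intervals is a conservative rule of thumb; a formal test of statistical significance between two estimates can detect some real differences that the overlap check misses.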
In general, margins of error become larger as the size of the group or level of geography gets smaller. For example, data about American Indian residents in Minneapolis will have a larger margin of error than data about all Minneapolis residents. When looking at graphs about small groups or small geographies, Minnesota Compass encourages users to consult the margin of error on the “Data & notes” page, which can be accessed for any graph by changing the View menu in the gray banner above the graph. In some instances, Minnesota Compass staff have chosen not to publish data if the margins of error are too large to draw reasonable conclusions.
The results of surveys are subject to other types of error besides sampling error. Compass users are encouraged to consult the original data source for additional details about methodology and potential sources of error.
In addition, not all data found on Minnesota Compass are survey data; therefore, some data do not have posted margins of error. Data about educational test scores, low birth weight babies, and crime, for example, come from actual reported counts, so there is no sampling error involved. However, there may be other types of error present, such as crimes that occurred but were not reported to police. In addition, some data for small numbers may be suppressed to protect privacy. Where this is known, Minnesota Compass staff have indicated it on the "Data & notes" page, which can be accessed for any graph by changing the View menu.
Finally, as with all human endeavors, it is possible that Minnesota Compass staff may have introduced an error. If you think you have found an error or have a question about the data, please let us know. We appreciate all feedback that can help us improve our website!