Asking The Right Questions: Tips On Collecting Constituent Data

As I complete the twenty-first Financial Sustainability for Independent Schools Forum, I am reflecting on some of the most common dilemmas school professionals face as they work to sustain their schools for the future. A major component of long-term sustainability is truly understanding the opinions of a school’s constituents. Our understanding of our constituents, though, is only as good as the questions we ask them. When asking for the opinions of the groups we serve, it’s important to construct a reliable survey that provides valid data for the purpose of sustaining the school.

One of the more challenging issues for independent schools is capturing reliable and valid data to help construct what we call the “value narrative.” The value narrative is the story that communicates the school’s value to current and prospective families, and it is essential to sustaining enrollment, which ultimately allows for the school’s long-term sustainability. Although most schools attempt to collect constituent opinion data, ensuring that the data is meaningful is not always simple.

As the development or selection of a survey is undertaken, it might be helpful to review a lesson from a graduate research course. The concepts of reliability and validity are critical if we expect to obtain meaningful data. The first reminder is that it is possible for a survey to be reliable without being valid; it is impossible, however, for a survey to be valid without being reliable. Reliability sets the ceiling for validity: the validity of an item or survey can never rise above the level its reliability allows.
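For readers who want the psychometric version of this ceiling, classical test theory states it as a bound: the correlation of a measure with any criterion (its validity coefficient) cannot exceed the square root of the measure’s reliability. The notation below is the standard textbook form, not anything specific to school surveys:

```latex
% Classical test theory ceiling on validity:
% r_{XY}  = validity coefficient (correlation of measure X with criterion Y)
% r_{XX'} = reliability coefficient of measure X
r_{XY} \;\le\; \sqrt{r_{XX'}}
```

Because reliability can never exceed 1, a survey with low reliability simply cannot yield highly valid results, no matter how it is used.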

Reliability addresses whether a measurement instrument (a survey) measures consistently. Does a survey item consistently measure, in question form, the construct the survey developer intended to address? Validity, on the other hand, is whether a measurement is appropriate for the use for which it is intended. For example, a perfectly reliable measure of people’s attitudes toward global warming would not be a valid instrument for making decisions about teacher compensation. An instrument must first be reliable and then must be used in a valid manner if it is to be useful for decision making or planning. A survey can be a reliable instrument, but the next question is whether the data it produces is useful for the decisions at hand.
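If a school pilots a short multi-item scale (for example, several items all meant to tap perceived safety), one common way to check consistency is an internal-consistency estimate such as Cronbach’s alpha. The sketch below is a minimal illustration, not part of the original article: the pilot data are invented, the layout (one row per respondent, one column per item, numeric Likert-style answers) is an assumption, and the 0.70 rule of thumb is only a convention.

```python
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """Estimate internal consistency (Cronbach's alpha) for a set of survey items.

    responses: 2-D array, one row per respondent, one column per item.
    """
    responses = np.asarray(responses, dtype=float)
    n_items = responses.shape[1]
    item_variances = responses.var(axis=0, ddof=1)      # variance of each item
    total_variance = responses.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Invented pilot data: 6 respondents answering 4 "perceived safety" items on a 1-5 scale.
pilot = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
])

print(f"Cronbach's alpha: {cronbach_alpha(pilot):.2f}")  # ~0.70 or higher is often treated as acceptable
```

An estimate like this speaks only to consistency; it says nothing about whether respondents understood the items as intended, which is the validity question taken up next.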

Constituent opinion data is most often collected by schools through surveys of students, parents, faculty, and alumni. Although surveys are convenient and relatively easy to administer, care should be taken to make certain that the data collected is useful. Too often, questions are constructed on the assumption that everyone will interpret them as asking about exactly the same concept, a crucial mistake when the goal is reliable data.

For example, a survey may ask, “Is XYZ School a safe environment?” Can we assume that everyone answering interprets “safe” in exactly the same way? It is highly unlikely that this will be the case. Safety can take many forms: physical safety from outside predators, physical safety within the school facilities, physical safety from other students, emotional safety from bullying by other students, or even spiritual safety. It is unlikely that a single question asking whether the school is safe will elicit a consistent meaning from respondents.

A simple yet helpful technique for assessing whether respondents interpret a survey question the same way is to conduct a “face validity” exercise. The survey developer should individually interview at least ten people, similar to those who will be surveyed, and ask them to explain what each question appears to be asking. If the responses are consistent across those interviewed, and consistent with the survey developer’s intent, the item likely has face validity: the question is understood to be asking what it is intended to ask. On the other hand, if those interviewed give inconsistent interpretations, or interpretations at odds with the developer’s intention, the question should be scrapped or reworked.
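One way to keep these interviews disciplined is simple bookkeeping: record, for each draft item, whether each interviewee’s paraphrase matched the developer’s intent, and keep only items with near-unanimous agreement. The sketch below is a hypothetical tally, not a statistical test; the item wordings, the recorded matches, and the 90 percent threshold are all invented for illustration.

```python
# Hypothetical face-validity tally. Each True/False records whether an
# interviewee's paraphrase of the item matched the developer's intended meaning.
interview_results = {
    "Is XYZ School a safe environment?":
        [True, False, True, False, True, True, False, True, False, True],
    "Do you feel your child is physically safe from other students at XYZ School?":
        [True, True, True, True, True, True, True, True, True, True],
}

AGREEMENT_THRESHOLD = 0.9  # assumed cutoff: 90% of interviewees must match the intent

for item, matches in interview_results.items():
    agreement = sum(matches) / len(matches)
    verdict = "keep" if agreement >= AGREEMENT_THRESHOLD else "rework or scrap"
    print(f"{agreement:.0%} agreement -> {verdict}: {item}")
```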

Another common mistake in survey item development is asking a compound question. This occurs when a survey presents a question that touches on multiple issues or concepts but allows only a single answer. For example, “Do teachers express a caring and helpful attitude?” is really two questions: whether teachers express a caring attitude, and whether they express a helpful attitude. Although not a perfect indicator, one clue that an item may contain multiple questions is the presence of the conjunction “and” or a comma separating thoughts. For a question to be reliable, it must address only one issue or assess only one attitude. If a question contains multiple concepts, the data cannot be interpreted accurately, because it will not be clear which portion of the question respondents were answering or how they combined the concepts into a single response.
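The “and, or a comma” clue is easy to turn into a first-pass screen run before the face-validity interviews. The sketch below simply flags draft items containing “and,” “or,” or an internal comma for human review; it is a rough heuristic rather than a rule, and the draft items are invented.

```python
import re

def flag_possible_compound(question: str) -> bool:
    """Heuristic: flag items containing 'and', 'or', or a comma as possibly
    double-barreled. A human reviewer still makes the final call."""
    return bool(re.search(r"\b(and|or)\b", question, flags=re.IGNORECASE)) or "," in question

draft_items = [
    "Do teachers express a caring and helpful attitude?",  # compound: two attitudes, one answer
    "Do teachers express a caring attitude?",              # single concept
    "Is the campus clean, well lit, and secure?",          # compound: three concepts
]

for item in draft_items:
    status = "REVIEW (possible compound)" if flag_possible_compound(item) else "ok"
    print(f"{status}: {item}")
```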

The two previous issues affect the reliability of a survey. If respondents do not interpret the concepts in the survey questions consistently, or if the items are compound questions, the data generated will be useless; any apparent correlation with respondents’ attitudes will be mere coincidence.

The second portion of the data-collection equation is ensuring that the data collected is valid for the purpose it is intended to address. One of the more common validity debates concerns standardized test scores as a predictor of college success. There is little debate that the SAT is a highly reliable instrument that measures what it intends to measure. Whether SAT scores are valid for predicting college success, however, is a completely different issue. Based on multiple studies, as well as my own research on the matter, I would suggest that SAT scores do account for a portion of the variance in predicting college success. That portion, however, tends to be quite small, which raises the question of validity: are the results of the SAT appropriately used as a predictor of college success?
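For readers who want to see what “variance accounted for” means in concrete terms: it is simply the squared correlation. The figure below is invented purely for the arithmetic and is not drawn from any particular study:

```latex
% Hypothetical illustration: a validity coefficient of r = 0.30 between a test
% score and a college-success measure explains only about 9% of the variance.
r = 0.30 \quad\Longrightarrow\quad r^{2} = 0.09 \;=\; 9\% \text{ of the variance}
```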

For constituent surveys constructed by independent schools, the researcher or surveyor should make sure that the questions asked are appropriate for informing the school’s planning and decision-making process. We must ask the right questions if we expect to get data that will inform our programs. A survey item that solicits a constituent’s opinion on an issue about which they are not informed is not valid. You might ask constituents for their overall impression of teacher attitude; however, you would not expect to receive valid input from the broader constituent group on teacher quality.

Constituent surveys serve an important purpose in independent schools, and as such, it’s imperative that they be constructed and administered appropriately. They will help a school gain important perspective, yet the data they provide will only be as useful as the survey design is strong.
