In a research study, the best-case scenario is that everybody who matters takes your survey. When a population is small enough and accessible, you can reach out to all of its members. A survey that covers the entire target population is called a census.
However, most of the time we can't survey the entire population, and that is when sampling matters. There are many methodologies for determining sample sizes; in our case, we plan to use a random sample.
We’ll sample your entire primary membership, which means we don’t include affiliates, vendors, or secondary members, for example. Since only a percentage of your population will respond, we want to be confident that the survey responses represent your overall membership. Here are four details to consider:
1. Confidence Level
This is how sure you can be about the responses you get. The confidence level is expressed as a percentage: it tells you how often the true percentage of the population who would pick an answer falls within your margin of error. A 95% confidence level means you can be 95% certain that the true answer for your whole membership lies within the margin of error around your survey result.
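For readers who want to see how the confidence level feeds into the math, here is a minimal Python sketch (standard library only) that converts a confidence level into the z multiplier used in margin-of-error calculations. The function name is ours, not something from this article or the calculator linked below:

```python
from statistics import NormalDist

def z_score(confidence_level: float) -> float:
    """Two-sided z multiplier for a given confidence level (e.g. 0.95)."""
    # 95% confidence leaves 2.5% in each tail, so we look up the 97.5th percentile.
    return NormalDist().inv_cdf(1 - (1 - confidence_level) / 2)

for level in (0.90, 0.95, 0.99):
    print(f"{level:.0%} confidence -> z = {z_score(level):.2f}")
# 90% -> 1.64, 95% -> 1.96, 99% -> 2.58
```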
2. Sample Size
Think of this as the number of completed responses you need to achieve. The larger your sample size, the more ‘sure’ you can be that the answers truly reflect your population: for a given confidence level (let’s say 95%), a larger sample size means a smaller margin of error. However, a bigger membership does not require a proportionally bigger return. In fact, the larger the population, the smaller the percentage of it you need to reach your target confidence level.
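As a rough illustration of how sample size, confidence level, and margin of error fit together, here is a minimal Python sketch of the standard sample-size formula with a finite-population correction, the same kind of math the calculator linked below performs. The 1,200-member population and the 95% / 5% settings come from this article; the function is only a sketch, and a given calculator may round slightly differently:

```python
import math
from statistics import NormalDist

def required_sample_size(population: int, confidence: float = 0.95,
                         margin_of_error: float = 0.05, p: float = 0.5) -> int:
    """Completed responses needed for the given confidence level and margin of error."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    # Sample size for an effectively infinite population (p = 0.5 is the most conservative choice).
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    # Finite-population correction: smaller memberships need fewer responses.
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

print(required_sample_size(1200))  # roughly 292 responses at 95% confidence, +/-5% margin of error
print(required_sample_size(1200, margin_of_error=0.03))  # a tighter margin needs many more responses
```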
3. Margin of Error
The margin of error is the price you pay for not getting feedback from your entire membership. It describes the range the answer would likely fall within if we had responses from everyone instead of just a sample: survey data gives you a range, not a single number. Let’s use a 5% margin of error in the following example:
Suppose you find that 68% of members strongly agree that “the organization stands for something important in your industry”. With a 5% margin of error, the real percentage of members who hold this opinion is somewhere between 63% and 73% (5% above or below 68%).
The next time you survey, suppose you ask the same question and learn that 71% of members strongly agree. The two ratings are statistically similar because 71% falls within the original margin of error (63-73%), so the increase is not statistically significant. If a result falls outside the margin of error, you can assume something really changed.
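That comparison can be written out as a simple check. The sketch below only illustrates the logic described above, using the 68%, 71%, and 5% figures from the example; it is not a full statistical significance test:

```python
def is_significant_change(baseline: float, new_result: float, margin_of_error: float) -> bool:
    """True if the new result falls outside the baseline's margin-of-error range."""
    low, high = baseline - margin_of_error, baseline + margin_of_error
    return not (low <= new_result <= high)

# 68% last survey, 71% this survey, +/-5% margin of error.
print(is_significant_change(0.68, 0.71, 0.05))  # False: 71% is inside 63%-73%, not a real change
print(is_significant_change(0.68, 0.75, 0.05))  # True: 75% falls outside the range
```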
4. Population
The key to a statistically valid survey is a genuine random sample of your population, so we’ll send the survey to all members, not a hand-picked subset. Many associations believe only the happy members will respond; others believe only angry members will take the time to complain. We find that with the right communication (promotion), an incentive, and a set of unbiased questions that give members some room to write in their own answers, quality feedback is not only possible but likely.
Below are two options for response rates. Scenario 2 is statistically stronger, but also more challenging to achieve. So while it would be ideal to reach Scenario 2, we’re confident that a result somewhere between these two scenarios will give you feedback that represents your entire membership.
The calculations below are based on a membership of 1,200.
You can calculate your possible response rates at this site. It will do all the math for you: http://www.surveysystem.com/sscalc.htm
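If you prefer to work backwards from the responses you actually receive, the same math gives the margin of error for a given response count. This is a rough sketch assuming a 95% confidence level; the 300 and 600 response counts are purely illustrative, not the scenarios referenced above:

```python
from statistics import NormalDist

def margin_of_error(responses: int, population: int, confidence: float = 0.95,
                    p: float = 0.5) -> float:
    """Margin of error for `responses` completed surveys out of `population` members."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    # Finite-population correction shrinks the margin when a large share of members responds.
    fpc = ((population - responses) / (population - 1)) ** 0.5
    return z * (p * (1 - p) / responses) ** 0.5 * fpc

# Illustrative response counts for a 1,200-member population:
for n in (300, 600):
    print(f"{n} responses -> +/-{margin_of_error(n, 1200):.1%}")
```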
To help achieve the results you aim for above, consider the way you ask your members to participate and offer them a survey incentive.
There’s a communications collective emerging from nSight Marketing’s client work. Our first project is member survey research that analyzes member perception across associations and the keys to their success. Click here for more information.
Contributors:
surveysystem.com
surveygizmo.com
growthpad.com