Chapter 8 Non-response

One of the most serious things that can go wrong in a sample survey is that sampled individuals either give us no information at all (unit non-response) or fail to answer some of our questions (item non-response).

Without the information that these non-respondents failed to provide, we have no way of knowing whether they differ from our actual respondents, or whether our estimates would have changed in any important way had they responded.

In sample surveys it is far better to do everything possible to avoid non-response than to try to fix it afterwards. All methods of fixing it up rely on largely untestable assumptions.

8.1 Item Non-response

One way of dealing with item non-response is to treat it as unit non-response and remove all information from any respondents who have not answered every question needed in our analysis. This is called complete case analysis, and it is what iNZight does (it reports how many observations it has deleted when it does so).
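In R this amounts to keeping only the complete cases on the analysis variables; a minimal sketch, using a hypothetical data frame dat and hypothetical variables age and income:

    # Keep only rows with no missing values on the variables used in the analysis
    cc <- dat[complete.cases(dat[, c("age", "income")]), ]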

Complete case analysis is rather wasteful of data.

Alternatives include various forms of imputation, in which values for the missing items are filled in by some method or other, according to a set of assumptions. iNZight isn’t set up to do this yet.
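For illustration only (this is not something iNZight does), the crudest form of imputation simply replaces each missing value with the mean of the observed values; a sketch with the same hypothetical data frame:

    # Naive mean imputation -- ignores the uncertainty in the imputed values
    dat$income[is.na(dat$income)] <- mean(dat$income, na.rm = TRUE)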

8.2 Unit Non-response

If we have non-responding units in our data set, we can assume that they would have responded similarly to respondents who share similar characteristics. We can then make weighting adjustments, increasing the sampling weights of the sample members who did respond to account for those who did not.

We can diagnose non-response by noticing that the proportions of respondents in our sample do not follow the expected proportions (even allowing for possibly unequal selection probabilities). For example, in household surveys we often see a smaller fraction of respondents among young adults than we would expect based on what we know about the age structure of the population. The 18-24 age group is particularly hard to survey: this group is very hard to contact and/or find at home for an interview. We therefore increase the sampling weights of the 18-24 year olds who do respond to account for the ones we expected to see in our survey.
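A minimal sketch of this kind of adjustment in base R, assuming a hypothetical data frame sample.frame with a logical responded indicator, an agegroup variable and a weight column:

    # Response rate within each age group
    resp.rate <- with(sample.frame, tapply(responded, agegroup, mean))
    # Inflate each respondent's weight by the inverse of their group's response rate
    respondents <- subset(sample.frame, responded)
    respondents$adj.weight <- respondents$weight /
        resp.rate[as.character(respondents$agegroup)]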

One way of doing this is post-stratification, which is also called calibration. We classify the population into subgroups (called post-strata), and we need the total population sizes within those subgroups. These sizes, which we call population benchmarks, might come from the sampling frame, or from some other source such as the census if we’re surveying the general population.

Calibration is a good thing to do whether or not non-response is suspected: we can be unlucky in a simple random sample and end up with sample proportions that differ substantially from those in the population. This is what was done in Section 5.1.2, where the one-stage cluster sampling weights \(M/m\) were replaced with the SRSWOR sampling weights \(N/n\) to ensure that the sample weights added up to the known population size \(N\).
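The underlying operation is available in R through the survey package; a minimal sketch with a hypothetical design object des and hypothetical regional benchmarks:

    library(survey)
    # Population benchmarks: one row per post-stratum, counts in a column named Freq
    region.pop <- data.frame(region = c("North", "South"), Freq = c(120000, 80000))
    des.ps <- postStratify(des, ~region, region.pop)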

To get iNZight to modify the weights by post-stratification we need to:

  • Specify the correct sample design for the data
  • Identify a variable (or variables) defining the post-strata, for which we have population benchmarks
  • Give these population sizes to iNZight for each of the post-stratification variables

To do this we

  • Go to Dataset > Survey Design > Specify design ... and specify the sample design
  • Go to Dataset > Survey Design > Post stratify ... and then
  • Choose the variable or variables for which we have population benchmarks
  • Then, for each value of the post-stratification variables, tell iNZight the population size (we can type these in directly, or iNZight can read them from a file, which must be in two-column format: the first column is the post-stratification variable and the second is the population size; see the sketch after this list)
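A benchmark file of the kind described in the last step is just a small two-column table; a sketch of how one might be created in R (the variable name and counts here are hypothetical):

    # One row per category: post-stratification variable, then population size
    benchmarks <- data.frame(
        agegroup   = c("18-24", "25-44", "45-64", "65+"),
        population = c(450000, 1200000, 1100000, 600000)
    )
    write.csv(benchmarks, "agegroup-benchmarks.csv", row.names = FALSE)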

After doing this we carry out our analyses as normal.

If we have two post-stratification variables (e.g. age-group and sex) we will be in one of two situations.

  1. We have benchmarks for the full cross-classification of the two variables (i.e. population benchmarks for each age-group+sex combination).
  2. We have benchmarks for the two variables separately (i.e. totals by age group, and then separately totals by sex).

In Case 1. we need to create a single new variable (say, age-sex) that has all the cross-classified combinations. We need to add this variable to the dataset, and then supply the benchmark totals for each value of the new variable age-sex.

In Case 2. we supply the benchmark totals for each variable separately. We have less information than in Case 1, and iNZight first needs to estimate the cross-classified table (which it does by a method called raking).

Both methods work with more than two variables.
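A sketch of Case 2 with the survey package, using a hypothetical design object des and hypothetical age-group and sex benchmarks:

    library(survey)
    # Case 2: separate margins for age group and sex
    age.pop <- data.frame(agegroup = c("18-24", "25-44", "45-64", "65+"),
                          Freq     = c(450000, 1200000, 1100000, 600000))
    sex.pop <- data.frame(sex  = c("female", "male"),
                          Freq = c(1700000, 1650000))
    des.raked <- rake(des,
                      sample.margins     = list(~agegroup, ~sex),
                      population.margins = list(age.pop, sex.pop))
    # Case 1 would instead post-stratify on a single combined age-sex variable,
    # supplying a benchmark count for every age-group x sex combination.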

8.3 Example - Post-stratification

Consider estimating the total number of students enrolled in California schools from the two-stage cluster sample apiclus2. The direct estimate of the population total of the enroll variable is given in the inference output:

====================================================================================================
                                  iNZight Summary - Survey Design
----------------------------------------------------------------------------------------------------
        Primary variable of interest: enroll (numeric)
                                      
        Total number of observations: 126
   Number omitted due to missingness: 6
   Total number of observations used: 120
           Estimated population size: 5129
----------------------------------------------------------------------------------------------------
   2 - level Cluster Sampling design
   With (40, 126) clusters.
   survey::svydesign(id = ~ dnum + snum, fpc = ~ fpc1+fpc2, data = dataSet)
====================================================================================================

Summary of enroll:
------------------

Population estimates:

      25%   Median     75%      Mean       SD        Total   Est. Pop. Size   |   Sample Size   Min    Max
   234.04    407.0   805.5   526.263   359.35   2639272.93             5129   |           126   112   2237

Standard error of estimates:

    57.76    140.1   163.4    80.341    34.82    799637.77             1473                               

Design effects:

                               6.143                 24.19                                                

====================================================================================================
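For reference, this direct estimate can be reproduced with the survey package in R (the apiclus2 data ship with the package); a minimal sketch:

    library(survey)
    data(api)    # loads apiclus2, among other api datasets
    dclus2 <- svydesign(id = ~dnum + snum, fpc = ~fpc1 + fpc2, data = apiclus2)
    svytotal(~enroll, dclus2, na.rm = TRUE)   # estimated total enrolment and its SE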

However, we can improve on this if we post-stratify by school type (stype). The total numbers of Elementary, High and Middle schools in the population are 4421, 755 and 1018 respectively. These benchmarks are also available in the file api-stype-benchmarks.csv.

====================================================================================================
                                  iNZight Summary - Survey Design
----------------------------------------------------------------------------------------------------
        Primary variable of interest: enroll (numeric)
                                      
        Total number of observations: 126
   Number omitted due to missingness: 6
   Total number of observations used: 120
           Estimated population size: 6194
----------------------------------------------------------------------------------------------------
   2 - level Cluster Sampling design
   With (40, 126) clusters.
   survey::calibrate(design_obj, ~stype, pop.totals)
====================================================================================================

Summary of enroll:
------------------

Population estimates:

      25%   Median      75%      Mean       SD         Total          Est. Pop. Size   |   Sample Size   Min    Max
   229.42   400.80   724.99   507.484   349.40   3074076.353   6194.0000000000000000   |           126   112   2237

Standard error of estimates:

    33.47    74.59    83.55    48.295    18.51    292584.376      0.0000000000002216                               

Design effects:

                                2.338                  2.339                                                       
====================================================================================================

After post-stratifying by stype, the standard error of the total has dropped dramatically, from 799638 to 292584 (with a corresponding decrease in the design effects).
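The post-stratified estimate can be reproduced with survey::calibrate by supplying the benchmark totals directly; a sketch, continuing from the dclus2 design object above (Elementary is the baseline level of stype, so only the High and Middle totals appear explicitly, together with the overall total 4421 + 755 + 1018 = 6194):

    pop.totals <- c(`(Intercept)` = 6194, stypeH = 755, stypeM = 1018)
    dclus2.cal <- calibrate(dclus2, ~stype, pop.totals)
    svytotal(~enroll, dclus2.cal, na.rm = TRUE)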

We can additionally post-stratify by awards (without cross-classifying with stype; the benchmarks are available in api-awards-benchmarks.csv), but we don’t get any further improvement: there is no useful additional information there.
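A sketch of the corresponding two-margin calibration, continuing from the objects above; the awardsYes total here is a placeholder, since the real value would be read from api-awards-benchmarks.csv:

    # Placeholder awards total -- replace with the benchmark from the file
    pop.totals2 <- c(`(Intercept)` = 6194, stypeH = 755, stypeM = 1018, awardsYes = 3100)
    dclus2.cal2 <- calibrate(dclus2, ~stype + awards, pop.totals2)
    svytotal(~enroll, dclus2.cal2, na.rm = TRUE)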

====================================================================================================
                                  iNZight Summary - Survey Design
----------------------------------------------------------------------------------------------------
        Primary variable of interest: enroll (numeric)
                                      
        Total number of observations: 126
   Number omitted due to missingness: 6
   Total number of observations used: 120
           Estimated population size: 6194
----------------------------------------------------------------------------------------------------
   2 - level Cluster Sampling design
   With (40, 126) clusters.
   survey::calibrate(design_obj, ~stype + awards, pop.totals)
====================================================================================================

Summary of enroll:
------------------

Population estimates:

      25%   Median     75%      Mean       SD         Total          Est. Pop. Size   |   Sample Size   Min    Max
   229.66   401.25   721.9   507.558   348.27   3079347.216   6194.0000000000000000   |           126   112   2237

Standard error of estimates:

    30.35    73.44    83.1    48.129    17.59    286397.443      0.0000000000001864                               

Design effects:

                               2.337                  2.248                                                   

====================================================================================================