Quality of Survey Data: How to Estimate It and Why It Matters

A new article in Harmonization: Newsletter on Survey Data Harmonization in the Social Sciences by Melanie Revilla, Willem Saris and the Survey Quality Predictor (SQP) team

There is no measurement without error. However, the size of the error depends on the measure used. In social science survey data in particular, it can be very large: on average, 50 percent of the observed variance in answers to survey questions is error (Alwin 2007). The size of the error also varies considerably with the exact formulation of the survey questions used to measure the concepts of interest (Saris and Gallhofer 2014), as well as across languages and over time. Thus, one of the main challenges for cross-sectional and longitudinal surveys is to estimate the size of the measurement error and to correct for it, so that meaningful comparisons can be made across groups or over time.

One way to estimate the size of the measurement errors (both random and systematic) is the multitrait-multimethod (MTMM) approach, first proposed by Campbell and Fiske (1959) and developed further by many authors (in particular, Jöreskog 1970; Andrews 1984; Saris and Andrews 1991). In this approach, different questions (called “traits”) are repeated using different methods (for example, a 5-point scale, a 7-point scale and an 11-point scale). Usually, for identification purposes, at least three different traits are measured using at least three different methods. Under quite general conditions (cf. Saris and Andrews 1991), this approach makes it possible to estimate the reliability (1 − random error variance) and validity (1 − method error variance) of a set of survey questions. By taking the product of reliability and validity, we get an estimate of the total quality of a survey question, which can also be defined as the strength of the relationship between the concept one is really interested in and the observed answers. The closer this quality estimate is to 1, the lower the level of measurement error for a given question.
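
To make the arithmetic concrete, here is a minimal sketch in Python with invented numbers; the reliability and validity values are assumptions for illustration only, not results from any actual MTMM analysis.

```python
# Hypothetical illustration of how reliability and validity combine into
# total quality in the MTMM framework described above.
# The numbers are invented; real values come from an MTMM analysis or SQP.

reliability = 0.81   # 1 - random error variance
validity = 0.95      # 1 - method error variance

# Total quality: the strength of the relationship between the concept of
# interest and the observed answers, expressed as a proportion of variance.
quality = reliability * validity

print(f"Reliability: {reliability:.2f}")
print(f"Validity:    {validity:.2f}")
print(f"Quality:     {quality:.2f}")  # 0.77, i.e. roughly 23% of the observed variance is error
```
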
One of the main limitations of the MTMM approach lies in the need to repeat similar questions to the same respondents, which can lead to cognitive burden, higher costs, longer surveys, etc. Moreover, the results of MTMM analyses are specific to the questions included: it is not possible to generalize from these questions to all the other questions in the survey. Yet it is also not feasible to repeat every survey question, which would amount to asking respondents to complete the same survey twice in a row.

Therefore, Saris and Gallhofer (2014) proposed an alternative: use the cumulative data from past MTMM experiments in a meta-analysis to investigate the effect of question characteristics and question context on reliability and validity, and then use this information to predict the quality of new survey questions based on their own characteristics. This is what the Survey Quality Predictor (SQP) software does in a user-friendly way.
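
The general logic of such a prediction can be sketched as follows; the data, the coded characteristics and the choice of a random-forest model below are purely illustrative assumptions, not SQP’s actual coding scheme or prediction algorithm.

```python
# Illustrative sketch: learn the relationship between coded question
# characteristics and MTMM quality estimates, then predict the quality of a
# new question from its own characteristics. All data here are invented.

import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training data: one row per question from past MTMM experiments.
past_experiments = pd.DataFrame({
    "n_scale_points":    [5, 7, 11, 5, 11, 4],
    "labels_all_points": [1, 0, 0, 1, 0, 1],   # 1 = every scale point labelled
    "domain_attitude":   [1, 1, 0, 0, 1, 0],   # 1 = attitudinal item
    "quality":           [0.62, 0.70, 0.74, 0.58, 0.77, 0.55],  # MTMM quality estimates
})

X = past_experiments.drop(columns="quality")
y = past_experiments["quality"]
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Predict the quality of a new question from its coded characteristics alone.
new_question = pd.DataFrame(
    {"n_scale_points": [11], "labels_all_points": [0], "domain_attitude": [1]}
)
print(f"Predicted quality: {model.predict(new_question)[0]:.2f}")
```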

SQP is a survey quality prediction system for questions used in survey research and a database of questions with information about their quality. The software is available for free at sqp.upf.edu. SQP is based on 3,700 quality estimates of questions obtained in more than 30 European countries and languages by MTMM analyses using the True Score model proposed by Saris and Andrews (1991). Most of these MTMM experiments have been done in the European Social Survey (ESS). Indeed, in each ESS round, four to six MTMM experiments are included in almost all participating countries. In each experiment, three traits are measured using three or four different methods. SQP allows users to consult the MTMM estimates for all these questions and languages. In addition, the program predicts the quality of new questions on the basis of information about the choices made with respect to the questions’ characteristics: users code the characteristics of their questions and obtain a prediction of the quality without needing to collect any new data. Brief tutorials explaining what SQP is and how it works are available at: https://www.youtube.com/channel/UCpljiQFlE4j5CYI-rqMKDig

The information from SQP or from MTMM experiments can be used in different ways. In particular, it can be used before data collection to help design the questions (Revilla, Zavala and Saris 2016), and after data collection to correct for measurement errors (De Castellarnau and Saris 2014; Saris and Revilla 2016). These are two crucial steps for obtaining proper estimates of the substantive relationships of interest. However, even though the tools are available, in practice most researchers do not implement these techniques. We believe that, for the future of survey research, this issue needs to be given more attention.
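
As a rough illustration of the post-collection correction, the sketch below disattenuates an observed correlation by the quality coefficients of the two measures, assuming the two questions do not share a method effect; the cited references describe the full procedure, including the subtraction of common method variance. The numbers are hypothetical.

```python
import math

def corrected_correlation(r_observed: float, quality_x: float, quality_y: float) -> float:
    """Disattenuate an observed correlation using the quality estimates
    (from MTMM analyses or SQP) of the two measures involved.
    Ignores shared method effects, which would also need to be removed
    when both questions use the same method."""
    q_x = math.sqrt(quality_x)  # quality coefficient of x
    q_y = math.sqrt(quality_y)  # quality coefficient of y
    return r_observed / (q_x * q_y)

# Hypothetical example: observed correlation of 0.30 between two questions
# with quality estimates of 0.65 and 0.70.
print(f"Corrected correlation: {corrected_correlation(0.30, 0.65, 0.70):.2f}")  # about 0.44
```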


References

Alwin, D.F. (2007). Margins of error: A study of reliability in survey measurement. Hoboken, NJ: Wiley.

Andrews, F.M. (1984). Construct validity and error components of survey measures: A structural modeling approach. Public Opinion Quarterly, 48: 409–442.

Campbell, D.T., and D.W. Fiske (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56: 81–105.

De Castellarnau, A. and Saris, W. E. (2014). A simple way to correct for measurement errors. European Social Survey Education Net (ESS EduNet). Available at: http://essedunet.nsd.uib.no/cms/topics/measurement/

Jöreskog, K.G. (1970). A general method for the analysis of covariance structures. Biometrika, 57:239–251.

Revilla, M., Zavala Rojas, D., and W.E. Saris (2016). Creating a good question: How to use cumulative experience. In Christof Wolf, Dominique Joye, Tom W. Smith and Yang‐Chih Fu (Eds.), The SAGE Handbook of Survey Methodology. SAGE.

Saris, W.E. and F.M. Andrews (1991). Evaluation of measurement instruments using a structural modeling approach. In P. Biemer, R.M. Groves, L. Lyberg, N. Mathiowetz, S. Sudman (Eds.), Measurement errors in surveys (pp. 575-597). New York: Wiley.

Saris, W.E. and I.N. Gallhofer (2014). Design, evaluation and analysis of questionnaires for survey research (second edition). Hoboken, NJ: Wiley.

Saris, W.E., and M. Revilla (2016). Correction for measurement errors in survey research: Necessary and possible. Social Indicators Research, 127(3): 1005–1020. First published online: 17 June 2015. DOI: 10.1007/s11205-015-1002-x

Melanie Revilla is a researcher at the Research and Expertise Centre for Survey Methodology (RECSM) and an adjunct professor at Universitat Pompeu Fabra (UPF, Barcelona, Spain).

Willem E. Saris is a professor and researcher at the Research and Expertise Centre for Survey Methodology of Universitat Pompeu Fabra (Spain). Together with Daniel Oberski, he was awarded the 2014 AAPOR Warren J. Mitofsky Innovators Award for the Survey Quality Predictor (SQP 2.0).