Research note
Comparing the order of the London Measure of Unplanned Pregnancy and the Demographic and Health Survey question on pregnancy intention in a single group of postnatal women in Malawi: the effect of question order on assessment of pregnancy intention
BMC Research Notes, volume 11, Article number 487 (2018)
To investigate the effect of question order on women’s responses to the London Measure of Unplanned Pregnancy (LMUP) or the pregnancy intention question of the Demographic and Health Survey (DHS) when both are asked in the same survey. We collected data on pregnancy intention from a cohort of 4244 pregnant women in Malawi who were re-interviewed at 1, 6 and 12 months postnatally. Women in Zone 1 were asked the LMUP, then antenatal questions, then the DHS pregnancy intention question; women in Zone 2 were asked the DHS pregnancy intention question, then antenatal questions, then the LMUP; women in Zone 3 were only asked the DHS pregnancy intention question. We used linear regression to compare the LMUP score and ordinal regression to compare DHS categorisations of pregnancy intention across Zones, adjusting for baseline socioeconomic differences between the Zones.
We found no effect of question order on the assessment of pregnancy intention by the LMUP. There were differences in the assessment of pregnancy intention when the DHS pregnancy intention question was used; however, these appeared to be due to baseline sociodemographic differences between the groups of pregnant women being compared, not to question order.
Questions about pregnancy intention have been asked in large scale surveys around the world for over 50 years. The purpose of these questions is to estimate the proportions of women with intended (or unintended) pregnancies and to use this information to understand the levels of desired fertility, need for family planning, and population growth patterns [2, 3]. Since the 1980s the main source of information on pregnancy intention in developing countries has been the Demographic and Health Surveys (DHS), based on a question asked up to 5 years after a birth: “At the time you became pregnant, did you want to become pregnant then, did you want to wait until later, or did you not want to have any (more) children at all?” The responses are categorised, respectively, as “intended”, “mistimed” and “unwanted” pregnancy, with “mistimed” and “unwanted” combined to estimate “unintended” pregnancy.
The DHS question follows a conceptualisation that was developed in the United States via the Growth of American Families Surveys in the 1950s, the National Fertility Surveys in the 1960s and 1970s [5, 6], and has continued from the 1970s to the present day with the periodic National Survey of Family Growth (NSFG). Over the last 20 years, however, there has been discussion about the validity of methods to measure pregnancy intention, particularly given the increased complexity of family formation patterns worldwide, the critiques of models of rational action within reproductive health, and the growing contribution of psychometric methods of measure development to all areas of social and health measurement [8,9,10,11,12,13]. As a response, the London Measure of Unplanned Pregnancy (LMUP) was developed in the early 2000s [14, 15]. It is a psychometrically valid and reliable tool comprising six questions which produce a score of 0–12, with higher scores indicating a more planned/intended pregnancy. The LMUP is now widely used, with eleven validated language versions across nine countries and more in progress [16,17,18,19,20,21,22,23]. Naturally, there has been a desire to compare the LMUP with other forms of measurement of pregnancy intention [24, 25]; however, we have been concerned about the best way to do this [26, 27] given the findings of Kaufmann et al.
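The scoring just described can be illustrated with a minimal sketch. This is not code from the study; the 0–3 / 4–9 / 10–12 grouping into “unplanned”, “ambivalent” and “planned” is the conventional one from the LMUP developers’ guidance, and the function names are ours:

```python
def lmup_score(items):
    """Sum the six LMUP items (each scored 0-2) into a 0-12 total.

    Higher scores indicate a more planned/intended pregnancy.
    """
    if len(items) != 6 or any(not (0 <= i <= 2) for i in items):
        raise ValueError("LMUP requires six items, each scored 0, 1 or 2")
    return sum(items)


def lmup_category(score):
    """Conventional grouping of the 0-12 score (per LMUP guidance)."""
    if score <= 3:
        return "unplanned"
    if score <= 9:
        return "ambivalent"
    return "planned"
```

For example, a woman answering 2 on every item scores 12 (“planned”), while one answering 0 throughout scores 0 (“unplanned”).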
In the 1990s Kaufmann et al. carried out an experiment within the Arizona Women’s Health Survey. Using a randomized crossover design, they asked women two sets of pregnancy intention questions: the question sequence from the National Survey of Family Growth and a single question closely based on the DHS question. Women were randomised to which question they answered first, with the subsequent pregnancy intention question separated by a body of intervening items on sexual experience and contraceptive use. The findings showed that the NSFG and DHS questions yielded similar proportions of “intended”, “mistimed” and “unwanted” pregnancies, yet a quarter of women gave discordant responses and there was an effect of question order: “the percentage of pregnancies classified as mistimed was greater in response to whichever intendedness question was presented later” in the survey (p. 814–5). This finding led Kaufmann et al. to question the validity of the underlying concepts, particularly “wantedness”. For us, it also leaves open the question of whether it is feasible to compare the LMUP with other pregnancy intention questions simply by asking individual women two different sets of questions within one survey.
The fact that preceding questions, or the context of the survey, can affect how individuals respond to a particular survey question is well known, usually described as a ‘framing effect’. There have been various investigations into framing effects. Recent studies examining question order in surveys, on topics ranging from opinions on assisted dying and reported experiences of bullying to rankings of priorities in a Delphi survey, have found significant differences in responses according to where questions are placed [29,30,31]. A small body of work exists around single questions on general health status (e.g. “Would you say your health in general is excellent, very good, good, fair, or poor?”). These studies show that responses to the question on general health status vary according to whether the question is placed before or after other questions on health or life satisfaction, although the effects can vary in size and by language [32,33,34,35,36]. There has also been some examination of the effects of instrument order. For instance, experiments varying the order of general health-related quality of life measures with disease/condition-specific measures, which are often asked together in surveys, have generally shown little effect on either [37,38,39,40] or an effect only in some domains, such as mental health. One study examined the effect of framing on a validated instrument, the Hospital Anxiety and Depression Scale (HADS), and found that preceding questions affected responses to the HADS.
In order to assess the effect of question order on women’s responses to the LMUP (a validated instrument) and the DHS question on pregnancy intention (a single survey question), we used data from a cohort study of pregnant women in Malawi.
We collected data on pregnancy intention from a cohort of over 4200 pregnant women in Mchinji District, Malawi from March 2013 to July 2014. The methodology for recruiting and following up the cohort, and a description of the women included, have been described elsewhere. Women were interviewed antenatally and at around 1-to-2, 6, and 12 months after the end of pregnancy. The LMUP was asked antenatally and at each postnatal follow up. The DHS question on pregnancy intention was only asked postnatally, as per standard practice.
Twenty-five clusters were randomly selected from 49 pre-defined areas covering the whole of Mchinji District. These were grouped into three geographical Zones: 1, 2 and 3. To investigate any effect of question order, the LMUP and DHS questions were asked in a different order postnatally in each Zone. Women in Zone 1 were asked the LMUP first, followed by questions on antenatal issues, before being asked the DHS question. Women in Zone 2 were asked the DHS question, then the antenatal questions and then the LMUP. Finally, women in Zone 3 were only asked the DHS question.
We examined the effect of question order by comparing the LMUP score (Zones 1 and 2 only) or DHS categorisation (all three Zones) at the 1-to-2 and 6 month postnatal follow-ups. We did not include the 12-month data due to the small numbers at this time point (see Fig. 1). We compared the LMUP scores across the Zones using linear regression of the full 0–12 score with robust standard errors, as recommended when using the LMUP as an outcome measure. Power calculations confirmed that we had > 95% power to detect a difference of at least 0.3 points on the LMUP scale, a difference not deemed to be clinically significant. We used ordinal logistic regression to compare the DHS categorisations of intended, mistimed and unwanted across the Zones. We used the Stata command “omodel” to test the proportional odds assumption and, where this was violated, the “gologit2” command to autofit a partial proportional odds model.
Given evidence of the determinants of pregnancy intention, we looked for and adjusted for baseline differences in socio-economic status, marital status, maternal age, maternal education and number of live children between the Zones to ensure that we were only seeing the effect of question order. Variables were removed in a manual backwards stepwise fashion, starting with the largest p-value and finishing when all variables with p-values > 0.1 had been removed. Zone remained in the model regardless of p-value as this was the variable of interest. To account for the fact that reported intention can change over time and to increase the comparability of the groups, we restricted the analysis to women who had been interviewed at 6 months antenatally and were interviewed postnatally at 1-to-2 and 6 months. All analyses were conducted in Stata version 15 (StataCorp. 2017. Stata Statistical Software: Release 15. College Station, TX: StataCorp LLC).
Figure 1 shows the number of women completing the LMUP and/or DHS at each postnatal follow up in each Zone. There were statistically significant differences between the Zones at baseline: socio-economic status (SES) (p < 0.001), education level (p = 0.006), age (p = 0.031), marital status (p = 0.018) and number of live children (p = 0.009) (see Additional file 1: Table S1).
There were no significant differences between the LMUP scores in Zones 1 and 2 at either of the postnatal follow-ups, even without adjusting for the baseline differences in socio-demographics, as shown in Table 1. Multivariate models were created to check for negative confounding but Zone remained non-significant in these models at both time points. There was no significant difference in the proportion of women who changed their LMUP score between the antenatal and 1-to-2 month postnatal interviews (p = 0.733) or between the 1-to-2 and 6 month postnatal follow-ups (p = 0.941), suggesting that there was no effect of question order on the stability of the LMUP.
There was a statistically significant difference in the proportion of pregnancies categorised as intended on the DHS measurement of pregnancy intention across the Zones at the first postnatal visit (p = 0.025), as shown in Table 2. Once baseline differences in socio-demographics between the Zones were controlled for, the differences in the DHS categorisations were not statistically significant (p = 0.177). For the analysis at 6 months postnatally, a partial proportional odds model had to be fitted for the univariate model as the assumption of proportional odds was violated. There was a borderline significant difference between the Zones (p = 0.087) which again became non-significant when adjusted for socio-demographics in the multivariable ordinal regression (p = 0.992). There was no significant difference in the proportion of women who changed their DHS categorisation between the 1-to-2 and 6 month postnatal follow-ups (p = 0.488), suggesting that there was no effect of question order on the stability of the DHS.
We found no effect of question order on the LMUP score at either postnatal time point, in either unadjusted or adjusted analyses. We found no effect of question order on the DHS categorisations at either postnatal time point once we had adjusted for baseline socio-demographic factors. We therefore conclude that there was no effect of question order on either measure of pregnancy intention.
Kaufmann et al. found evidence of an effect of question order in their data; in particular, they found more “mistimed” pregnancies in response to whichever question was asked second. In our data, had we compared only Zone 1 (LMUP then DHS) with Zone 2 (DHS then LMUP), our findings would have matched Kaufmann et al.’s, because there was a higher proportion of “mistimed” pregnancies in Zone 1, where the DHS was asked second, than in Zone 2, where the DHS was asked first, at both postnatal time points. However, Zone 3, where only the DHS was asked, had the highest proportion of “mistimed” pregnancies at both postnatal time points. Since women in Zone 3 were not asked the LMUP postnatally, the proportion of mistimed pregnancies could not have been influenced by the LMUP. This suggests that the differences in the proportion of mistimed pregnancies between the Zones were not due to question order. Indeed, when baseline socio-demographic differences were accounted for, the differences in DHS categorisation across the Zones were no longer significant. In contrast, despite the differences in baseline socio-demographic factors across the Zones, the LMUP score did not differ significantly between the Zones at any time point, indicating that this more nuanced measure of pregnancy intention is probably more reliable than the DHS question.
The lack of an effect of question order in our analyses is encouraging as it suggests that it is possible to compare measures of pregnancy intention by means of comparisons within individuals in the context of a survey.
We were not able to randomise question order at the individual level, meaning that known and unknown confounders were not balanced across the Zones. However, we were able to adjust for known confounders. We can only conclude that there is no effect of question order on reported pregnancy intention in the Chichewa language; others have found that the effect of question order may differ by language, so our finding should be verified in other languages.
DHS: Demographic and Health Survey
LMUP: London Measure of Unplanned Pregnancy
NSFG: National Survey of Family Growth
Campbell AA, Mosher WD. A history of the measurement of unintended pregnancies and births. Matern Child Health J. 2000;4(3):163–9.
Melvin CL, Rogers M, Gilbert BC, Lipscomb L, Lorenz R, Ronck S, et al. Pregnancy intention: how PRAMS data can inform programs and policy. Matern Child Health J. 2000;4(3):197–201.
Shupe AK, Smith AE, Stout CL, McLaughlin H. The importance of local data in unintended pregnancy prevention programming. Matern Child Health J. 2000;4(3):209–14.
Freedman R, Whelpton PK, Campbell AA. Family planning sterility and population growth. New York: McGraw Hill; 1958.
Ryder NB, Westoff CF. Reproduction in the United States 1965. Princeton: Princeton University Press; 1971.
Westoff CF, Ryder NB. The contraceptive revolution. New Jersey: Princeton University Press; 1977.
NSFG. About the National Survey of Family Growth 2013. http://www.cdc.gov/nchs/nsfg/about_nsfg.htm. Accessed 18 June 2013.
Santelli JS, Rochat R, Hatfield-Timajchy K, Gilbert BC, Curtis KM, Cabral R, et al. The measurement and meaning of unintended pregnancy. Perspect Sex Reprod Health. 2003;35:94–101.
Santelli JS, Lindberg LD, Orr MG, Finer LB, Speizer I. Toward a multidimensional measure of pregnancy intentions: evidence from the United States. Stud Fam Plann. 2009;40(2):87–100.
Bachrach CA, Newcomer S. Intended pregnancies and unintended pregnancies: distinct categories or opposite ends of a continuum? Fam Plann Perspect. 1999;31(5):251–2.
Petersen R, Moos M. Defining and measuring unintended pregnancy: issues and concerns. Women Health Iss. 1997;7(4):234–40. https://doi.org/10.1016/S1049-3867(97)00009-1.
Stanford JB, Hobbs R, Jameson P, DeWitt MJ, Fischer RC. Defining dimensions of pregnancy intendedness. Matern Child Health J. 2000;4(3):183–9.
Trussell J, Vaughan B, Stanford J. Are all contraceptive failures unintended pregnancies? Evidence from the 1995 National Survey of Family Growth. Fam Plann Perspect. 1999;31(5):246–7.
Barrett G, Wellings K. What is a ‘planned’ pregnancy? Empirical data from a British study. Soc Sci Med. 2002;55:545–57.
Barrett G, Smith SC, Wellings K. Conceptualisation, development and evaluation of a measure of unplanned pregnancy. J Epidemiol Community Health. 2004;58:426–33.
Borges AL, Barrett G, Dos Santos OA, Nascimento Nde C, Cavalhieri FB, Fujimori E. Evaluation of the psychometric properties of the London measure of unplanned pregnancy in Brazilian Portuguese. BMC Pregnancy Childbirth. 2016;16:244. https://doi.org/10.1186/s12884-016-1037-2.
Hall JA, Barrett G, Mbwana N, Copas A, Malata A, Stephenson J. Understanding pregnancy planning in a low-income country setting: validation of the London measure of unplanned pregnancy in Malawi. BMC Pregnancy Childbirth. 2013;13:200. https://doi.org/10.1186/1471-2393-13-200.
Morof D, Steinauer JE, Haider S, Liu S, Darney P. Evaluation of the London measure of unplanned pregnancy in a United States population of women. PLoS ONE. 2012;7(7):e35381. https://doi.org/10.1371/journal.pone.0035381.
Rocca CH, Krishnan S, Barrett G, Wilson M. Measuring pregnancy planning: an assessment of the London measure of unplanned pregnancy among urban, south Indian women. Demogr Res. 2010;23:293–334.
Roshanaei S, Shaghaghi A, Jafarabadi MA, Kousha A. Measuring unintended pregnancies in postpartum Iranian women: validation of the London measure of unplanned pregnancy. East Mediterr Health J. 2015;21(8):572–8.
Almaghaslah E, Rochat R, Farhat G. Validation of a pregnancy planning measure for Arabic-speaking women. PLoS ONE. 2017;12(10):e0185433. https://doi.org/10.1371/journal.pone.0185433.
Habib MA, Raynes-Greenow C, Nausheen S, Soofi SB, Sajid M, Bhutta ZA, et al. Prevalence and determinants of unintended pregnancies amongst women attending antenatal clinics in Pakistan. BMC Pregnancy Childbirth. 2017;17(1):156. https://doi.org/10.1186/s12884-017-1339-z.
Goossens J, Verhaeghe S, Van Hecke A, Barrett G, Delbaere I, Beeckman D. Psychometric properties of the Dutch version of the London measure of unplanned pregnancy in women with pregnancies ending in birth. PLoS ONE. 2018;13(4):e0194033. https://doi.org/10.1371/journal.pone.0194033.
Aiken AR, Westhoff CL, Trussell J, Castano PM. Comparison of a timing-based measure of unintended pregnancy and the London measure of unplanned pregnancy. Perspect Sex Reprod Health. 2016;48(3):139–46. https://doi.org/10.1363/48e11316.
Drevin J, Kristiansson P, Stern J, Rosenblad A. Measuring pregnancy planning: a psychometric evaluation and comparison of two scales. J Adv Nurs. 2017;73(11):2765–75. https://doi.org/10.1111/jan.13364.
Barrett G, Morroni C, Stephenson J, Hall J, Morof D, Rocca CH. Regarding “Identifying women at risk of unintended pregnancy: a comparison of two pregnancy readiness measures”: measuring pregnancy intention. Ann Epidemiol. 2014;24(1):78–9. https://doi.org/10.1016/j.annepidem.2013.09.007.
Barrett G, Hall JA, Stephenson J. Measuring pregnancy intention: the complexity of comparison. Perspect Sex Reprod Health. 2017;49:69–70.
Kaufmann RB, Morris L, Spitz AM. Comparison of two question sequences for assessing pregnancy intentions. Am J Epidemiol. 1997;145(9):810–6.
Magelssen M, Supphellen M, Nortvedt P, Materstvedt LJ. Attitudes towards assisted dying are influenced by question wording and order: a survey experiment. BMC Med Ethics. 2016;17(1):24. https://doi.org/10.1186/s12910-016-0107-3.
Brookes ST, Chalmers KA, Avery KNL, Coulman K, Blazeby JM, et al. Impact of question order on prioritisation of outcomes in the development of a core outcome set: a randomised controlled trial. Trials. 2018;19(1):66. https://doi.org/10.1186/s13063-017-2405-6.
Huang FL, Cornell DG. The impact of definition and question order on the prevalence of bullying victimization using student self-reports. Psychol Assess. 2015;27(4):1484–93.
Crossley TF, Kennedy S. The reliability of self-assessed health status. J Health Econ. 2002;21:643–58.
Bowling A, Windsor J. The effects of question order and response-choice on self-rated health status in the English Longitudinal Study of Ageing (ELSA). J Epidemiol Community Health. 2008;62(1):81–5. https://doi.org/10.1136/jech.2006.058214.
Lee S, Grant D. The effect of question order on self-rated general health status in a multilingual survey context. Am J Epidemiol. 2009;169(12):1525–30. https://doi.org/10.1093/aje/kwp070.
Garbarski D, Schaeffer NC, Dykema J. The effects of response option order and question order on self-rated health. Qual Life Res. 2015;24(6):1443–53. https://doi.org/10.1007/s11136-014-0861-y.
Lee S, McClain C, Webster N, Han S. Question order sensitivity of subjective well-being measures: focus on life satisfaction, self-rated health, and subjective life expectancy in survey instruments. Qual Life Res. 2016;25(10):2497–510. https://doi.org/10.1007/s11136-016-1304-8.
McColl E, Eccles MP, Rousseau NS, Steen IN, Parkin DW, Grimshaw JM. From the generic to the condition-specific?: Instrument order effects in Quality of Life Assessment. Med Care. 2003;41(7):777–90. https://doi.org/10.1097/01.mlr.0000068536.92694.5e.
Cheung YB, Lim C, Goh C, Thumboo J, Wee J. Order effects: a randomised study of three major cancer-specific quality of life instruments. Health Qual Life Outcomes. 2005;3:37. https://doi.org/10.1186/1477-7525-3-37.
Rat AC, Baumann C, Klein S, Loeuille D, Guillemin F. Effect of order of presentation of a generic and a specific health-related quality of life instrument in knee and hip osteoarthritis: a randomized study. Osteoarthritis Cartilage. 2008;16(4):429–35. https://doi.org/10.1016/j.joca.2007.07.011.
Kieffer JM, Verrips GH, Hoogstraten J. Instrument-order effects: using the oral health impact profile 49 and the Short Form 12. Eur J Oral Sci. 2011;119(1):69–72. https://doi.org/10.1111/j.1600-0722.2010.00796.x.
Childs AL, Submacular Surgery Trials Research Group. Effect of order of administration of health-related quality of life interview instruments on responses. Qual Life Res. 2005;14(2):493–500 (Epub 2005/05/17).
Hauksdottir A, Steineck G, Furst CJ, Valdimarsdottir U. Towards better measurements in bereavement research: order of questions and assessed psychological morbidity. Palliat Med. 2006;20(1):11–6. https://doi.org/10.1191/0269216306pm1098oa.
Hall JA, Barrett G, Phiri T, Copas A, Malata A, Stephenson J. Prevalence and determinants of unintended pregnancy in Mchinji District, Malawi; using a conceptual hierarchy to inform analysis. PLoS ONE. 2016;11(10):e0165621. https://doi.org/10.1371/journal.pone.0165621.
Lewycka S. Reducing maternal and neonatal deaths in rural Malawi: evaluating the impact of a community-based women’s group intervention. London: University College London; 2010.
Hall JA, Barrett G, Copas A, Stephenson J. London measure of unplanned pregnancy: guidance for its use as an outcome measure. Patient Relat Outcome Meas. 2017;8:43–56. https://doi.org/10.2147/PROM.S122420.
Williams R. Generalized ordered logit/partial proportional odds models for ordinal dependent variables. Stata J. 2006;6(1):58–82.
JH and GB had the idea for the paper. JH, GB and JS were involved in the design of the original study. JH collected and analysed the data. JH and GB wrote the first draft of the manuscript. All authors read and approved the final manuscript.
We would like to thank the LMUP team fieldworkers who collected the data used in this analysis as well as all the women who consented to take part in the study from which these data were drawn. We would also like to thank Alexey Zaikin for his advice on the power calculations.
The authors declare that they have no competing interests.
Availability of data and materials
The datasets analysed for this paper are available from the UCL Discovery database linked to the publication record in the UCL Research Publication Service. The dataset can be accessed here: https://doi.org/10.5522/00/7.
Consent for publication
Ethics approval and consent to participate
The study from which these data were drawn was approved by the UCL Research Ethics Committee and the College of Medicine Research Ethics Committee at the University of Malawi, Reference Numbers 3974/001 and P.03/12/1273 respectively. All participants gave written informed consent to take part in this research.
The study from which these data were drawn was funded by a 3-year personal Research Training Fellowship from the Wellcome Trust to Dr. J. Hall, Award Number 097268/Z/11/Z. The funders had no role in the design, collection, analysis, or interpretation of data; in the writing of the manuscript; or in the decision to submit the manuscript for publication.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.