Are responses to official consultations and stakeholder surveys reliable guides to policy actors’ positions?

Karin Ingold

Policy scholars are interested in the positions and preferences of politically involved actors. Those preferences can serve either as independent variables (for example, to explain coordination among actors or their strategic behaviour) or as dependent variables (for example, to evaluate actors’ coherence over time). But how should these policy positions or preferences be identified? Should researchers interview actors or code their official statements in the policymaking process? How valuable are survey results compared with media data? These are typical questions about methods of data gathering, and there is unlikely to be an absolute answer as to which method is best. However, our recent Policy & Politics article contributes to this discussion, drawing on unique data from three cases. Using these data, it compares actors’ statements about policies gathered in two ways: through surveys and through text coding of official statements.

Interestingly, the results show a general pattern: actors tend to assess policies more positively in the (later) survey than in the (earlier) official statement. It is mainly the losers in the policy process who improve their assessment between the consultation phase and the survey phase, and who therefore show a higher discrepancy than other actors. This might indicate a ‘correction’ of their position once they know about their policy defeat, and it is not a trivial result. First, it shows that the timing of data gathering matters. Second, for theories such as the Advocacy Coalition Framework (Sabatier and Jenkins-Smith, 1993), which conceptualises policy beliefs and preferences as stable, this result suggests at least two possible rationales: either actors correct their positions in the survey situation, acting through mechanisms of social desirability; or policy positions are not as stable as some frameworks predict. The exceptions to the trend of evaluating instruments more positively in the survey than in the consultation are also interesting: target group actors evaluate relevant policy instruments more negatively, and thereby also show a higher discrepancy between officially stated and survey-stated policy positions than other actors. However, our models show no significant or large effect for the target group predictor variable.

What are the broader implications of these empirical findings for (comparative) policy studies? As the losers of the political game seem to have a systematic tendency to improve their assessment of instruments between consultation and survey, it is worth reflecting on who the losers of the process might be, particularly when working only with survey data. This finding is highly significant: policy analysis aims to know ‘who gets what, when, how’ (following Lasswell’s seminal question, 1956). Indeed, it is harder to know accurately ‘who gets what’ if one cannot fully trust how actors defeated in a policy battle will report their positions, or whether they will tend to change those positions quickly.

Furthermore, as our research demonstrates, actors tend to change their positions between the period before a policy is introduced and the period after, so the timing of data gathering is crucial. The effects of social desirability and the ‘correction’ of an actor’s own position can thus differ before and after a policy is introduced. In short: timing and actor type matter when researchers draw conclusions about policy positions, their relevance for policymaking, or their stability over time.

You can read the original research in Policy & Politics:

Ingold, K., Varone, F., Kammerer, M., Metz, F., Kammermann, L. and Strotz, C. (2020) ‘Are responses to official consultations and stakeholder surveys reliable guides to policy actors’ positions?’, Policy & Politics, DOI: https://doi.org/10.1332/030557319X15613699478503

If you enjoyed this blog post, you may also be interested to read:

Why advocacy coalitions matter and practical insights about them

Policy design and the added-value of the institutional analysis development framework

Practical lessons from policy theories
