Special issue blog series on advancing our understanding of the politics behind nudge and the ‘behavioural insights’ trend in public policy.
Sarah Cotterill
The quality of reporting in behavioural public policy research is often poor, making it difficult for the reader to understand what the intervention was or how the research was done. A review published in 2018 examined choice architecture and nudges: behaviour change interventions in which the environment or decision-making context is designed so that people are nudged toward more beneficial options. The review found 156 studies and uncovered widespread poor practice: only two per cent followed a reporting guideline, only seven per cent were informed by a power calculation, none of the studies were pre-registered, and the categories used to describe the interventions were neither exhaustive nor mutually exclusive, with frequent overlaps. The quality of many studies is too poor to allow meta-analysis, and the behavioural interventions are not described in sufficient detail to distinguish one from another or to allow replication.
There is plenty of guidance that researchers can use to improve the robustness of their behavioural public policy research. Using guidelines and checklists can make research more transparent and convince sceptics of its value. In our recent article in Policy & Politics we argue that the question is not whether to use a checklist, but which checklist to use.
Our three top tips for improving research quality:
#1 Paint a clear picture. Even the best policy interventions will not be picked up and implemented by anyone else unless they are described clearly in the first place, in a way that distinguishes them from other, similar interventions. One tool with a long track record is the TIDieR (Template for Intervention Description and Replication) checklist. Be explicit about the active behaviour change ingredients by using the Behaviour Change Technique Taxonomy: this helps us avoid confusing each other by describing the same technique in several different ways.
#2 Plan and report trials carefully. Trials are not the only method for studying public policy, but if you run one, don't waste effort on a badly designed study. A protocol, pre-registration and a pre-specified statistical analysis plan all bring rigour, as does an a priori power calculation (see the sketch below). There are many guides to help with this, and we signpost them in our article.
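To illustrate one of these planning steps, here is a minimal sketch of an a priori power calculation for a hypothetical two-arm nudge trial, written in Python with the statsmodels library. The baseline and expected uptake rates are illustrative assumptions, not figures from our article or the 2018 review.

```python
# Minimal sketch: a priori power calculation for a two-arm trial.
# All numbers are illustrative assumptions, not figures from the article.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.20   # assumed uptake in the control group
expected_rate = 0.25   # assumed uptake under the nudge

# Cohen's h: standardised effect size for a difference in proportions
effect_size = proportion_effectsize(expected_rate, baseline_rate)

# Sample size per arm for 80% power at a two-sided 5% significance level
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,
    power=0.80,
    alternative="two-sided",
)
print(f"Participants needed per arm: {n_per_arm:.0f}")  # ≈ 1092 per arm
```

Pre-registering a calculation like this, alongside the protocol and analysis plan, makes clear that the sample size was justified before any data were seen.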
#3 Simplify. There is significant overlap between the items required for trial planning and those required for reporting, so we argue for a single template containing all of the items, with an indication of which are needed at each stage.
Our P&P article provides evidence that behavioural public policy researchers sometimes let themselves down by not following best research practice, and it offers guidance on how to make research more rigorous, more transparent and more translatable to other settings.
This blog post was originally published on Discover Society, as part of its Policy & Politics section, on 7 October 2020.
You can read the original research in Policy & Politics:
Cotterill, Sarah; John, Peter; Johnston, Marie (2020) ‘How can better monitoring, reporting and evaluation standards advance behavioural public policy?’, Policy & Politics, DOI: https://doi.org/10.1332/030557320X15955052119363 [Free]
If you enjoyed this blog post, you may also be interested to read:
Introduction to the upcoming special issue: Beyond nudge: advancing the state-of-the-art of behavioural public policy and administration [Free]