
Since coming to office, the UK government has made mission-led policy the focus of its approach.
The Plan for Change demonstrates the central role being played by missions, and my colleague Amanda Rowlatt provides excellent insights into how missions might affect the policy-making process from her time at the heart of government.
But how will we know if missions have ‘worked’? This article explores the role of evaluation in mission-led government. We consider what is different when evaluating a mission compared with evaluating an individual policy or programme.
Evaluation is critical for evidence-led policy. There is a commitment to evaluation across government. The Evaluation Task Force, a joint Cabinet Office-HM Treasury unit, recently organised its second government evaluation conference, proudly sponsored by Frontier, which brought together hundreds of evaluation practitioners and senior leaders from across government.
This article aims to help those now putting evaluation theory into practice for mission-led policy.
Evaluation matters for missions
As set out in the Magenta Book, evaluation plays two core functions:
1. Learning: has something worked as intended, why (not) and what are the lessons learned for future policy?
2. Accountability: has policy delivered for taxpayers?
Both of these are as true for missions as they are for any individual policy or programme. Indeed, the opportunity for high-quality evaluation of missions may be even greater. If missions offer a long-term vision, then we can evaluate over the longer term as well. This provides real value for the learning function to shape the direction of mission-led policy, and for increased rigour to support the accountability function.
A theoretical underpinning is essential
Any good evaluation requires an upfront understanding of the intended benefits, and of why interventions should achieve them. This requires a theory of change.
For individual policies, a logic model is often used to visualise the theory of change, tracing the causal pathways from specific policy inputs and actions to ultimate impacts.
For missions, things are trickier. The government has articulated the longer-term shifts it would like to see under each mission, but not the specific policies or activities that will help to deliver them, making it hard to know where to start in developing a mission-level theory of change.
However, there is still an important role for theory at the heart of mission-led evaluation. The five missions are in policy areas where substantial independent research exists to understand barriers, enablers and drivers of the changes the government has indicated it wants to see. The What Works Centres, for example, have helped collate evidence around issues such as crime and health (and are themselves the subject of a past evaluation by us). The Productivity Institute is a UK centre of excellence on the barriers to and enablers of growth.
A theory of change for missions could therefore take the form of a systems-level assessment of what theory and evidence tells us are the most important determinants of achieving the mission, what the external barriers to success are, and where evidence suggests the relevant policy action needs to be taken. This should draw heavily on existing research syntheses, and usefully draw on independent validation from experts and senior Departmental leaders most relevant to each mission as the basis for policy development.
Each mission is likely to lead to a range of supporting policies and programmes. By having a mission-level theory of change in place, policy-level theories of change and logic models can be developed, nested within and showing synergy and consistency with the underlying mission-level theory. This offers a strong, positive opportunity for policy design to show that intended outcomes and impacts are consistent with the theory and evidence relating to the mission itself. Where the mission-level theory is more uncertain or untested, it also offers the opportunity for future evaluations to contribute to the wider evidence base and understanding – reinforcing the learning function of evaluation.
Even without a mission-level theory of change, evaluations of policies or programmes with clear relevance to different missions should still articulate policy-level theories of change that show how the policy could support mission delivery. This will help evaluators demonstrate the strategic benefit of different interventions.
The emphasis on learning, and the need for a clear theoretical underpinning, are consistent with past Frontier work reviewing the evidence base around the evaluation of mission-led research and innovation (R&I) interventions. This was conducted as part of the ongoing evaluation of the Industrial Strategy Challenge Fund (ISCF) with RAND Europe on behalf of UK Research and Innovation (UKRI). The current missions clearly extend well beyond the boundaries of R&I alone, but the lessons and frameworks identified in that review are still useful in the broader context of mission-driven policy (Section 2 of the ISCF Evaluation Framework report).
Milestones can bring structure to evaluation
A defining feature of missions is that they are long-term and ambitious. That has advantages for consistent, driven policy-making, but it poses challenges for evaluation in knowing whether and when we can say ‘mission accomplished’.
By drawing on the theory of change, however, it is possible to develop milestones or staging posts, consistent with the ultimate objective of the mission, that allow for an earlier assessment of whether the missions are on track.
The government has set a milestone for each mission, with varying degrees of clear relevance to the ultimate goals, as set out in Amanda’s paper. A clear theoretical underpinning, and an understanding of the system in which missions are being set, would nonetheless allow a clearer rationale for each milestone to be established.
Regardless of the strength of the theory supporting each milestone, it is clear the government intends them to be used as an accountability mechanism:
“We have always been clear that our missions represent a long-term plan for the country.... We also need to set clear milestones to track our progress and allow the British people to hold us to account.... These milestones are ambitious but achievable. For the next five years, they will give our country the stabilising certainty of a clear destination.”
- Plan for Change: Milestones for Mission-led Government
This suggests milestones should form a strong part of any mission-level evaluation, with the evaluation testing whether or not they have been achieved.
But an evaluation should go beyond simply trying to test whether milestones are hit, and start to explore why. A clear theory can help not only define the milestone, but also identify the most important external factors which will drive success. Evaluation can then explore the role these have played against any policy action in meeting the milestone. For example, in assessing whether “1.5 million safe and decent homes” are built in England over the Parliament – one of the milestones for the ‘economic growth’ mission – any evaluation is likely to have to consider factors such as planning rules, capacity and skills in the construction sector, immigration policy, land availability and use, building regulations, other infrastructure demands (and more) as critical influences on whether the milestone can be met.
Counterfactuals are critical
Evaluations of policies and programmes emphasise the importance of the counterfactual – what would have happened without the policy? This is essential to provide a robust assessment of additionality.
For missions, the counterfactual is unclear. Without missions, it may be that many of the policies and programmes implemented to support it would still happen, at least in some form. However, a mission lens could mean policies are more effective and co-ordinated, such that a mission-led approach is more impactful than a counterfactual ‘non-mission’ approach.
There is likely to be policy interest in knowing both whether:
■ The mission-led approach was more effective than a non-mission approach; and
■ Government intervention (relative to no intervention) contributed to the success of the mission and to hitting key milestones.
The former suggests a role for process evaluation as part of an overall evaluation of missions. This is likely to require input from senior officials to understand whether the mission lens is making a tangible difference to the way that policy is designed and delivered. This could also be done relatively quickly, allowing for early lessons to be taken on board.
We adopted this approach, for example, in our ongoing evaluation of the Strength in Places Fund (SIPF) for Research England. As the largest ‘place-based’ R&I programme in the UK, our evaluation framework included specific questions about whether the place-based lens had led to a different outcome in terms of collaborations than similar place-agnostic interventions. This was tested through qualitative insights from those receiving funding and delivering the support comparing their experiences of SIPF with other R&I programmes.
Evaluating the impact of a mission-led approach compared with a non-mission counterfactual offers a great opportunity to learn not just what works, but how policy design and delivery interact with impact, with potential for cross-government learning.
Evaluate the mission, not just individual policies
When evaluating whether government intervention contributed to the success of the mission, the level of the evaluation needs to be clear. In order to evaluate missions, we need to evaluate the mission as a whole – not just the set of policies and programmes put in place to support its delivery.
Of course, it is very likely that policy-level evaluation evidence will be a core part of the evidence base for a mission-level evaluation. A mission-level evaluation will, however, need to consider the way in which individual policies interact in service of the mission, thinking through overlaps, synergies and reinforcing mechanisms that might only be apparent at mission level. These will be most pronounced where different policies are tackling similar parts of the underlying mission-level theory of change. Additional ‘mission-level’ evidence will also need to be gathered, in particular qualitative perspectives from senior policy leaders tasked with delivery of the mission as a whole, to understand whether and how the whole is greater than the sum of the parts.
Given the complexity of the missions and the range of factors affecting success, it is likely that theory-based evaluation methods will be needed to evaluate the way that interventions contributed to achieving missions and milestones.
Existing Magenta Book guidance emphasises the role of complexity approaches in evaluation which could be particularly relevant for mission-led evaluation, in particular exploring interactions and feedback loops between interventions that support a mission or milestone. Mission-level evaluation could also seek to identify ‘tipping points’, where a critical mass of co-ordinated interventions achieve unique impacts or substantially larger impacts than if they were delivered in isolation.
In implementing individual policies to support each mission, a small number of critical common metrics should also be defined, which should be monitored and tracked within policy-level evaluations. These metrics should clearly relate to the milestones, informed by the mission-level theory of change. This will aid an overall mission-level evaluation and provide a basis to combine evidence from different interventions. This lesson was also identified in the context of the ISCF interim impact evaluation, where attempts to aggregate from the evaluations of individual Challenge Funds to a ‘Fund-level’ assessment were hampered by a lack of common metrics or measures.
However, recognising that individual policies are likely to be tackling different aspects of the mission, the number of common metrics should be small, and focused on those most relevant to a given mission or milestone, to allow flexibility in policy-level evaluation.
The time is now
The government has set out its missions and a set of initial milestones. Now is the time to start developing an effective evaluation plan.
Evaluating the success of missions will be important to generate buy-in among both policy makers and the wider public, and to build support for the missions to continue into the longer term – certainly beyond the life of this Parliament.
A robust theory of change for each mission, which helps connect the milestones to the ultimate mission objectives, should be developed and accompanying key metrics defined. This should draw on the participation of policy leads and expert stakeholders. Existing or new policies that are then defined as relevant to the missions should reflect the mission-level theory and key metrics within their own evaluation plans.
Finally, the government should commit to a transparent evaluation of the missions and a review of lessons learned towards the end of the Parliament. The results, available to the public, will help both government and opposition parties determine policy towards missions for the next Parliament and beyond.