Making safety evidence accessible

Boiler blueprint

Page authors

Allan Little, Director, Mission Economics

Sara MacLennan, Visiting Senior Research Fellow, Centre for Economic Performance, LSE

Part of the role of the new Lloyd’s Register Foundation Global Safety Evidence Centre is to develop and implement methods to ensure the work we fund to improve safety is evidence-based and impactful. That starts with getting the grant application and review process right, and so we commissioned Mission Economics to develop a framework. The authors, Allan Little and Sara MacLennan, explain.

A framework for value

Evidence is central to engineering a safer world. That's why Lloyd's Register Foundation has committed £15 million over the next decade to establish the Global Safety Evidence Centre – a hub for building and sharing knowledge about what works to make people safer.

But here's the challenge: evidence-based grant-making can involve complex frameworks, methods, and technical jargon that shuts out smaller organisations, international partners, and frontline innovators who lack formal evaluation expertise.

Our three-month scoping project set out to change this. Working alongside the Evidence Centre team, we developed a practical, accessible framework that enables any organisation – regardless of size, location, or analytical capacity – to demonstrate the value, and value for money, of their safety work.

Who is this framework for?

This seven-stage model is designed to support funders and grant applicants of all sizes and locations. Applied proportionately, it scales from small community projects to major research programmes. The aim is to help every grant contribute to the pipeline of ‘what works’ evidence that the Centre will build.

Stage 0: Strategic Fit – starting with the right problem

A project can only deliver ‘value for money’ if it tackles problems the Foundation actually wants to solve. We call this 'Stage 0' because it determines whether value for money is even possible, and whether grants reflect the Foundation's 2024-2029 strategy priorities.

Stage 1: SMART Objectives – turning ambitions into commitments

Vague ambitions produce vague results. Compare these two statements:

  • "This grant will improve maritime safety in developing regions."
  • "This grant will reduce fishing vessel accidents in Lake Victoria by 10% within 12 months through implementing real-time stability monitoring systems on 50 vessels."

The second creates a concrete commitment that can be measured and achieved. We provide examples to help craft objectives that are Specific, Measurable, Achievable, Relevant, and Time-bound.

Stage 2: Theory of Change – rallying evidence behind objectives

This stage links activities to impacts through a logical chain. Done badly, it represents wishful thinking. Done well, it brings intellectual honesty about evidence quality and addresses key assumptions.

Stage 3: Appraisal – value for money, without the economics PhD

A fully quantified cost-benefit analysis can be unattainable or require resources disproportionate to the grant itself. Instead, we recommend a manageable framework built on the ‘four Es’:

  • Economy: getting inputs at the right price.
  • Efficiency: cost per output delivered.
  • Effectiveness: whether the approach actually works.
  • Equity: targeting those most in need.

This allows organisations to demonstrate value more systematically without requiring specialist economic expertise.
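To make the four Es concrete, here is a minimal sketch of how each could be expressed as a simple ratio. This is our illustration, not tooling from the framework itself, and every figure is invented for a hypothetical vessel-monitoring grant:

```python
# A minimal sketch of the 'four Es' as simple ratios.
# All figures below are hypothetical, for illustration only.

def economy(unit_price_paid: float, benchmark_price: float) -> float:
    """Price paid for inputs relative to a benchmark (lower is better)."""
    return unit_price_paid / benchmark_price

def efficiency(total_cost: float, outputs_delivered: int) -> float:
    """Cost per output delivered."""
    return total_cost / outputs_delivered

def effectiveness(outcomes_achieved: int, outcomes_targeted: int) -> float:
    """Share of targeted outcomes actually achieved."""
    return outcomes_achieved / outcomes_targeted

def equity(priority_beneficiaries: int, total_beneficiaries: int) -> float:
    """Share of beneficiaries drawn from the group most in need."""
    return priority_beneficiaries / total_beneficiaries

# Hypothetical grant: fitting monitoring systems to 50 vessels for £100,000.
print(f"Economy:       {economy(2000, 2200):.2f}")
print(f"Efficiency:    £{efficiency(100_000, 50):,.0f} per vessel")
print(f"Effectiveness: {effectiveness(45, 50):.0%}")
print(f"Equity:        {equity(40, 50):.0%}")
```

Even back-of-the-envelope ratios like these let an applicant show how inputs, outputs, and outcomes relate, without a formal cost-benefit model.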

Stage 4: Monitoring – moving beyond "we'll track progress"

Strong monitoring requires specificity. Rather than planning to "monitor training completion", effective approaches specify "monthly tracking of participant completion rates through the online learning management system, with quarterly verification through certification records". We provide examples that organisations can adapt to their context and resources.

Stage 5: Evaluation – understanding why things work

If monitoring tracks what happened, evaluation explains why it happened and whether money was well spent. We outline different evaluation strategies, ensuring at least some are within reach for all grants.

Stage 6: Feedback – sharing what worked (and what didn’t)

Learning goals and dissemination plans shouldn't be afterthoughts. From the beginning, strong grant applications identify what knowledge gaps they'll address and how insights will reach others who can use them to improve safety outcomes. 

What next?

Our project developed through a series of 'topic notes', allowing us to work alongside the Centre, exploring themes in decision science and economics. For example, we drew on our expertise in wellbeing economics to explore how subjective wellbeing measures can be used to measure and value safety impacts. In our final report, we deliberately scaled back to establish some core principles for all organisations to follow, but there is scope to push the boundaries by exploring different ways to measure and value safety outcomes. 

We've also translated this framework into 'Beacon', a prototype web application that walks the Foundation through each stage with templates, examples, and sector-specific guidance. While experimental and unpublished, it shows how digital tools could make evidence-based approaches even more accessible globally.