World-first comprehensive safety argument to assure AI

The Centre for Assuring Autonomy (CfAA), a partnership between Lloyd’s Register Foundation and the University of York, has published a comprehensive approach to safety cases for the assurance of AI and autonomous systems. Read the full article on the CfAA website.

The BIG Argument launch

The Balanced, Integrated and Grounded (BIG) argument addresses AI safety at both the technical and the sociotechnical levels. It takes a whole-system approach to AI safety cases, demonstrating how the entire safety argument can be brought together.

As AI and autonomous systems become widespread across society, developers and regulators are increasingly relying on safety cases to assure such technologies and address the emerging challenges they present. The BIG Argument introduces a way to meet these challenges in a sustainable, scalable and practical way.

A whole systems approach to AI safety cases

Professor Ibrahim Habli, Research Director, CfAA, talks more about the programme.

BIG demonstrates that prioritising safety can go hand in hand with innovation. It builds on three leading safety assurance frameworks and methodologies developed by the CfAA:

  1. Principles-based Ethics Assurance (PRAISE)
  2. Assurance of Autonomous Systems in Complex Environments (SACE)
  3. Assurance of Machine Learning for use in Autonomous Systems (AMLAS)

As well as addressing concerns with autonomous systems and robotics, BIG enables and supports the safe deployment of frontier AI models, addressing a critical gap in their development and deployment.

CfAA Director and co-author John McDermid OBE FREng said: “The BIG Argument represents an important step in the integration and consolidation of different aspects of safety assurance, like our SACE and AMLAS methodologies. It creates a cohesive approach that is applicable to many domains and sectors, such as maritime, automotive and healthcare, and is an exciting next step in the evolution of our work here at the CfAA.”

The BIG Argument paper aims to improve transparency and accountability for the safety of AI, ultimately contributing to and shaping the development, deployment and maintenance of justifiably safe AI systems.

Read more and download the full paper here.