A new draft standard from The British Standards Institution (BSI) features two pieces of research from the Assuring Autonomy International Programme (AAIP).
The standard, ‘BS 30440 – Validation framework for the use of AI within healthcare’, references the AAIP Assurance of Machine Learning for use in Autonomous Systems (AMLAS) guidance. It also links to the Human Factors in Healthcare AI white paper based on work undertaken as part of an AAIP demonstrator project and published by the Chartered Institute of Human Factors and Ergonomics.
BS 30440 is being published to support the development of safe, effective and ethical AI healthcare products. Products will be evaluated against specific criteria, including clinical benefit, performance standards, safe and successful integration into the clinical work environment, ethical considerations and socially equitable outcomes from system use.
In 2016, Lloyd’s Register Foundation’s Foresight Review of Robotics and Autonomous Systems identified that a major obstacle to realising the benefits of AI, robotics and autonomous systems was their assurance and regulation. These findings inspired the Foundation and the University of York to invest in and collaborate on AAIP – one of the only programmes of its kind in the world focused on the safety assurance of robotic and autonomous systems.
AAIP’s state-of-the-art research, training and guidance provide a framework for the safe adoption of robotic and autonomous systems in industry. Working with regulators and standards bodies such as BSI is a key strategic priority for the implementation of AAIP’s work in society.