
The government’s long-awaited AI Opportunities Action Plan places a welcome emphasis on the potential benefits of AI, including its ability to foster growth and to deliver better public services. However, to accelerate growth and unlock those benefits in the ways the Government hopes, the opportunities set out in the plan need refining. Centre for Assuring Autonomy Director, Professor John McDermid OBE FREng, shares six ways the Action Plan can be enhanced to steer its way to success. CfAA is a partnership between the University of York and Lloyd’s Register Foundation dedicated to pioneering research and innovation at the intersection of AI and safety.
For example, the notion of AI Growth Zones (AIGZs) is very welcome, and the opportunity should be taken to ensure they are well distributed geographically, supporting the whole nation, including exploiting brownfield sites such as York Central. At the same time, questions over the climate impact of generative AI must be addressed, and the Government could look towards the use of Small Modular Reactors (SMRs) to power local data centres where renewables aren’t available.
As the PM noted in his speech launching the AI Opportunities Action Plan, the availability of data is a critical factor in AI research and development. If the UK is to be an “AI maker”, the creation of high-value datasets is critical. To go a step further, for UK firms to export AI-based systems, they need access to datasets relevant to their deployment domains. This means putting in place government-level international data-sharing agreements, enabling UK-based firms to access world markets. There will also be cases where data needs to be shared, for example to ensure consistent behaviour of self-driving vehicles. Finding mechanisms to encourage a “collaborate to compete” model, perhaps by supporting particular industry sectors to share data through the National Data Library, would help UK industry to grow and to deploy AI-based technologies safely.
The emphasis on lifelong skills set out in the Government’s plan is also welcome. Many people, in all walks of life, will need to become skilled and informed users of AI, even if they don’t contribute to its development. More formally, in developing future leaders of AI development, the recent Centres for Doctoral Training (CDTs) in AI will be important hubs for expanding and sharing knowledge across all the facets of AI. For example, the SAINTS CDT at York spans disciplines including law, health sciences, philosophy and sociology as well as computer science, giving graduates a rounded understanding of all the factors surrounding the effective, ethical, safe and secure introduction of AI systems. The proposed plan to attract a few (potential) international leaders to the UK will be helpful, but not sufficient; a significant rethink of the immigration system is needed to enable companies and universities to attract overseas talent (as students or staff) without the current high costs.
The proposed support for regulators is vital. Regulators are in the front line for approving – or not – AI-based systems, and our work with regulators in different sectors has highlighted the need to bolster their expertise and resources so as to allow safe and ethical innovation, at pace. Whilst the AI Safety Institute (AISI) can support regulators, most will have to deal with systems using more ‘conventional’ AI, and as such will need access to and support from experts in areas such as robotics and autonomous systems. Also, from our experience of working with HSE, OfGem, ONS, OPSS, MCA, MHRA and VCA – to mention but a few – it is clear that they have common concerns, although there are some domain-specific issues. Supporting cross-regulator activity to define common approaches, where possible, will help regulators cope with demand, help industry through more consistent approaches across regulators, and thus contribute to growth.
Supporting the AI assurance ecosystem is crucial. Effective assurance is how we minimise the risk of deploying AI that is capable of causing harm, whether that be discrimination, misinformation, or physical injury. If serious incidents occur, there is a risk of a public backlash against AI, and with that the loss of growth opportunities. Whilst it is important to address Frontier AI, as is being done by the AISI, there is also a need to address assurance of more conventional AI, such as that being used in robots, self-driving vehicles and healthcare applications right now – which continues to be our focus at the Centre for Assuring Autonomy (CfAA).
Investment is clearly needed in evaluation methods and tools, but this should build on the experience of the established safety engineering community to foster a “design for safety and assurance” culture in industry, rather than the current approach, which could be characterised as “build-test-improve”. As well as AISI fast grants, there needs to be sustainable, multi-year funding for work on assurance methods and tools. Such tools are difficult to produce, and they need to evolve as AI technology evolves. It also takes time for methods to diffuse into the development community, and short projects will not achieve this. Work on assurance also has the potential to deliver economic growth for the UK – assurance services are likely to be highly valuable and profitable. At least one of the AIGZs should focus on this – and perhaps all the AIGZs should contribute to work on assurance as part of a national network.
The plan introduces the notion of “scan-pilot-scale” for the use of AI in government. This structure, which can build on the successful use of sandpits in many areas, is commendable in that it addresses the “valley of death” that can occur between successful trials and wide-scale deployment. But for critical applications, regulators need to be part of the process – involved in pilots so they can grant a “pilot clearance” that accelerates formal approvals and realises the benefits of wider deployment.
This action plan is a great start. I urge the government to consider the above suggestions seriously as a way of gaining even more for the UK from AI, and of achieving growth that benefits all working people as well as tech entrepreneurs and the wider economy. Finally, to deliver the action plan, the government needs to draw on the existing centres of excellence in the UK, and not rely solely on government-funded activities.