The AI Safety Paradox

The purpose of this Literature Review and Positioning paper is to inform the forthcoming Foresight Review into the Safe Adoption of AI in Engineered Systems by: 

  • exploring key themes in AI adoption with relevance for the future safe adoption of AI in engineered systems; and

  • identifying useful areas for future investigation. 

Safe Adoption of Artificial Intelligence in Engineered Systems

This Foresight Review into the Safe Adoption of AI aims to understand the current and future impacts of Frontier AI on engineered systems and develop recommendations for future safe adoption. 

AI development and deployment are fast-moving, complex fields of practice, shaped by technological breakthroughs, a competitive global marketplace, and conflicting international regulatory standards. Additionally, perspectives on the likely success of future developments in the field are still emergent and wide-ranging.

This literature review surveys a significant body of relevant material across the fields of critical infrastructure, worker safety, and environmental safety to provide a clear starting point and evidence base for further foresight activities. It surfaces multiple paradoxical outcomes created by the use of AI in complex environments and examines the disconnect between practical, applied advances in safety and the new risks and harms created by emerging and general-purpose AI systems.

It has been published alongside the 'Levers for the Safe Adoption of AI' Positioning Paper, which sets out the project's midpoint perspective.

Download the Literature Review

Lloyd's Register Foundation Foresight Review AI - Literature Review

Download Lloyd's Register Foundation Foresight Review AI - Literature Review (PDF, 1.52MB)

Citation

If you wish to use and reference The AI Safety Paradox: A Literature Review on the Safe Adoption of Artificial Intelligence in Engineered Systems report in your own work, please include the following DOI: https://doi.org/10.60743/sf8a-p461

Example Citation in IEEE Style:

Lloyd's Register Foundation, “The AI Safety Paradox: A Literature Review on the Safe Adoption of Artificial Intelligence in Engineered Systems,” Lloyd's Register Foundation, 2026. doi: 10.60743/SF8A-P461.

Download the Positioning Paper

Lloyd's Register Foundation Foresight Review AI - Positioning Paper

Download Lloyd's Register Foundation Foresight Review AI - Positioning Paper (PDF, 739.19KB)

Citation

If you wish to use and reference the Levers for the Safe Adoption of AI: Foresight Review Positioning Paper in your own work, please include the following DOI: https://doi.org/10.60743/pzwc-ba94

Example Citation in IEEE Style:

Lloyd's Register Foundation, “Levers for the Safe Adoption of AI: Foresight Review Positioning Paper,” Lloyd's Register Foundation, 2026. doi: 10.60743/PZWC-BA94.