Overall, we found that:
1) The opacity of AI supply chains is a structural barrier to safety. The lack of transparency over the provenance of training data, the invisibility of human labour in ostensibly automated systems, and the absence of standardised environmental impact measurement mean that organisations procuring or deploying AI-enabled tools frequently lack the information necessary to conduct meaningful risk assessment or due diligence. This opacity is intensified by proprietary systems, corporate publication policies, and fragmented global supply chains that combine to obscure the full range of upstream and downstream impacts.
2) Existing assurance and governance frameworks are insufficient to address the sociotechnical complexity of AI safety. The literature consistently shows that technically focused assurance methodologies fail to capture the broader societal, environmental, and labour impacts that accompany AI deployment. Risk frameworks that treat AI safety as a primarily technical problem miss more systemic effects: for instance, the rebound effects that undermine the environmental case for autonomous vehicles, the mental health impacts of human–robot collaboration, or the ways in which algorithmic bias in financial services deepens existing social divisions. A sociotechnical approach that integrates technical, social, environmental, and economic dimensions would improve safety assurance by offering a more complete picture of the end-to-end impacts of adopting a given system.
3) The international regulatory landscape is fragmented in ways that compound rather than mitigate risk. The fundamentally divergent approaches of the European Union, United States, and China reflect competing visions of AI's role in economic and societal development. None of the frameworks examined fully address supply chain transparency, worker protections, or environmental justice, and the competitive pressures between nations and firms continue to accelerate product release cycles in ways that outpace regulatory capacity. Top-down global alignment appears unlikely in the short term, suggesting that alternative mechanisms — including sectoral standards, procurement requirements, and multi-stakeholder governance initiatives — will need to play a more prominent role.
4) The distribution of AI's costs and benefits is profoundly unequal. Across every domain reviewed, the harms of AI development and deployment are disproportionately borne by those with the least power to shape its trajectory: workers in Low- and Middle-Income Countries who label data and moderate content under exploitative conditions; communities in climate-vulnerable regions whose water sources are contaminated by mineral extraction or whose air quality is degraded by data centre operations; and populations subject to biased automated decision-making in domains such as finance, welfare, and justice. The literature makes clear that AI safety cannot be meaningfully assessed without attending to these distributional questions, and that governance frameworks which fail to centre the rights of affected communities will entrench rather than address existing inequalities.
5) The pressure to rapidly deploy emergent and untested technologies can displace both governance and assurance. The growing body of evidence on environmental, labour, and societal harms has so far done little to limit the scope of General Purpose AI (GPAI) development; without new technical models or a pivot to a “safety-by-design” approach, the continued expansion of GPAI systems is likely to generate further safety deficits that are displaced onto the most vulnerable.
The next stage of the Foresight Review will explore two different approaches to improving prospects for safe adoption.
Improved provenance and assurance are important levers for organisations that purchase or integrate AI systems, or that build tools on top of externally created models. Routes to achieving this include:
- AI Supply Chain Transparency, characterised by high-quality data about provenance that runs both upstream and downstream from development and deployment. In addition to technical transparency regarding data and model composition, this could include data on workplace conditions for those involved in hardware production and data labelling, clear data on environmental impacts, and detailed risk assessments regarding future social impacts.
- Sociotechnical Evaluations and Assurance Methods that run “end-to-end” across the AI supply chain, treating safety not as a fixed property of a system but as an emergent characteristic.
We will also explore future development models. The major AI labs will probably continue to develop and refine their own general purpose AI models; the most promising opportunities may therefore lie in refining alternative approaches to AI development, including:
- Future Developments in Narrow AI, such as applied machine learning, Frugal AI, and the use of small models, that may be easier to safely assure.
- Computing within Planetary Boundaries, sustainable approaches to hardware and software development.
- “Safety-by-Design” as an approach to current and future AI hardware and software development.