
Banner image: Artistic visualisation by Yutong Liu. Commissioned by Doing AI Differently. Conceptual brief Drew Hemment. CC-BY 4.0
New research argues for AI design that reflects the complex cultural challenges faced by its rapid deployment across society. The new vision presents an opportunity for the humanities to contribute to fundamental advances in AI technology and help to solve complex real-world challenges.
‘Doing AI Differently’ is an initiative led by The Alan Turing Institute, the University of Edinburgh and the UK’s Arts & Humanities Research Council (AHRC-UKRI), in partnership with Lloyd’s Register Foundation. It offers a unique perspective on how the humanities, art and qualitative social sciences can diversify the AI landscape and tackle current challenges within the industry, including design limitations, in-built bias, and wider ethical concerns.
The paper argues that the text and images generated by today’s AI systems more closely resemble the kinds of cultural artefacts studied by the humanities than a mathematical equation or a spreadsheet of data. Humanities insights are therefore needed to make sense of these systems – both to understand their capabilities and to design systems that are more effective and more responsible.
The paper draws parallels with the rapid development of social media platforms – released with minimal contextual safeguards and benchmarked on simplistic engagement metrics – now linked with societal harms. It outlines how to avoid repeating those mistakes amid today’s fast pace of AI deployment.
Researchers suggest that interpretive depth is needed: AI systems must be able to understand context, cultural nuance, and multiple perspectives. This will become increasingly important as the remit of AI expands to solving more complex modern-day challenges, from climate change to superbugs. This falls precisely within the expertise of the humanities, arts and qualitative social sciences, which all specialise in understanding cultural meaning.
Professor Drew Hemment, Theme Lead for Interpretive Technologies for Sustainability at The Alan Turing Institute, said: "AI systems are increasingly operating in sensitive domains – such as healthcare, climate action and democratic discourse – where cultural context and interpretive judgement are essential. Yet there is a fundamental gap. We need systems that can better engage with and represent the richness and diversity of human meaning."
"The last few years have made it clear that AI can be a formidable tool to solve modern-day challenges. But these systems often fail when nuance and context matter most."
In the white paper, researchers outline three core innovations: interpretive technologies, systems built to work with ambiguity, context, and plurality from the outset; new AI architectures that expand today’s narrow design space; and human-AI ensembles that enhance human capabilities rather than replace them.
Prof. Hemment added: "We’re at a pivotal moment for AI. Decisions being made today about AI architecture will shape the systems we live with for years to come. We have a narrowing window to build in interpretive capabilities from the ground up. This is our opportunity to shape a new generation of AI – one that amplifies rather than erodes human potential."
Alongside the paper comes the launch of an international funding call for collaborative projects in this space, delivered by the UKRI Arts and Humanities Research Council and the Social Sciences and Humanities Research Council of Canada (SSHRC), pending final funder approvals. Through this funding opportunity, Doing AI Differently is building a global research community across six continents, creating a roadmap for action on responsible AI.
Jan Przydatek, Director of Technologies at Lloyd’s Register Foundation, concluded: “Too little research exists when it comes to alternative ways of using AI in safety-critical domains.
“As a global safety charity, our priority is to ensure future AI systems, whatever shape they take, are deployed in a safe and reliable manner. This is exactly what Doing AI Differently is about, by setting out a research agenda for Interpretive AI – systems designed to respond to cultural meaning, contextual depth and multiple perspectives.
“This research proposes a fundamental shift: bringing humanities expertise into the core of AI design, to unlock AI's potential to solve humanity's most complex challenges while ensuring these technologies amplify rather than diminish human agency and cultural diversity. We hope the paper will inform industry leaders and policymakers around the world as we take a crucial step towards the safe, scaled up deployment of AI in safety-critical areas.”