
The Global Safety Podcast - The Future of Data

Subscribe now to The Global Safety Podcast - our series exploring how experts keep the world safe.

Speaking on the Lloyd’s Register Foundation Global Safety Podcast, Dr Mhairi Aitken from The Alan Turing Institute has called for better regulation of AI and of data collection and holding practices.

Her comments come just after Lloyd’s Register Foundation published the World Risk Poll, which surveyed over 125,000 people in 121 countries. It found that more than three quarters of people (77%) worry about their personal data being stolen online, and nearly two thirds (65%) would not feel safe in a self-driving car. The full poll findings can be found here.

On the podcast, available here, Dr Aitken states:

“We need to demand responsible use of our data. We don't have high expectations of these companies to be handling our data responsibly or using it for social good.

“We should be holding these companies to account when they have such a big influence on our lives and on our society. We should be demanding that they use our data responsibly and that they provide some social benefit.”

The Global Safety Podcast, presented by Professor Danielle George and produced by Fresh Air Production, investigates the world’s biggest safety issues and looks at the latest developments to safeguard our future in an unpredictable world.

Episode five of the third series sees thought leaders including Dr Aitken, Lisa Allen from the Open Data Institute, Chris White from Lloyd's Register Foundation and Vladislav Tushkanov from Kaspersky discuss the future of data security.

White set out the evolving risks:

“Our lives depend on a data infrastructure, which we don't fully understand. For instance, a power station which is then connected to the Internet - do we fully understand the complex system that underpins that power station in its operation and its connection to the World Wide Web? Was that ever planned? Is it just an evolved system?

“There’s an increase in cascading risks as all of these systems start connecting. There's a lot of risk that we don't fully understand across the infrastructure which keeps society safe. That will continue until the literacy of the profession starts increasing and we start mapping the dependencies and interdependencies that manage our infrastructure.”

Dr Aitken goes on to set out further how we need to change the way we think about AI if we are to regulate it successfully:

“A lot of the public imagery around AI is really sensational. In reality AI is more efficient processes for processing large volumes of data, which is not headline grabbing stuff. But that's what AI does. It's a programme which is developed and designed by humans with all the imperfections, prejudices, values, everything that goes into that. We need to think of these systems in that way.

“Only by understanding that that's what they are, can we hold them to account, scrutinize them and demand that they are used appropriately.”

Discussing people’s perception of risk as evidenced by the World Risk Poll, White says:

“The majority of people who said they were worried about the theft of their personal information online were in five regions - central and western Africa, South Eastern Asia, Southern Africa, Latin America and the Caribbean, and Eastern Africa. So more people from low and middle income countries feel that organisations and their governments may not be using their data in a responsible way.

“Globally, nearly two thirds of people would not feel safe in a self-driving car. Just 27% of people globally said they would feel safe. And in no country or region did more than 45% of people say they would feel safe. People with high levels of education are most likely to say they would feel safer in a self-driving car. About 35% of those with a post-secondary education responded in that way versus 25% of those with primary education or less. So the amount of education you have seems to give you more confidence that self-driving cars are safer.”

The recent Lloyd’s Register Foundation World Risk Poll report, ‘Perceptions of risk from AI and misuse of personal data’, found here, looks in depth at the poll data and concludes that:

“The 2021 World Risk Poll highlights a number of factors that may make some social and economic groups in countries and territories around the world more hesitant than others to use internet or AI-based technologies. Some factors — including fear that they will be discriminated against or that their personal information will be used against them — are particularly relevant to low-income or marginalised groups in many countries.

“Policymakers must address such concerns if they are to close the digital divides that threaten to increase and perpetuate income inequality. People and groups who are hesitant to engage with digital technologies — either because they fear negative consequences or because using such technologies seems inconsistent with their beliefs — risk falling further behind people and groups who feel secure in using them to pursue educational and economic opportunities and improve their quality of life.”
