

The Global Safety Podcast - Artificial Intelligence and Safety

Subscribe now to The Global Safety Podcast - our series exploring how experts keep the world safe.

 

Artificial Intelligence is all around us: it’s in our phones, our cars and our computers. Police departments, governments and hospitals use it, as do marketing companies, recruitment agencies, news sites and social media. The development and adoption of artificial intelligence are accelerating significantly, but the big question now is how do we maximise its benefits while avoiding its biggest risks? From robotic surgery to the prevention of modern slavery, this episode of The Global Safety Podcast explores ways in which AI is making the world a safer place, and the challenges and dangers faced in an era defined by rapid technological change.

Joining Tom Heap in Episode 7 of The Global Safety Podcast are:

  • Jan Pryzdatek, Director of Technologies, Lloyd's Register Foundation.
  • Khyati Sundaram, CEO of Applied, a recruitment platform using AI to build predictive and de-biased hiring.
  • Alastair Denniston, Director of INSIGHT, the Health Data Research Hub for Eye Health, Consultant Ophthalmologist at University Hospitals Birmingham NHSFT and Honorary Professor, University of Birmingham.
  • Ana MacIntosh and John McDermid from the Assuring Autonomy International Programme, a project led by the University of York and funded by Lloyd's Register Foundation.
  • Adriana Bora, AI Policy Researcher at The Future Society, with a keen interest in AI applications to progress human rights, emerging economies and the United Nations Sustainable Development Goals. Through her research, Adriana studies how augmented intelligence can accelerate the eradication of modern slavery.

 

Subscribe now wherever you get your podcasts:

  • Listen on Apple Podcasts
  • Listen on Spotify
  • Listen on Google Podcasts
  • Listen on Stitcher

Episode transcript

 

TOM When we think of artificial intelligence, the first thing that springs to mind for many of us is a futuristic vision full of robots and potentially humans in peril. But we are already living in a world that's pretty full of A.I. It's all around us. It's in our smartphones, our cars, TVs, watches, computers, social media apps, and in some ways it's making our lives easier and better. But are there hidden perils? Its use is progressing rapidly, with governments, police departments and health services all adopting it, as well as the private sector. But the big question for A.I. now is how do we maximise its benefits while at the same time avoiding the risks? Today we'll be finding out how we can use A.I. to make the world a safer place, and also maybe make the world safe from A.I.

 

Welcome to the Global Safety Podcast with Lloyd's Register Foundation.

 

And to help me figure all this out today, I have a brilliant panel of guests joining me via Zoom: Jan Pryzdatek, Director of Technologies at Lloyd's Register Foundation; Khyati Sundaram, CEO of Applied, a recruitment platform using AI to build predictive and non-biased hiring; and Alastair Denniston, Director of INSIGHT, the Health Data Research Hub for Eye Health, consultant ophthalmologist at University Hospitals Birmingham and honorary professor at the University of Birmingham.

 

Also later on, we'll be hearing from Adriana Bora, AI policy researcher at The Future Society, who will tell us how AI is helping combat modern slavery, and then Ana MacIntosh and John McDermid from the University of York, talking about how they're working to make self-driving cars safe.

 

Well, I suppose the first question that we need to just get out there and have some definition on is: what is AI? Jan, let's start with you, and feel free to use some examples, because it might help us. What is it?

 

JAN It's effectively hardware and software working together in order to make sense of information. And once they make sense of that information, they can do things to achieve the objectives they've been set. So what we're really saying is it's about pattern recognition. It's about gaining insight from data. And increasingly, it's about taking that insight and leading to decisions that have an impact in the real world.

 

TOM And what would be your clearest example, one that kind of spells out what it is and couldn't happen without artificial intelligence?

 

JAN Well, if you think of something like doing a search on the internet, the search engine is using lots of A.I. in order to understand what your question is and to find that information out there on the big worldwide web. And if it wasn't for A.I. looking for this information and the patterns, it just wouldn't be possible.

 

TOM Khyati Sundaram, the same question to you, really. What is A.I.? And give me a cracking example of it.

 

KHYATI Well, it is a broad collection of algorithms, and when we say AI, I instantly think, being very pedantic, of machine learning: any algorithm, any software that is written that learns on its own can classify as AI. The problem today is that many algorithms, simple rules that can be written or coded into software, are baptised as AI. So basic rule-based automation that does not improve over time, does not learn over time on its own, and is human-dictated, should not classify as A.I. And that's the distinction. A very clear example would be self-driving cars, where there is automation, but they're also learning and adapting to the decisions you would take as a human, and they would improve over time. So that would be AI, because they would be using machine learning. But a chatbot might not be full AI.
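
To make Khyati's distinction concrete, here is a minimal illustrative sketch. It is not from the episode: the scenario, names and numbers are invented. It contrasts a human-dictated rule that never changes with a system that derives its own rule from labelled examples.

```python
# An invented contrast between the two categories Khyati describes:
# a human-dictated rule that never changes, and a rule derived
# ("learned") from labelled examples.

def rule_based_filter(temperature: float) -> str:
    """Rules-based automation: the threshold is fixed by a human, forever."""
    return "alert" if temperature > 80.0 else "ok"

def learn_threshold(samples: list[tuple[float, str]]) -> float:
    """A crude 'learner': place the boundary midway between class means."""
    ok = [t for t, label in samples if label == "ok"]
    alert = [t for t, label in samples if label == "alert"]
    return (sum(ok) / len(ok) + sum(alert) / len(alert)) / 2

history = [(70.0, "ok"), (75.0, "ok"), (90.0, "alert"), (95.0, "alert")]
threshold = learn_threshold(history)          # 82.5, inferred from the data
print(rule_based_filter(85.0))                # "alert": rule set by a human
print("alert" if 85.0 > threshold else "ok")  # "alert": rule learned from data
```

The first function is the "human-dictated" automation; the second, however crude, changes its behaviour as more labelled history arrives, which is the property that earns the machine-learning label.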

 

TOM Yeah, it's a useful distinction, Alastair, because I often think AI is kind of bolted on to fairly conventional technology in order to make it sound a bit more modern and sexy.

 

ALASTAIR Yeah, it's a great point. But I think the reason why there's been so much excitement about artificial intelligence recently is exactly what Khyati and Jan were touching on earlier: the ability not just to follow rules that are set by humans, but effectively for the algorithms to learn and make their own rules from the data that's coming into them. So the beautiful example Khyati used was self-driving cars: effectively they're learning from the data that comes in to build their own rules, by which they would then operate in a new, unseen situation. In my field of medicine, that's something we're really excited about, because there are high-volume, repetitive tasks where we don't have enough doctors and health professionals to deliver efficiently and at scale. And the holy grail is to have artificial intelligence start to automate those processes so that we can identify and diagnose at scale and faster. You know, we can have expert-level care in the back of beyond, because we're not limited to a finite number of humans with that particular expertise; you can have 24-7 diagnostic availability at the same standard as you might get in an expert centre.

 

 

TOM Jan How different is the reality of AI to the public's perception of it, you think?

 

JAN Well, it's an interesting question, and I think what I'd like to say at this point is that at Lloyd's Register Foundation we have something called the World Risk Poll, which has been featured in this podcast before, where we've gone out to 150,000 people across 142 countries to understand how they perceive risk. And in the 2019 poll we asked a fairly simple question, which was whether people believed that A.I. over the next 20 years would be mostly helpful or mostly harmful in their country. And the results were quite interesting. If we look at the global picture, we found that 41 percent of the people involved said it would be positive for their country, 30 percent thought it would be negative, and 29 percent didn't give an answer. And we found that people in southern Europe were amongst the most likely to say AI technology would mostly harm people in their country over the next 20 years; that was 51 percent. And you had a similar number of people saying it would be harmful in Latin America and the Caribbean.

 

TOM Do you think that's based on their experience of AI? Or do you think it's a slightly wider suspicion of technology or science or advancement in some way? Is it really based on knowledge of what A.I. is?

 

JAN I actually doubt that most people really understand what it is, and there appears to be a correlation between countries that think AI will do the greatest good having the greatest trust in scientists, and countries that have the lowest trust in scientists thinking that AI will create the greatest harm. But on top of that, people in the age group from 15 to 29 were far more likely to think that AI would be a force for good. And I suspect that may be because they've grown up with technology and they understand what it is. And the final insight was that women were more likely to find AI harmful than men.

 

 

TOM Alastair or Khyati, would you like to comment on this?

 

KHYATI Well, it is a very nuanced issue, isn't it, Tom? Algorithms and artificial intelligence are not foolproof. We know that, but neither are humans, with all of the evidence showing that if humans are left taking people decisions, which is especially relevant in the field that I work in, we are going to take decisions that adversely impact a lot of the population. And that ties into Jan's comment about why women feel that AI would probably create negative outcomes for them. And this is not just women, this is a lot of underrepresented populations of society. And that brings in a challenge of…

 

TOM You mean they might feel that because it has been bad for them, so it would be based on experience, is what you're saying.

 

KHYATI Definitely. Yeah.

 

TOM Yeah. Well, we're going to come on to how this works, and how AI reinforces past prejudice in effect, in a minute. Did you want to comment on that perception of A.I. question?

 

ALASTAIR Yeah, I think one of the funny things about A.I. is it's one of those nebulous terms that people turn into what they want it to be, or what they fear it to be. So I think it's very good that you actually challenged us to come up with a definition at the beginning, because my observation is that you need to refine what you're talking about before you explore the attitudes, because otherwise people will be talking about very different things. So I think there is a real perception problem and there is a trust problem, and they're connected. And I think that's partly because, you know, people will distrust the bits of A.I. that they fear and trust the bits that they would like to have. But nobody's quite sure what the reality is. And so: start with the definition and then take it from there.

 

TOM One thing that springs to my mind when I think about AI is self-driving or autonomous vehicles, which we heard about a moment ago. And although we're some way there with things like automatic cruise control and computer vision, fully autonomous cars are still, it would seem, years, maybe decades, away. That's according to Ana MacIntosh and John McDermid from the Assuring Autonomy International Programme, a project led by the University of York and funded by Lloyd's Register Foundation. Ana and John recorded this interview for us earlier in the year.

 

 

 

**CLIP 1: AAIP**

 

ANA: The Assuring Autonomy International Programme is a six-year programme that's looking at gaining more understanding about the safety of robotics and autonomous systems. Driverless cars are a good example of an autonomous system: those systems need to be able to make decisions based on their environment. They have to respond to what's happening outside them on the road, but also respond to the driver or the passenger inside the car.

 

JOHN: Actually, human beings build up an understanding of the world over many, many years before they learn to drive cars. You know, they know what children and bicycles are. They know what scooters are. They understand how people behave. The cars don't; all they do is have massive amounts of training data from which they draw inferences. And, you know, they would need to see an astonishing amount of data to deal with all the circumstances that exist. And, oh, by the way, the world changes. We introduce new types of cars or new road lamps or whatever. So having things that have learned enough to replace that sort of human intellectual capability, you know, is a huge, huge problem.

 

 

ANA: In deploying robotics and autonomous systems, I think it's really important to consider safety really early. You can't be thinking about it at a late stage when the technology is already developed. You need to think about safety right at the beginning of the design process and design systems to be safe, not just have safety as an afterthought.

 

JOHN: And sometimes people say, well, actually, you're stopping this stuff being deployed. But I think we're enabling deployment: if they're not safe, they're not going to be accepted. They're not going to be used.

 

ANA: The work that we do in safety is completely critical to the deployment of these devices, and with the help of Lloyd's Register Foundation in setting up the Assuring Autonomy International Programme, that's going to make a huge difference to advancing all of these technologies to the point where we can really have them on the roads or in factories or in people's homes. And that's the future.

 

 

JOHN: You know, we're still affected by covid-19. I think some of the early deployments are actually going to be around delivery robots, vehicles that could actually take food or medicines to people. So it could actually have, you know, a huge benefit in helping society deal with these sorts of situations. So, yeah, it's a tremendous opportunity, and we're very fortunate to be involved in this. Even though it's not a trivial thing to do, it's really, really worthwhile.

 

 

CLIP ENDS

 

 

 

TOM: Thanks to Ana and John of the University of York for that. Jan, I've always been a little bit sceptical about self-driving cars, because it seems to me that the sheer wealth of information we bring to driving, which is a mixture of a mental, physical and emotional task, is far more than just obeying the rules of the road. And it also strikes me, Jan, that artificial intelligence has actually proved itself to be rather poor at the kind of mental-physical interface, where it requires a physical act as well as a processing act. I was looking at something the other day in farming and, yeah, A.I. and computers are brilliant at spotting whether fruits are ripe, and they can do it incredibly quickly. But when you combine that with the picking task, it was absolutely hopeless.

 

JAN Well, that's right. I mean, we're talking about A.I. having lots of inputs that it needs to process and lots of outputs that it's trying to control. So if you're looking at fruit picking, you have the challenge of identifying the fruit: is that fruit ripe? And then you have to have some sort of a system to try and pick that fruit. And it needs to understand: does it pick it with a certain amount of pressure, does it squeeze it and press it, or is there enough of a grip to be able to grab it? And you know, for us, even for a small child, it's pretty easy to find a piece of fruit on a tree and pull it off. But for A.I. it's not a natural thing. It has to learn all these things and have systems in place to understand the environment in which it's trying to work.

 

TOM Now we're going to hear from Adriana Bora, a policy researcher at the Future Society and member of Code 8.7, a collaboration of people and organisations using A.I. to eradicate forced labour, modern slavery, human trafficking and child labour in all its forms. She's in Australia, so we decided to record this interview with her at a slightly more sociable hour.

 

**CLIP 2: ADRIANA BORA**

 

ADRIANA: So modern slavery is truly a global crime, and based on our latest estimates, which came out in 2018, we know that there are around 40.3 million people in modern slavery across the globe. When we look deeper into these statistics, we also realise that one in four victims of modern slavery is a child, and more than 50 percent of the victims are women and girls.

 

With Project AIMS, which stands for AI against Modern Slavery, we are trying to use this technology to help us accelerate the work that we are doing in trying to understand what businesses are doing to tackle slavery in their supply chains. From this shocking statistic of 40.3 million people estimated to be in modern slavery, 16 million of those are estimated to feed directly into the global supply chains of big corporations, and therefore to end up in the products and the services that we are consuming every day.

 

The UK government was first: in 2015 it passed legislation called the Modern Slavery Act. Under this legislation, corporations that have a turnover of over thirty-six million pounds are obliged to publish one statement per year to declare how they're ensuring that their supply chain is free of slavery. Australia has now passed similar legislation, and we see governments such as New Zealand and Canada and many across the globe looking at passing modern slavery legislation. So what this leaves us with is thousands of statements that generally the research community and the civil society community don't have time to read and to benchmark. And therefore we don't have a clear understanding of what companies are doing, and what companies can do more, to tackle this horrible crime in their supply chains. And this is where AI comes into place. We use A.I. to put together all these statements, to summarise this information, to benchmark the statements against a set of core metrics, and to really be able to say: yes, this year this company has talked about training for their employees; they have a whistleblowing policy in place, or a modern slavery policy, and a risk and incident identification policy; and to make this information available in a structured form for the community to use, furthering their research and their interventions.
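
As a rough illustration of the benchmarking Adriana describes, here is an invented sketch. Project AIMS's actual models are not detailed in the episode, so simple keyword heuristics stand in for them, and the metric names and search terms are assumptions.

```python
# An invented, minimal sketch of benchmarking modern slavery statements
# against core metrics, in the spirit of Project AIMS. Real systems use
# trained NLP models; keyword heuristics stand in for them here.

CORE_METRICS = {
    "training": ["training", "awareness programme"],
    "whistleblowing policy": ["whistleblowing", "grievance mechanism"],
    "risk identification": ["risk assessment", "due diligence", "audit"],
}

def benchmark_statement(text: str) -> dict[str, bool]:
    """Return, per metric, whether the statement appears to address it."""
    lowered = text.lower()
    return {
        metric: any(term in lowered for term in terms)
        for metric, terms in CORE_METRICS.items()
    }

statement = ("We provide modern slavery training to all staff and operate "
             "a whistleblowing hotline across our supply chain.")
print(benchmark_statement(statement))
# {'training': True, 'whistleblowing policy': True, 'risk identification': False}
```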

 

Slavery is a very physical act, and it leaves traces. And wherever there are traces, there are opportunities to use data to map it back.

 

The University of Nottingham have been doing incredible work here, mapping illegal fishing by looking for the drying racks for the fish, but also mapping cases of slavery that are happening in more developed countries, such as the informal agriculture and strawberry-picking fields and camps in Greece.

 

There are opportunities to use data, for instance, on the social media platforms where a lot of the grooming and the recruitment of people into slavery is taking place. So we can use textual data, as in Project AIMS. We can use images from satellites or images collected by people with their cameras, use social media data to identify people at risk of being recruited, or recruitment networks, and follow the money, which is always a really great opportunity to investigate behaviour.

 

So we have a lot of data and traces across the globe, and now we have an opportunity to make sense of this really incredible amount of unstructured data, structure it and make it available to the community, to link the pieces, put the puzzle together and create the evidence we need to eradicate this crime.

 

 

CLIP ENDS

 

 

 

TOM: Fascinating stuff. Thank you very much to Adriana Bora for that insight. But I want to hear from the panel now about what other safety applications there are for AI. Jan?

 

JAN: So when it comes to making things safer, there are certain routes to do that, and I'll just name three of them. One is about taking a person out of a place of harm, stopping them going into a place that is dangerous to them. Another way is to be able to monitor what's happening, how things are developing, with real-time data that you can be collecting. And the final thing is about being able to understand what is happening so you can predict a future situation and take steps to avoid an accident before it happens. And AI has a really important role that it's already started to play in these types of areas. So you could think of something like a confined space: people go and work in confined spaces for various reasons, to inspect, to repair, and these are spaces that have hazards in them. As you know, Tom, confined spaces have areas where you may not have breathable air, or breathable air may disappear over time. They can be very radioactive places if you're in a nuclear power station. They can be very hot environments. So putting people's lives at risk in these locations is something that we tend to do today when we have to. But if you're using technology with sensors and A.I., it's able to monitor those locations, either by something that's fixed in that location or alternatively by some sort of a robot or a drone that can go into those spaces to collect information that people can then use to understand and interpret. Another example is the ability to install sensors into things like bridges, and we have a recent example in Amsterdam, where the world's first 3D-printed stainless steel bridge has been built and installed for the public. This is a technology which is new, and there are certain uncertainties in there. But what we've done through the project is make it into a smart bridge, with sensors and a digital twin, and we've got A.I. technology that's constantly monitoring the data streaming from that bridge in order to give us confidence in its condition, to keep members of the public safe, and also to tell us when it might need someone to go and have a look.

 

TOM: And that’s more than monitoring, it’s learning about the bridge?

 

JAN: It's basically picking up patterns of what normal looks like, so that when something different happens it can identify it…
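
A minimal sketch of that idea: learn what "normal" looks like from past sensor data, then flag departures from it. The bridge's real monitoring stack is not described in the episode, so the readings and threshold below are invented.

```python
import statistics

# An invented sketch of "learning what normal looks like": flag any new
# sensor reading that sits far outside the historical baseline.

def fit_baseline(readings: list[float]) -> tuple[float, float]:
    """Learn 'normal' as the mean and spread of past readings."""
    return statistics.mean(readings), statistics.stdev(readings)

def is_anomaly(value: float, mean: float, stdev: float, k: float = 3.0) -> bool:
    """Anything more than k standard deviations from normal needs a look."""
    return abs(value - mean) > k * stdev

history = [10.2, 9.8, 10.1, 10.0, 9.9, 10.3]  # made-up strain-gauge readings
mean, stdev = fit_baseline(history)
print(is_anomaly(10.1, mean, stdev))  # False: within the learned normal band
print(is_anomaly(14.0, mean, stdev))  # True: time to send an inspector
```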

 

TOM: Alastair, you mentioned earlier on some of the stuff to do with health care. And obviously you're involved, particularly in ophthalmology care, but perhaps you could give us a bit more of a flavour of how important it is and will be in health.

 

ALASTAIR So I definitely think this is part of the future. I don't think it's the whole future for health. I think the sort of headlines about replacing doctors or other health professionals are misplaced. But I think it's about finding what the real strengths of A.I. are, and I come back to this automation of high-volume routine tasks. One application is for them to highlight areas that might be easily missed by humans. So you can have a human in the loop who's checking, but the AI draws attention to areas that might be missed. You could have A.I. systems that do routine checking in the background, so, for example, radiology scans, where you still have a human reporting, but they're also second-read by a computer, and so on. And I think that's, in a sense, maybe the first application of some of these systems, as a kind of safety net. Most A.I. diagnostic and screening tools, though, have been designed to potentially also be used to replace a human in the diagnostic or screening process. But even though the technical capability is there, there's still that sense of: well, we need to know, can we actually trust this on these really critical decisions?

 

TOM So I was going to raise the T word: trust. You know, whether patients are more likely, in a funny way, to forgive a human who gets it wrong than a machine, especially one that's made that mistake thousands of times.

 

ALASTAIR Well, it's funny, certainly the data I've seen on trust in this area is that humans are far, far more forgiving of poor performance from other humans than they are from a machine. There's a powerlessness in terms of handing your care over to a machine that people find very unsettling. Which is another reason why I think having humans and machines working together is probably the most powerful and effective way forward.

 

TOM: One of the risks we mentioned earlier about AI is what's called algorithmic bias, which I think basically means that the prejudices and biases we've built up through human society over the years can then be reinforced by a system. But Khyati, tell me what this is and why it's such a problem, particularly in your field of recruitment.

 

KHYATI Yeah, so I'm coming at it from the angle of hiring and recruitment, where a lot of people decisions are rife with bias. But if you speak specifically about algorithms, or AI, which are in widespread use, by the way, and are quite controversial in the impact they're having in this particular field, there are three ways you could have algorithmic bias. So one, very simply: an algorithm is trained on past decisions, and if the recruiters in the past were biased, the algorithm will replicate what the recruiters have done in the past.

 

TOM Are we talking that if people have had, you know, gender bias in their recruiting or racial bias in their recruiting in the past, or maybe age bias in their recruiting, that these things are likely to be reinforced by an algorithm? Is that the point?

 

KHYATI Yes, any kind of bias. So this is non-specific, but it could be gender, age, ethnicity, any kind of bias that plays into a decision when you're making a hire. And all of these, if they were reflected in the data from the decisions that recruiters were making in the past, would be reinforced in any algorithm that is trained on that data.

 

TOM And we've seen examples of that.

 

KHYATI We have definitely seen examples of that. We have a very famous example from Amazon, where they had a black-box HR tool that only ended up hiring or offering jobs to men, because the entire training dataset was built on what "good" looks like, and that looked like the white men in the company. But the second way this can come in, and this is happening a lot right now, is if the model is trained on the behaviours of a majority group. This is a particularly systemic problem in hiring, especially in technology, where teams are very homogeneous and we see that there are hardly any women in coding, STEM or science jobs. And that could be because all of the data that is being used to hire is trained on the majority group, which could, again, statistically be white men.
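
As a toy illustration of how a model trained on "what good looks like" inherits the skew of its training data, here is an invented sketch. This is not Amazon's actual system; the CV terms and the scoring rule are made up.

```python
from collections import Counter

# An invented toy of bias replication: score candidates by how often their
# CV terms appeared among past hires. If past hires were skewed, the
# learned scores inherit that skew.

past_hires = [  # historical "what good looks like", already biased
    ["java", "football club"], ["c++", "football club"],
    ["python", "football club"], ["java", "golf society"],
]

term_counts = Counter(term for cv in past_hires for term in cv)

def score(cv: list[str]) -> float:
    """Higher score = more similar to past hires."""
    return sum(term_counts[term] for term in cv) / len(cv)

print(score(["java", "football club"]))  # 2.5: looks like past hires
print(score(["java", "netball club"]))   # 1.0: equally qualified, penalised
```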

 

TOM Just to be absolutely clear about how this is working in some companies: if I submit my application, no doubt online, is the first filter that's looking at it going to be a robotic eye, an AI eye, if you like? Is that what we're talking about: something looking for certain features as an initial filter?

 

KHYATI Yes. But I wouldn't necessarily call it AI, because I think it is at the rudimentary end of the spectrum of algorithms, or rules-based technology, where you're probably sending in your résumé or CV, and somebody has written code that matches for the keywords that sit on the CV. And so it's arguably more rules-based: it's not learning or evolving any more than that, right? That's the very first step where bias can creep in. And that's one of the areas that we're looking at with Applied, especially in this field: how do you create a dataset that is devoid of bias?
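
A minimal sketch of the rules-based filter Khyati describes, with invented keywords. Note that it never learns, and a trivial formatting difference is enough to defeat it.

```python
# An invented sketch of rules-based CV screening: a fixed keyword match,
# written once by a human, that never learns or evolves.

REQUIRED_KEYWORDS = {"python", "sql", "agile"}  # hypothetical job-spec terms

def passes_first_filter(cv_text: str) -> bool:
    """Advance the CV only if every required keyword appears verbatim."""
    words = set(cv_text.lower().split())
    return REQUIRED_KEYWORDS <= words

print(passes_first_filter("Senior engineer: Python SQL Agile delivery"))  # True
print(passes_first_filter("10 years building databases with Postgres"))   # False
```

Even a CV listing "Python," with a trailing comma would be rejected by this matcher's crude tokenisation, which is part of why Khyati calls the approach rudimentary.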

 

TOM And how do we do that?

 

KHYATI The big answer is not using CVs and résumés. Which brings us into completely uncharted territory: how do you hire? Because we've been doing this for 400-odd years, as far as I can remember. We've been using CVs, résumés, LinkedIn profiles; that's the modern-age CV. And if you use all of the data that sits on that, it is highly unpredictive and probably adds to bias. And that's a double whammy: if you train on a lot of data that sits on a CV, then you're likely to end up replicating the human decisions of the past. And that's what has happened with the Amazons of the world, so it's not a small problem to solve.

 

TOM Yeah. Does it suggest that we should take AI out of things like recruiting altogether?

 

KHYATI Well, I'm with Alastair here. I think there is huge potential for AI in recruitment, and it's quite widespread, but it's more about working on those automated repetitive tasks, enabling diagnostics throughout the process, so we could monitor data from the hiring process, learn from it, see what works and what doesn't work. But eventually the human takes the decision. So humans have agency in the process, and that's what we believe at Applied.

 

TOM Does anyone want to come in on bias being exaggerated by AI? Yes.

 

ALASTAIR Well, I was really fascinated to hear Khyati's experience there, because we see exactly the same thing happening in health. In fact, for her second point, around the narrowness of the training data, we coined the term "health data poverty", because we think that this is a real bar to people getting good-quality health care in the future. As digital health becomes more and more important, including A.I. systems, you need to make sure that those tools have been trained on a diverse dataset that represents the whole population, not just a majority group. And I think this is really key. So we see it in recruitment, we see it in health, and we see it in the justice system as well. There are distinct risks there. So, to respond to your challenge, what can we do about it? Well, one of the things is to really focus on how we create those datasets. You mentioned in your intro that I lead the health data research hub for eye health, and a lot of the vision of that is around creating diverse datasets that are inclusive and representative of the wider population, so that when the tools are developed there's transparency about what the data is trained on. So we know what the representation is across age, ethnicity, gender and so on, but there's also then an inclusive dataset to train these new algorithms. So I was really encouraged to hear what Khyati is doing in her sector.

 

TOM One of the areas of safety that troubles me, and you might think this is a bit broad, but I consider it a matter of societal safety and algorithms, is how social media uses artificial intelligence. There's a lot of concern that it exaggerates or intensifies our own prejudice by what it feeds us, because a prejudiced human is easier to feed material to: you know what they want, and they want a lot of it. And that underpins some of the arguably greater extremism in society today. Jan, I'm going to come to you: do you see this as a risk? Because I do. If my YouTube or Google or Facebook or whatever is giving me material that it thinks I like, and is pushing me in a certain direction, and it notices that the more I read the more I want, then it is in effect affecting my intelligence or my mind, isn't it?

 

JAN You're quite right. The algorithms that are sitting behind things like social media are looking for patterns in what you look for, and essentially they're tailoring the material, the content, that they send to you. It's the same in social media, it's the same in shopping: if you're shopping, the algorithms are building up a picture about you, and they push certain things in your direction that they think you will like. Now there is a question here of: is it correct to be reinforcing opinions, and is that the job of A.I.? And a lot of this is down to individuals. You know, maybe you've got an individual who wants a balanced view, but maybe you don't.
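
The reinforcement loop Jan describes can be sketched in a few lines. This toy simulation is entirely invented: a feed serves topics in proportion to past clicks, so one early lean snowballs.

```python
import random

# An invented toy of the feedback loop: the feed serves content in
# proportion to learned weights, and clicking reinforces the weight.

random.seed(1)
topics = ["politics_a", "politics_b", "sport", "science"]
weights = dict.fromkeys(topics, 1.0)  # start with no preference

for _ in range(200):
    shown = random.choices(topics, weights=[weights[t] for t in topics])[0]
    if shown == "politics_a":   # the user clicks what they already lean to...
        weights[shown] += 0.5   # ...and the system reinforces exactly that

share = weights["politics_a"] / sum(weights.values())
print(f"share of the feed now skewed to politics_a: {share:.0%}")
```

The more the topic is clicked, the more it is shown, and the more it is shown, the more it is clicked: nobody designed the narrowing, it falls out of the loop.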

 

TOM Do you think it's dangerous, that reinforcement?

 

JAN I think it can be, because we don't necessarily realise it's happening.

 

TOM Khyati, do you think it's dangerous?

 

KHYATI I think it is very dangerous and much beyond our understanding.

 

TOM Do you think some of the atomisation of opinions in the world, Alastair, is down to this aspect of artificial intelligence?

 

ALASTAIR I think you're right, Tom. I think it can reinforce in a really unhelpful way, so it is an area of concern for me. What I struggle with is where the boundaries of autonomy are. You know, if we look at some ethical principles, how much do we say, well, it's a person's choice to seek out spaces that reinforce their views? Or how much do we say, well, actually, no, they're not actively choosing, they are passive recipients of these algorithms? In which case I think there is a strong argument that you should challenge that. But yes, I think it's difficult.

 

TOM Jan, you've got your hand up. One of the things I was going to ask is: can anything be done about it? You all agree that there is some peril in this prejudice-reinforcement algorithm of social media. Is there anything that can be done about it? Jan.

 

JAN Well, I want to go back and remind us what AI is: an algorithm that builds up patterns. And the effect it's having is a very negative one in the context we're talking about now, but we have to remember that it isn't designed to have morals or ethics at this time. It's just not smart enough to do that.

 

TOM: So can we make it, design it, in a way that doesn't deliver greater prejudice?

 

ALASTAIR: We can design it to present counter-views, but this may come back to autonomy: actually, where do we say it's the right thing to do? I think it would be good, but I would have to defer to others. One of the key things here is the black-box element, the fact that this is unseen reinforcement. But as a parent, I would love to be introducing those challenging views, so that my kids, as they're growing up, aren't just reinforced with that kind of narrow worldview. Exposure to multiple views, I think, is so healthy.

 

TOM Yeah. One of the other things that seems a potential peril to me is that I hear the competition between countries in AI, Russia, China, America, India, described as a race. Now, when you have a race, a lot of safety elements tend to fall away. You just want to be the victor. So once again, is this a peril? If we're in danger of seeing this as international competition, are we in peril here, Khyati?

 

KHYATI: Quite frankly, a lot of the safety aspects will fall away, because safety is not the paramount metric when it comes to deciding who's the fastest and the most efficient, and then who makes the most money. But it's upon practitioners like myself, and people in various industries and sectors, to work out what the outcomes of these AI systems will be. We have to consciously look for bias in the data. So, to your question of whether it is possible to remove or mitigate biases: I absolutely think we can. I am taking the glass-half-full approach, but it is a very hard job to do. And the notion of market forces applies here too, as Alastair said. But in specific contexts, such as the context of hiring, where we know algorithmic biases or human biases perpetuate systemic problems, we can't leave it to market forces.

 

TOM Yes, Jan.

 

JAN Yes. Yeah. I think regulation is a really interesting point, and it's probably important to just reiterate what regulation is about before we go on. If individuals are left to their own devices, they're probably going to behave in ways that are inconsistent with the public interest or business or government policies. So when we talk about regulation, it's about making sure that we're creating the right behaviours. And that's a really important point to make. And when it comes to a disruptive technology like A.I., it's moving really, really fast. The people who are developing this technology are developing at pace and they're deploying it at pace. But then you have the people who will need to regulate this technology, and they don't have the same understanding of it. It's a bit like the tortoise and the hare, in a way: one is racing ahead and one has a much tougher task to keep up. So being able to create the regulation that's needed is actually really, really difficult.

 

TOM And given the power of A.I., that sounds pretty alarming.

 

JAN Well, I mean, the genie is out of the bottle, in a way. The AI is out there, and we do have regulations across the globe that people are starting to try and implement to make sure that AI is of benefit for everybody. But different people will want to regulate in different ways, because they see the right behaviours as being different things. So across the globe it's going to be very difficult for everybody to regulate A.I. in exactly the same way, and that's going to lead to some challenges in the future.

 

KHYATI: Yeah, and I would see AI regulation as a cultural question, because there are… The last I checked, there are 21 different definitions of fairness. So which one would we apply, especially to recruiting? Now, within recruiting there is a standard definition: something that is not relevant to your job application should not feature in it. For example, your nationality should not matter. But if you build any AI based on CV data, historic data, it's quite likely that, because it's matched the pattern, let's say, of certain kinds of people coming into the workforce from the UK economy, it replicates that, and your nationality becomes a sort of proxy for you to enter the job market, even though that's not what the algorithm was designed to do. It is simply matching a pattern. And so we're back to inferential analytics, and to understanding how we can create laws that mitigate the impact of that inferential analytics when we haven't even completely understood what inferences we can create from this A.I. So it is a bit of a cycle. And yeah, I love the hare and tortoise analogy; I think that's where we are in that loop.
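
One of the fairness definitions Khyati alludes to, demographic parity, can be checked in a few lines. The episode does not specify which definitions she has in mind, so this is an invented illustration with made-up outcomes.

```python
# An invented illustration of one common fairness definition: demographic
# parity, i.e. selection rates should be similar across groups.

def selection_rate(outcomes: list[bool]) -> float:
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(group_a: list[bool], group_b: list[bool]) -> float:
    """Absolute difference in selection rates between two groups."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# hypothetical shortlisting outcomes (True = shortlisted)
uk_applicants     = [True, True, True, False]    # 75% shortlisted
non_uk_applicants = [True, False, False, False]  # 25% shortlisted

gap = demographic_parity_gap(uk_applicants, non_uk_applicants)
print(f"parity gap: {gap:.0%}")  # 50%: a red flag worth investigating
```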

 

TOM Time's up, really, but I just want to ask you a final question. What do you think the future looks like for AI? How might we be using it in 2050, say? Jan?

 

JAN I think when I look at the future, I'm really hoping that the issues that we've been highlighting today are things that we get to grips with, and that AI is something that we can all benefit from in an equal way. And it's going to be everywhere. I mean, it's already pretty widely used, but it's going to be doing more things and it's going to be doing them in a smarter way. We were talking earlier about autonomous cars, and it's probably going to be another 10 or 20 years before those are really on the roads in numbers. But I think the A.I. that we see today, and that we'll have in the future, is really there to make better versions of ourselves, so that we do things in a better, smarter way. And I'm hoping that in the future A.I. is going to keep us safe. It's going to help us be better people, better versions of what we are, and also build resilience.

 

TOM Wow, that is a glorious future to look forward to. I'm concerned about whether we're going to be controlling it or it's going to be controlling us by 2050. What's your vision for 2050, Alastair?

 

ALASTAIR Yeah, no, I think we'll still be controlling it, Tom. I think if we look at healthcare, you know, we will adapt and innovate with A.I. systems, but we'll also change the analogue systems, if you like, the normal health care pathways, to be more suitable and safer for those A.I. systems. So I think there will be adaptation both ways, and I think you'll see the same in different sectors. In some areas it'll be seamless. And I guess what we're trying to do is get to the point where we're using the strengths of A.I. and the unique gifts of humans to just be the best versions of ourselves. So what I want from an AI system is to allow me to have more time to be a human doctor: really being able to help another human who's in distress, who's concerned about their diagnosis or their loved one's diagnosis, and to help them make the very best decisions they can possibly make, with the best information they can get and with the best understanding, supported by artificial intelligence systems that aren't reinforced by human biases or limited by my ability to retain data, but can really help the patient.

 

TOM Khyati, what's your vision for 2050?

 

KHYATI Well, I'm in the same camp as the panel, so cautiously optimistic, because I think there is huge potential in what we've already discussed: Transport 2.0, Food 2.0, Work 2.0, even Democracy 2.0. I think A.I. has big potential, but it is upon us to look at the outcomes and how we implement it, because that's going to be key. My ideal vision is exactly the same as Alastair's: we have human agency, but the AI is used as enablement, so it makes us better humans. It makes us take better decisions, faster decisions, but not at the expense of society, so everyone comes along. But it is a utopian world I live in, so we'll see!

 

TOM It's quite a widely held view that some aspects of AI, in the social media sense we talked about earlier, have actually undermined democracy. You're suggesting that A.I. in the future could enhance democracy?

 

KHYATI Potentially. I think we can have better diagnostics and make better decisions as a society. Now, whether that's in recruitment or global politics, you can make better decisions, because what you need for making better decisions is better insights. And where can you get that in real time? You can use A.I. to get real-time data across really large economies, if you wish, and synthesise it, but retain human agency, and that makes you a better global leader. You can do that. But whether we get there is a different point.

 

TOM Retaining human agency: that's a phrase that's come up quite a lot, and I like the sound of it. Well, thanks very much to today's panel, Jan Pryzdatek, Khyati Sundaram and Alastair Denniston, and also to our guests we heard from earlier, Ana MacIntosh, John McDermid and Adriana Bora. Well, that was the last Global Safety Podcast of the year. Dry your eyes, but don't worry, we'll be back in 2022 with more from Lloyd's Register Foundation. Just search for The Global Safety Podcast wherever you get your podcasts, and follow or subscribe for free so that you don't miss an episode. Thanks very much. Goodbye.
