Talking AI With Cindy Rose OBE, CEO at Microsoft UK



 
How do you see AI being used to address broader societal issues?

 

Well, let me say how excited we are about AI. We see it impacting so many of our customers across every industry sector you can imagine, from commercial to public sector. Companies big and small are seeing the opportunity to use AI to cut costs out of their business, drive the productivity of their people and come up with new revenue streams and business models.

The economic opportunity around AI is really exciting, but the other side of it is the societal benefits that we see. We have a big program at Microsoft called ‘AI for Good’ which has lots of different aspects to it. We talk about AI for sustainability, AI for Earth - we see lots of potential there. We see AI for humanitarian aid. You know, we’re putting computer vision cameras powered by AI on drones and sending them into disaster relief areas, right at the edge, mapping routes to people in need of aid, which I think is a great example of AI for humanitarian aid.


“The economic opportunity around AI is really exciting, but the other side of it is the societal benefits that we see.”


The other area where we see a lot of promise is AI for accessibility. People with disabilities of all descriptions - the visually impaired, for example - can use an application we’ve developed called ‘Seeing AI’, which basically narrates the world in audio so that they can read newspapers, go to the supermarket or go to the park. They can understand what’s in front of them, whether that’s a girl, a dog or whatever it is. This ability for AI to improve the quality of people’s lives is probably the most profound impact and opportunity that we see, and we’re just excited and optimistic about the possibilities.


“This ability for AI to improve the quality of people's lives is probably the most profound impact and opportunity that we see and we're just excited and optimistic about the possibilities”


One of Microsoft's public commitments is to re-skill people already working in the tech industry in areas such as the cloud and AI, specifically in the public sector. What exactly does this look like, and how do you see AI impacting the Government as a whole?

 

It's an important issue that you raise, because it's the flip side of all that opportunity: the risk of unintended consequences. We're not blind to the fact that there is a risk that things like chatbots, for example, may start to impact the traditional job market. We see that happening already, and I suspect that trend will continue. So what can you do to mitigate that risk? I think it leads you immediately down the road of investing in skills and really making sure that kids coming out of school understand the Fourth Industrial Revolution - this is why I was in a school this morning talking about what big data is, what cloud computing is, what IoT and machine learning are. Kids need to know this stuff, and civil servants who work in the public sector need to be re-skilled so that they understand the technology trends that are coming.


“We need to invest in skills and really make sure that kids coming out of school understand the Fourth Industrial Revolution”


There are IT departments in every business in every sector that need to up-skill and re-skill. There are IT professionals who need to refresh, and we're really committed to helping them do that through our apprenticeship program, AI Academy and digital skills program. I think our unique role is to invest in skilling in the markets that we operate in.

 

You touched on this slightly: with the potential of AI also come unintended consequences, and the implementation of AI can be quite disruptive. Can you share your thoughts on this?

 

Well, there's lots of potential for unintended consequences, even beyond the impact on traditional job markets, that we just need to be aware of and talk about openly so that people don't become so distrustful of the technology that they don't use it. We see potential risk in all sorts of areas: you've got to make sure your AI protects people's privacy, you have to make sure that it's transparent and accountable, and that a human being remains accountable for the technology. You have to make sure that it's operating in a way that's not biased.

We know that the underlying data is full of bias. We know that because it was collected over the years by humans, and humans are inherently biased. So how do we make sure that AI doesn't institutionalise and amplify the existing bias in the underlying data? How do you design your AI so that it programs out the bias that's in the data? There are all kinds of unintended consequences that we need to elevate, talk about and figure out how to address, so that people don't lose trust in the technology - and that's what we're very focused on doing.