Can We Trust AI, or the People Building It? - The Assignment with Audie Cornish - Podcast on CNN Audio

The Assignment with Audie Cornish

Every Thursday on The Assignment, host Audie Cornish explores the animating forces of this extraordinary American political moment. It’s not about the horse race, it’s about the larger cultural ideas driving the conversation: the role of online influencers on the electorate, the intersection of pop culture and politics, and discussions with primary voices and thinkers who are shaping the political conversation.


Can We Trust AI, or the People Building It?
The Assignment with Audie Cornish
Apr 10, 2025

Artificial Intelligence can be difficult to understand, but a few things are certain: it's here, it's reshaping entire industries, and it's making a lot of people nervous. Audie sits down with Mustafa Suleyman, CEO of Microsoft AI and a cautious optimist about our technological future. They unpack what AI is already changing, what it can do next—and how we can prepare for what’s coming.

Episode Transcript
Audie Cornish
00:00:00
I can remember when the internet felt like a promise, and everyone bought in. It spans the globe.
CNN clip
00:00:07
like a superhighway. It is called Internet!
Audie Cornish
00:00:11
Anything and everything, free music, new friends, pet supplies, just a click away.
NBC news clip
00:00:18
Internet is that massive computer network. The one that's becoming really big now.
Audie Cornish
00:00:23
Bill Gates himself had to go on late night TV to make it make sense.
David Letterman clip
00:00:28
What the hell is that exactly? Well, it's become a place where people are publishing information, so everybody can have their own homepage, companies are there, the latest information. It's wild what's going on. You can send electronic mail to people. It is the big new thing.
Audie Cornish
00:00:46
Back then, the internet was new, weird, and hopeful. Now...
Jimmy Fallon clip
00:00:50
Today, U.S. tech stocks plummeted after China introduced a highly effective AI program called DeepSeek that could make American AI obsolete. That story again, AI is afraid of losing its job to AI.
Audie Cornish
00:01:09
Ever since ChatGPT was released to consumers back in 2022, it seems like every day there's some new creative use for this technology. So you might see a White House-generated Studio Ghibli anime-style meme mocking illegal migrants, or watch robots doing surgery. But today we're gonna talk to someone who has been at the heart of the industry for more than a decade, the CEO of Microsoft AI, Mustafa Suleyman, on the next era of the tech industry and about what it means to hold onto your values when the industry is moving faster than the rules meant to govern it. I'm Audie Cornish, and this is The Assignment.
Audie Cornish
00:01:58
Full disclosure, I use a ton of AI chatbots. ChatGPT, Claude, Perplexity, DeepSeek, you name it. I have tried it for everything, from researching family vacation ideas to researching this interview. At the same time, not a day goes by where I do not see a news story about the concerns people have about the technology: the effect of its power-hungry data centers on the climate, its potential to displace workers, the way it's used by law enforcement and militaries. So here's an example. At a company event for Microsoft's 50th anniversary just last week.
Microsoft protester clip
00:02:35
Shame on you! You are a war profiteer.
Audie Cornish
00:02:37
Microsoft AI CEO Mustafa Suleyman stepped on stage and was met not just with applause, but with some protest.
Microsoft protester clip
00:02:45
Stop using AI for genocide, Mustafa. Stop using AI for genocide in our region!
Audie Cornish
00:02:51
As the AI chief at one of the largest technology companies in the world, criticism comes with what is a very big job. He leads all consumer AI products and research, including Copilot, as well as Bing and Edge, and tons of major corporations are already using it in their work, from banks to BlackRock. But he doesn't have the typical Silicon Valley origin story of a student engineer tinkering in the garage. Suleyman is the son of an English woman and a Syrian father. He entered adulthood in the aftermath of the 9/11 attacks.
Mustafa Suleyman
00:03:26
I think many people were going through a difficult transition at that moment.
Audie Cornish
00:03:30
He was a student at Oxford.
Mustafa Suleyman
00:03:32
Because their often immigrant parents didn't understand what it was like to grow up feeling British and being British, having been born in the UK, but also having a kind of cultural and religious context that wasn't really understood by anyone but your peers.
Audie Cornish
00:03:47
He was raised Muslim, and he understood. So he and a friend launched a telephone counseling service, the Muslim Youth Helpline.
Mustafa Suleyman
00:03:55
And so our approach was really to try and put a peer-to-peer support service around young people rather than having to rely on their elders and their community, who were often quite judgmental, didn't understand, you know, the issues that many people were facing with respect to their sexuality, for example, or with respect to having relationships before marriage or, you know, starting to drink and stuff like that. And so there was so much taboo and pressure, and it just felt like a natural thing to do. Just find ways of talking to heal through understanding.
Audie Cornish
00:04:27
Heal through understanding. That instinct to listen, to support, to help, it's the thread he says runs through everything. Throughout his career founding companies, from DeepMind to Inflection, he's always talked about potential regulations, but also values like empathy in AI. Now at Microsoft, he's overseeing the next generation of what he has sometimes described as a digital species.
Mustafa Suleyman
00:04:53
You know, for the first sort of seven to 10 years of DeepMind, so 2010, right up to sort of 2020, most of the progress we made was in the context of image understanding and image generation, video understanding, video generation...
Audie Cornish
00:05:06
What can it see, what can it make, what can it generate on its own?
Mustafa Suleyman
00:05:11
Right, and could it play these games and optimize score in a game. In 2015 or 16, I actually worked on some research on natural language processing to try and predict which word would come next in a sentence. With these very same neural networks that had been successful in gaming, we thought, well, maybe we could apply that to the space of language. And we made some good progress, made some contributions to the field, but it was clearly too early. But my dream was always to be able to model language, because language essentially is the real descriptor of culture of who we are as a people and shapes it.
Audie Cornish
00:05:48
And shapes it. Some of our biggest culture war fights are about what things are called and the way they're talked about. And so this question about what AI is going to do, it's very compelling to me, especially as a journalist, because bad data in, bad data out. And I've just watched a bunch of years of terrible internet, basically. And you know, I have doubts.
Mustafa Suleyman
00:06:09
And yet language is also the thing that inspires us and motivates us to be our very best selves, right? And delivering that with kindness and empathy, I think is one of the most scalable ways to positively impact the world. So the stakes are high, but the prize is also amazing.
Audie Cornish
00:06:26
You've shared in the past this story of at the time, your six year old nephew, asking you what AI was, because usually on the internet, people say tell me like I'm five. So tell me like I am five or your six year old relative. What is AI? How do you define it to the average person?
Mustafa Suleyman
00:06:44
One way of thinking about it is that an AI is really there to perform tasks on your behalf. It answers questions, explains things, collects information from across the web.
Audie Cornish
00:06:54
And is it a program or an algorithm?
Mustafa Suleyman
00:06:57
Yeah, that is kind of the key question. So for all of computer history, we've been designing rules that computers follow very precisely, like a relational database is a way of storing information. You put a bunch of information in, and then when you want to access that information, you get exactly that same piece of information out. With these new models we actually teach them to learn their own storage mechanism, which is these neural networks. They store information in a way that is much more associative. Every word that goes into the neural network has a relationship to all the other words based on its experience of all words that it's seen in the training data.
Audie Cornish
00:07:38
Now, as a human, that means I'm also associating it to my past traumas, things my friends have said, the way I've been raised. The machine doesn't have any of that unless it's getting the data from the internet, I assume, all the things we've collectively put into it. Is that how I should understand this? And its limitations.
Mustafa Suleyman
00:07:57
That's exactly right. And that's the beauty of it. Like, we want it to learn the subtlety, the nuance, we want it to be creative. You know, people often say these models suffer hallucinations, they make stuff up. Well, actually, they're designed to make things up, right? We want them to tell us something that we don't know. We want them to imagine the space between a zebra and, you know, a fish, and then generate a new image that is in between those two points. And that's always been the limitation of old software. Old programming has been structured, heuristic-based, logic-based. Rigid. This is flexible, adaptive, creative, and that's actually the great ambition. We're pursuing models that teach us something that we don't know, that give us a novel and creative answer on the spot.
Audie Cornish
00:08:52
Is that like the human brain, or not?
Mustafa Suleyman
00:08:52
I think it's reasonably similar in the sense that when I say the word dog, your mind's synapses are firing in hundreds of millions of directions. The sound of the word, the touch that you've experienced in the past, the idea of a dog both as a protector and a companion, being silly. And likewise with these models, they have a similar neural representation rather than, um, an x, y... you know, cells-in-Excel kind of relation, which is a relational database. And so it isn't exactly like a human, to be clear, but it's got some similar elements.
Audie Cornish
00:09:35
While we're on this, there was this moment from the top executive at Anthropic, which makes Claude, one of the AIs, and he was saying that they were toying with the idea of maybe an "I quit" button that the machine itself could deploy if it didn't want to go through with a command, and that they might track how many times it happens and what they would learn from that. And I thought of it, and I thought of your claim of a digital species. How do you think of something like that? Would you ever consider that for Copilot?
Mustafa Suleyman
00:10:05
You know, I think that this is a time to be very creative and open-minded. Lots of people have proposed different methods for what I would call containment. You know, in my book I wrote about this idea of establishing boundaries of trust at all scales, right down to the model and all the way up to corporate ownership and then national regulation. I think that's going to be the key thing to be able to steward this in the right way over the next few decades.
Audie Cornish
00:10:32
But can this species have the right to say no?
Mustafa Suleyman
00:10:34
You know, species is deliberately not a literal representation. It's a best effort to try and make sense of something that's new. It isn't a species. It just has some elements of what a species might be like. And it will have lots of characteristics that are very different to humans. It won't have...
Audie Cornish
00:10:56
But is that the kind of test to figure out what those characteristics might be?
Mustafa Suleyman
00:11:01
It could be. Yeah, certainly. I mean, it's very interesting to look at. We've focused very much on its IQ, right? So we're trying to get it to be factually accurate, very comprehensive knowledge, reference citation so that you could look at the evidence base behind individual claims. It's getting better at its emotional intelligence. It can sort of mirror and be empathetic and supportive and so on. But there are some amazingly interesting qualities that it doesn't have, which are good. For example... It doesn't have guilt. It doesn't have shame. It doesn't have competitive ego, right? There's no intrinsic desire inside of it, right? So it's not compelled to do anything other than just predict the next word with respect to the behavior policy that it's been given. And yet humans have all these underlying emotional drivers that form our behavior and our words. And so we're now designing these models without a lot of the weaknesses that we have as a species. And to me, that's what originally motivated me to try and do this 15 years ago. That's what I'm really inspired about. Like, genuinely, there's a chance that this really could reflect the best of all of us.
Audie Cornish
00:12:18
We're talking about the idea of containment. And that gets me to the idea of guardrails.
Mustafa Suleyman
00:12:23
Yeah.
Audie Cornish
00:12:24
What kind of guardrails should there be? Because we are in a moment where we've literally seen them rolled back.
Mustafa Suleyman
00:12:33
Yeah. Yeah. This is a really tough one. I mean, there are guardrails... we have to think about guardrails at all scales. At the lowest level, you described a proposal to put guardrails around the model itself.
Audie Cornish
00:12:45
So in the machine itself...
Mustafa Suleyman
00:12:45
In the machine, does it have a way of saying, I can't reason over this problem? It's too complicated. I have to exit.
Audie Cornish
00:12:53
So what's the next concentric ring out of that?
Mustafa Suleyman
00:12:54
Right. Beyond that, it's like, does the machine have the ability to self-improve itself? Right? That would definitely be an area where there's going to be more risk. Can it edit its own code in real time? Then there is a question of what is the objective function of the machine? What is it trying to do? What is its motivation? And that speaks to the business model. Is it aligned to your interests? What is it actually trying to do? Do you feel it's accountable to you? Can you direct it? Right? And then what is the motivation and the kind of governance of the company or the lab or the organization that's building it, right? That's a really important question. And I think this is a moment when, you know, we are going to need to believe and trust in our companies a lot and rely on them to do the right thing.
Audie Cornish
00:13:36
I mean, we just had a decade or decade and a half where we witnessed unintended consequences take a toll on our democracy, right? And we are still even now wrestling with data exposure and the concerns around that. So people have learned a lot of hard lessons and I think they're a lot less, I don't know, ready to believe.
Mustafa Suleyman
00:14:03
I think that's the right approach, because it would be wrong to blindly trust. We're in a moment where it's a trust but verify. So let's verify. Let's look back at the last three years. We all worried in 2021, 2022, when these models started to come out, that as they got bigger, they would be less controllable. As they added more data, they'd be more biased. As they got access to more tools, they would cause more harms. Now we have three years of empirical data of them being in production across many different labs, not just one lab, with hundreds of millions of daily users. And net-net, there have been some harms. Some people have clearly been hurt by it, right? But the 99.999% number is quite staggering. The fact that it is making, you know, cheap and abundant intellectual capital available to everybody, regardless of your well-being. You can get it on WhatsApp and pay virtually nothing. It's now available in the open source, completely free to experiment with. Again, I said, trust, but verify. So that doesn't mean tomorrow is going to be the same. So we should continue with a skeptical, critical approach, because they're about to take on a bunch of new capabilities. They're about to call APIs, spin up other applications, talk to other AIs, right? So the scope of the containment challenge is getting bigger as they have these interactive and self-improvement capabilities. But I do think we've got to take a moment to celebrate where we're already at. It's pretty remarkable.
Audie Cornish
00:15:36
My guest today, Mustafa Suleyman, the CEO of Microsoft AI. Stay with us.
Audie Cornish
00:15:50
One of the things is you'll say, trust but verify. But I don't know who's gonna do the verifying. When I look at countries gathering for an international summit, it was kind of a mess. They weren't able to agree on their national interests versus what the collective interests should be. There were a lot of warnings issued about the potential problems. Most recently, we had JD Vance, the vice president, speaking about innovation and saying that needs to be basically the North Star of this conversation. So how do you balance the two? I know what the tech industry is like when innovation is its North Star. Right. And we're left in the dust of that in a lot of ways.
Mustafa Suleyman
00:16:31
Look, the debate moves forward and then moves in another direction and that is a natural, healthy balance in our democracies, right?
Audie Cornish
00:16:41
But it's moving really forward. Do you know what I mean? Like, when you even talk about a few years ago, what AI could do a few years ago is not what it does today or where it's in use today. I mean, I'm biased. I'm asking you this because I'm in a building that might not exist in a few years if a large language model, if an AI or an AGI, can do the job better, right? And can generate that imagery of, I don't know what, a version of me that people find more engaging. So it feels very literal to people, because when they talk about job loss or things changing, it's across many different kinds of industries and people feel it. And we've learned that that is destabilizing for people when they feel like they are not part of the world's economic advancement.
Mustafa Suleyman
00:17:28
Mm-hmm, right
Audie Cornish
00:17:28
Globalization taught us that people are unhappy now with the result of it. So how do you think about that in terms of making a consumer product, but you want someone to still be around to do it?
Mustafa Suleyman
00:17:41
I mean, the nature of our work is going to fundamentally change, and every generation of technology changes where we work, how we work, and who we interact with. This is happening faster than the previous generations, but I think this is also a moment where we will have to decide which parts of the science and technology we really introduce into our societies, and at what rate. When I look out towards 2050, over the next 25 years, there will come a time when we have to decide what it means to be uniquely human. Not just in terms of AI, but in terms of synthetic biology too. There are going to be many people who have tremendous biological advantage and will be able to enhance themselves in quite fundamental ways. And...
Audie Cornish
00:18:35
That sentence sounds so, you're underselling it there in terms of what you're saying. You're saying there will be people who have a biological advantage because of these various advances?
Mustafa Suleyman
00:18:45
Just as today you have a biological advantage compared to 150 years ago, pre-penicillin, in that we now have surgery that enables us to battle through cancer and live for decades longer. 200 years ago, the average life expectancy was 25 years old. Today it's 75. In two centuries, we've tripled the amount of health and wellbeing that the average person on our planet can enjoy. So we're already on that trajectory. It's just, obviously, when I say over the next 25 years, it does sound crazy, but we're living in a crazy time. What's happened is already crazy. Right. And so my point was just that there will be moments when we have to collectively, as humanity, decide what we say no to. And that is really the big challenge: when do we think something no longer serves us, collectively?
Audie Cornish
00:19:42
Are there any areas that AI should not be deployed?
Mustafa Suleyman
00:19:49
It's very hard to say ahead of time, you know.
Audie Cornish
00:19:52
Well, we're actually in the time right now. There were protests at a Microsoft event about the company's AI and technology being used in military contexts. So we are in the time right now.
Mustafa Suleyman
00:20:05
Part of the challenge is that a lot of the services that, you know, all the companies provide are foundational elements inside a stack, just as paper or pens or tissues are core pieces that are used by everybody. You know, the tablet that you have in front of you is used by everybody for all sorts of things, whether you're on an airline or a three-year-old watching cartoons. And so that's the story of technology. It proliferates. If it's useful, and it is clearly useful, it becomes general purpose, and so it gets cheaper and easier to use. Everybody wants access. And so stopping access is something that is just really, really difficult to imagine structurally how we would actually, how anyone would actually do that, given that it is inherently dual use. Like, it's clearly valuable for organizing a supply chain as much as it's useful for doing anything else. And I don't have a good answer to that question. I mean, that is genuinely the hard sort of ethical governance question of our age.
Audie Cornish
00:21:08
Do you see why people ask it and do you get it often?
Mustafa Suleyman
00:21:10
I think it's a central question to ask. There's nothing else to do other than to focus the mind on the fact that some of these technologies may make war cheaper, faster, easier. We've certainly seen that in Ukraine. It is remarkable that we now have decentralized access to the tools of war: for a thousand, a drone that can deliver a payload that could take out, you know, a small APC. And that's going to change the balance of power. And that is the nature of proliferation. Technology really is the greatest equalizer we've ever imagined. There's no additional cost to actually copy that software and then share it again. It's information. And so that's gonna change the traditional structure of power. Like, traditionally, historically, power has been concentrated because it has been physical, right? Forts, tanks, governments focused around cities. But now knowledge and power are going to be manifested in information and bits, and therefore spread very, very widely. And the single rule is very simple. If it's useful, it gets cheaper and easier to use and spreads far and wide. And that spreading is really a spreading of knowledge and power.
Audie Cornish
00:22:23
Here's where I question this. I don't see the democratization of power through this for a couple of reasons. One, it's a trillion-dollar enterprise. Maybe, given what we've seen with DeepSeek out of China, maybe it's not. But the idea that it right now takes a tremendous amount of capital, tremendous amount of technological support, and the industry literally believes it has to consolidate to make that happen. Okay, so right away it's in the hands of a few. And then where it's deployed, it doesn't feel, again, like we have the ability to contain it. I don't feel right now like this is a democracy, right? I feel like this is kind of happening to the rest of us. It's not feeling like it's something we're all getting to do. It's like some of us will get to, I don't know, make travel plans without using Boolean search, and some of us are going to get to do all the things you're talking about, never mind the climate impacts. Right now this thing is labor-intensive, hurts the environment, and it's kind of steamrolling so much of the economy.
Mustafa Suleyman
00:23:28
I hear you, it makes sense, I think...
Audie Cornish
00:23:31
You can tell me I'm wrong. I'm just, like, but you see what I mean? You always make it sound so great. And then I, like, open up the paper and...
Mustafa Suleyman
00:23:37
No, actually, I spent a lot of time calling out the risks as well. Like, I'm very, very obsessed with the risks, but I'm also trying to be balanced about it, right? It is certainly true that a small number of players can produce the biggest AI of the highest quality first. But many, many people are able to produce the same performance, the best model in the world, but a year later, at a hundred X cheaper, in the open source, available to everybody for free. That's just an objective fact. Just three days ago, Meta put out an open source model that is the best in the world. So we have to just wrestle with that fact. It is widely available to everybody. That's true. On the environmental question, as it happens, Google, Microsoft, Meta, certainly at Microsoft, the vast majority of our compute consumption is renewable. We actually make the market for renewable energy. Now, that's not to say that there aren't other issues to do with how these chips are manufactured and the resource exploitation question, which has really got no environmental positive story at all. That's a bad sign. But at least where we know there is a fundamental issue, like moving from oil to renewables, it's a pretty big deal that we've made a commitment to be carbon net negative by 2030, to be giving back to the grid, in fact, to be 100% water positive, clean water positive, by 2030. Now, that's massive. We're one of the biggest buyers in the world of energy. So I don't want to sort of always give you a positive answer in response, but I do think that there are some, like, fundamentally positive signs that we should be excited about and try and build on those things.
Audie Cornish
00:25:40
You know, you've said in your book, The Coming Wave, that all technology is political. So where are we in this political moment with this technology?
Mustafa Suleyman
00:25:49
You know, I think that there's a very sort of trite and easy to dismiss narrative. People often say, you know, the tool itself is neutral. It's really the person who uses it. And that's absolutely true. Yet we also design tools in much more complex and nuanced ways than when we invented a hammer, right? And part of the challenge is that our language is so weak. You know, the reason why we end up talking about a species, for example, or all the confusion around superintelligence and AGI, is because our words just aren't subtle and precise enough to capture all the nuanced distinctions of all the variety of tools that we have. Today, what we're creating, it doesn't feel satisfactory to describe it just as a tool, right? You speak to Copilot, for example, the AI that I make at Microsoft. It gives you an answer that is unique to you based on your history, personalized to, you know, the things that you've talked about in the past, what you're interested in. And then you say something back and it gives you something completely unique. We have never invented anything that gives a unique, emergent, dynamically generated response on the fly before. So it's clearly not a tool. At the same time, it's clearly not, you know, yet a species, right? It may well be more like that in the future, but right now it's totally not. So we're all just sort of struggling with the right language to wrap our hands around these things.
Audie Cornish
00:27:13
Most people talk about this in the context of, like, science fiction. Like, people keep bringing up the Terminator concept because the idea of self-replicating AI is a genuine concern, right, that has spread and that people have heard echoed in other areas of science. And you've said yourself, look, that's not going to happen unless someone develops it that way, which isn't really much of a comfort, because the odds are someone with the right financial incentive might.
Mustafa Suleyman
00:27:39
Yeah, I mean, part of the challenge is that because this is so available to everybody, because the cutting edge is in the hands of open source, you can't, like... there are problems with governing powerful companies, but there are a few of them. They're subject to shareholder control, to boards, and to regulation and governments. When you put that kind of power in the hands of literally hundreds of millions of developers, the equation changes. Yes, it's much more likely that someone's going to do that. I personally don't believe one of the big labs is going to be the first to go and do that kind of risk-taking experimentation with self-improvement and replication, but can I say that some random developer in their garage is not going to do that? No, I feel more concerned about that. And that's sort of part of the challenge.
Audie Cornish
00:28:28
What would you like to see in terms of a regulatory environment? What makes sense to you that we don't have right now?
Mustafa Suleyman
00:28:35
I think that it's a good thing for us to proactively disclose to central audit functions what our models can do and to make claims about what they can't do and what they're designed not to do. We've certainly done that at Microsoft by proactively talking to the Irish Data Protection Authority before we've deployed a model, describing everything that it was intended to do, all of the known weaknesses today, our best assessment of the risks, and then ongoing analysis and monitoring of what it's doing, keeping them updated proactively.
Audie Cornish
00:29:09
So companies voluntarily sharing information from their internal audits. That's not regulatory.
Mustafa Suleyman
00:29:16
It's not regulatory. Well, I've called for regulation for many years.
Audie Cornish
00:29:20
Yeah, but we're not getting there, so what is it that you would like?
Mustafa Suleyman
00:29:24
That's the political environment. That's your domain. I've proactively called for regulation for most of my career and been involved in drafting those things and briefing people on it. So I'm definitely a big fan of not just making it voluntary.
Audie Cornish
00:29:36
Yeah, but what then are the obstacles? You've done this work, I haven't.
Mustafa Suleyman
00:29:42
That's the politics of it. That's the moment we're in, isn't it?
Audie Cornish
00:29:45
Are you glad you're out of politics then? You were at one point working in a political office very briefly. Oh, that doesn't count.
Mustafa Suleyman
00:29:51
I was 20 years old.
Audie Cornish
00:29:52
Oh, you were idealistic, thinking things would change.
Mustafa Suleyman
00:29:54
I worked for the mayor of London writing policy documents on human rights for nine months when I was 21.
Audie Cornish
00:30:04
Mustafa Suleyman, that's relevant here. That's relevant here, because you have not been cynical once in this whole interview until I brought up the idea of the politics of this and of regulation. And now, suddenly, your eyebrows are up.
Mustafa Suleyman
00:30:15
Very frustrating. It's very frustrating. I would love us to be able to make more progress there.
Audie Cornish
00:30:22
But.
Mustafa Suleyman
00:30:22
I mean, my take is that, you know, so far, there's a lot of fear. But if you objectively look back at the actual harm in the last three years, it's hard to say now is the time for aggressive or unprecedented regulation. Even I'm not saying that, right? And so I think that in most of these situations, we have to wait and see, monitor closely, be prepared to react really fast, be proactive and transparent, hold ourselves accountable in the best possible way, invite everybody in to kind of debate and discuss what they're seeing, and just keep a very open mind. But so far, I don't see any evidence that it's time for some unprecedented regulatory action.
Audie Cornish
00:31:06
Um, I've asked you all of my scaremongering questions. So do you have any questions for me? Kind of what do you wish people would talk about in this space or in these environments?
Mustafa Suleyman
00:31:16
How do we communicate in a way that enables us to both be optimistic and excited, but at the same time, skeptical and critical? How do you strike that balance?
Audie Cornish
00:31:28
What do you hear in the line of questioning?
Mustafa Suleyman
00:31:30
No, I like the line of questioning.
Audie Cornish
00:31:32
But what do you hear when you're...
Mustafa Suleyman
00:31:34
I think, you know, flipping roles is helpful. You playing my role, me playing your role.
Audie Cornish
00:31:41
Your role's a lot more fun.
Mustafa Suleyman
00:31:44
It doesn't always feel like that, you know, and I think that's, that's the art. I think for all of us who are communicating publicly.
Audie Cornish
00:31:51
Which is hard right now. I mean, I get asked that question by doctors and nurses. I mean, anyone with expertise right now is in the position of somehow trying to have authority, right? Somehow assure people who, by definition, believe they are not to be trusted simply because they have authority or some kind of credential. That's why I wanted to talk to you, because I think you're a good communicator. That being said, what you're probably hearing from me is just like a decade of feeling lied to by a variety of tech industry leaders, you know? Like everything was gonna be voluntary. Everything was gonna, we're just gonna do it this way. Trust us. And I do think that as every industry grows up, this industry is at the age where you don't trust them as much, right? Just like, I don't know, some people out there have teenagers that they're like, okay, I'm giving you the keys, but, you know, this could go sideways. And this is a very powerful industry that has at times loved the rewards and not the responsibilities equally.
Mustafa Suleyman
00:32:58
No, I love what you say. I don't want to try and reassure anybody, and I certainly don't want people to say, just trust me. What I want are spaces where we can have a critical dialog where the skepticism doesn't turn into mean-spirited anger and basically verbal violence so that I can play other people's roles and they can play my roles. I'd love people to give me concrete and specific and practical suggestions of what they would like to see. So I'm always open-minded to hearing what we should be doing differently. And also what I think we need is people thinking about the third-degree consequences, because what happened in social media was, at least in the early phase of social media, I don't think they were really aware or could even see. I think they sincerely thought, clearly this is helpful. More information, the better. No verification of quality, or we don't want to centralize that, et cetera. I think that towards the latter part of that, it was maybe harder to defend. I think it was pretty clear that it was being used for inciting all kinds of chaos. And I think in this moment, we've learned some of those lessons. And the question is, how do we put those into practice? How do we hear from people who are seeing the darker side of these chatbots, who do see some risks that maybe I'm not seeing? Like, how do we vocalize those, bring those into the conversation? That's definitely my invitation, and this is why I am here. This is what I want to be doing.
Audie Cornish
00:34:32
Well, we really appreciate your time. I appreciate you being at Microsoft, frankly, because the company is so influential. And it's a little weird to be talking about this because, I don't know, 50 years from now, what we mean by computer could be completely different. Right. Like what Microsoft might be celebrating might be completely different.
Mustafa Suleyman
00:34:52
Right. I think it is going to look quite different. Like it's going to feel not like typing at your keyboard and more like talking to an AI companion, talking to a co-pilot or an agent. It's going to feel much more like a sort of a presence that lives life alongside you, that really understands you and works with you, works for you.
Audie Cornish
00:35:15
Mustafa Suleyman is the CEO of Microsoft AI. Microsoft is celebrating its 50th anniversary this year. The Assignment is a production of CNN Audio, and this episode was produced by Lori Gallaretta, Jesse Remedios, and Osman Noor. We had editorial support this week from Joanna Campioni and Jennifer Sherwood. Our senior producer is Matt Martinez, Dan DeZula is our technical director, and Steve Lickteig is the executive producer of CNN Audio. We had support from Dan Bloom, Haley Thomas, Alex Manassari, Robert Mathers, John Dianora, Leni Steinhardt, James Andrist, Nicole Pesaru, and Lisa Namarow. If you liked this show, please remember to hit your follow button, and even better, share it. We have more conversations for you online.