The Pentagon vs. AI Power Players: Did Anyone Win? - CNN One Thing - Podcast on CNN Podcasts


CNN One Thing

You’ve been overwhelmed with headlines all week – what's worth a closer look? One Thing takes you beyond the headlines and helps make sense of what everyone is talking about. Host David Rind talks to experts, reporters on the front lines and the real people impacted by the news about what they've learned – and why it matters. New episodes every Wednesday and Sunday.


The Pentagon vs. AI Power Players: Did Anyone Win?
CNN One Thing
Mar 8, 2026

Anger directed at OpenAI is spreading after it struck a deal with the Pentagon to use its AI models in classified systems, just hours after its rival, Anthropic, refused. OpenAI said it had shared Anthropic’s concerns about mass surveillance and autonomous weapons, so why did they sign? And what does this mean for other companies looking to do business with the Trump administration? 

For more: Some OpenAI staff are fuming about its Pentagon deal 

--- 

Guests: Hadas Gold, CNN AI Correspondent & Dean Ball, Senior Fellow at The Foundation for American Innovation 

Host: David Rind 

Producer: Paola Ortiz 

Showrunner: Felicia Patinkin

Photo: WH Pool

Episode Transcript
David Rind
00:00:00
Welcome back to One Thing, I'm David Rind and a messy fight over AI red lines in the military gets existential.
Dean Ball
00:00:08
Are we just saying that you're not allowed to be a company that has political ideas that are different from the people in power?
David Rind
00:00:14
Stick around. This is from a video posted to the White House X account the other day. It's a mix of war footage. Some of it very real, American missile strikes on Iran, planes taking off, but some of it is also apparently from the video game Call of Duty. Not everyone loved this mashup. Critics said, real people are dying in this war. Multiple U.S. service members have been killed. This is not a video game. It's no joke. The outrage didn't seem to bother the White House though. Comms director Steven Cheung wrote in streamer slang, W's in the chat, boys. But it's worth asking, if that's how the administration is selling the war on social media, how seriously are they taking negotiations with potential defense contractors?
Hadas Gold
00:01:09
So to start, AI companies have been working with the military for several years now.
David Rind
00:01:13
This is Hadas Gold. She's CNN's AI correspondent. And I wanted her to catch me up on a major story that's been playing out over the past few weeks. It involves the Defense Department and some of the biggest AI companies in Silicon Valley. And it could have major implications for the future of American war fighting. 'Cause like Hadas said, this AI technology is already being used by the Pentagon. It reportedly played a role in both the January operation in Venezuela, as first reported by the Wall Street Journal, and the week-old war with Iran. So when it comes to which company the military works with and how this tech gets used, the stakes are sky high.
Hadas Gold
00:01:51
Anthropic, which has the Claude AI system, was until recently really the only AI model that could be used on the military's classified systems, which makes it special. But the Pentagon a while ago wanted Anthropic to change some of the policies in their contract to allow them to use their system for, quote, all lawful uses. Anthropic was mostly okay with that, but had two major red lines. One was AI being used in autonomous weapons, so weapons that are controlled by AI without necessarily a human in the loop, and the other was AI being used in the mass surveillance of U.S. citizens. And Anthropic's point of view on this is that, on the autonomous weapons, they say AI just is not reliable enough yet to be used in these systems. And on the second, they believe that the laws and regulations of the United States have not really caught up to where we are on the advances in the technology, and they just believe that they need to be updated before you can really use AI in mass surveillance. The Pentagon's point of view on this was, we need to be able to use the tools that we contract you for, for any reason. We can't go to you in the middle of a war and say, hey, we need to be able to do X, Y, Z, can you give us permission, please? They're saying, like, we don't ask Boeing for permission on how to use its planes; we're not going to do the same with an AI model.
David Rind
00:03:11
Hadas says the Pentagon took a hard stance here. They basically said, if you don't agree to our terms, not only will we cancel your contract, we'll deem you a supply chain risk, basically banning any government entity from using the product. That kind of blacklisting is normally reserved for adversaries like Russia or China. It isn't usually slapped on American companies.
Becky Anderson
00:03:33
The showdown between the U.S. Pentagon and Anthropic is barreling towards its deadline without a deal in sight.
Hadas Gold
00:03:41
Now, so all of this came to a head on Friday, February 27th. This was the day when, at 5:01 p.m., the Pentagon said if Anthropic didn't agree, then they were going to do the supply chain risk designation and cancel their contract. We already had known beforehand Anthropic was not going to agree. They had said they could not in good conscience accede to the Pentagon's request.
Kaitlan Collins
00:04:00
The president blasted Anthropic in this long post today saying, quote, we don't need it, we don't want it, and we will not do business with them again.
Hadas Gold
00:04:08
And then President Donald Trump actually surprised everyone when he posted on Truth Social that all of the government was now going to have to stop using Anthropic, and that he was giving them a six-month window to remove their AI model from all of their systems, calling them a woke company. But even so, there was still a possibility that things could work out. But then, shortly after 5 p.m., Defense Secretary Pete Hegseth posted that Anthropic would be designated a supply chain risk, meaning that all military contractors, in his telling, would have to stop using Anthropic if they work for the military, not just in their military work but, he claimed, in any commercial work. There's some legal debate over that.
Kaitlan Collins
00:04:49
Moments before we just came on the air tonight, Anthropic responded to what you just heard from the Pentagon saying, quote, no amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court.
Sam Altman
00:05:11
I don't personally think the Pentagon should be threatening DPA against these companies.
Hadas Gold
00:05:17
Another surprising thing about all of this was the unity we saw, at least initially, from the tech community. OpenAI, which is ChatGPT's maker, their CEO Sam Altman came out on Friday morning and said they actually have the same red lines as Anthropic when it comes to any contract with the Pentagon for them to be using ChatGPT in their classified systems.
Sam Altman
00:05:35
For all the differences I have with Anthropic, I mostly trust them as a company and I think they really do care about safety and I've been happy that they've been supporting our war fighters. I'm not sure where this is going to go.
Hadas Gold
00:05:46
But then a few hours later, late Friday, OpenAI actually announces that they had signed onto a contract with the Pentagon, seemingly swooping in and taking Anthropic's place. And they claimed that their red lines, which they said were the same, were upheld in this contract. And there was kind of a huge mess and legal questions that followed.
David Rind
00:06:06
Well, yeah, did we even know that they were, like, competing for this contract? How did this all happen?
Hadas Gold
00:06:12
So all of the major AI companies have been working with the military on unclassified systems, and it had been known that they had all been working to try to get onto classified systems. I should note xAI, Elon Musk's company, had agreed to this before and said, yeah, you can use it for whatever the heck you want, we don't have red lines. But OpenAI said that they have found a way that they believed those red lines on autonomous weapons and mass surveillance were being upheld. But this caused a huge uproar.
David Rind
00:06:42
Yeah, I mean, how do the people working at this company feel about that? If Sam Altman, even just a few hours before, was saying Anthropic is taking a stand here, we agree with it, and then a few hours later they say, oh, we've figured it out, it's all good.
Hadas Gold
00:06:56
OpenAI employees didn't feel great about this. I was speaking to sources at the company, and there was a lot of frustration, not only some frustration with just, you know, having this deal with the Pentagon, but also with how it all went down and how it seemed as though it was rushed through so quickly, you know, done in the late hours of Friday. And then what happened over the next few days was that it seemingly kept getting updated. So Sam Altman does, like, a public ask-me-anything on X, with people asking him questions, and then come Monday, OpenAI says, we've updated our contract, which we believe further locks down these red lines. But the employees I spoke to said that they were just frustrated with how it looked, with the narrative of how it felt so rushed out, and they were also frustrated that Anthropic was being hailed as this hero and they were being kind of portrayed as these villains. If you go outside of Anthropic's office, there are all these messages in chalk on the sidewalks thanking them. OpenAI's office has some different messages in chalk. Sam Altman ended up having an all-hands meeting with all of the employees on Tuesday, and he did acknowledge, both in that meeting and in a memo to staff, that the whole process looked opportunistic and sloppy, and that he wished it hadn't been as rushed through as it was, but that he thought he was just trying to really de-escalate the situation. And on Thursday, the Pentagon formally issued the supply chain risk designation to Anthropic. But importantly, it was actually of a narrower scope than I think a lot of people feared, because there was a fear that if the supply chain risk designation was so broad, that could mean that almost anybody that Anthropic wanted to do business with couldn't work with them, because if they also wanted to be able to do work with the U.S. military, which obviously has a lot of contractors out there, that could impact Anthropic's business.
And so there is a sense that this narrower designation is a bit of a step down from what had initially been threatened.
David Rind
00:08:45
OpenAI says their agreement with the Pentagon allows their systems to be used for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. They say the agreement explicitly states that their tools will not be used to conduct domestic surveillance, and they're sticking by their red line that their system will not be used to direct autonomous weapons. That is all stuff Anthropic had insisted on in the first place. So it begs the question: why was the Pentagon willing to play ball with OpenAI and not Anthropic?
Hadas Gold
00:09:16
I think this really comes down to an issue of, like, personalities and politics. Anthropic has been labeled the woke company by the administration, by David Sacks, who's the White House AI czar. They are backing political groups that are trying to work for more AI regulation, which is not the White House's point of view. The White House has been trying to prevent states from regulating AI at the state level. And so there is a political background here. OpenAI's president, Greg Brockman, has donated millions of dollars to President Trump and his political groups. And, you know, Sam Altman has been seen as being closer to the administration, meeting at the White House and things like that.
David Rind
00:09:57
Yeah, I've seen him literally next to the president multiple times.
Hadas Gold
00:10:01
So there is a question of, was this just an issue of personalities clashing?
David Rind
00:10:05
Yeah, I guess I'm wondering, does this moment say more about the Trump administration, namely the federal government effectively blacklisting a company that wouldn't bend the knee, or does it say more about where AI companies like OpenAI are now and what they're willing to do to stay ahead of the curve?
Hadas Gold
00:10:20
I think this is much more about the Trump administration, because, as Axios has pointed out, the Trump administration is treating China's DeepSeek AI model better than a homegrown American company that is widely seen as having one of, if not the, best AI models out there. Fine, don't work with a company if you're the U.S. government and you don't agree with them. But the flip side of that, and the irony, is that we have reporting from the Washington Post and the Wall Street Journal and CBS that the military is literally using Anthropic's product as we speak in this operation in Iran. And so I don't know how both things can be true. How can something be so dangerous it has to be a supply chain risk, and yet the military is literally using it right now? And a lot of people's answer to that is, well, this is just punitive.
David Rind
00:11:05
How does this affect the average person who isn't, like, a defense contractor or, you know, working in the military? Like, what does this mean just for the average person using these chatbots for X, Y, and Z? What should we make of all of this?
Hadas Gold
00:11:20
Well, for one thing, Anthropic's Claude has shot up in popularity for the everyday person. I think a lot of people didn't know what Anthropic or Claude were. They only knew of ChatGPT, but it shot up to the top of the App Store. But you know, what systems our government relies on in their AI systems will matter, because as we turn to AI models more and more for every sort of task, whatever rules and weights and measures and all these things that go into how an AI model is created, those will affect the outcome. So if the government is using an AI system to determine who gets benefits or to determine how your taxes are run, that will have an effect on you. Now, those things are, I would say, a few years out, but that is why people need to pay attention to these types of debates, because down the line they could affect us all.
David Rind
00:12:09
It's important to say Anthropic has long pitched itself as an AI company with a soul. Safety was at the core of everything it did. However, right in the middle of its beef with the Pentagon, the company said it would relax its self-imposed guardrails because they were hindering its ability to scale and compete in a crowded market. According to a source familiar with the matter, the policy change was separate and unrelated to the discussions with the Pentagon. Meanwhile, we asked both the Pentagon and Anthropic about whether Claude is being used in the war with Iran. They did not respond to our request for comment. We also asked the Pentagon and the White House whether politics played a role in these negotiations with Anthropic and did not get a response. When we come back, I'm going to talk with someone who helped the Trump administration craft AI policy, but now says what the Pentagon did here is part of an ongoing death rattle. Stick around.
Dean Ball
00:13:08
My principal role there was holding the pen, as you might say, on the AI action plan.
David Rind
00:13:17
This is Dean Ball. He's currently a senior fellow at the Foundation for American Innovation. But in April of 2025, he joined the White House Office of Science and Technology to help write the Trump administration's AI Action Plan. This was a consequential document released last year that outlines how the federal government views AI and how it should be used. Dean helped write the first draft.
Dean Ball
00:13:39
That people were reacting to a draft that was principally written by me.
David Rind
00:13:43
Did you feel the final version did justice to what you had in mind? Well, so I wanted to talk to you today because you wrote an essay on your Substack on March 2nd called Clawed, C-L-A-W-E-D, pretty clever, and you framed this whole dust-up in pretty dire terms. I want to quote you here. You said, I consider the events of the last week a kind of death rattle of the Old Republic, the outward expression of a body that has thrown in the towel. Mm-hmm. I mean, that's pretty serious. Look, what did you mean by that?
Dean Ball
00:14:16
What I mean by that is, I'm not saying this is some watershed moment, or that this is, like, the cause of the death of the republic or something like that. That's not my observation at all. My observation instead is that the death rattle is a very, very subtle, in the grand scheme of things, indicator of a body that is in the process of dying. And so what I mean by that is that a great deal of the people who have defended the administration here have basically done so on the grounds that, like, well, what else did you expect? Of course their private property won't be maintained. Like, they're making this powerful technology and they're political enemies of the administration. Like, what else do you expect? And it's like, well, wait, are we just saying that you're not allowed to be a company that has political ideas that are different from the people in power? Have we just given up on the idea of the First Amendment there? And that's the thing that I think really goes a bridge too far, this idea of, you know, if you don't do business with us on our terms, we're going to destroy your company. That can't possibly be the way our country works, right? Because that would be an abrogation of private property rights. Because, like, what is to stop the government from going to any random business in the economy and saying, yeah, we want to procure your services and we want you to do it on these terms? And on another level, I think a lot of people look at the fact patterns here and they come to the conclusion that instead of this being a principled matter about usage restrictions and DOD contracts, this is instead, quite frankly, about politics. They don't like Anthropic's politics. They don't like the fact that Anthropic has, you know, leaders who have given money to the Democratic Party. They don't like the fact they hired former officials of the Biden administration.
And if it's a political thing, well, then we're in First Amendment territory, right? And it's like, well, it can't be the case that you get retaliated against for your political views, right? And to the extent that's happened before, it obviously has happened.
David Rind
00:16:22
I was going to say, you can't honestly be surprised, right, just based on how this administration has kind of gone after political enemies in various other ways.
Dean Ball
00:16:31
And prior administrations. And prior administrations too. This is my point. This is why I wrote the article. I'm not saying that this is some watershed moment. Maybe it's worse than what we've seen in the past, but ultimately it's just a continuation of these same trends, which have all been gradual diminishments of these basic bedrock principles of our republic. And that's why I think it's a slow... Yes.
David Rind
00:16:56
Well, on the actual points that Anthropic was taking issue with, their red lines, did any of them sound unreasonable to you?
Dean Ball
00:17:05
Um, I mean, no, and they didn't sound unreasonable to the Trump Department of Defense, you know, eight months ago, right? Like, so, no autonomous lethal weapons and no mass domestic surveillance both seem like pretty reasonable red lines. I would say, like, you know, with the government having access to advanced artificial intelligence to do with what it wants, even within the domain of purely lawful use, there are potential civil liberties and other types of concerns that I think all Americans should have about that, regardless of political stripe. And I hope that Congress can pass some restrictions on such usage that are real. And then beyond that, the idea that U.S. government use of these models, under any circumstances, could raise legitimate civil liberties concerns has, you know, been a concern of mine for years, right? One of the animating things that got me into this field, as someone on the center right, is that I'm concerned about the prospect of government abuse of power using this technology. And the fact that, on top of those existing concerns, the government is now asserting that no one can tell them, you know, that there are no restrictions on their use that can be imposed in contract, it's certainly cause to be concerned, for sure.
David Rind
00:18:29
Yeah, I mean, is that disheartening for you at all to have done this work for the administration and then to see this as a result? Like, was that ever a possibility in your mind?
Dean Ball
00:18:38
Um, yes, this was always a possibility in my mind. And the reason has nothing to do with the Trump administration and everything to do with the structures of power. It's obvious that AI is going to be extremely powerful. It's obvious that in some ways it will challenge the traditional power structures in ways that implicate politics and the state. And so it was always very clear to me that the birthing of machine intelligence was going to happen in conflict with the government and would be profoundly political. So I knew that everything up to and including government seizure of the national labs was a distinct possibility. I expected it would take a little bit longer than this, but I'm not shocked that it's happening here, because, again, you can talk a big game about deregulation and wanting to let the innovators innovate all you want, but the reality is that the government has the incentives that it has, and those cut across party lines. So, like, is it disheartening to me to see this happen? Look, yeah, if they're successful, then everything we did on the Action Plan and all the other stuff that this administration has tried to do to let AI thrive, it all goes in the toilet. None of it matters. At the same time, I think they probably fail, most likely, and the work on the Action Plan continues apace, and there are many dedicated people in the government that are working on it. You know, I applaud their work. I continue to cheerlead them. I'll continue to speak positively about them. I'm not declaring war on the administration in any way; I'm saying this is a bad idea.
David Rind
00:20:08
Well, Dean, thanks very much for your time. I really appreciate it.
Dean Ball
00:20:11
Of course, thank you so much.
David Rind
00:20:14
That's all for us today. Thank you as always for listening. Really appreciate it. If you have five seconds, maybe 10 seconds, leave a rating and a review wherever you listen and it helps other people find the show. We'll be back here again on Wednesday. I'll talk to you then.