Anthropic CEO says he’s sticking to AI “red lines” despite clash with Pentagon
Hours after a bitter feud between the Pentagon and Anthropic ended with the Trump administration cutting off the artificial intelligence startup, Anthropic CEO Dario Amodei told CBS News in an exclusive interview Friday night he wants to work with the military — but only if it addresses the firm’s concerns.
“We are still interested in working with them as long as it is in line with our red lines,” he said.
The conflict centers on Anthropic’s push for guardrails that explicitly prevent the military from using its powerful Claude AI model to conduct mass surveillance on Americans or to power autonomous weapons. The Pentagon wants the ability to use Claude for “all lawful purposes,” and says it isn’t interested in either of the uses that Anthropic was concerned about.
The military gave Anthropic a Friday evening deadline to either meet its demands or get cut off from its lucrative Defense Department contracts. With the two sides seemingly still far apart, President Trump on Friday ordered federal agencies to “immediately” stop using Anthropic’s technology. Then, Defense Secretary Pete Hegseth declared the company a “supply chain risk,” directing military contractors to also stop working with the AI startup.
In his interview later Friday, Amodei stood by the guardrails sought by Anthropic, which is the only company whose AI model is deployed on the Pentagon’s classified networks.
“Our position is clear. We have these two red lines. We’ve had them from day one. We are still advocating for those red lines. We’re not going to move on those red lines,” Amodei said. “If we can get to the point with the department where we can see things the same way, then perhaps there could be an agreement. For our part and for the sake of U.S. national security, we continue to want to make this work.”
Amodei told CBS News that Anthropic has sought to deploy its AI models for military use because “we are patriotic Americans” and “we believe in this country.” But the company is worried that some potential uses of AI could clash with American values, he said.
Mass surveillance is a risk, Amodei argued, because “things may become possible with AI that weren’t possible before,” and the technology’s potential is “getting ahead of the law.” He warned that the government could buy data from private firms and use AI to analyze it.
In theory, artificial intelligence could also be used to power fully autonomous weapons that select targets and carry out strikes without any human input. Amodei said his company isn’t categorically opposed to those kinds of weapons, especially if U.S. adversaries develop them, but “the reliability is not there yet” and “we need to have a conversation about oversight.”
Since AI technology is still unpredictable, Amodei is concerned that autonomous weapons could target the wrong people by mistake. And unlike with human-operated weapons, it’s not clear who is responsible for the decisions made by fully autonomous systems.
“We don’t want to sell something that we don’t think is reliable, and we don’t want to sell something that could get our own people killed or that could get innocent people killed,” he said.
Amodei called the guardrails around surveillance and autonomous weapons “narrow exceptions,” and said the company has no evidence that the military has run into either of them.
The Pentagon’s position is that federal law already prevents it from surveilling Americans en masse, and fully autonomous weapons are already restricted by internal military policies, so there is no need to put restrictions on those uses of AI in writing.
Emil Michael, the Pentagon’s chief technology officer, told CBS News in an interview Thursday: “At some level, you have to trust your military to do the right thing.”
“But we do have to be prepared for the future. We do have to be prepared for what China is doing,” Michael said, referring to how U.S. adversaries use AI. “So we’ll never say that we’re not going to be able to defend ourselves in writing to a company.”
As a compromise, Michael said the military had offered written acknowledgements of the federal laws and military policies that restrict mass surveillance and autonomous weapons — though Anthropic said that offer was “paired with legalese” that allowed the guardrails to be ignored.
As the conflict between Anthropic and the Pentagon escalated this week, top military officials accused the company and Amodei of trying to impose their values on the government. Hegseth called Anthropic “sanctimonious” and arrogant, Michael said that Amodei has a “God-complex,” and Mr. Trump called the AI startup a “radical left, woke company.”
“Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable,” Hegseth alleged.
Said Mr. Trump: “Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY.”
Asked if weighty questions about AI guardrails should be left up to Anthropic rather than the government, Amodei told CBS News that “one of the things about a free market and free enterprise is, different folks can provide different products under different principles.”
He also said: “I think we are a good judge of what our models can do reliably and what they cannot do reliably.”
In the long run, he said, Congress should probably weigh in on AI safeguards.
“But Congress is not the fastest moving body in the world. And for right now, we are the ones who see this technology on the front line,” said Amodei.
With Anthropic and the Pentagon unable to reach a deal by Friday, the military is now expected to phase out its use of Anthropic’s AI technology within six months and transition to what Hegseth called “a better and more patriotic service.”
Under Hegseth’s “supply chain risk” designation, all companies that do business with the military are now expected to cut off “any commercial activity with Anthropic.”
Amodei called that an “unprecedented” move against an American firm rather than a foreign adversary, and he said the government’s statements have been “retaliatory and punitive.” He also argued that Hegseth doesn’t have the legal authority to bar all military contractors from working with Anthropic, only to stop them from using Anthropic’s technology on government contracts.
He also said that Anthropic hasn’t formally been notified by the Pentagon of a supply chain risk designation, but “when we receive some kind of formal action, we will look at it, we will understand it and we will challenge it in court.”
Asked if he has a message for the president, Amodei said “everything we have done has been for the sake of this country” and “for the sake of supporting U.S. national security.”
“Disagreeing with the government is the most American thing in the world,” he said. “And we are patriots. In everything we have done here, we have stood up for the values of this country.”
