- Dell CEO Michael Dell answered a question about Anthropic at a forum
- The CEO said companies shouldn’t dictate how governments use their tech
- Dell added that it's not a “workable model”
The CEO of Dell has said in a Bloomberg Television interview that companies in business with the government cannot dictate how their technology is used.
Michael Dell added, “I just don’t think that’s a workable model,” when asked about Anthropic’s ongoing battle against the Pentagon’s designation of the company as a “supply chain risk.”
Speaking at a forum in Washington, the CEO did not mention Anthropic by name. Dell added that his company has systems and controls in place to ensure sales go only to authorized users, but did not elaborate.
The Anthropic battle
Defense Secretary Pete Hegseth recently labelled Anthropic a “supply chain risk” after the AI company refused to budge on allowing the US government to use its Claude model for mass domestic surveillance and fully autonomous weapons systems.
The designation, along with President Donald Trump issuing an executive order for all government agencies to stop using Anthropic technology, has resulted in Anthropic filing two lawsuits against the US government in an attempt to get the designation overturned.
The supply chain risk designation is typically reserved for foreign companies at risk of being abused by adversaries, with the most notable example being US sanctions and designations against Huawei.
What happens next?
By labelling Anthropic a supply chain risk, the Trump administration is setting a dangerous precedent. Either companies are forced to comply with the US government’s desired uses of their products, as has happened with OpenAI’s latest contract, or they decline to renew their contracts and the government procures the technology from a different company.
Those in the know will remember how Google ended its partnership with the US military after an internal petition gathered over 4,000 signatures protesting the company’s involvement in Project Maven. The project involved AI image recognition software developed by Google being used for drone strikes in the Middle East.
Google chose to let its contract lapse without renewal, and the US government turned to other companies including Palantir, Anduril, Amazon Web Services, and Anthropic to fill the gap.
Now, in the fallout of the Anthropic situation, almost 1,000 Google and OpenAI employees have signed letters calling for clear limits on military uses of AI. Should these companies bow to their employees’ demands, they could face the wrath of the US government. If they do not, they may instead face a mass exodus of staff.
One outcome the US government may have failed to anticipate in its dealings with Anthropic is that companies may now be less willing to work with the US Department of Defense, fearing their technology will be used for purposes their terms of service explicitly forbid.
