Microsoft has demonstrated that it is willing to stand up to the Pentagon over AI ethics, filing a brief in San Francisco federal court in support of Anthropic’s legal challenge to the Defense Department’s supply-chain risk designation. The brief called for a temporary restraining order and argued that the designation threatens critical technology supply chains. Amazon, Google, Apple, and OpenAI have also backed Anthropic, making the case a broad, industry-wide challenge to the government’s action.
The dispute began when Anthropic refused to allow its Claude AI to be deployed for mass surveillance of US citizens or to power autonomous lethal weapons during a $200 million contract negotiation with the Pentagon. After the talks collapsed, Defense Secretary Pete Hegseth applied the supply-chain risk designation, and cancellations of Anthropic’s government contracts followed. The company filed two simultaneous lawsuits challenging the designation, in California and in Washington DC.
Microsoft’s stake in the dispute is direct: it uses Anthropic’s AI in military systems and participates in the $9 billion Joint Warfighting Cloud Capability contract, alongside other federal agreements. Microsoft publicly called for a collaborative framework between government and industry to ensure advanced AI serves national security without crossing ethical lines.
Anthropic’s court filings argued that the supply-chain risk designation was an unconstitutional act of retaliation for the company’s publicly stated AI safety positions, in violation of its First Amendment rights. The company disclosed that it does not currently believe Claude is safe or reliable enough for lethal autonomous operations. The Pentagon’s technology chief publicly ruled out any possibility of renegotiation.
Congressional Democrats have separately pressed the Pentagon for answers about whether AI was involved in a strike in Iran that reportedly killed over 175 civilians at a school. Their formal inquiries ask about AI targeting systems and human oversight. Microsoft’s public stand against the Pentagon, combined with the industry coalition and congressional pressure, is turning the AI ethics debate into a defining confrontation in the federal courts.