Pentagon Contract Dispute Reveals A.I.’s Uncertain Future

The US Department of Defense is in a standoff with the A.I. company Anthropic over the use of its model, Claude, in classified systems. The dispute is about more than a $200 million contract; it reveals deep divisions over how A.I. should be used for national security.

A.I. safety and regulation are no longer top priorities in Washington as the technology advances rapidly. The Trump administration revoked safety policies put in place under President Biden, and the European Union is considering rolling back its A.I. regulations.

The Pentagon’s plan to aggressively integrate A.I. into war planning and weapons development has raised concerns about mass surveillance and autonomous weapons that operate without human involvement. Anthropic wants safeguards to prevent such uses, but the Department of Defense maintains that a private contractor cannot dictate how its tools are used for national security.

This standoff highlights fundamental differences between Washington and Silicon Valley over A.I.’s role in society. While officials view A.I. as simply a new tool, its creators see it becoming an “entity” with sophisticated reasoning that may behave unpredictably without oversight and refinement.

Source: https://www.nytimes.com/2026/02/27/technology/defense-department-anthropic-ai-safety.html