
US Military Used Palantir and Anthropic AI to Strike 1,000 Targets in Iran
The United States military used an advanced artificial intelligence system to help strike roughly 1,000 targets in the first 24 hours of its campaign against Iran, relying on technology developed by Palantir and Anthropic, according to a report by The Washington Post.
The Maven Smart System
The system, known as the Maven Smart System, is built by data-mining company Palantir and processes large volumes of classified intelligence data from satellites, surveillance platforms and other sources, the report said, citing three people familiar with the system.
Claude AI Integrated into Pentagon Targeting Platform
Embedded within the Maven system is Claude, a generative AI model developed by Anthropic.
The Washington Post reported that Claude was integrated into Maven to help analyse intelligence data, suggest targets and prioritise them based on operational importance.
Accelerating Military Planning
Two people familiar with the system told the newspaper that Maven, powered by Claude, suggested hundreds of targets, generated location coordinates and ranked targets as US planners prepared the campaign.
The combined system has accelerated the pace of military planning by converting processes that previously took weeks into near real-time operations, one of the people told the newspaper.
First Major War Deployment for Claude
While Claude has previously been used in security operations — including counterterrorism work and the raid that captured Venezuelan President Nicolás Maduro, according to two people cited by The Washington Post — the Iran campaign marks its first use in large-scale military combat operations.
Anthropic Tools Banned Across Government After Dispute
The deployment of the technology has come alongside a policy dispute between the US government and Anthropic.
Hours before the bombing campaign against Iran began, US President Donald Trump announced a ban on the use of Anthropic’s AI tools across government agencies, according to The Washington Post.
Experts Debate Risks of AI-Driven Warfare
The growing use of generative AI in military operations has triggered debate among defence analysts about oversight and reliability.
Paul Scharre, executive vice president at the Center for a New American Security, told The Washington Post that AI allows the military to develop targeting packages “at machine speed rather than human speed”.
However, he warned that human oversight remains necessary.
“AI gets it wrong,” Scharre said. “We need humans to check the output of generative AI when the stakes are life and death.”
Conclusion
The deployment of Palantir and Anthropic AI in the US military's campaign against Iran marks a significant milestone in the use of artificial intelligence for combat operations.
As AI's role in warfare expands, weighing the technology's risks against its benefits — and ensuring it is used responsibly and transparently — will become increasingly important.