New Delhi, March 8, 2026
The head of OpenAI’s robotics team, Caitlin Kalinowski, has announced her resignation, citing the company’s move to deploy its artificial intelligence (AI) models within the Pentagon’s classified network.
“AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorisation are lines that deserved more deliberation than they got,” Kalinowski said in a post on X, adding that the decision to resign was difficult.
OpenAI confirmed Kalinowski’s exit in an emailed statement, saying it believes the Defense Department agreement “creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons.”
“We recognise that people have strong views about these issues and we will continue to engage in discussion with employees, government, civil society and communities around the world,” the company said.
Kalinowski joined OpenAI in November 2024 after leading the augmented‑reality glasses team at Meta.
In a late‑February deal with the Pentagon, OpenAI agreed to deploy advanced AI systems “in classified environments,” an arrangement the company asked the government to also make available to other AI companies, a release said.
“We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections. This is all in addition to the strong existing protections in US law,” the company said.
The deal was reached after talks between the Trump administration and Anthropic broke down. Anthropic has said it will challenge a Pentagon designation that labelled the company a supply‑chain risk. OpenAI also publicly disagreed with the blacklisting of Anthropic.
Anthropic has also confirmed that it has been formally designated a “Supply Chain Risk (SCR)” by the US government, and the firm’s CEO apologised for criticising President Donald Trump.
The CEO clarified that the designation would apply only to the use of Anthropic’s Claude models within Department of War contracts, and not to “all use of Claude by customers who have such contracts”. (Agency)