US Pentagon intensifies talks with top AI companies on deploying tools on classified systems

The Pentagon is putting real pressure on major artificial intelligence companies to give the U.S. military access to their tools inside classified systems.

Officials aren’t just asking for basic access. They want these AI models to work without all the usual limits companies place on users.

During a White House meeting on Tuesday, Emil Michael, the Pentagon’s Chief Technology Officer, told tech leaders the military wants these AI models running across both classified and unclassified networks.

An official close to the talks reportedly said the government is now set on getting what it calls “frontier AI capabilities” into every level of military use.

Pentagon demands access without restrictions across secure networks

This push is part of bigger talks about how AI will be used in future combat. Wars are already being shaped by drone swarms, robots, and nonstop cyberattacks. The Pentagon doesn’t want to play catch-up while the tech world draws lines around what’s allowed.

Right now, most companies working with the military are offering watered-down versions of their models. These only run on open, unclassified systems used for admin work. Anthropic is the one exception.

Claude, its chatbot, can be used in some classified settings, but only through third-party platforms. Even then, government users still have to follow Anthropic’s rules.

What the Pentagon wants is direct access inside highly sensitive classified networks. These systems are used for tasks like planning missions or locking in targets. It’s not clear when or how chatbots like Claude or ChatGPT would be installed on those networks, but that’s the goal.

Officials believe AI can help process huge amounts of data and feed it to decision-makers fast. But if those tools generate false information, as they sometimes do, people could die. Researchers have warned about exactly that.

OpenAI made a deal with the Pentagon this week. ChatGPT will now be used on an unclassified network called genai.mil. That network already reaches over 3 million employees across the Defense Department.

As part of the deal, OpenAI removed a lot of its normal usage limits. There are still some guardrails in place, but the Pentagon got most of what it wanted.

A company spokesperson said any expansion to classified use would need a new deal. Google and Elon Musk’s xAI have done similar deals in the past.

AI researchers are quitting and calling out the risks

Talks with Anthropic haven’t been as easy. Leaders at the company told the Pentagon they don’t want their tech used for automatic targeting or spying on people inside the U.S.

Even though Claude is already being used in some national security missions, the company’s executives are pushing back.

In a statement, a company spokesperson said Claude is already in use and that Anthropic is still working closely with what’s now called the Department of War. President Donald Trump recently ordered the Defense Department to adopt that name, but Congress still needs to approve it.

While all of this is happening, a bunch of researchers at these companies are walking out. One of Anthropic’s top safeguards researchers said, “The world is in peril,” as he quit. A researcher at OpenAI also left, saying the tech has “a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.”

Some of the people leaving aren’t doing it quietly. They’re warning that things are moving too fast and the risks are being ignored. Zoë Hitzig, who worked at OpenAI for two years, quit this week.

In an essay, she said she had “deep reservations” about how the company is planning to bring in ads. She also said ChatGPT stores people’s private data, things like “medical fears, their relationship problems, their beliefs about God and the afterlife.”

She said that’s a huge problem because people trust the chatbot and don’t think it has any hidden motives.

Around the same time, tech site Platformer reported that OpenAI got rid of its mission alignment team. That group was set up in 2024 to make sure the company’s goal of building AI that helps all of humanity actually meant something.

Source: https://www.cryptopolitan.com/us-pentagon-talks-with-top-ai-companies/
