After a wave of criticism, OpenAI announced revisions to its agreement with the U.S. Department of Defense, adding explicit safeguards against misuse of its technology. Chief executive Sam Altman said the new terms bar the use of the system to surveil U.S. citizens and require a further contract amendment before intelligence agencies such as the NSA can use the platform.
Altman admitted that the initial deal, announced on Friday, had been “opportunistic and sloppy,” saying the company rushed to finalize it without adequate internal discussion. The revisions aim to address the complexities of deploying AI in warfare and to restore user trust.
The backlash showed up in a sharp spike in ChatGPT uninstalls, with Sensor Tower reporting a 200% rise in daily uninstall rates. Meanwhile, Anthropic’s Claude, previously blacklisted by the Trump administration over its stance against autonomous weapons, surged to the top of Apple’s App Store.
Beyond OpenAI, the U.S., Ukraine, and NATO already employ AI‑driven tools such as Palantir’s Maven to process satellite imagery, intelligence reports, and logistical data. While these systems can accelerate decision‑making, experts stress the need for a human in the loop to catch erroneous or “hallucinated” outputs.
Professor Mariarosaria Taddeo of the University of Oxford noted that Anthropic’s exit from Pentagon contracts leaves a gap in safety‑focused oversight, underscoring the ongoing debate over autonomous weaponry and the role of private firms in national defense.
Source: www.bbc.com