The headline £2 billion proposal for a universal ChatGPT subscription may have been rejected, but the UK's ongoing plan to integrate AI into defence and justice systems carries profound, less visible security risks. The Memorandum of Understanding with OpenAI opens the door for a foreign-owned AI system to operate within the nation's most sensitive sectors.
The memorandum, signed by Technology Secretary Peter Kyle, is explicitly designed to explore how AI can enhance public services. In defence, that could mean AI-assisted strategy and logistics; in the justice system, case analysis or administrative support.
Relying on a third-party, closed-source AI model from a US company for these functions, however, creates real vulnerabilities. It raises hard questions about data security, the potential for foreign surveillance, and what would happen if the service were withdrawn or compromised during a geopolitical crisis.
The earlier, more ambitious talks with Sam Altman about a universal public subscription reveal the government's high level of trust in OpenAI. That trust must be balanced against extreme caution when the same technology is applied to the core functions of national security and the rule of law.

