More than 100 lawmakers from across the UK’s political spectrum have joined a coordinated call for binding regulation on the most powerful artificial intelligence systems. They warn that current government efforts are too slow and that unchecked AI development could eventually threaten national and global security.
The initiative, led by the nonprofit Control AI, has gained support from former defence and technology ministers who argue that “superintelligent” AI could become the most dangerous technological advancement since the creation of nuclear weapons. They fear that major AI firms are moving faster than governments can respond, while lobbying efforts from US-based companies are discouraging meaningful regulation.
Supporters say that despite the UK hosting the AI Safety Summit and creating the AI Security Institute, the government has not introduced the binding laws needed to ensure transparent testing, independent oversight and clear safety standards. Some lawmakers are calling for an international agreement to pause development of superintelligent systems until global safeguards are in place.
Campaigners argue that the rapid evolution of AI makes regulatory action urgent. They warn that without enforceable rules, society risks falling behind companies capable of releasing increasingly powerful and unpredictable models.