Technologies that could unleash a generation of lethal weapons systems requiring little or no human interaction are being funded by the Ministry of Defence, according to a new report, the Guardian has reported.
The development of autonomous military systems – dubbed “killer robots” by campaigners opposed to them – is deeply contentious. Earlier this year, Google withdrew from the Pentagon’s Project Maven, which uses machine learning to analyse video feeds from drones, after ethical objections from the tech giant’s staff.
The government insists it “does not possess fully autonomous weapons and has no intention of developing them”. But, since 2015, the UK has declined to support proposals put forward at the UN to ban them. Now, using government data, Freedom of Information requests and open-source information, a year-long investigation reveals that the MoD and defence contractors are funding dozens of artificial intelligence programmes for use in conflict.
“Despite public statements that the UK has no intention of developing lethal autonomous weapon systems, there is tangible evidence that the MoD, military contractors and universities in the UK are actively engaged in research and the development of the underpinning technology with the aim of using it in military applications,” said Peter Burt, author of the new report Off the Leash: The Development of Autonomous Military Drones in the UK – produced by Drone Wars UK, which campaigns against the development of unmanned systems.

In one example, the report claims the MoD is trialling a “predictive cognitive control system” that has been deployed in live operations at the Joint Forces Intelligence Centre at RAF Wyton. The system takes huge quantities of highly complex data, beyond the comprehension of analysts, and uses deep learning neural networks to make predictions about future events and outcomes that will be of “direct operational relevance” to the armed forces.