The Science of Where meets Dr. Peter Layton, Visiting Fellow at the Griffith Asia Institute, Griffith University, and a RAAF Reserve Group Captain. He has extensive aviation and defence experience and, for his work at the Pentagon on force structure matters, was awarded the US Secretary of Defense’s Exceptional Public Service Medal. His publications: Peter Layton (academia.edu).
Can you explain to our readers the thesis of the article “The Artificial Intelligence Battlespace” (RUSI)?
The article extends some ideas explained in more detail in my much larger, open-source paper: Fighting Artificial Intelligence Battles. This paper looks at the use of artificial intelligence as the technology is now, with its well-known strengths and weaknesses. I do not speculate about so-called general AI as this is not expected to be developed for another 50-70 years.
Given this, I see the best use of current AI technology as sorting through large troves of surveillance data to very quickly find and identify hostile forces operating on the future battlefield. Today’s narrow AI is tremendously faster than humans at finding small, well-camouflaged items. On the other hand, narrow AI can be fooled, and so there is the possibility of a counter-AI battle in which AI systems continuously try to fool each other on the battlefield.
From this idea many further operational concepts can be derived as the paper explains.
How will the defense and security sector change with artificial intelligence?
The US and the Chinese are leading in the application of AI for military purposes but they seem to be taking different approaches.
The American problem is that they wish to deter China from military adventurism. China has built up very large military land, sea and air forces and is further expanding these as it becomes the world’s largest economy. The Americans really cannot beat the Chinese in terms of numbers, called ‘mass’ by the military. However, AI can give robots a form of cognition. Accordingly, the Americans are planning to build AI-enabled robots that, being low cost, can be made in large numbers. These are not killer robots but rather robots for a wide range of military tasks such as surveillance and reconnaissance, and logistic resupply. The future American military looks likely to be a small number of very expensive manned platforms controlling a very large number of low-cost robots.
The Chinese have a different problem. They have lots of mass but they lack the deep American experience and expertise in how to fight wars. In some respects, AI can provide the missing cognition and quickly bring Chinese military thinking up to the American level. The Chinese have been very impressed by the ability of modern AI systems to beat very good ‘Go’ game players. Winning at the game of Go is seen to require thinking that is similar to the thinking needed to fight wars. The Chinese now see AI as potentially being invaluable in helping their military commanders think about how to win future wars.
Summing up, the Americans want mass but have the ‘brains’. The Chinese have mass but want ‘brains’. AI as a general purpose technology can help both sides.
Our magazine deals with “Science of Where”. How can geolocation, especially in a pandemic period, help political decisions? How do you consider the alliance between public and private and between civil and military?
Two interesting questions. Firstly, the matter of geolocation highlights that AI can provide decision-makers with a very detailed map of the society in a local area, down to the individual person level. The South Koreans have clearly used such information to contain COVID-19 clusters. There are, though, very well-known privacy issues. These are hard but really important issues to debate. China has developed a comprehensive suite of AI-enabled societal surveillance technologies that are being put in place across China. The Communist Party wishes to continually monitor its population not just for sinister purposes but also for public safety issues and to make crowded urban areas better places to live. There is a real tension here. It seems all technologies can be used for both good and bad purposes.
On the second matter, the military is late to AI technology. Military forces worldwide are only now starting to use technology developed for civilian purposes and already in use. Today’s alliance may develop into a new form of military-industrial complex. The Chinese talk about dual-use technologies, where the civilian and military domains feed each other.
This may happen, but it might not. Dual-use technologies are by their nature widely sold and known to all. This means that counter-measures to them can be readily developed. The military may prefer to keep new developments to themselves, so that in a war they can field unexpected secret weapon systems. Accordingly, the AI technologies in the civilian and the military worlds may drift apart. From the military viewpoint, which always seeks competitive advantages over others, dual-use technologies may be a bad idea.
The topic of innovation, in particular of the frontiers of artificial intelligence, calls for strategic reflections on the man-machine relationship. What rules will be needed?
There are two issues in this question. The first is purely military usefulness. Current AI has a range of shortcomings. It is not a technology that can be relied upon in time of battle. It can be expected to work sometimes and not others. Soldiers would be very foolish to let their lives depend on AI working properly.
This means that human-machine teaming is essential. Each can compensate in various ways for the shortcomings of the other. In theory, human-machine teaming could mean a battlefield where the laws of armed conflict are obeyed very well.
This brings us to the second issue. In the 20th Century, technology was applied to solve battlefield problems in ways that were not in accordance with the laws of armed conflict. A simple mechanical equivalent of today’s potential AI-enabled weapons was the antipersonnel land mine. Many areas in the world had large minefields laid, where the mines lay in wait until they detected human presence and then exploded. Where tomorrow’s killer robots might move in space, the 20th Century’s landmines moved in time.
Such landmines are now banned because they are indiscriminate weapons that kill. People made a serious error in developing and fielding such weapons. It is important that people today appreciate AI’s well-known shortcomings and design future AI systems accordingly.
There are sound military reasons for using human-machine teams that get the maximum combat effectiveness from AI, rather than using AI alone. There are even more compelling humanitarian reasons for following the laws of armed conflict and not once again building indiscriminate weapons like the 20th Century’s land mines.