By Marcel Armour
Three years ago, at a conference on transatlantic issues, the subject of artificial intelligence appeared on the agenda. Henry Kissinger was on the verge of skipping that session – it lay outside his usual concerns (history and occasional practical statesmanship) – but the beginning of the presentation held him in his seat.
This is the scene Kissinger sets at the beginning of his Atlantic piece, ‘How the Enlightenment Ends’. There, Kissinger talks us through his worries about Artificial Intelligence (AI), centring on the argument that AI will bring about the end of rationality. Super-intelligent AI has long fascinated popular culture – witness the number of films and books exploring the theme. It fulfils the archetype of an omniscient, omnipotent and unknowable entity, which is why we humans find the idea so compelling. AI-as-God forms Kissinger’s central argument: before the Age of Rationality came the Age of Religion, so if AI amounts to God, and AI is taking over, then we are entering a new Age of Religion and the Enlightenment is ending. It’s a shame that this is the aspect Kissinger chose to write about, as his background is in neither cybernetics nor philosophy; I would rather have seen him address the impact of AI on geopolitics and international relations.
Kissinger doesn’t seem to grasp the subtleties of the term AI, using it as an umbrella for Big Data, Machine Learning and artificial intelligence proper. A widely accepted definition of AI is an intelligence, modelled on the human mind, able to solve a broad class of abstract and unbounded problems: to use language, reason abstractly and coordinate movement. Machine Learning (ML), on the other hand, is understood to mean an algorithm that (usually using statistical techniques) improves its performance through practice or “learning”, rather than being explicitly programmed. The standard example is ML algorithms taught to play Chess or Go, which quickly surpassed the world’s best human players. Remarkably, the algorithm AlphaGo learnt to play in radically new ways, deploying wholly unprecedented strategies. In Kissinger’s words: “I was amazed that a computer could master Go, which is more complex than chess. […V]ictory goes to the side that, by making better strategic decisions, immobilizes his or her opponent by more effectively controlling territory.” The link to Realpolitik is all too clear.
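The “learning rather than explicit programming” distinction can be made concrete with a toy sketch (my illustration, not anything from Kissinger’s article): a perceptron that is never told the logical AND rule, but infers it by adjusting its weights whenever it misclassifies a training example.

```python
# Minimal illustration of ML-style "learning": the program is never
# explicitly given the AND rule; it infers it by nudging its weights
# each time it misclassifies an example. (A toy sketch of the concept.)

examples = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]

w0, w1, b = 0.0, 0.0, 0.0             # start with no knowledge of the rule

def predict(x, y):
    return 1 if w0 * x + w1 * y + b > 0 else -1

for _ in range(20):                   # "practice": repeated passes over the data
    for (x, y), target in examples:
        if predict(x, y) != target:   # learn only from mistakes
            w0 += 0.1 * target * x
            w1 += 0.1 * target * y
            b += 0.1 * target

print([predict(x, y) for (x, y), _ in examples])  # → [-1, -1, -1, 1]
```

After a handful of passes the weights settle and every example is classified correctly – the rule has been learnt from data, not programmed in. AlphaGo’s training is vastly more sophisticated, but the principle is the same.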
Chess and Go can be traced back some 5,000 years, to Chaturanga in India and Wei Hai in China – games played to simulate war and to help players develop strategic and tactical thinking. What happens when AI is used for wargaming, i.e. modelling war and developing strategy? While the rules are not as simple as those of Chess or Go, let us assume that it is possible to codify wargames that model warfare sufficiently accurately. An artificial intelligence that could develop radically new strategies, subverting or bypassing traditional human tactics, would clearly be a huge advantage to any military force. We can perform the same thought experiment with international politics: what disruptive strategies would a non-human intelligence come up with? For a nation state, AI strategy is another asset, another source of military might or geopolitical power. Given the resources required, a strategy AI would most likely be developed by the USA or China. With China’s strength in AI and its continued acceleration, it is conceivable that China will get there first, shifting the international balance. This is where Kissinger’s Cold War experience could have yielded some interesting perspectives.
This leads us to another aspect of AI: ideology. In 1970s Chile, Marxist president Salvador Allende developed a computer system called CyberSyn, an early prototype of central planning using information technology. Stafford Beer, considered the father of management cybernetics, was the project’s main architect. The project came to an end in 1973 when Allende was deposed in a violent, US-backed coup d’état that Kissinger abetted, bringing Pinochet to power. China has invested heavily in AI as a governance tool, with uses ranging from authoritarian surveillance to resource allocation. Incidentally, China’s burgeoning big-data police system is called “Sky Net” – the name of the super-intelligent AI in the Terminator films. One may argue that where China once looked to be retreating from rigid ideology, the past decade has seen it (re-)asserting its Communist ideals, enabled by its success. On the other hand, “surveillance capitalism” and the rise of the tech giants show AI acting as a political agent in the capitalist world too. The question is whether surveillance capitalism and surveillance communism could plausibly converge to the point where they are functionally indistinguishable (“post-scarcity luxury communism”). AI is already used to allocate resources efficiently in the market: Amazon is an empire with central planning at its heart.
We have seen how AI acts as an amplifier: of military strategy, state surveillance and resource allocation. As AI becomes more advanced, it can amplify human intelligence in other areas too: perhaps we will be able to tackle the eradication of poverty and ill-health. More likely, though, the problems that attract the best minds – and the most funding – will be those whose solutions amplify the intrusive power of the market. Whether or not ‘the Singularity’ happens and we develop a super-intelligent AI is a difficult and technical question. Imagining a super-AI remains an interesting thought experiment, and one that I’m sure popular culture will continue to explore.
Marcel Armour is part of the Centre for Doctoral Training in Cyber Security at Royal Holloway and his current research interests are in cryptography.