A powerful artificial intelligence system could march where human superpowers feared to tread, discovers Ben Walker
Time was when the United States was the only nuclear power. In the four years immediately following the end of World War II, Washington, having developed and used the atom bomb in 1945, was the sole superpower. It wasn’t until 1949 that the Soviet Union developed its own bomb and, thus, the nuclear arms race and Cold War began.
Theoretically, the Americans could have exploited their edge in 1945-1949 to secure a decisive military advantage. The US had the opportunity to use its nuclear monopoly to create a singleton (an all-dominant power) and become the all-powerful, unchallengeable global government. There was serious discussion at the time of the Americans' options. The first was to build up their arsenal then threaten and, if necessary, carry out a nuclear first strike against the USSR. Such a move would have destroyed the nuclear development capacity of the Soviets, consigning the Cold War to the realms of counterfactual fiction. An alternative strategy, which also stood a fair chance of success, was for the US to exploit its nuclear monopoly to bring about a global government with itself at the centre: a quasi United States of Earth with a nuclear monopoly and a powerful hand to prevent any rogue nation developing its own atomic weaponry.
The first – malign – approach was promoted by a surprising array of voices from across the political spectrum. The socialist philosopher and future antinuclear campaigner Bertrand Russell was at times a supporter of so-called preventive nuclear war. The right-wing game theory pioneer John von Neumann was another prominent advocate.
A variety of the second – more benign – approach was tried in 1946 in the shape of the Baruch plan, which would have seen the US temporarily cede its nuclear capacity to an international agency under the control of the United Nations. Under the proposal, all permanent members of the UN Security Council would give up their veto over nuclear weapons matters, such that no nation found to be in breach of UN nuclear policy could veto any penalties proposed against it. The plan collapsed when Stalin recognized that the USSR and its allies could easily be outvoted by the US-aligned West on the Security Council, an imbalance that would have secured American dominance for a generation. The world was left with the Cold War and, as it transpired, four decades of fragile peace.
Consider what would have happened were the US not a human governmental power but a superintelligent artificial system. Leading philosopher and ethicist Professor Nick Bostrom, director of the Future of Humanity Institute and the Oxford Martin Programme on the Impacts of Future Technology at the University of Oxford, contends that many of the factors that prevent humans from exploiting a decisive strategic advantage in the hope of creating a singleton do not apply to artificial systems.
Humans are wary of wagering all their capital for a 50-50 chance of doubling it. Moreover, a nation state will rarely – if ever – risk conceding all its territory in the hope of extending it by 10%. Such is the dynamic that leads powerful enemies, much of the time, into an uneasy peace. The US and Soviet Union declined to attack one another’s territory in the latter half of the last century not because they didn’t envy their opponent’s holdings, but because they feared losing their own. The same need not hold true for AI – thus a superintelligent system is far more likely to choose a risky course if it might lead towards global control.
There are more reasons why AI might pursue a singleton where human systems decline to try. Human organizations are inherently complex and fundamentally weak. Like the Soviet Union and the Warsaw Pact, belligerent or revolutionary forces fear infiltration not just from the outside, but from the inside. If a movement commands 60% support and that support is enough to disenfranchise the 40% minority, then what is to stop an internal faction within the 60%, which might disapprove of the detail of a campaign or be unhappy with its outcomes, from undermining the majority group? Belligerent or revolutionary movements can and do split irreconcilably. An AI system has no such concerns because it is a coherent single entity. Internal organization, coordination and partnership maintenance are not matters it need trouble itself with. It can be entirely focused on the goal of overthrowing any counterforces and pursuing its aim of creating a permanent singleton.
A third factor is cost – financial, moral, political or human. The cost of a nuclear first strike in the late 1940s was massive. The human and moral cost of bombing 20 Soviet cities would have been incalculable. The move might have lacked public support. A lesser aspect is that the financial expense to the US would have been gigantic. Imagine instead a nation state that could at the click of a switch command technological power that could safely disarm every other country without a single loss of life and with no physical damage to infrastructure or the environment. Or consider a digital version of the Baruch plan, in which a dominant nation with a superintelligent system persuades other nations to sacrifice their own AI ostensibly in the pursuit of global unity.
It is not clear whether the global public would even morally object to the creation of an all-powerful AI singleton. Indeed, the evidence so far suggests that it is relatively comfortable with digital monopolization. Quintessentially human considerations prevented the US exploiting its nuclear advantage to create a singleton in the 1940s. A similarly dominant AI system might face few such barriers 100 years later.
— Ben Walker is editor of Dialogue
This article is based heavily on the content, thoughts and ideas in Chapter 5 of Professor Nick Bostrom’s bestseller Superintelligence