
World War III

-- By CharlotteSkerten - 05 Nov 2017

In the beginning...

The internet began in the US more than 50 years ago, in response to growing concerns that the Soviet Union could wipe out the US telephone system with a nuclear strike. The US Department of Defense’s Advanced Research Projects Agency (ARPA) developed a computer network that used packet switching to let government leaders communicate through interconnected computers on a single network. Because the system was decentralized, the US could transmit orders and control its armed forces even if Washington DC were destroyed in a nuclear attack. The network was later extended overseas, ultimately evolving into the internet we know today.

In the 25 years since the end of the Cold War, the horrors of Hiroshima, Nagasaki and the Cuban Missile Crisis have been largely forgotten. Most people have become comfortable with (or happily oblivious to) the fact that there are still around 15,000 nuclear weapons in the world, many of them on hair-trigger alert. Despite 122 states voting for a UN nuclear weapon prohibition treaty earlier this year, we seem as unlikely to return to a nuclear-free world as to one without the internet. We have, in short, “learned to stop worrying and love the bomb”.

But the development of North Korea’s nuclear program, and the US response, have reignited fears of another world war – and this time the relationship between the internet, society and weapons of mass destruction will be dramatically different.

The original threat: nuclear weapons

Many nuclear arsenals are being ‘modernized’, including through increased connectivity to the rest of the war-fighting system. This introduces new vulnerabilities and dangers, including the risk that nuclear weapons can be sabotaged by state-sponsored and private hackers alike. In 2010, 50 nuclear-armed Minuteman missiles in Wyoming suddenly went offline and disappeared from their operators’ monitors. Communication was re-established remotely an hour later, and the fault was eventually traced to an incorrectly installed circuit card. But the lost connection could have allowed hackers to take control of the missiles, which are designed to fire as soon as they receive a short stream of computer code and are indifferent to that code’s source.

Nuclear weapons control systems are often ‘air-gapped’ from the open internet. But the Stuxnet attack in Iran demonstrates the impact that a sophisticated adversary with detailed knowledge of process control systems can have on critical infrastructure. The Stuxnet malware infected the Siemens computers that controlled and monitored centrifuge speeds at Natanz, manipulating those speeds so that the centrifuges could not properly enrich uranium, while making everything appear normal on operators’ monitors. Although the computers were air-gapped, the malware spread via infected USB flash drives: the attackers (widely believed to be American and Israeli-supported) first infected computers belonging to companies that did contract work for Natanz, and those machines provided a gateway for infection once they were connected to systems inside the facility.

Because North Korea has thousands fewer nuclear weapons than the US, and its infrastructure remains largely disconnected from the internet, hacking its nuclear weapons systems would be more difficult. Indeed, because the two countries depend on the internet to such diametrically opposed degrees, North Korea has been ranked first, and the US dead last, in cyber-conflict preparedness. But like all complex technological systems, those designed to govern the use of nuclear weapons are inherently flawed. They are designed, built, installed, maintained and operated by human beings. We lack adequate control over the supply chain for critical nuclear components, whose hardware and software are often off-the-shelf. And today’s systems must contend with all the other modern tools of cyber warfare: spyware, malware, worms, bugs, viruses, corrupted firmware, logic bombs and Trojan horses. The possibility of insiders facilitating illicit access to critical computer systems greatly compounds these risks.

The new threat: killer robots

Perhaps even more terrifying is the new arms race in lethal autonomous weapons systems (a.k.a. ‘killer robots’), designed to select and attack military targets without intervention by a human operator. Killer robots combine statistical analysis of large data sets with algorithms that act on that data: finding the patterns that identify a target, for example, and then directing the robot’s movement accordingly. Because each person active on the internet has become a dense cluster of data points linked to other people’s clusters of data points, the physiology of the net creates the perfect breeding ground for developing killer robot technologies.
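To make that concrete, here is a deliberately simplified sketch, for illustration only, of how ‘finding data patterns that identify a target’ reduces to routine statistical classification over clusters of data points. The features, labels and numbers below are all invented, and no real weapons system works this simply:

   # Illustrative toy only: classify a new observation by comparing it to
   # labelled clusters of past data points. The features (signal strength,
   # movement speed) and the labels are invented for this sketch.
   from math import dist  # Euclidean distance (Python 3.8+)

   clusters = {
       "civilian": [(0.2, 0.1), (0.3, 0.2), (0.25, 0.15)],
       "target":   [(0.8, 0.9), (0.85, 0.95), (0.9, 0.85)],
   }

   def centroid(points):
       """Average position of a cluster of data points."""
       xs, ys = zip(*points)
       return (sum(xs) / len(xs), sum(ys) / len(ys))

   def classify(observation):
       """Label a new observation by its nearest cluster centroid."""
       centroids = {label: centroid(pts) for label, pts in clusters.items()}
       return min(centroids, key=lambda label: dist(observation, centroids[label]))

   print(classify((0.82, 0.88)))  # prints "target": pattern-matching, not judgment

The point of the sketch is that nothing in it requires human judgment: once the clusters of data exist, the ‘decision’ is arithmetic, which is precisely why a data-rich internet is such fertile ground for these systems.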

While robotized soldiers may (or may not) still be some time away, other lethal autonomous weapons are already in use. Samsung’s SGR-A1 sentry gun, which is reportedly capable of firing autonomously, is deployed along the Korean Demilitarized Zone. It is the first weapon of its kind, combining surveillance, voice recognition, tracking and firing in a single autonomous system armed with a mounted machine gun or grenade launcher. Prototypes are now available for land, air and sea combat.

The threat of killer robots has been the subject of much disagreement, including between Elon Musk and Mark Zuckerberg. But these weapons undoubtedly violate the first law of robotics: that a robot may not injure a human being. Like nuclear weapons, they have real potential to cause harm to innocent people and to global stability. Killer robots are also likely to violate international humanitarian law, especially the principle of distinction, which requires the ability to discriminate combatants from non-combatants, and the principle of proportionality, which requires that incidental harm to civilians not be excessive in relation to the anticipated military advantage. And if battlefield decisions are delegated to an autonomous hardware or software system, can anyone be held responsible for the resulting injury or death?

Conclusion

We have been told that the next world war will be fought over the internet. But it may still involve Cold War tools like nuclear weapons, whose risks are vastly increased by the internet society in which we now live, as well as novel tools such as killer robots driven by artificial intelligence. The ability to control weapons of mass destruction no longer lies only in the hands of governments: nuclear systems can be controlled, and killer robots created, by civilians. And the purpose of these weapons is the efficient ending of human life.


