Law in the Internet Society

CharlotteSkertenFirstEssay 4 - 11 Jan 2018 - Main.CharlotteSkerten
 -- By CharlotteSkerten - 05 Nov 2017
The internet and the bomb

 
In 2017, I participated in the UN negotiations for a treaty to prohibit nuclear weapons. While the treaty does not expressly refer to the internet, the relationship between weapons of mass destruction and the internet warrants re-examination today. The internet was developed in response to the threat posed by nuclear strike during the Cold War. But in the last fifty years, the internet has evolved to such an extent that it now permeates and controls much of human life and government action. The internet society has also altered the nature of, and our relationship with, weapons of mass destruction, including the nukes that inspired its creation. In particular, the way the internet has developed has modified the manner in which nuclear weapons can be used, as well as the very nature of the weapons of mass destruction likely to be deployed in modern warfare. As a result, the potential consequences of our decision to ‘stop worrying and love the bomb’ are even more catastrophic than could be imagined half a century ago.
 

The original threat: nuclear weapons

Nuclear weapons systems can be connected to an internet platform or air-gapped, but neither option eliminates all security threats. Interference with nuclear weapons systems has been a risk since their inception, and does not necessarily require advanced technology (as demonstrated, for example, by the 2012 break-in at the Y-12 nuclear weapons facility in Tennessee by an 82-year-old nun). However, the internet society increases the risk of state and private interference with nuclear weapons systems, which in turn increases the likelihood of nuclear strike.
 
Many nuclear arsenals are now being ‘modernized’, including through increased connectivity to the rest of the war-fighting system. Like all complex technological systems, those designed to govern the use of nukes are inherently flawed: they are designed, built, installed, maintained, and operated by humans. We can never have complete control over the supply chain for critical nuclear components, since hardware and software are often off-the-shelf. And today’s systems must contend with all the other modern tools of cyber warfare, including spyware, malware, worms, bugs, viruses, corrupted firmware, logic bombs and Trojan horses. Networking nuclear arsenals introduces new vulnerabilities and dangers, including the risk of remote access through the internet.
 
The ‘stuxnet’ attack in Iran illustrates that air gapping devices does not necessarily protect them from corruption by adversaries. The stuxnet malware spread through USB flash drives, and then between computers (irrespective of whether they were connected to the internet), seeking out and compromising the specific model of Siemens industrial control computers that ran the uranium-enriching centrifuges at the Natanz nuclear plant. The size and sophistication of stuxnet indicated that it was state-sponsored. But the development of online communities such as ‘Anonymous’ and ‘LulzSec’, combined with the ever-increasing availability of information online, demonstrates the growing potential for individuals to take action in areas once considered exclusively the government’s domain.
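The propagation pattern described above can be sketched in miniature. The following is a hypothetical illustration, not Stuxnet’s actual code: the profile fields and values are invented stand-ins for the real fingerprint. The logic it shows, spread to every reachable machine but activate only on an exact match, is why an air gap alone was not a defense.

```python
# Hypothetical sketch of fingerprinted malware activation (NOT actual
# Stuxnet code; the profile values below are illustrative stand-ins).

TARGET_PROFILE = {
    "vendor": "Siemens",      # controller family the payload understands
    "process": "centrifuge",  # industrial process it is built to sabotage
}

def matches_target(host):
    """True only when a host matches every field of the target profile."""
    return all(host.get(key) == value for key, value in TARGET_PROFILE.items())

def spread(hosts):
    """Infect every reachable host (e.g. via USB media, crossing air gaps),
    but deliver the destructive payload only on fingerprinted targets."""
    infected = [h["name"] for h in hosts]                      # everyone carries it
    activated = [h["name"] for h in hosts if matches_target(h)]  # payload fires here
    return infected, activated
```

On a contractor’s laptop, code like this merely copies itself onward and does nothing visible; only on the matching plant-floor controller does the payload fire, which is how an infection could travel from contractors’ machines into an air-gapped facility.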
 

The new threat: killer robots

The internet has also allowed for the development of new weapons of mass destruction, including lethal autonomous weapons systems designed to select and attack targets without intervention by a human operator. These so-called ‘killer robots’ involve statistical analysis of data sets as a complement to algorithms that use the data to do something, including identifying a target (for example, through a hashtag) and firing an integrated weapon. Because each person active on the internet has now become a dense cluster of data points linked to other people’s clusters of data points, the physiology of the internet has created the perfect breeding ground to develop killer robot technologies.
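The targeting loop just described can be reduced to a deliberately toy sketch. Everything in it is invented for illustration: each person is reduced to a set of data points, and an algorithm alone decides whether those points match a target profile, with no human reviewing any individual decision.

```python
# Toy, hypothetical sketch of data-driven autonomous target selection.
# The marker names and threshold are invented for illustration only.

TARGET_MARKERS = {"#protest_hashtag", "border_region", "known_contact"}

def profile_score(data_points):
    """Fraction of target markers present in one person's cluster of data points."""
    return len(TARGET_MARKERS & set(data_points)) / len(TARGET_MARKERS)

def select_targets(people, threshold=0.66):
    """The step the essay warns about: selection happens entirely in code,
    with no human operator reviewing any individual decision."""
    return [name for name, points in people.items()
            if profile_score(points) >= threshold]
```

The design point the sketch makes is that nothing in the loop requires classified data or exotic hardware: ordinary clusters of publicly generated data points are enough input for the selection step.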
 
Some lethal autonomous weapons are already in use. For example, Samsung’s SGR-A1 sentry gun, capable of performing surveillance, voice recognition, tracking and firing autonomously, is deployed along the border of the Korean Demilitarized Zone. But the potential for killer robots to develop into weapons of mass destruction is enormous. A future involving killer robots has been likened to scenes from the Terminator and Robocop movies. Experts have warned that allowing machines to choose to kill humans would be “devastating to our security and freedom” and that artificial intelligence should be proactively regulated because of the “fundamental risk [it poses] to the existence of civilization”.
 
Killer robots undoubtedly violate Asimov’s first law of robotics: a robot may not injure a human being. They would also violate the international humanitarian law applicable in armed conflict, in particular the principle of distinction, which requires the ability to discriminate combatants from non-combatants, and the principle of proportionality, which prohibits attacks expected to cause civilian harm that is excessive in relation to the anticipated military advantage. Like traditional weapons of mass destruction, killer robots have real potential to cause harm to innocent people (both in warfare and peacetime) and to global stability. But they also differ fundamentally: it would be nearly impossible to hold any person or nation to account for injury, death or war crimes caused by killer robots. And the development of ‘intelligent’ tools with an ability to kill without human thought or emotion changes the hierarchy between humans and machines in the world order, with machines taking effective control.
 

Conclusion

Many now believe that the next world war will take place exclusively over the internet. But the development of North Korea’s nuclear program, and the US response, highlight that weapons of mass destruction remain one of the most serious threats to human existence. Modern warfare may well involve cold war tools like nukes, with vastly increased risks because of the internet society in which we now live, or novel tools such as killer robots that exercise artificial intelligence. The ability to control weapons of mass destruction no longer lies only in the hands of governments, but nuclear systems may now also be controlled, and killer robots created, by civilians. The impetus for the development of the internet was the risk to civilization posed by weapons of mass destruction. Fifty years on, however, we should reconsider whether the internet has in fact decreased the risk to human life posed by these weapons, or has instead compounded it.
 
You are entitled to restrict access to your paper if you want to. But we all derive immense benefit from reading one another's work, and I hope you won't feel the need unless the subject matter is personal and its disclosure would be harmful or undesirable.
