Fifty years ago, a UCLA computer science professor and his student sent the first message over the predecessor to the internet, a network called ARPANET.
On Oct. 29, 1969, Leonard Kleinrock and Charley Kline sent Stanford Research Institute researcher Bill Duvall a two-letter message: “lo.” The intended message, the full word “login,” was cut short by a computer crash. (The message was sent from an SDS Sigma 7 computer.)
Much more traffic than that travels through the internet these days, with billions of emails sent and searches conducted daily. As a scholar of internet governance, I know that today’s vast communications web is the result of choices by governments and regulators that collectively built the internet as we know it.
Here are five key moments in this journey.
1978: Encryption failure
Early internet pioneers, in some ways, were remarkably farsighted. In 1973, a group of high school students reportedly gained access to ARPANET, which was supposed to be a closed network managed by the Pentagon.
Computer scientists Vinton Cerf and Robert Kahn suggested building encryption into the internet’s core protocols, which would have made it far more difficult for hackers to compromise the system.
But the U.S. intelligence community objected, though officials never publicly said why. Their intervention is known only because Cerf hinted at it in a 1983 paper he co-authored.
As a result, nearly all of today’s internet users must manage complex passwords and multi-factor authentication systems to keep their communications secure. People with more advanced security needs often use virtual private networks or specialized privacy software like Tor to encrypt their online activity.
However, the computers of that era may not have had enough processing power to encrypt internet communications effectively. Built-in encryption could have slowed the network, making it less attractive to users—delaying, or even preventing, wider use by researchers and the public.
1983: ‘The internet’ is born
For the internet to really be a global entity, all kinds of different computers needed to speak the same language to be able to communicate with each other—directly, if possible, rather than slowing things down by using translators.
Hundreds of scientists from various governments collaborated to devise what they called the Open Systems Interconnection standard. It was a complex method that critics considered inefficient and difficult to scale across existing networks.
Cerf and Kahn, however, proposed another way, called Transmission Control Protocol/Internet Protocol. TCP/IP worked more like the regular mail—wrapping up messages in packages and putting the address on the outside. All the computers on the network had to do was pass the message to its destination, where the receiving computer would figure out what to do with the information. It was free for anyone to copy and use on their own computers.
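To make the mail analogy concrete, here is a minimal, purely illustrative Python sketch of that envelope idea: machines along the way read only the addressing on the outside of a packet and forward it, while the contents stay sealed until delivery. The `Packet` class, `forward` function, and the destination and router names are hypothetical, invented for this example—they are not part of any real networking library or of Cerf and Kahn’s design.

```python
# Toy sketch of the "envelope" idea behind TCP/IP-style packet forwarding.
# Intermediate hops inspect only the addressing, never the payload.
from dataclasses import dataclass

@dataclass
class Packet:
    src: str        # sender's numeric address, e.g. "192.168.2.201"
    dst: str        # destination's numeric address
    payload: bytes  # opaque to every hop except the destination

def forward(packet: Packet, routing_table: dict) -> str:
    """A hop reads only the address on the 'envelope' and passes the packet on."""
    return routing_table.get(packet.dst, "default-gateway")

pkt = Packet(src="192.168.2.201", dst="10.0.0.7", payload=b"lo")
print("next hop:", forward(pkt, {"10.0.0.7": "router-b"}))
print("payload stays sealed until delivery:", pkt.payload)
```

The real protocols stack several such headers—IP for addressing, TCP for ordering and reliable delivery—but the separation of the envelope from its contents is the core idea that let any computer join the network.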
TCP/IP—given that it both worked and was free—enabled the rapid, global scaling of the internet. A variety of governments, including the United States, eventually came out in support of OSI but too late to make a difference. TCP/IP made the internet cheaper, more innovative and less tied to official government standards.
1996: Online speech regulated
By 1996, the internet boasted more than 73,000 servers, and 22 percent of surveyed Americans were going online. What they found there, though, worried some members of Congress and their constituents—particularly the rapidly growing amount of pornography.
In response, Congress passed the Communications Decency Act, which sought to regulate indecency and obscenity in cyberspace.
The Supreme Court struck down portions of the law on free-speech grounds the next year, but it left in place Section 230, which stated: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
Those 26 words, as various observers have noted, released internet service providers and web-hosting companies from legal responsibility for information their customers posted or shared online. This single sentence provided legal security that allowed the U.S. technology industry to flourish. That protection let companies feel comfortable creating a consumer-focused internet, filled with grassroots media outlets, bloggers, customer reviews and user-generated content.
Critics note that Section 230 also allows social media sites like Facebook and Twitter to operate largely without regulation.
1998: US government steps up
The TCP/IP addressing scheme required that every computer or device connected to the internet have its own unique address—which, for computational reasons, was a string of numbers like “192.168.2.201.”
But that’s hard for people to remember—it’s much easier to recall something like “indiana.edu.” There had to be a centralized record of which names went with which addresses, so people didn’t get confused, or end up visiting a site they didn’t intend to.
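As a simple illustration of that name-to-address mapping, the sketch below asks the operating system’s resolver to translate a hostname into a numeric address using Python’s standard library. It assumes network access; “indiana.edu” is the example name from the text, and the address returned will vary depending on where and when you run it.

```python
# Translate a human-friendly name into the numeric address computers use.
# Requires network access; the result depends on the DNS records at query time.
import socket

hostname = "indiana.edu"                   # example name from the text
address = socket.gethostbyname(hostname)   # queries the system's DNS resolver
print(f"{hostname} -> {address}")
```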
Originally, starting in the late 1960s, that record was kept on a floppy disk by a man named Jon Postel. By 1998, though, he and others were pointing out that such a significant amount of power shouldn’t be held by just one person. That year saw the U.S. Department of Commerce lay out a plan to transition control to a new private nonprofit organization, the Internet Corporation for Assigned Names and Numbers—better known as ICANN—that would manage internet addresses around the world.
For nearly 20 years, ICANN did that work under a contract from the Commerce Department, though objections over U.S. government control grew steadily. In 2016, the Commerce Department contract expired, and ICANN’s governance shifted to a board of representatives from more than 100 countries.
Other groups that manage key aspects of internet communications have different structures. The Internet Engineering Task Force, for instance, is a voluntary technical organization open to anyone. There are drawbacks to that approach, but it would have lessened both the reality and perception of U.S. control.
2010: War comes online
In June 2010, cybersecurity researchers revealed the discovery of a sophisticated cyber weapon called Stuxnet, which was designed specifically to target equipment used by Iran’s effort to develop nuclear weapons. It was among the first known digital attacks that actually caused physical damage.
Almost a decade later, it’s clear that Stuxnet opened the eyes of governments and other online groups to the possibility of wreaking significant havoc through the internet. These days, nations use cyberattacks with increasing regularity, attacking a range of military and even civilian targets.
There is still cause for hope for online peace and community, but these decisions—along with many others—have shaped cyberspace and, with it, the daily lives of millions of people. Reflecting on those past choices can help inform upcoming decisions—such as how international law should apply to cyberattacks, or whether and how to regulate artificial intelligence.
Maybe 50 years from now, events in 2019 will be seen as another key turning point in the development of the internet.
Scott Shackelford is Associate Professor of Business Law and Ethics; Director, Ostrom Workshop Program on Cybersecurity and Internet Governance; and Cybersecurity Program Chair, IU-Bloomington, Indiana University. This article was originally published at The Conversation and has been republished under Creative Commons. Read the original article.