Real Security and the Internet Worm
CS 513 -- System Security -- January 27, 1998 -- Lecture 3
Lecturer: Professor Fred B. Schneider
Lecture notes by
Lynette I. Millett
For those planning to participate in the optional project: group membership
declarations are due today at noon. The project is due Thursday, and should
be done in groups of 2 or 3.
Further Reading: For a discussion of the Internet worm described in today's
lecture, see Communications of the ACM (CACM), Vol. 32, No. 6, June 1989.
In the last lecture we discussed the gold (Au) standard for security; it
comprises:
- authentication,
- authorization, and
- audit.
We now ask: What are good strategies for defense? How can we make sure that
attacks won't succeed? There are four strategies we might consider:
- Keep everyone out. If there is no sharing, then there is no need to worry
  about attacks. This technique is used for single-user machines. The
  problem is that sharing is a good way to communicate and get things done.
- Keep the bad guys out. There are technologies that facilitate
  this. Firewalls can be used to act as a gateway and filter information
  going in and out of a subnetwork. Similarly, code signing is used by
  web browsers to allow restrictions on what code can be run. The problem
  here is that if a firewall is penetrated or a code signature is
  stolen, security is compromised.
- Let the bad guys in, but prevent them from doing damage. There are
  various techniques for this. Most internal system security mechanisms
  we discuss do exactly this. Think of standard operating systems,
  which place restrictions on which processes have which privileges.
- Let the bad guys do anything, keep track of what happens, catch the
  bad guys, and then prosecute them. This is facilitated through the
  use of good audit facilities.
The latter three strategies require being able to distinguish the bad
guys from the good guys, and that means some sort of authentication
process. Keeping the bad guys out requires having a list of the good
guys. Letting the bad guys in but preventing them from doing harm
requires not only a list of the good guys, but an authorization
mechanism as well. This authorization can be done at many levels of
the system. For example, read/write/execute privileges in a UNIX
system are granted at the file level. Authorization could also be done at
the level of fields in data records. The closer to the application
level, the more precisely one can authorize exactly what is
required. However, if done at a higher level, it is possible for the
success of the mechanism to become application dependent. At a lower
level, authorization is likely to be simpler, and the less complex such
facilities are, the fewer bugs they will have. Thus lower-level
authorization can provide higher assurance. This is a classic
trade-off: functionality vs. assurance. The fourth option above
depends heavily on audit facilities that cannot be subverted by the
bad guys. Thus, audit requires authorization.
The issue of assurance brings up a whole family of issues. It is not
enough that a system be secure or trustworthy. It must be
demonstrably so. Before a system can be trusted, it is necessary
to convince people that it satisfies the security requirements. The
question becomes: What do you know and how do you know it? There are
several ways to answer this question. One weak answer is to claim that
the system has been debugged thoroughly. Another is to note that the
system has been deployed and tested extensively. Yet a third may be to
point to an agency, like NSA, that has approved it. (The latter may
not provide as much of a security guarantee as one would hope.)
In general, the two strongest methods used
to increase assurance are extensive testing by outside groups and
mathematical proof. Mathematical proofs can be very long and involved
even for small programs, and thus are not likely to be feasible for large,
complex systems.
The question of assurance is also related to
an engineering question. If a system is designed to have the fewest number
of critical components, then testing those components thoroughly can be
a lower-cost way to increase assurance than by testing an entire system.
Recall also that assuring security requirements is difficult because
of their lack of specificity. Assurance of functional requirements is
easier to demonstrate. In fact, systems can be located on a graph of
functionality vs. assurance.
An application such as Microsoft Word is likely in the danger zone. It
has high functionality and lots of bugs. On the other extreme are
simpler mechanisms that have a high degree of assurance, such as the
kernel. System designers must decide where they want their products
to be on the functionality-assurance graph, and also how to convey
such information to their customers.
Case Study: The Internet Worm
We have completed an overview of security and security issues. The next
topic we discuss will be access control. Before that, however,
it seems sensible to examine a real world example in detail. At a minimum,
this ought to provide some real perspective on
security. We discuss a series of attacks that occurred in November, 1988.
The full details are available in CACM. (See Further Reading at the top
of this page.)
In 1988 a "worm" was unleashed on the Internet. This worm infected
SUN 3 and VAX machines running 4.2 BSD UNIX. The worm was written by
Robert Morris, a first-year Computer Science Ph.D. student at Cornell
at the time. He was convicted under the 1986 Computer Fraud and Abuse
Act in 1990. He was ordered to pay a $10,000 fine, sentenced to 3
years probation and 400 hours of community service. He was also
expelled from Cornell and his subsequent re-admission attempt failed.
It should be noted that under the 1986 Act, malice is not necessary
for conviction; a misguided experiment suffices.
The worm's effect was to cause infected systems to become more and
more loaded with processes. This used up resources, such as swap space
and the process table, which eventually caused the machine to crash.
Definitions: worm vs. virus.
Both of these terms originated in the science fiction literature. John
Brunner in "The Shockwave Rider" (1975) first used the term worm and
David Gerrold suggested the term virus in "When Harlie was One" (1972).
- A worm is a program that can run independently and can propagate
a fully working version of itself to other machines.
- A virus is a piece of code that attaches itself to other
programs, including the operating system. It cannot run independently and
requires that the host program be run to activate it.
How the Worm Operated
There were two basic parts to the Internet worm: A main program and a bootstrap
(or vector) program. The main program collected information from the host on
which it was running, and used that information to decide
what other machines on the network the current machine could connect
to. The worm exploited vulnerabilities in the UNIX operating system to get
the bootstrap program running on those machines.
The bootstrap program consisted of 99 lines of C code. It compiled
and ran on remote machines. Compilation allowed it to exploit the fact
that most machines could compile C, rather than needing to distribute
machine code for all possible machine/operating system configurations.
This monoculture (all machines "understanding" C and UNIX)
made for much easier penetration and raises intriguing questions about
the spread of languages like Java, which also promote a
monoculture. The bootstrap program had three command-line arguments:
the network address of the infecting machine, the network port number
to connect to in order to get copies of the main worm files, and a "magic
number" used as a one-time challenge password.
Once running, the bootstrap did the following:
1. Connect back to the infector using TCP/IP.
2. Transfer the binary files for the main program from the infector to the
   local machine. Each of these files was a copy of the worm's main program
   compiled for a different architecture and operating system, along with a
   copy of the bootstrap.
3. Load and link these binary files with local (stdlib) files.
4. Execute them. If successful, attempt to infect neighboring machines.
5. If none are successful, delete all files, clean up, and go away.
Note that the real challenge here is getting the bootstrap (vector) installed
and running. The Internet worm exploited four well-known vulnerabilities
in UNIX to install the bootstrap program. Several of them depend on guessed
passwords: once a remote machine has been infected, the worm needs to guess
passwords for local accounts. On UNIX, the file /etc/passwd stores encrypted
versions of user passwords, so a brute-force attack allows password cracking.
The worm has a list of words, which it encrypts and then compares to the
encrypted passwords in /etc/passwd. Specifically, the worm first checks an
internal list of 432 commonly used passwords (discovered empirically before
deployment of the worm); then more attempts are made using the UNIX on-line
dictionary. Some sites reported that up to 50% of their passwords were
cracked using this technique.
- The first vulnerability involved .forward and .rhosts files. The
  worm used rexec, a UNIX command that allows one shell to
  cause commands to execute on another machine. The worm found all
  .forward and .rhosts files on the current machine, which contain
  lists of machines reachable by the current machine. It assumed that
  the password for running on the new machine was the same as the
  current one (acquired by the password guessing described above) and
  attempted to connect to the new machine. This exploits the fact that
  many people use the same password for their accounts on different
  machines.
- If the above didn't work, the worm would rexec to the current host,
  using its local uid and password. It would then rsh to a remote
  host. This would work if the user on the foreign machine had a .rhosts
  file that allowed access without a password.
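For illustration, a .rhosts file is simply a list of trusted entries, one
per line: a hostname, optionally followed by a remote username, that may
rsh/rlogin in as the local user without supplying a password (the
hostnames below are invented):

```
arthur.cs.cornell.edu
lancelot.cs.cornell.edu  guinevere
```

Every entry extends the trusted perimeter to another machine, which is
exactly the transitive trust the worm exploited.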
- A third option was to exploit a vulnerability in the mail system.
  The attack involves opening an SMTP (Simple Mail Transfer Protocol)
  connection to the target machine. There is a process, sendmail, that
  monitors TCP port 25, which is conventionally understood to be the
  mail port. If the DEBUG option in sendmail is turned on, then the
  "rcpt to" field in a mail message is executed as a shell script. In
  this case, the worm would cause the target machine to execute a shell
  script that then caused the target to read and execute the "message"
  that followed. Unfortunately, BSD 4.2 was shipped with the debugging
  option turned on, and inexperienced system administrators would likely
  be unwilling to change configurations on sendmail, a fairly
  complicated program.
- The fourth vulnerability involved the finger daemon, a utility
  that provides information about users, servicing both local and
  remote requests. The problem is that the implementation of this
  program uses the C library routine "gets" to read a null-terminated
  character string (the request) into a buffer. "gets" does not check
  the length of the buffer, and thus it is possible to write beyond it
  if the input string is too long. The attack, which works only on
  4.2 BSD running on a VAX, first establishes a connection to the finger
  daemon and then sends a 536-byte string as the parameter to fingerd.
  This overflows the 512-byte buffer and overwrites part of the stack
  frame. The 536-byte string is carefully constructed so that the
  return address for the main routine now points to machine language
  located in another part of the "buffer." The code in the buffer is
  execve("/bin/sh", 0, 0). The worm is now connected to a shell on the
  remote machine and can bring over the rest of its files.
During execution of the main loop, the worm checked for other worms running
on the same machine. This was done by connecting to host 127.0.0.1 port
11357 using UDP. If it succeeded, the worm would set an internal value that
would cause it to halt. However, one out of seven worms did not check for
others and thus became "immortal." This was the primary cause of machines
becoming overloaded and crashing.
In order to conceal itself, the worm would occasionally fork
itself and kill the parent. This had the effect that no single process
accumulated excessive amounts of CPU. It also made sure that the UNIX
scheduler would not re-prioritize ("nice") the worm's process due to
the length of time it had been running. Also, if the worm was present
on a machine for more than 12 hours, it would flush its list of what
had been infected and start over.
It is useful to consider from a more abstract perspective why the Internet
worm succeeded. What were the vulnerabilities that the worm exploited?
- Programmer error: The fact that fingerd did not check for buffer
  overflow caused a problem.
- Configuration errors: Sendmail should not have been shipped with
  DEBUG turned on.
- User practices: When users chose dictionary words as passwords, an
  externality was created in which one user could make the entire
  system vulnerable.
- Design issues: The existence of .rhosts files implies a kind of
  transitive trust that was easily exploited.
- Monoculture: A large fraction of sites used a monoculture susceptible
  to "epidemics." This would seem to suggest that diversity is a better
  option. Today, we are moving away from a UNIX monoculture to an
  MS Windows monoculture, but a monoculture just the same.
These problems are not atypical. Today, it is not subtle technical issues
causing the vulnerabilities that are being exploited; rather, it is design
and user practices.