Introduction to Security

All satisfied with their seats? O.K. No talking, no smoking, no knitting, no newspaper reading, no sleeping, and for God's sake take notes. —Vladimir Nabokov

November 2, 1988: Robert Tappan Morris, Jr. released the "great worm." (His dad, Bob Morris, was chief scientist at NSA's National Computer Security Center.) This was the first worm; the first malware to get media attention; and it led to the first conviction under the Computer Fraud and Abuse Act. Morris was 23 years old and a first-year grad student at Cornell. (He released the worm from MIT, though.) He later claimed the worm's purpose was to measure the size of the Internet, but its immediate effect was denial of service (DoS). The Internet "came apart" as hosts were overloaded by invisible processes. System admins had to disconnect from the network to isolate their systems from infection. The US GAO later estimated the cost of recovery at somewhere between $100k and $10m. Morris was tried in US District Court. His sentence: 3 years of probation, 400 hours of community service, and a $13k fine. In 1999, he received a PhD from Harvard. And now he's a professor at MIT.

June 1, 2012: The New York Times reports that the US and Israel created Stuxnet, the first (publicly known) cyberweapon. Until then, its provenance had been unknown. The weapon first infects Windows systems, and then infects an industrial control device, causing it to vary the frequency of its motor and do physical harm. But the weapon hides that frequency change from the device's monitoring system, so that the harm won't be noticed until it's too late. The purpose of the weapon seemed to be the destruction of centrifuges in Iranian uranium enrichment facilities.

Today, security is front-page news: it spans everything from a graduate student's experiment gone wrong, like the Morris worm, to nation-state weapons, like Stuxnet, that destroy physical infrastructure.

That's what makes this such a fun field of study.

Defining security

A computer system is secure when it satisfies its security policy, even in the presence of attackers.

A security policy stipulates what should and should not be done. Policies can be long English documents, mathematical axioms, etc. But almost everyone agrees that security policies are formulated in terms of three basic kinds of security properties: confidentiality, integrity, and availability.

Confidentiality:

Protection of assets from unauthorized disclosure. Assets could be information, or resources. Disclosure must be to someone; that might be a person, a program, another computer system, etc. To generalize those entities, define a principal to be any entity that can take actions. So, confidentiality is about which principals are allowed to learn what. Secrecy is synonymous with confidentiality.

Privacy is the confidentiality of identifying information about individuals, which could be people, organizations, etc. Sometimes privacy is construed as a legal right. Don't say "keep information private" unless you really mean that the information is about an individual and is identifying. (All your vocabulary are belong to us.)

Integrity:

Protection of assets from unauthorized modification. I.e., integrity stipulates what changes are allowed to the system and its environment. Changes can include those to initial sources, hence provenance. The environment can include outputs.

Availability:

Protection of assets from loss of use. Denial of service (DoS) attacks typically cause violations of availability properties.

Beyond attacks

Attacks are the sexy side of security. When hackers perpetrate cool attacks, we hear about it in the news. (Unless it's covered up by business or government.) But to build secure systems, you need to begin thinking beyond attacks. Attacks are perpetrated by threats that inflict harm by exploiting a vulnerability. Vulnerabilities are controlled by countermeasures.

Every system is built on assumptions: about timing, kinds of failures, message delivery, formats of inputs, execution environment, etc. Buffer overflows, which have long been one of the most prevalent vulnerabilities, involve an assumption about input sizes. Violate that assumption, and the system does the wrong thing.
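To make that concrete, here is a minimal C sketch (the function and buffer names are illustrative, not taken from any real system) of code that silently assumes its input fits in a fixed-size buffer:

    #include <stdio.h>
    #include <string.h>

    /* Assumes the supplied name fits in 15 characters plus the terminator. */
    void greet(const char *name) {
        char buf[16];
        strcpy(buf, name);          /* no bounds check: a longer name overflows buf */
        printf("Hello, %s\n", buf);
    }

    int main(int argc, char *argv[]) {
        if (argc > 1)
            greet(argv[1]);         /* the attacker chooses argv[1], and hence its length */
        return 0;
    }

Nothing in the code states the assumption about input size; it lives only in the programmer's head. A bounds-checked copy, e.g. snprintf(buf, sizeof buf, "%s", name), would make the assumption explicit and enforce it.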

ASSUMPTIONS are VULNERABILITIES. We can't live with 'em, we can't live without 'em.

One very important kind of assumption is trust. When a system is trusted, we assume the system satisfies a particular security policy. (It might not, in which case, we have vulnerabilities.) When a system is trustworthy, we have evidence that the system really does satisfy a policy. Much of this class is devoted to techniques for transforming trusted systems into trustworthy systems.

In theory, vulnerabilities needn't all be found or repaired. Some aren't exploitable (e.g., a component fails to do bounds checking, but the component in front of it does). Some aren't exploitable by threats of concern (e.g., the attack would require too many resources). But in practice, ignoring vulnerabilities is a risky business. Too many get passed off as "no one could ever exploit that", only to have a successful attack perpetrated in the future.
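As a hedged sketch of the "component in front of it" case (the function names here are hypothetical), the inner routine below still omits bounds checking, but the only call path to it goes through an outer routine that rejects oversized inputs, so the flaw is latent rather than exploitable:

    #include <stdio.h>
    #include <string.h>

    #define MAXLEN 16

    /* Inner component: no bounds check -- a latent vulnerability. */
    static void copy_unchecked(char *dst, const char *src) {
        strcpy(dst, src);
    }

    /* Front component: every caller is supposed to go through this check. */
    static int store_name(char dst[MAXLEN], const char *src) {
        if (strlen(src) >= MAXLEN)
            return -1;               /* reject input that would overflow dst */
        copy_unchecked(dst, src);    /* safe only because of the check above */
        return 0;
    }

    int main(int argc, char *argv[]) {
        char name[MAXLEN];
        if (argc > 1 && store_name(name, argv[1]) == 0)
            printf("stored: %s\n", name);
        return 0;
    }

The risk is exactly the one noted above: a later change that calls copy_unchecked directly, or weakens the check in store_name, silently turns the latent vulnerability into an exploitable one.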

This recognition that there is no such thing as "perfect security" leads us to reflect on the goals we are attempting to achieve in security: