# Principles
There is no such thing as "perfect security". That leads
us to reflect on the approach we take to security:
- **Prevention.** We can attempt to build systems that are completely
free of vulnerabilities, hence all attacks are prevented. This goal
is laudable, and we'll discuss many techniques in this course
related to achieving prevention. We should recognize, however, that
we are unlikely ever to achieve absolute security, and even
approaching it is likely to incur significant expense.
- **Risk Management.** We can attempt to ensure that investment in
security reduces the expected harm from attacks. Given that our
resources are limited, this goal is sensible. We would need to
determine the probabilities that vulnerabilities can be exploited,
the costs of countermeasures that would control those
vulnerabilities, and the cost of harm that would be incurred from
attacks. But determining the actual probabilities involved is
difficult—we don't have enough information about likelihoods and
costs.
- **Deterrence through Accountability.** We can attempt to legally
prosecute the attackers. To do so, we would need to attribute
attacks to the humans who perpetrate them. There are significant
technical and legal challenges to implementing this goal: How do we
attribute attacks? How do we prosecute attacks across legal
boundaries? How do we balance attribution with privacy?
An emerging approach is to treat security much as we treat public
health: educate the public, disseminate treatments (such as
vaccines), and track outbreaks. This approach to **Cybersecurity as
a Public Good** was articulated by [[Mulligan and Schneider 2011][ms11]].
[ms11]: http://www.cs.cornell.edu/fbs/publications/publicCybersecDaed.pdf
Most of the rest of this course concentrates on prevention.
Here are underlying principles for building secure systems that prevent attacks.
We'll continue to see many examples of these throughout the semester, so
don't worry if they seem a bit abstract now. Most of these
principles were first articulated in [[Saltzer and Schroeder 1975]][ss75].
[ss75]: http://web.mit.edu/Saltzer/www/publications/protection/
**ACCOUNTABILITY:**
Hold principals responsible for their actions.
This is a principle behind physical security, and it holds for
computer security, too. Consider a bank vault. It has a lock,
key(s), and a video camera:
- The lock prevents most principals from taking any action.
- The key enables certain principals to take actions.
- The camera holds any principals responsible for visible actions.
In the real world, we don't make perfect locks or keys or cameras.
Instead, we do risk management. We buy insurance. (It's cheaper than
building perfect locks, etc.)
Mechanisms for accountability are separated into three classes:
- **Authorization:** mechanisms that govern whether actions are
permitted. e.g., file permissions, firewalls, visibility
restrictions in Facebook
- **Authentication:** mechanisms that bind principals to actions.
e.g., passwords, digital signatures, EV certificates in browsers
- **Audit:** mechanisms that record and review actions and the
principals who take them. e.g., log files, log browsers,
intrusion detection systems
Together these are known as the Gold Standard [[Lampson 2000]][lampson00],
because they all begin with Au, the atomic symbol for gold. Use these
terms carefully! People frequently confuse authorization and
authentication.
[lampson00]: http://bwlampson.site/64-SecurityInRealWorld/Acrobat.pdf
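As a toy sketch of how the three classes fit together (the principals, objects, and data stores here are hypothetical, and the bare SHA-256 password hash is a deliberate simplification; real systems use salted, slow hashing):

```python
import hashlib

# Illustrative data stores (hypothetical principals and objects).
USERS = {"alice": hashlib.sha256(b"hunter2").hexdigest()}   # authentication
PERMS = {("alice", "vault"): {"open"}}                      # authorization
AUDIT_LOG = []                                              # audit

def request(user, password, obj, action):
    # Authentication: bind the request to a principal.
    attempt = hashlib.sha256(password.encode()).hexdigest()
    if attempt != USERS.get(user):
        AUDIT_LOG.append((user, obj, action, "auth-failed"))
        return False
    # Authorization: is this action on this object permitted?
    allowed = action in PERMS.get((user, obj), set())
    # Audit: record the principal, the action, and the outcome.
    AUDIT_LOG.append((user, obj, action, "allowed" if allowed else "denied"))
    return allowed
```

Note how each class answers a different question: who is asking (authentication), may they do it (authorization), and what happened (audit).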
**COMPLETE MEDIATION:**
Every operation requested by a principal must be intercepted and
determined to be acceptable according to the security policy. The
component that does the mediation is called a *reference monitor*.
Reference monitors should be tamperproof and transparently correct.
*Time-of-check to time-of-use* (TOCTOU) attacks exploit vulnerabilities
arising from failure to adhere to this principle.
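A classic TOCTOU instance is the check-then-use file access pattern; a sketch of the vulnerability and its repair (file paths are illustrative):

```python
import os

# Vulnerable pattern: check, then use. Between the two calls, an attacker
# could swap the path (e.g., replace the file with a symlink to a file
# the attacker shouldn't be able to read via this program).
def read_racy(path):
    if os.access(path, os.R_OK):      # time of check
        with open(path) as f:         # time of use: path may have changed
            return f.read()
    raise PermissionError(path)

# Safer: make the use itself the check, so the OS mediates the request
# once, at the time of use.
def read_mediated(path):
    with open(path) as f:
        return f.read()
```

The repaired version has no window between checking and using: the operating system's reference monitor decides at the moment the operation happens.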
**LEAST PRIVILEGE:**
A principal should have the minimum privileges it needs to accomplish
its desired operations. Experienced system administrators know not to
log in as root for routine operations. Otherwise, they might accidentally
misuse their root privileges and wreak havoc. Likewise, a web browser
doesn't need full access to all files on the local filesystem. And a web
front-end doesn't need full write access to a database back-end for most
of its operation.
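As a small illustration (the file name and task are hypothetical), a task that only consumes data should request only read access:

```python
# Least privilege: this task only reads, so it opens the file read-only.
# Even a bug in the function then cannot truncate or modify the file.
def count_lines(path):
    with open(path, "r") as f:
        return sum(1 for _ in f)
```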
**FAILSAFE DEFAULTS:**
Presence of privilege, rather than absence of prohibition, should be the
basis for allowing an operation. It's safer to forget to grant a privilege
(in which case a principal complains) than to accidentally grant
privileges (in which case a principal has an opportunity to exploit them).
For example,
- default access to your files should be that only you get to read
and write them; and
- default privilege for your web browser should be that it isn't
allowed to read or write your disk.
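A default-deny access check can be sketched as follows (principals, objects, and actions are hypothetical):

```python
# Failsafe default: a permission must be explicitly present; absence of an
# entry means deny.
ACL = {
    ("alice", "notes.txt"): {"read", "write"},
    ("bob",   "notes.txt"): {"read"},
}

def is_allowed(principal, obj, action):
    # An unknown (principal, obj) pair falls through to the empty set: denied.
    return action in ACL.get((principal, obj), set())
```

Forgetting to add an entry produces a complaint from a legitimate user; the reverse mistake under a default of allow would hand an attacker a privilege silently.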
**SEPARATION OF PRIVILEGE:**
There are two senses in which this principle can be understood:
1. Different operations should require different privileges.
2. Disseminate privileges for an operation amongst multiple principals.
(Separation of Duty)
Regarding the first sense, in practice, this principle is difficult to
implement. Do you really want to manage rights for every object and
operation and principal in a software system? There are millions of
them; you'll get something wrong. So we naturally do some bundling of
privileges.
Regarding the second sense, prevention of large (potentially catastrophic)
fraud and error is the goal. Two bank tellers might be required in order
to open a vault or disburse a large amount of cash. Two officers might be
required in order to launch a missile.
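The second sense can be sketched as a two-person rule (the names and API here are hypothetical):

```python
# Separation of duty: a sensitive operation proceeds only with approvals
# from distinct principals.
def authorized(approvals, required=2):
    # The set discards duplicate approvals by the same principal, so one
    # teller approving twice does not satisfy the rule.
    return len(set(approvals)) >= required
```

Under this rule, committing a large fraud requires collusion between at least two principals, which is a much higher bar than corrupting one.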
**DEFENSE IN DEPTH:**
Prefer a set of complementary mechanisms over a single mechanism.
*Complementary* means
- **independent**, such that an attack that penetrates one mechanism
is unlikely to penetrate others, and
- **overlapping**, such that attacks must penetrate multiple mechanisms
to succeed.
For example, your bank might use both a password and a hardware
token to authenticate customers. Your apartment building might have
multiple door locks and a security system. And your university IT
department might use firewalls and virus scanners to prevent spread
of malware.
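The bank example can be sketched as two overlapping, independent factors; this is a minimal sketch assuming a stored PBKDF2 password hash and an expected one-time token (how the token is generated and delivered is out of scope here):

```python
import hashlib
import hmac

def hash_password(password: str, salt: bytes) -> bytes:
    # Slow, salted hash; the iteration count is illustrative.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def authenticate(password, token, *, salt, stored_hash, expected_token):
    # Two independent, overlapping mechanisms: stealing the password
    # database alone, or intercepting the token channel alone, does not
    # suffice. Constant-time comparison avoids timing side channels.
    password_ok = hmac.compare_digest(hash_password(password, salt), stored_hash)
    token_ok = hmac.compare_digest(token, expected_token)
    return password_ok and token_ok
```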
**ECONOMY OF MECHANISM:**
Prefer mechanisms that are simpler and smaller.
They're easier to understand and easier to get right. It's easier to
construct evidence of trustworthiness for small, simple things.
In any system, there's some set of mechanisms that implement the
core, critical security functionality and hence must be trusted. That
set is called the *trusted computing base* (TCB). Economy of
Mechanism says keep the TCB small.
**OPEN DESIGN:**
Security shouldn't depend upon the secrecy of design or implementation.
Assume the enemy knows the system. For example, assume the enemy
knows the encryption algorithm, but not the key. Or, assume the
enemy knows the model of a lock, but not the cuts made in the key.
In cryptography, a similar idea is known as "Kerckhoffs's Principle."
This principle is frequently violated by neophytes. Note that there's
nothing wrong with keeping a design secret if security can be
established by other means: that's just defense in depth.
The opposite of this principle is "security by obscurity".
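Modern cryptography is built around this principle; a sketch using HMAC-SHA256, where the algorithm is public and only the key is secret:

```python
import hashlib
import hmac

# Open design: the MAC algorithm (HMAC-SHA256) is public knowledge;
# security rests entirely on the secrecy of the key.
def tag(key: bytes, message: bytes) -> str:
    return hmac.new(key, message, hashlib.sha256).hexdigest()
```

An adversary who knows the algorithm but not the key still cannot forge tags. Contrast this with a homemade secret algorithm, whose security evaporates the moment its design leaks.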
**PSYCHOLOGICAL ACCEPTABILITY:**
Minimize the burden of security mechanisms on humans. Although it's rarely
possible to make that burden zero, it shouldn't be too much more difficult
to complete an operation in the presence of security mechanisms than it would
be in their absence. Otherwise, humans will be tempted to create end-runs
around those mechanisms.
## Exercises
Read the abstract and introduction of [*The Security Architecture of the Chromium
Browser*][chromium] (Barth et al., 2008).
[chromium]: https://seclab.stanford.edu/websec/chromium/chromium-security-architecture.pdf
1. Identify how each of the following principles is manifest in the
design of Chromium: Separation of Privilege, Complete Mediation, Least
Privilege, Open Design.
2. We previously defined a threat as a *motivated, capable adversary.*
In the Chromium threat analysis, what are the motivations of the threat?
Are there any motivations the threat is assumed not to have? What are
the capabilities of the threat? Are there any capabilities the threat is
assumed not to have?