We'll talk a lot about vulnerabilities and countermeasures, about
policies and mechanisms, about securing software systems throughout the
semester. Here are underlying principles for building secure systems.
We'll continue to see many examples of these throughout the semester, so
don't worry if they seem a bit abstract now. Most of these
principles were first articulated in [[Saltzer and Schroeder 1975]][ss75].
**ACCOUNTABILITY:**
Hold principals responsible for their actions.
This is a principle behind physical security, and it holds for
computer security, too. Consider a bank vault. It has a lock,
key(s), and a video camera:
- The lock prevents most principals from taking any action.
- The key enables certain principals to take actions.
- The camera holds any principals responsible for visible actions.
In the real world, we don't make perfect locks or keys or cameras.
Instead, we do risk management. We buy insurance. (It's cheaper than
building perfect locks, etc.)
Mechanisms for accountability are separated into three classes:
- **Authorization:** mechanisms that govern whether actions are
permitted. e.g., file permissions, firewalls, visibility
restrictions in Facebook
- **Authentication:** mechanisms that bind principals to actions.
e.g., passwords, digital signatures, EV certificates in browsers
- **Audit:** mechanisms that record and review actions and the
principals who take them. e.g., log files, log browsers,
intrusion detection systems
Together these are known as the Gold Standard [[Lampson 2004]][lampson04],
because they all begin with Au, the atomic symbol for gold. Use these
terms carefully! People frequently confuse authorization and
authentication.
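To keep the three Au's distinct, here's a minimal sketch in Python of a
request handler that performs each one as a separate step. All names and
data structures are illustrative, not a real API:

```python
import logging

logging.basicConfig(filename="audit.log", level=logging.INFO)

# Illustrative stores; a real system would use hashed passwords and a
# policy database, not plaintext constants in memory.
PASSWORDS = {"alice": "hunter2"}
PERMISSIONS = {("alice", "read", "report.txt")}

def handle_request(user, password, action, obj):
    # Authentication: bind the request to a principal.
    if PASSWORDS.get(user) != password:
        return "authentication failed"
    # Authorization: is this principal permitted to take this action?
    allowed = (user, action, obj) in PERMISSIONS
    # Audit: record the principal, the action, and the decision.
    logging.info("%s %s %s: %s", user, action, obj,
                 "allowed" if allowed else "denied")
    return "allowed" if allowed else "denied"
```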
**COMPLETE MEDIATION:**
Every operation requested by a principal must be intercepted and
determined to be acceptable according to the security policy. The
component that does the mediation is called a *reference monitor*.
Reference monitors should be tamperproof and transparently correct.
*Time-of-check to time-of-use* (TOCTOU) attacks exploit vulnerabilities
arising from failure to adhere to this principle.
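Here's a minimal sketch of the classic check-then-use race in Python
(the filenames and function names are made up). Between the `os.access`
check and the `open`, an attacker who controls the path can swap in a
symlink to a file the victim shouldn't reveal:

```python
import os

def serve_file_vulnerable(path):
    if os.access(path, os.R_OK):   # time of check
        # ... race window: the file at `path` can be replaced here ...
        with open(path) as f:      # time of use
            return f.read()
    raise PermissionError(path)

def serve_file_safer(path):
    # Let the operating system's permission check mediate the open
    # itself; there is no separate check to race against.
    with open(path) as f:
        return f.read()
```

The vulnerable pattern is most dangerous in setuid programs: `os.access`
checks against the real user ID, while `open` uses the effective one, so
a well-timed swap lets the user read a file they otherwise couldn't.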
**LEAST PRIVILEGE:**
A principal should have the minimum privileges it needs to accomplish
its desired operations. Experienced system administrators know not to
log in as root for routine operations. Otherwise, they might accidentally
misuse their root privileges and wreak havoc. Likewise, a web browser
doesn't need full access to all files on the local filesystem. And a web
front-end doesn't need full write access to a database back-end for most
of its operation.
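One common way to honor this principle in a daemon is to do the one task
that requires root (say, binding a low port) and then permanently drop to
an unprivileged account. A sketch, assuming a Unix system and an account
named `www-data`:

```python
import os
import pwd

def drop_privileges(username: str = "www-data") -> None:
    if os.getuid() != 0:
        return  # already unprivileged; nothing to drop
    user = pwd.getpwnam(username)
    os.setgroups([])        # shed supplementary groups first
    os.setgid(user.pw_gid)  # group before user, or setgid would need root
    os.setuid(user.pw_uid)  # irreversible: the process can't regain root
```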
**FAILSAFE DEFAULTS:**
Presence of privilege, rather than absence of prohibition, should be the
basis for allowing an operation. It's safer to forget to grant a privilege
(in which case the principal complains) than to accidentally grant
privileges (in which case the principal has an opportunity to exploit them).
For example:
- default access to your files should be that only you get to read
and write them; and
- default privilege for your web browser should be that it isn't
allowed to read or write your disk.
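As a small illustration of the first default, a program creating a
sensitive file can make owner-only access the starting point. A sketch
for a Unix system; the filename is made up:

```python
import os

# Mode 0o600: only the owner may read or write. Wider access must be
# granted explicitly later -- presence of privilege, not absence of
# prohibition. O_EXCL also fails safely if the file already exists.
fd = os.open("secrets.txt", os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
os.write(fd, b"...\n")
os.close(fd)
```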
**SEPARATION OF PRIVILEGE:**
There are two senses in which this principle can be understood:
1. Different operations should require different privileges.
2. Disseminate privileges for an operation amongst multiple principals.
(Separation of Duty)
Regarding the first sense, in practice, this principle is difficult to
implement. Do you really want to manage rights for every object and
operation and principal in a software system? There are millions of
them; you'll get something wrong. So we naturally do some bundling of
privileges.
Regarding the second sense, prevention of large (potentially catastrophic)
fraud and error is the goal. Two bank tellers might be required in order
to open a vault or disburse a large amount of cash. Two officers might be
required in order to launch a missile.
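A sketch of that two-person rule in Python (illustrative names, not a
real banking API): the operation proceeds only when two distinct
principals have approved it, so the same teller approving twice doesn't
count:

```python
def disburse(amount: int, approvers: set[str]) -> str:
    # A set collapses duplicate approvals, enforcing *distinct* principals.
    if len(approvers) < 2:
        raise PermissionError("two distinct approvers required")
    return f"disbursed ${amount}"

disburse(50_000, {"teller_a", "teller_b"})  # succeeds
# disburse(50_000, {"teller_a"})            # would raise PermissionError
```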
**DEFENSE IN DEPTH:**
Prefer a set of complementary mechanisms over a single mechanism. The
mechanisms should be:
- **independent**, such that an attack that penetrates one mechanism
is unlikely to penetrate others, and
- **overlapping**, such that attacks must penetrate multiple mechanisms
  to succeed.
For example, your bank might use both a password and a hardware
token to authenticate customers. Your apartment building might have
multiple door locks and a security system. And your university IT
department might use firewalls and virus scanners to prevent the spread
of malware.
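For instance, the bank example might look like the following sketch,
assuming the hardware token implements standard TOTP (RFC 6238). The two
mechanisms overlap, since both must pass, and they're independent, since
a stolen password alone doesn't let the attacker compute the one-time
code:

```python
import hmac, hashlib, struct, time

def totp(secret: bytes, digits: int = 6, step: int = 30) -> str:
    """Time-based one-time password per RFC 6238 (SHA-1, 30 s steps)."""
    counter = struct.pack(">Q", int(time.time() // step))
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def authenticate(password_ok: bool, submitted_code: str,
                 token_secret: bytes) -> bool:
    # Both mechanisms must pass (overlapping); neither depends on the
    # other's secret (independent).
    return password_ok and hmac.compare_digest(submitted_code,
                                               totp(token_secret))
```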
**ECONOMY OF MECHANISM:**
Prefer mechanisms that are simpler and smaller.
They're easier to understand and easier to get right. It's easier to
construct evidence of trustworthiness for small, simple things.
In any system, there's some set of mechanisms that implement the
core, critical security functionality and hence must be trusted. That
set is called the *trusted computing base* (TCB). Economy of
Mechanism says keep the TCB small.
**OPEN DESIGN:**
Security shouldn't depend upon the secrecy of design or implementation.
Assume the enemy knows the system. For example, assume the enemy
knows the encryption algorithm, but not the key. Or, assume the
enemy knows the model of a lock, but not the cuts made in the key.
In cryptography, a similar idea is known as "Kerckhoffs's Principle."
This principle is frequently violated by neophytes. Note that there's
nothing wrong with keeping a design secret if security can be
established by other means. That's just defense in depth.
The opposite of this principle is "security by obscurity".
**PSYCHOLOGICAL ACCEPTABILITY:**
Minimize the burden of security mechanisms on humans. Although it's rarely
possible to make that burden zero, it shouldn't be too much more difficult
to complete an operation in the presence of security mechanisms than it would
be in their absence. Otherwise, humans will be tempted to create end-runs
around those mechanisms.