# Beyond Attacks
*Attacks* are the sexy side of security. When hackers perpetrate cool
attacks, we hear about it in the news. (Unless it's covered up by
business or government.) But if you want to build secure systems, you need to
begin thinking beyond attacks. Attacks are perpetrated by *threats* that
inflict *harm* by exploiting a *vulnerability*. Vulnerabilities are
controlled by *countermeasures*.
- **Harm:** A negative consequence to a system asset, possibly
involving loss or damage to value.
- **Threat:** A principal or environment that has the potential to
cause harm to assets of a system. A human threat is an *adversary*
or *attacker*, who must be motivated and capable to be successful.
Adversaries range in scope from individuals to organized groups to
nations, and their motivations range from curiosity to crime to
terrorism.
- **Vulnerability:** An unintended aspect of a system, be it design,
implementation, or configuration, that can cause the system to do
something it shouldn't, or fail to do something it should.
- **Attack:** The act of causing harm by exploiting a vulnerability.
- **Countermeasure:** A defense that protects against attacks by
neutralizing either the threat or vulnerability involved.
Countermeasures might attempt to prevent, deter, deflect, mitigate,
or detect attacks, or recover from them.
### Assumptions
Every system is built on assumptions—about timing, kinds of failures,
message delivery, formats of inputs, execution environment, etc. *Buffer
overflows*, which have long been among the most prevalent
vulnerabilities, involve an assumption about input sizes. Violate that
assumption, and the system does the wrong thing.
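As a concrete illustration, here is a minimal sketch in C (the 16-byte
buffer and the function names are made up for this example). The code
assumes its input fits in the buffer but never records or checks that
assumption:

```c
#include <stdio.h>
#include <string.h>

/* Assumes (but never checks) that name fits in 16 bytes. */
void greet(const char *name) {
    char buf[16];
    strcpy(buf, name);   /* anything longer than 15 chars + NUL overwrites adjacent memory */
    printf("Hello, %s\n", buf);
}

int main(int argc, char **argv) {
    if (argc > 1)
        greet(argv[1]);  /* attacker-controlled input can violate the assumption */
    return 0;
}
```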
**ASSUMPTIONS are VULNERABILITIES.** We can't live with 'em, we can't
live without 'em.
One very important kind of assumption is **trust**. When a system is
*trusted*, we assume the system satisfies a particular security policy.
(It might not, in which case we have vulnerabilities.) When a system is
*trustworthy*, we have evidence that the system really does satisfy a
policy. Much of this class is devoted to techniques for transforming
trusted systems into trustworthy systems.
In theory, vulnerabilities needn't all be found or repaired. Some aren't
exploitable (e.g., a component fails to do bounds checking, but the
component in front of it does). Some aren't exploitable by threats of
concern (e.g., the attack would require too many resources). But in
practice, ignoring vulnerabilities is a risky business. Too many get
passed off as "no one could ever exploit that", only to have a
successful attack perpetrated in the future.
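To make the bounds-checking example concrete, here is a minimal sketch
in C (hypothetical names; it assumes `set_name` is the only path to
`store_name`):

```c
#include <stdio.h>
#include <string.h>

#define MAX_NAME 15  /* buffer holds MAX_NAME chars plus a NUL */

/* Back-end component: does no bounds checking of its own. */
void store_name(char dst[MAX_NAME + 1], const char *name) {
    strcpy(dst, name);            /* vulnerable in isolation */
}

/* Front-end component: enforces the bound before the back end runs. */
int set_name(char dst[MAX_NAME + 1], const char *name) {
    if (strlen(name) > MAX_NAME)
        return -1;                /* oversized input rejected here */
    store_name(dst, name);
    return 0;
}

int main(void) {
    char name[MAX_NAME + 1];
    if (set_name(name, "Alice") == 0)
        printf("stored: %s\n", name);
    return 0;
}
```

The flaw in `store_name` remains unexploitable only so long as every
call reaches it through `set_name`; a later refactoring can silently
break that, which is one reason ignoring such vulnerabilities is risky.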
### Assets
The need for security arises when a system involves *assets* that can be
harmed by threats. Assets can include
- physical objects, such as printed money, and
- intangible objects, such as bank account balances.
Most computer systems deal in one main kind of asset: information. The
machines themselves might be an asset, as well as any peripherals (such
as smartcards), backup drives, etc. Note that people are not usually
considered assets in computer systems, because the system can't directly
protect or control them.
The key characteristic of assets is that they have *value* to
*stakeholders* in the system. An asset has *direct* value if damage
affects the value of the asset itself. An asset has *indirect* value if
damage affects something else—for example, reputation. Printed money has
both direct and indirect value to a bank. A bank robbery harms the bank
both through the loss of revenue (it no longer has money to lend) and
through the loss of reputation (if the robbery is made public).
Assets and stakeholders are intrinsically related. An object isn't an
asset unless some stakeholder values it. Likewise, a principal isn't a
stakeholder unless it values some object in the system. Although an
attacker might value objects, we don't consider a generic "attacker" to
be a stakeholder. However, legitimate stakeholders (e.g., disgruntled
employees) might try to attack the system themselves.
### Approaches
Recognizing that there is no such thing as "perfect security" leads us
to reflect on the approach we take:
- **Prevention.** We can attempt to build systems that are completely
free of vulnerabilities, so that all attacks are prevented. This goal
is laudable, and we'll discuss many techniques in this course for
achieving prevention. We should recognize, however, that we are
unlikely ever to achieve absolute security, and even approaching it is
likely to incur significant expense.
- **Risk Management.** We can attempt to ensure that investment in
security reduces the expected harm from attacks. Given that our
resources are limited, this goal is sensible. We would need to
determine the probabilities that vulnerabilities will be exploited,
the costs of the countermeasures that would control those
vulnerabilities, and the costs of the harm that attacks would inflict;
a back-of-the-envelope sketch of such a calculation appears after this
list. But determining the actual numbers involved is difficult: we
don't have enough information about likelihoods and costs.
- **Deterrence through Accountability.** We can attempt to legally
prosecute the attackers. To do so, we would need to attribute
attacks to the humans who perpetrate them. There are significant
technical and legal challenges to implementing this goal: How do we
attribute attacks? How do we prosecute attacks across legal
boundaries? How do we balance attribution with privacy?
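As promised above, here is a back-of-the-envelope sketch of a
risk-management calculation, in C. Every number in it is made up for
illustration; in practice, estimating these inputs is the hard part:

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical inputs -- estimating these is the hard part in practice. */
    double p_exploit  = 0.10;      /* annual probability the vulnerability is exploited */
    double harm       = 500000.0;  /* cost of the harm from a successful attack ($) */
    double cm_cost    = 20000.0;   /* annual cost of the countermeasure ($) */
    double p_residual = 0.01;      /* exploit probability with the countermeasure in place */

    double loss_without = p_exploit  * harm;            /* $50,000 expected */
    double loss_with    = p_residual * harm + cm_cost;  /* $25,000 expected */

    printf("Expected annual loss without countermeasure: $%.0f\n", loss_without);
    printf("Expected annual loss with countermeasure:    $%.0f\n", loss_with);
    printf("Countermeasure %s the investment.\n",
           loss_with < loss_without ? "is worth" : "is not worth");
    return 0;
}
```

Under these made-up numbers the countermeasure pays for itself; change
the probabilities slightly and it doesn't, which is exactly why the
lack of good estimates makes risk management hard.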
An emerging approach is to treat security similarly to how we treat
public health. We can educate the public about it, disseminate
treatments (such as vaccines), and track outbreaks. This approach to
**Cybersecurity as a Public Good** was articulated by
[[Mulligan and Schneider 2011][ms11]].
[ms11]: http://www.cs.cornell.edu/fbs/publications/publicCybersecDaed.pdf