Lecture Notes: Security Requirements

How can we write security requirements? There isn't yet a good answer to that question. The writing of security requirements is still evolving from an art (understood and mastered by a few) into engineering (which many can be trained to apply). Security requirements engineering has become an increasingly active area of research over the last decade, but no particular methodology has yet achieved dominance.

The following methodology is a lightweight process for security requirements engineering. You will use this methodology in your CS 5431 project.

Preliminaries

It might seem obvious, but it's worth mentioning: before addressing a system's security requirements, we first need to identify what the system is. Recall that security means "a system does what it's supposed to do and nothing more." So we first need to know what the system is supposed to do. Security can't be considered in isolation; we have to take into account the system's specification.

Here, we take the functional requirements of the system as that specification. One of the key characteristics of functional requirements is that they are testable: it is possible to determine whether a system satisfies a functional requirement or not. We'd like the same to be true of security requirements we produce.

Threat analysis is also a prerequisite to addressing security requirements. We need to identify the threats of concern, including their motivation, resources, and capabilities. Again, security can't be considered in isolation; we have to take into account untrustworthy principals.

Assets

The need for security arises when a system involves assets that can be harmed by threats. Most computer systems deal in one main kind of asset: information. The machines themselves might also be an asset, as well as any peripherals (such as smartcards), backup drives, and so on.

The key characteristic of assets is that they have value to stakeholders in the system. An asset has direct value if damage affects the value of the asset itself. An asset has indirect value if damage affects something else—for example, reputation. Printed money has both direct and indirect value to a bank. A bank robbery harms the bank both through the loss of revenue (it no longer has money to lend) and through the loss of reputation (if the robbery is made public).

Assets and stakeholders are intrinsically related. An object isn't an asset unless some stakeholder values it. Likewise, a principal isn't a stakeholder unless it values some object in the system. Although an attacker might value objects, we don't consider a generic "attacker" to be a stakeholder. However, legitimate stakeholders (e.g., disgruntled employees) might try to attack the system themselves.

Harm Analysis

The goal of system security is to protect assets from harm. Harm occurs when an action adversely affects the value of an asset. Theft, for example, harms the value of a bank's cash to the bank; the robber, of course, might later enjoy that value himself. Likewise, erasure of a bank's account balances harms the value of those records to the bank, and to the customers, who will probably begin looking for a new bank. And when a bank loses customers, it loses revenue.

A harm is usually related to one of the three kinds of security concerns: confidentiality, integrity, or availability. In computer systems, harm to confidentiality can occur through disclosure of information. Harm to integrity can occur through modification of information, resources, or outputs. And harm to availability can occur through deprivation of input or output. In physical systems, analogous kinds of disclosure, modification, and deprivation can occur.

Harm analysis is the process of identifying possible harms that could occur with a system's assets. Although harm analysis requires some creativity, we can make substantial progress by inventing as many completions as possible of the following sentence:

Performing action on/to/with asset could cause harm.

For example, "stealing money could cause loss of revenue," and "erasing account balances could cause loss of revenue."

Essentially, harm analysis identifies a set of triples that have the form (action, asset, harm). One way to invent such triples is to start with the asset, then brainstorm actions that could damage the confidentiality of the asset. That generates a set of triples. Then brainstorm actions that could damage integrity, and finally availability. For some assets, it might turn out that there isn't a way to damage one of these CIA concerns within the context of the system.
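
Purely as an illustration (nothing in the methodology requires it), the triples can be recorded in a small data structure and brainstormed one CIA concern at a time. The Python sketch below uses hypothetical names and the bank examples from above:

    from dataclasses import dataclass

    @dataclass
    class HarmTriple:
        action: str   # what a threat does, e.g., "stealing"
        asset: str    # what the action targets, e.g., "money"
        harm: str     # the resulting damage, e.g., "loss of revenue"
        concern: str  # "confidentiality", "integrity", or "availability"

    # Brainstorm one CIA concern at a time for each asset.
    triples = [
        HarmTriple("disclosing", "account balances", "loss of reputation", "confidentiality"),
        HarmTriple("erasing", "account balances", "loss of revenue", "integrity"),
        HarmTriple("stealing", "money", "loss of revenue", "availability"),
    ]

    # Each triple is one completion of the template sentence above.
    for t in triples:
        print(f"{t.action.capitalize()} {t.asset} could cause {t.harm}.")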

Security Goals

Given a harm analysis, we can easily produce a set of security goals for a system. A security goal is a statement of the following form:

The system shall prevent/detect action on/to/with asset.

For example, "the system shall prevent theft of money" and "the system shall prevent erasure of account balances." Each goal should relate to confidentiality, integrity, or availability, hence security goals are a kind of security property.

Note that security goals specify what the system should prevent, not how it should accomplish that prevention. Statements like "the system shall use encryption to prevent reading of messages" and "the system shall use authentication to verify user identities" are not security goals. These are implementation details that should not appear until later in the software engineering process.

In reality, not all security goals are feasible to achieve. When designing a vault, we might want to prevent theft of money. But no system can actually do that against a sufficiently motivated, resourceful, and capable threat (as so many heist movies attest). So we need to perform a feasibility analysis of security goals in light of the threats to the system and, if necessary, relax our goals. For example, we might relax the goal of "preventing any theft of items from a vault" to "resisting penetration for 30 minutes." (In the real world, locks are rated by such metrics.) We might also relax the original goal to "detecting theft of items from a vault." (In the real world, security cameras are employed in service of this goal.)

Security Requirements

Security goals guide system development, but they are not precise enough to be requirements. A security goal typically describes what is never to happen in any situation; a good security requirement would describe what is to happen in a particular situation.

We therefore define a security requirement as a constraint on a functional requirement. The constraint must contribute to satisfaction of a security goal. Whereas security goals proscribe broad behaviors of a system, security requirements prescribe specific behaviors of a system. Like a good functional requirement, a good security requirement is testable.

For example, a bank might have a security goal to "prevent loss of revenue through bad checks" and a functional requirement of "allowing people to cash checks." When we consider the combination of those two, we are led to invent security requirements that apply when tellers cash a check for a person, such as:

- When cashing a check, the teller shall verify the identity of the person cashing it.
- When cashing a check, the teller shall verify that the account on which the check is drawn contains sufficient funds.

Both security requirements are specific constraints on the functional requirement, and both contribute to the security goal. (However, they do not completely guarantee that goal. In reality, banks can't feasibly prevent all loss of revenue through bad checks.)

To generate security requirements, consider each functional requirement F and each security goal G. Invent constraints on F that together are sufficient to achieve G. Since any real system has a long list of functional requirements, this process would lead to a very large number of security requirements. It's easy for developers to miss some of those security requirements, so it's not surprising that real systems have vulnerabilities.
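
To see why the list grows so quickly, note that the brainstorming step ranges over every (functional requirement, security goal) pair. A small illustrative Python sketch, with made-up bank examples:

    # Every (functional requirement, security goal) pair is one brainstorming
    # prompt; with dozens of each, the number of prompts grows multiplicatively.
    functional_requirements = [
        "allow people to cash checks",
        "allow people to open accounts",
    ]
    security_goals = [
        "prevent loss of revenue through bad checks",
        "prevent disclosure of account balances",
    ]

    for f in functional_requirements:
        for g in security_goals:
            print(f"What constraints on '{f}' would help achieve '{g}'?")

    print(len(functional_requirements) * len(security_goals), "pairs to consider")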

Security requirements ideally specify what the system should do in specific situations, not how it should be done. However, requirements may contain more detail about the "what" than goals do, because requirements must be testable.

For example, suppose that a system has a functional requirement for a message m containing an integer i to be sent from Bob to Alice, and that a security goal demands that i be learnable only by Alice. We could write a very abstract security requirement saying that "the contents of m cannot be read by anyone other than Alice." However, this requirement isn't testable. A better security requirement would be one of the following:

- Message m shall be sent to Alice over a dedicated channel to which only Alice and Bob have access.
- The integer i shall be encrypted under a key shared only by Alice and Bob, and the resulting ciphertext shall be the contents of m.

Both of these requirements are testable. But neither overcommits to implementation details. For example, the encryption algorithm and the key size have not yet been specified by the second requirement.
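
To make the point concrete, here is one hypothetical implementation, written in Python with the third-party cryptography package, that a tester could check against a requirement like the second one: the requirement fixes only that m's contents are encrypted under a key shared by Alice and Bob alone, while the particular cipher (Fernet here) and key size remain later implementation choices.

    # Hypothetical sketch: Bob encrypts the integer i under a key shared only
    # with Alice.  The requirement itself does not fix the cipher or key size;
    # Fernet is just one choice.  Requires: pip install cryptography
    from cryptography.fernet import Fernet, InvalidToken

    shared_key = Fernet.generate_key()   # known only to Alice and Bob

    # Bob's side: build message m from the integer i.
    i = 42
    m = Fernet(shared_key).encrypt(str(i).encode())

    # Alice's side: recover i.
    assert int(Fernet(shared_key).decrypt(m)) == 42

    # One test for the requirement: without the shared key, i cannot be read.
    try:
        Fernet(Fernet.generate_key()).decrypt(m)
        raise AssertionError("m was readable without the shared key")
    except InvalidToken:
        pass  # as the requirement demands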

Goals vs. Requirements

The following table summarizes the differences noted above between security goals and security requirements.

Goals                                      | Requirements
are broad in scope                         | are narrow in scope
apply to entire system                     | apply to individual functional requirements
state desires                              | state constraints
are not testable                           | are testable
say nothing about design/implementation    | might begin to provide design/implementation details

Iteration

The methodology described above is rarely linear. Inventing security goals and requirements will cause you to invent new functional requirements, which in turn introduce new assets, lead to new security goals, etc. And later stages of design and implementation might cause you to invent new goals and requirements.

Satisfying a confidentiality goal, for example, might cause you to add a security requirement that entails a new authentication mechanism. Design of that mechanism might cause you to add new functionality to the system, such as a password prompt. That new functionality is accompanied by a new asset (passwords) and new security requirements (the password checking routine should not display cleartext passwords, etc.).
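
As an aside on that last requirement: a prompt that avoids displaying cleartext passwords is easy to build and to test. In Python, for instance, the standard library's getpass module reads a password without echoing it:

    # Minimal sketch of a password prompt that does not display what is typed,
    # one way to satisfy the (hypothetical) "no cleartext passwords" requirement.
    import getpass

    password = getpass.getpass("Password: ")  # input is not echoed to the terminal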

You might even discover that it is impossible to craft security requirements that satisfy a security goal for a functional requirement. Then you must decide whether to weaken your goal or change the functional requirement.
