# Security Goals and Requirements

How can we write security requirements? There isn't yet a good answer to that question. The writing of security requirements is evolving from an art (understood and mastered by few) into engineering (which many can be trained to apply). Security requirements engineering has become an increasingly active area of research in the last decade, but no particular methodology has yet achieved dominance. The following methodology is a lightweight process for security requirements engineering.

## Preliminaries

It might seem obvious, but it's worth mentioning: before addressing system security requirements, we first need to identify what the system is. Recall that security means "a system does what it's supposed to do and nothing more." We therefore need to know what the system is supposed to do. Security can't be considered in isolation&mdash;we have to take into account the system's specification. Here, we take the functional requirements of the system as that specification. One of the key characteristics of functional requirements is that they are *testable*: it is possible to determine whether a system satisfies a functional requirement or not. We'd like the same to be true of the security requirements we produce.

*Threat analysis* is also a prerequisite to addressing security requirements. We need to identify the threats of concern, including their motivation, resources, and capabilities. Again, security can't be considered in isolation&mdash;we have to take into account untrustworthy principals.

## Harm analysis

The goal of system security is to protect assets from *harm*. Harm occurs when an *action* adversely affects the value of an asset. Theft, for example, harms the value of a bank's cash&mdash;value to the bank, that is; the robber might later enjoy the value of the cash himself.
Likewise, erasure of a bank's account balances harms the value of those records to the bank&mdash;and to the customers, who will probably begin looking for a new bank. And when a bank loses customers, it loses revenue.

A harm is usually related to one of the three kinds of security concerns: confidentiality, integrity, or availability. In computer systems, harm to confidentiality can occur through disclosure of information. Harm to integrity can occur through modification of information, resources, or outputs. And harm to availability can occur through deprivation of input or output. In physical systems, analogous kinds of disclosure, modification, and deprivation can occur.

*Harm analysis* is the process of identifying possible harms that could occur to a system's assets. Although harm analysis requires some creativity, we can make substantial progress by inventing as many completions as possible of the following sentence:

> Performing *action* on/to/with *asset* could cause *harm*.

For example, "stealing money could cause loss of revenue," and "erasing account balances could cause loss of revenue." Essentially, harm analysis identifies a set of triples of the form (action, asset, harm).

One way to invent such triples is to start with an asset, then brainstorm actions that could damage the confidentiality of that asset. That generates a set of triples. Then brainstorm actions that could damage integrity, and finally availability. For some assets, it might turn out that there isn't a way to damage one of these CIA concerns within the context of the system.

## Security goals

Given a harm analysis, we can easily produce a set of *security goals* for a system. A security goal is a statement of the following form:

> The system shall prevent/detect *action* on/to/with *asset*.

For example, "the system shall prevent theft of money" and "the system shall prevent erasure of account balances."
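The step from harm triples to security goals is mechanical enough to sketch in code. The following Python sketch is purely illustrative&mdash;the `Harm` type and the example triples are hypothetical, not part of the methodology itself&mdash;but it shows how each (action, asset, harm) triple yields a goal statement:

```python
from typing import NamedTuple

class Harm(NamedTuple):
    """One finding from harm analysis: performing `action`
    on/to/with `asset` could cause `harm`."""
    action: str
    asset: str
    harm: str

def security_goal(h: Harm, mode: str = "prevent") -> str:
    """Derive a security goal (a "what", not a "how") from a harm triple."""
    assert mode in ("prevent", "detect")
    return f"The system shall {mode} {h.action} of {h.asset}."

# Hypothetical harm analysis for the bank example in the text.
harms = [
    Harm("theft", "money", "loss of revenue"),
    Harm("erasure", "account balances", "loss of revenue"),
]

for h in harms:
    print(security_goal(h))
# prints:
# The system shall prevent theft of money.
# The system shall prevent erasure of account balances.
```

Note that the harm itself drops out of the goal statement: the harm motivates the goal, while the goal names only the action and asset to prevent or detect.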
Each goal should relate to confidentiality, integrity, or availability; hence, security goals are a kind of security property.

Note that security goals specify **what** the system should prevent, not **how** it should accomplish that prevention. Statements like "the system shall use encryption to prevent reading of messages" and "the system shall use authentication to verify user identities" are not good security goals. These are implementation details that should not appear until later in the software engineering process. Similarly, "the system shall resist attacks" is not a good security goal, because it fails to specify what in particular needs prevention.

In reality, not all security goals are feasible to achieve. When designing a vault, we might want to prevent theft of money. But no system can actually do that against a sufficiently motivated, resourceful, and capable threat (as so many heist movies attest). So we need to perform a *feasibility analysis* of security goals in light of the threats to the system and, if necessary, relax our goals. For example, we might relax the goal of "preventing any theft of items from a vault" to "resisting penetration for 30 minutes." (In the real world, locks are rated by such metrics.) We might also relax the original goal to "detecting theft of items from a vault." (In the real world, security cameras are employed in service of this goal.)

## Security requirements

Security goals guide system development, but they are not precise enough to be requirements. A security goal typically describes what is never to happen in **any** situation; a good security requirement describes what is to happen in a **particular** situation. We therefore define a *security requirement* as a constraint on a functional requirement; the constraint must contribute to satisfaction of a security goal. Whereas security goals proscribe broad behaviors of a system, security requirements prescribe specific behaviors of a system.
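The idea of a security requirement as a constraint on a functional requirement can be illustrated in code. The sketch below is hypothetical (the `withdraw` operation and its two constraints are invented for illustration, not drawn from any real bank): the functional requirement is the operation itself, and each security requirement is a testable predicate that must hold before the operation runs, contributing to a goal such as "prevent theft of money."

```python
from dataclasses import dataclass

@dataclass
class Account:
    owner: str
    balance: int  # in cents

# Hypothetical security requirements: each is a testable constraint
# on the functional requirement "allow withdrawals".
def is_owner(account: Account, requester: str, amount: int) -> bool:
    return requester == account.owner

def within_balance(account: Account, requester: str, amount: int) -> bool:
    return 0 < amount <= account.balance

CONSTRAINTS = [is_owner, within_balance]

def withdraw(account: Account, requester: str, amount: int) -> bool:
    """Functional requirement, guarded by security requirements that
    contribute to the goal "prevent theft of money"."""
    if all(check(account, requester, amount) for check in CONSTRAINTS):
        account.balance -= amount
        return True
    return False
```

Each constraint is individually testable, which is exactly the property demanded of security requirements; the broad goal, by contrast, could not be checked by any single predicate.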
Like a good functional requirement, a good security requirement is testable. For example, a bank might have a security goal to "prevent loss of revenue through bad checks" and a functional requirement of "allowing people to cash checks." When we consider the combination of those two, we are led to invent security requirements that apply when tellers cash a check for a person:

- The check must be drawn on the bank that employs the teller, or
- The person cashing the check must be a customer of that bank and must first deposit the check in an account.

Both security requirements are specific constraints on the functional requirement, and both contribute to the security goal. (However, they do not completely guarantee that goal. In reality, banks can't feasibly prevent all loss of revenue through bad checks.)

To generate security requirements, consider each functional requirement F and each security goal G, and invent constraints on F that together are sufficient to achieve G. Since any real system has a long list of functional requirements, this process can lead to a very large number of security requirements. It's easy for developers to miss some of those security requirements, so it's not surprising that real systems have vulnerabilities.

Security requirements ideally specify **what** the system should do in specific situations, not **how** it should be done. However, requirements may contain more detail about the "what" than goals do, because requirements must be testable. For example, suppose that a system has a functional requirement for a message *m* containing integer *i* to be sent from Bob to Alice, and that a security goal demands that *i* be learnable only by Alice. We could write a very abstract security requirement saying that "the contents of *m* cannot be read by anyone other than Alice." However, this requirement isn't testable.
A better security requirement would be one of the following:

- Message *m* is sent over a channel that is accessible only by Alice and Bob.
- Message *m* is encrypted with a key that is shared only between Alice and Bob.

Both of these requirements are testable. But neither overcommits to implementation details. For example, the encryption algorithm and the key size have not yet been specified by the second requirement.

## Goals vs. requirements

The following table summarizes the differences noted above between security goals and security requirements.

<table>
<tr><th>Goals</th><th>Requirements</th></tr>
<tr><td>are broad in scope</td><td>are narrow in scope</td></tr>
<tr><td>apply to entire system</td><td>apply to individual functional requirements</td></tr>
<tr><td>state desires</td><td>state constraints</td></tr>
<tr><td>are not testable</td><td>are testable</td></tr>
<tr><td>say nothing about design/implementation</td><td>might begin to provide design/implementation details</td></tr>
</table>

## Iteration

The methodology described above is rarely linear. Inventing security goals and requirements will cause you to invent new functional requirements, which in turn introduce new assets, lead to new security goals, and so on. Later stages of design and implementation might likewise cause you to invent new goals and requirements. Satisfying a confidentiality goal, for example, might cause you to add a security requirement that entails a new authentication mechanism. Design of that mechanism might cause you to add new functionality to the system, such as a password prompt. That new functionality is accompanied by a new asset (passwords) and new security requirements (the password checking routine should not display cleartext passwords, etc.).

You might even discover that it is impossible to craft security requirements that satisfy a security goal for a functional requirement. Then you must decide whether to weaken your goal or change the functional requirement.
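To make the notion of testability concrete before turning to the exercise, here is how the check-cashing requirements from the bank example could be encoded as an executable predicate. This is a sketch with hypothetical names (`may_cash`, `deposits_first`, and the record types are invented for illustration), not a prescribed design:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Check:
    drawn_on: str  # name of the bank the check is drawn on

@dataclass
class Person:
    bank: Optional[str]  # bank where this person holds an account, if any

def may_cash(teller_bank: str, person: Person, check: Check,
             deposits_first: bool) -> bool:
    """Testable encoding of the two security requirements constraining
    the functional requirement "allow people to cash checks"."""
    drawn_on_our_bank = check.drawn_on == teller_bank
    customer_who_deposits = person.bank == teller_bank and deposits_first
    return drawn_on_our_bank or customer_who_deposits
```

Because the requirement is a predicate over observable facts, each case (our bank's check, a depositing customer, a stranger with a foreign check) can be checked mechanically&mdash;unlike the goal "prevent loss of revenue through bad checks," which admits no such test.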
## Exercise

Here is a description of the **Stork Baby Delivery System**.

* **Functional requirements** (incomplete sketch): The stork baby delivery system allows an autonomous aircraft (a stork) to deliver a payload (a baby) to a geographic location (given in some coordinate system, such as latitude and longitude) prespecified by some higher authority (herein called providence). Prior to take-off, providence programs a stork with the geographic location describing where the baby should be delivered. Throughout the mission, the stork transmits back to providence a video of the landscape (labeled with geographic location coordinates) that the stork flies over. While a stork is in flight, providence may issue commands to that stork to change the location for the delivery, alter the path being followed to that location, or abort the mission.

* **Threat analysis** (also incomplete): The adversary desires to prevent baby deliveries. The adversary has access to radio equipment that transmits and receives on the same frequencies that providence uses for communication with a stork. The adversary also controls weapons systems that can destroy a stork in flight.

Carry out the methodology described in the notes above&mdash;harm analysis, security goals, feasibility analysis, and security requirements&mdash;for the stork baby delivery system. Don't be too worried if the requirements part is difficult right now, because the rest of the course will be introducing mechanisms that can be employed in requirements. For now, it's more important to focus on the goals.