Lecture notes by Lynette
Revised by Borislav Deianov
Revised by Michael Frei
The presentation here also borrows from Computer Security in the Real World by Butler Lampson, IEEE Computer 37, 6 (June 2004), 37--46.
Today, we begin our discussion of security. What properties are we interested in and what are good strategies in implementing security?
A constraint we place on any security mechanism is that it add a minimum amount of interference to daily life. For example, locks must not be difficult or annoying to use. If they are, it's likely that people will find ways to circumvent the annoyance and thus nullify the security protections the locks offer. It should also be noted that only in rare cases is a security breach the end of the world. Risk management allows recovery from a security problem and decreases the need for complex and annoying locks. For example, rather than installing a complicated locking system for automobiles, we buy auto insurance to help deal with costs that arise in the event of damage or theft.
Externalities also have a role to play. Briefly, an externality occurs when somebody or some agency does something in which the cost implications for the doer are not the same as (usually significantly less than) the cost implications for society. For example, think of companies that pollute the environment. The cost of cleaning pollution is usually great, and until recently there was no corporate penalty for not fixing a pollution problem. In short, an externality exists when it is cheaper to do the wrong thing. This has obvious large implications for security--an insecure subsystem may enable a system wide attack of great consequence.
There are a number of things to observe.

First, note that not all locks are the same. They typically have different keys as well as different strengths. The strength of the lock tends to be chosen according to the value of what is being protected. The environment also influences the type and strength of the locks being used. For example, apartments in Ithaca likely have fewer and weaker locks than apartments in Manhattan.

Second, people pay for the security they believe they need. Security is not monolithic, and there is not one mechanism for everyone. Security is scaled with respect to both the value of the thing being secured and the threat against it. People's security "needs" are usually based on their perception of what's going on around them. If your neighbors are being broken into, then it's likely that you'll buy more security equipment than if not.

Third, the police are central to the picture. The system still works even if locks are completely removed. Locks are only a deterrent; it is essential that there be enforcement and punishment strategies in place. There will undoubtedly be some security breaches no matter how good the locks are, so it is critical that bad guys be found. Locks reduce temptation as well as the police workload.

Finally, security, as we have portrayed it, is holistic: it is only as good as its weakest link. Attackers will look for the weakest link, so it is generally best to expend effort in determining where the weaknesses are and shoring them up. Given limited resources, the best approach is to make all elements equally strong, thus eliminating weakest links.
We now move from an abstract discussion of security in our day-to-day lives to the world of computer security. How can the above discussion be applied in this new context? With regard to computer security, the story is told in terms of three concepts:
Bugs in a software system are vulnerabilities. Since we are not really good at building large systems, it seems clear that any large software system will have many vulnerabilities. While a first strategy for addressing a security problem might be to find and fix each vulnerability, in fact, this is likely to be too costly to be practical. Rather, it is better to first identify threats, and then work on eliminating only those vulnerabilities that those threats would exploit.
As an example, consider the problem of intercepting cellular phone transmissions. This possibility is clearly a result of a design vulnerability--a consequence of the way cellular phone signals are encoded and transmitted. A threat that exploits this vulnerability would be the small number of people who want to do this and have the knowledge and equipment to intercept transmissions. When cell phones were first introduced, the equipment was hard to come by and few people had the knowledge to mount an attack. Thus, the threat was small. Currently, just about anyone can buy the equipment; the threat is huge. The vulnerability has remained the same, but the nature of the threat has changed. Currently, there is a large amount of cellular-phone fraud.
What is the range of threats that network information systems face? The Defense Science Board has issued a report that includes their view of current threats to the national infrastructures. Their list, in order of increasing severity, is as follows:
Another concern when building a secure system is the identification of what exactly we're trying to protect. With network information systems, common properties we worry about are:
How do locks, value, and police make sense in connection with computer security? Locks can be thought of as authorization or access control mechanisms. A "key" or authentication is generally required to open a lock. This "key" can be something the user knows, has, or is. For example, the user might know a password or possess an ID card. Retinal scans and fingerprint verification would take advantage of something that the user is. The police for computer security are the same as the "real world" police. The difference is that in the realm of computers, attacks can be launched remotely, and thus the equivalent of video cameras (to help the police and courts convict the offender) will be necessary. We call this equivalent an audit facility. Noting the chemical symbol, Au, for gold, Butler Lampson has proposed the "Gold Standard of Security" consisting of
Assurance is concerned with the questions: what do you know, and how do you know it? It is not enough for a system to be trustworthy or secure; we must also be able to convince ourselves that this is the case. A system satisfies its functional requirements if it does what it is supposed to do. In other words, inputs produce the correct outputs. A system satisfies its non-functional requirements if, for some specified set of contexts (which depend on the threat), the system does no more and no less than its functional requirements. Not "more" because that might mean revealing secrets; not "less" because that might mean failure to render a service.
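The "not more" clause can be made concrete with a small sketch. In the hypothetical login check below (user names, passwords, and messages are all invented for illustration, and real systems would store password hashes rather than plaintext), the first version is functionally correct but does more than its requirement: its error messages reveal whether a user name exists, which is information an attacker can exploit.

```python
# Hypothetical user table, for illustration only.
# (A real system would store salted password hashes, not plaintext.)
USERS = {"alice": "hunter2", "bob": "swordfish"}

def login_leaky(name, password):
    # Functionally correct, but does "more" than required: the two
    # distinct error messages reveal whether the user name exists.
    if name not in USERS:
        return "no such user"
    if USERS[name] != password:
        return "wrong password"
    return "ok"

def login_careful(name, password):
    # Same behavior on valid credentials, but a single failure
    # message reveals nothing beyond "the attempt failed".
    if USERS.get(name) == password:
        return "ok"
    return "login failed"
```

Both functions meet the functional requirement; only the second also meets the non-functional one.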
There is an annoying tradeoff between functionality and assurance. A system that supports increased functionality is likely to be more complex, hence harder to understand, analyze, or test. A system with lower functionality can be simpler and thus easier to understand, analyze, and test. One must choose a good point in the functionality vs. assurance space.
Two general principles to promote higher assurance are:
What should a lock do? There is a sense in which "less is more". System builders must design systems that support the needs of their users. This means that one should strive to define, design, and build mechanisms that support a wide variety of policies. Mechanisms that support a small number of fixed policies are a bad idea, for they impose restrictions on their users. We speak of separating "policy" from "mechanism" and desire mechanisms that do not exert a policy bias.
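One way to read "separate policy from mechanism" is sketched below: the mechanism is a generic guard that enforces whatever policy function it is handed, with no policy bias of its own, while the policies are interchangeable. All names here are invented for illustration.

```python
def make_guard(policy):
    # Mechanism: a generic guard that consults a pluggable policy.
    # It imposes no policy of its own.
    def check(subject, operation, obj):
        return policy(subject, operation, obj)
    return check

# Two different policies, enforced by the same mechanism.
def owner_only(subject, operation, obj):
    # Only the owner may do anything.
    return subject == obj["owner"]

def read_for_all(subject, operation, obj):
    # Anyone may read; only the owner may do more.
    return operation == "read" or subject == obj["owner"]

doc = {"owner": "alice", "data": "..."}
strict = make_guard(owner_only)
lenient = make_guard(read_for_all)
```

Swapping policies requires no change to the guard itself, which is exactly the flexibility an unbiased mechanism should offer.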
A general principle for security policies is:
Implicit in the Principle of Least Privilege is
Corollaries of the Principle of Least Privilege include:
Separation of Privilege can turn into a user's nightmare, with many different access rights to set or unset. Failsafe Defaults can also be problematic, with users unable to get their work done because they need access to all sorts of objects (tmp files, etc.) that they do not realize are needed.
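Failsafe Defaults can be sketched as a default-deny access list: a right must be granted explicitly, and anything not mentioned is refused. The ACL layout and names below are invented for illustration.

```python
# Access-control list: (subject, right) pairs that are explicitly
# granted. Anything absent from the set is denied -- the failsafe
# default.
acl = {("alice", "read"), ("alice", "write"), ("bob", "read")}

def allowed(subject, right):
    # Default deny: access requires an explicit grant.
    return (subject, right) in acl
```

Note how the usability problem mentioned above shows up immediately: if bob in fact needs write access to some temporary file, nothing works for him until that grant is added.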
Where to Deploy Locks
Every system and every lock is likely to have vulnerabilities. Different locks are not likely to have the same vulnerabilities, though. We thus have two more useful principles.
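One reading of this observation, often called defense in depth, is to layer independent locks so that an attacker must defeat all of them; since the locks have different vulnerabilities, a flaw in one is not enough. A minimal sketch (the two checks and their secrets are invented for illustration):

```python
def knows_password(attempt):
    # Lock 1: something the user knows.
    return attempt == "hunter2"

def has_token(code):
    # Lock 2: something the user has (e.g., a hardware token code).
    return code == "483921"

def admit(password, token_code):
    # Both independent locks must open; defeating either one
    # alone does not get the attacker in.
    return knows_password(password) and has_token(token_code)
```

The value of the layering depends on the locks failing independently: two checks with the same vulnerability add little.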
There are now several possible strategies for implementing a secure system:
Notice that the latter three of the four possible strategies (1)-(4) require being able to distinguish the bad guys from the good guys, and that means some sort of authentication process. Keeping the bad guys out requires having a list of the good guys. Letting the bad guys in but preventing them from doing harm requires not only a list of the good guys but an authorization mechanism as well. This authorization can be done at many levels of the system. For example, read/write/execute privileges in a UNIX system are enforced at the file level. Authorization could also be done at the level of fields in data records. The closer to the application level, the greater the possibility of authorizing precisely what is required; however, done at a higher level, the success of the mechanism may become application dependent. At a lower level, authorization is likely to be simpler, and the less complex such facilities are, the fewer bugs there will be. Thus lower-level authorization can provide higher assurance. This is a classic trade-off: functionality vs. assurance. The fourth option above depends heavily on audit facilities that cannot be subverted by the bad guys. Thus, audit requires authorization.
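The UNIX file-level case can be made concrete. The kernel's decision reduces to selecting the owner, group, or other class and inspecting the corresponding mode bits. The sketch below is a simplification (real kernels also handle the superuser, supplementary groups, and ACLs), using the standard mode-bit constants from Python's stat module.

```python
import stat

def may_read(uid, gid, file_uid, file_gid, mode):
    # Simplified UNIX read-permission check: classify the caller
    # as owner, group, or other (in that order), then test the
    # matching read bit in the file's mode.
    if uid == file_uid:
        return bool(mode & stat.S_IRUSR)   # owner read bit (0o400)
    if gid == file_gid:
        return bool(mode & stat.S_IRGRP)   # group read bit (0o040)
    return bool(mode & stat.S_IROTH)       # other read bit (0o004)

# Example mode 0o640: owner may read/write, group may read,
# others get nothing.
```

The simplicity of this check, a few comparisons and a bit test, is precisely why file-level authorization can be implemented with high assurance.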