# A6

**Deadline:** Wednesday, 05/10/17, 11:59 pm

*This assignment may be done as individuals or with one partner.*

### Problem 1 (6 pts)

In the Unix file system access control model, only the owner of a file (or `root`) can modify the ACL for that file. We can think of ACL modification as a right that is granted only to the owner. A CS 5430 student has an idea: it should be possible to authorize principals **other** than the owner to modify ACLs.

Describe how to implement this idea using ACLs. Discuss how to grant and revoke ACL-modification rights.

What to submit: a file named `acl.pdf` with your answers to the above question.

### Problem 2 (10 pts)

Sometimes individual data are less sensitive than their aggregate. For example:

- The budgets of individual departments of a company may not reveal much information. But collectively, they reveal where the company is concentrating its resources, and thus telegraph its business strategy.

- In the movie Mission: Impossible (1996), the recovery of a NOC (non-official cover) list is a focus of Agent Ethan Hunt. One half of the list contains the code names of secret agents, and the other half contains the agents' real names. Each half individually reveals sensitive information, and their combination reveals even more information.

Aggregation is particularly relevant in the context of databases. For the purpose of this problem, suppose that a database comprises a number of *datasets*. (A dataset might be a table or a view.) Further, suppose that each dataset is assigned a sensitivity label such as Unclassified, Secret, or Top Secret. (We ignore compartments in this problem.) Then it might be the case that datasets A and B are both Unclassified, but that their aggregation is Secret. To model this, let the function L(R), where R is a set of datasets&mdash;for example, R = {A,B}&mdash;denote the sensitivity of the aggregation of all the datasets in R.
As healthiness conditions on L, we require that:

- For all A, L({A}) = sensitivity of A.
- If R &sube; R' then L(R) &le; L(R').

Our goal in this problem is to develop a MAC model for this scenario. Suppose that an *object* is a document containing information derived from the database&mdash;e.g., the result of queries on datasets. A *subject*, as usual, is a process executing on behalf of a user. An *entity* is either a subject or an object.

1. Construct your own real-world example, using the database model above, of aggregate data that are more sensitive than their constituents. (However, the particular examples above and examples from lectures are off-limits. Be original. If you need inspiration, begin by supposing that one of the datasets is a set of photographs.) Your example should include at least three datasets. Identify what L(R) is for each possible subset R of your datasets.

2. Suppose that each object (and subject) is labelled with its sensitivity (or clearance). We could then attempt to employ the Bell and LaPadula security conditions ("no read up, no write down"). However, we claim that these conditions are insufficient to guarantee the following policy:

   ```
   P1: An object never contains information whose sensitivity is higher than the object's label.
   ```

   Using your example database from part 1, prove this claim by exhibiting a series of read and write operations that effect such an information flow. You may freely invent entities and their labels.

3. Instead of sensitivity, suppose that each entity is labelled with a set of datasets. Give new conditions for reading and writing. Your conditions should guarantee the following policy:

   ```
   P2: If X is labelled with R, then the information in datasets R should be allowed to flow to X, and information from datasets other than those in R should not be allowed to flow to X.
   ```

   Your conditions should be general&mdash;not specific to the particular example you gave in part (1).
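For intuition only (this is not part of any required answer), the healthiness conditions on L can be sketched in a few lines of Python. The dataset names, sensitivities, and the `monotone` helper below are all invented for illustration:

```python
# Sketch: modeling the aggregation-sensitivity function L(R) and checking
# its second healthiness condition. All names here are hypothetical.
UNCLASSIFIED, SECRET, TOP_SECRET = 0, 1, 2  # ordered so <= compares sensitivity

# A hypothetical L: maps sets of dataset names to the sensitivity of their
# aggregate. A and B are each Unclassified, but together they are Secret.
L = {
    frozenset(): UNCLASSIFIED,
    frozenset({"A"}): UNCLASSIFIED,
    frozenset({"B"}): UNCLASSIFIED,
    frozenset({"A", "B"}): SECRET,  # the aggregate is more sensitive
}

def monotone(L):
    """Check: R subset of R' implies L(R) <= L(R')."""
    return all(L[r] <= L[rp]
               for r in L for rp in L
               if r <= rp)  # frozenset <= is the subset relation

print(monotone(L))  # True for the table above
```

Your example in part 1 should similarly assign an L(R) to every subset R of your (at least three) datasets, and that assignment must satisfy both healthiness conditions.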
What to submit: a file named `mac.pdf` with your answers to the above questions.

### Problem 3 (20 pts)

*based on [Schneider, chapter 7, problem 19]*

You have been engaged as a security consultant by Yangtze,\* a new company providing cloud storage. Yangtze's new system is named Remote Repository, or R2 for short. With R2, Yangtze's engineers seek to build an ultra-high-performance cloud storage system. They've solved most of the problems, but they need your help with the access control subsystem.

(\*The Yangtze is the next-longest river in the world after the Amazon.)

Yangtze built a prototype of R2 that uses access control lists. They've encountered a serious problem, though: every request that a client makes to read from or write to an object in storage has to be authenticated, the client has to be mapped to a subject, and the subject's entry in the ACL for the object has to be consulted. All that work is slowing down the system, keeping it from achieving Yangtze's performance goals.

Luckily, you studied capabilities in CS 5430. You know that with an access control subsystem based on capabilities, the storage system would need to do very little work, because the client would simply present the capability along with its request. The storage system would need only verify that the capability permits the request. Also, Yangtze is excited about the possibility of subjects delegating access rights without ever having to contact R2 at all, because this would further enhance performance.

But there is one big problem: you've read about how to implement capabilities with asymmetric cryptography, digital signatures in particular, but that kind of crypto is too slow for use in R2. You're going to have to find a way to implement capabilities with symmetric cryptography.

So far, you've invented the following architecture for the system:

<img src="R2_architecture.jpg" alt="R2 architecture" width="50%"/>

- The **client node** is used to access R2.
- The **security node** authenticates clients and issues capabilities.
- The **storage node** verifies capabilities when they are used to access objects.

You've already taken care of the authentication subsystem&mdash;it doesn't play much, if any, role in the work you're doing now. Furthermore, you've already arranged that the security node and storage node can share an arbitrary number of symmetric keys&mdash;you don't need to concern yourself with how to accomplish that key distribution. However, generating and distributing keys is somewhat expensive, so Yangtze insists that you keep the number of keys used to a minimum.

Finally, you can assume that all communication channels between client nodes and server (i.e., security and storage) nodes are secured with SSL in unilateral authentication mode: the client authenticates the identity of the server, all communication is encrypted to protect confidentiality, and replay of messages sent over the SSL channel is detected.

One more thing: Yangtze has provided you with an implementation of *globally unique identifiers* (GUIDs) for objects, so that every object in the system has its own unique 128-bit identifier.

Your remaining work is to figure out how to handle the following concerns:

- How R2 will **grant access** to clients by issuing capabilities when they are requested.
- How R2 will **determine access** by deciding whether a subject may read or write an object.
- How R2 will enable **delegation of access** between subjects.
- How R2 will enable **revocation of access** to objects.

Taking into account all the constraints and goals above, you now need to produce a design for R2's access control subsystem. You will want to carefully specify what capabilities are: what fields they contain, how to interpret those fields, etc. You'll also want to explain in detail how each of the above concerns will be implemented in R2.
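As a starting point only&mdash;not a prescribed design&mdash;one classical way to build capabilities from symmetric cryptography is to have the issuer compute a MAC over the capability's fields under a key shared with the verifier. The sketch below illustrates that idea; the key arrangement, field choices, and helper names (`make_capability`, `verify`) are all hypothetical, and it deliberately does not address delegation or revocation, which your design must:

```python
# Sketch: an HMAC-based capability. The security node (issuer) and storage
# node (verifier) share key K; a client cannot forge or alter a capability
# without K. This is illustration, not the required R2 design.
import hmac, hashlib, os

K = os.urandom(32)  # symmetric key shared by security node and storage node

def make_capability(guid: bytes, rights: str) -> tuple:
    """Security node: issue a capability for an object GUID and rights string."""
    tag = hmac.new(K, guid + rights.encode(), hashlib.sha256).digest()
    return (guid, rights, tag)

def verify(cap: tuple, guid: bytes, op: str) -> bool:
    """Storage node: check the MAC, the object, and the requested operation."""
    cap_guid, rights, tag = cap
    expected = hmac.new(K, cap_guid + rights.encode(), hashlib.sha256).digest()
    return (hmac.compare_digest(tag, expected)
            and cap_guid == guid
            and op in rights)

obj = os.urandom(16)          # a 128-bit GUID
cap = make_capability(obj, "rw")
print(verify(cap, obj, "r"))  # True
```

Your full design still has to decide how many keys are needed and why, how capabilities are transmitted and delegated, and how revocation works&mdash;none of which this sketch answers.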
If there are any cryptographic protocols involved, you need to write those down using proper notation, and explain each step. Finally, explain why you introduce each symmetric key that is used in your design, and explain why you've used the minimum number of keys necessary.

What to submit: a file named `caps.pdf` with your answers to the above questions.

### Problem 4 (12 pts)

In class, we studied information flow control for confidentiality. We saw a simple lattice with two labels, L and H, which were interpreted as policies constraining the flow of information: data tagged L were public, whereas data tagged H were secret.

In this problem, we examine information flow control for integrity. So instead of secrecy, we turn our attention to the *trustedness* of data: some data are trusted, whereas some data are untrusted. For brevity, let's give those two policies names:

* **P1:** trusted
* **P2:** untrusted

As in class, we use a lattice to model information flow:

* Let &Lambda; be the set {L,H} of labels.
* Let &#8849; be a relation on labels that characterizes when information may flow, where L &#8849; L; L &#8849; H; and H &#8849; H.
* Let &#8852; be an operation on labels that characterizes the label that results when data are combined, where L &#8852; L = L, L &#8852; H = H, and H &#8852; H = H.

But we now want to use labels L and H to represent integrity policies, not confidentiality policies.

1. Which label, L or H, represents policy P1, and which represents policy P2? Explain why your answer is in accordance with the fact that L &#8849; H, i.e., that L may flow to H. Also explain why your answer is in accordance with the fact that L &#8852; H = H, i.e., that the combination of L and H should be H.

2.
Here is a definition of noninterference for integrity:

\\[\forall m_1, m_2 \mathrel{.} m_1 =_L m_2 \Rightarrow C(m_1) =_L C(m_2).\\]

The intended intuition behind that definition is that changes to untrusted variables should not cause changes to trusted variables. You'll notice that it is, in fact, syntactically the same definition of noninterference that we gave for confidentiality.

The type system we discussed in lecture soundly but incompletely enforces that definition of noninterference for integrity&mdash;that is, the type system rejects all insecure programs but also rejects some secure programs. Give two example programs that demonstrate this incompleteness. The first example should be an assignment statement. The second example should be an `if` statement whose guard contains at least one variable. For both examples, provide a mapping &Gamma; from variables to labels. Also, if your examples include any constants, state whether the tag on those constants is H or L.

3. With confidentiality, some functions could be treated as changing the secrecy of data&mdash;for example, encrypting a secret plaintext could be treated as producing a non-secret ciphertext. Give an example of a function that could be treated as increasing the integrity of data. And give an example of a function that decreases the integrity of data.

What to submit: a file named `iflow.pdf` with your answers to the above questions. You do not need to provide fancy type-setting.

### Submission

If you work with a partner, first form a group on CMS; submit as that group, rather than submitting the same solution independently.

Submit the files specified above to [CMS][cms].

[cms]: https://cms.csuglab.cornell.edu/

### Evaluation

You will be evaluated on the quality of your solutions and on your adherence to the submission stipulations above.
We'll use the following criteria in evaluating quality:

- *Validity:* do you present a logical, lucid, coherent, clearly focused, well-structured, and appropriately detailed argument?
- *Consistency:* do you employ concepts, principles, and terminology as they are used in this course?
- *Evidence:* do you adequately support your conclusions?
- *Writing:* do you use proper mechanics, grammar, and style?