Mandatory Access Control

Do not trust the horse, Trojans! Whatever it is, I fear the Greeks, even bringing gifts. —Virgil, Aeneid, Book II

A mandatory access control (MAC) policy is a means of assigning access rights based on regulations by a central authority. This class of policies includes examples from both industry and government. The philosophy underlying these policies is that information belongs to an organization (rather than individual members of it), and it is that organization which should control the security policy.

MAC Example 1. Multi-level security (MLS)

In national security and military environments, documents are labeled according to their sensitivity levels. In the US, these range from Unclassified (anyone can see this) to Confidential to Secret and finally (we believe) to Top Secret; other countries use similar classifications. These levels correspond to the risk associated with release of the information.

But it is not sufficient to use only sensitivity levels to classify objects if one wants to comply with the Need to Know principle: access to information should only be granted if it is necessary to perform one's duties. (Need to Know is an instance of Least Privilege.) Compartments are used to handle this decomposition of information. Every object is associated with a set of compartments (e.g. crypto, nuclear, biological, reconnaissance, etc.). An object associated with {crypto, nuclear} may be accessed only by subjects who need to know about both cryptography and nuclear weapons. An object associated with the empty set {} simply doesn't pertain to any need-to-know compartments.

A label is a pair of a sensitivity level and a set of compartments. A document might have the label (Top Secret, {crypto,nuclear}) if it contained extremely sensitive information regarding cryptography and nuclear weapons. In practice, each paragraph in a document is assigned a set of compartments and a sensitivity. The classification of the entire document would then be the most restrictive classification given to a paragraph in that document.
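To make the "most restrictive" rule concrete, here is a minimal Python sketch, assuming labels are represented as a pair of a level name and a set of compartment names. It computes the least upper bound of the paragraph labels (highest level, union of compartments), which agrees with the most restrictive paragraph label whenever the paragraph labels are comparable.

```python
# A minimal sketch: deriving a document's label from its paragraphs' labels.
LEVELS = ["Unclassified", "Confidential", "Secret", "Top Secret"]
RANK = {level: i for i, level in enumerate(LEVELS)}

def document_label(paragraph_labels):
    """Highest sensitivity level and union of all compartments."""
    level = max((lvl for lvl, _ in paragraph_labels), key=RANK.get)
    compartments = set().union(*(comps for _, comps in paragraph_labels))
    return (level, compartments)

paragraphs = [("Secret", {"crypto"}), ("Top Secret", {"nuclear"}), ("Unclassified", set())]
print(document_label(paragraphs))  # ('Top Secret', ...) with both compartments
```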

Users are also labeled according to their security clearance. A user's clearance, just like a document's label, is a pair of a sensitivity level and a set of compartments.

Typical DAC mechanisms, like access control lists and capabilities, aren't adequate to enforce confidentiality (or integrity) in the MLS setting. Consider the following example.

Leakage through Trojan Horse. A subject G is cleared at (Top Secret, {}) because it runs on behalf of an army general. G has access to object BP containing battle plans. BP is labeled (Top Secret, {}). An attacker A, who does not have access to BP, creates a Trojan Horse program T that, if executed, does the following:

  1. creates a new object O and sets O's DAC permissions (e.g., its access control list) so that A may read it, then
  2. copies the contents of BP into O.

Obviously, it does the attacker no good to run T himself, since A cannot read BP. But suppose T also has other functionality: it's a word processor, or a game (Minesweeper?), or some other application that G might be interested in running. Then A might be able to trick G into running T. When that happens, A will get access to the contents of BP through O. DAC can't stop that.

How can we prevent this kind of attack from violating MLS? We need a different kind of mechanism than DAC. One famous solution comes from Bell and LaPadula (1973). They gave a formal, mathematical model of multi-level security. This model enforces the BLP policy: Information cannot leak to subjects who are not cleared for the information.

Given two labels L1 = (S1, C1) and L2 = (S2, C2), we write L1 ≤ L2, meaning that L1 is no more restrictive than L2, when both of the following hold:

  1. S1 ≤ S2 in the ordering of sensitivity levels, and
  2. C1 ⊆ C2.

Notice that ≤ is a partial order: it is possible to have two labels that are incomparable (e.g. (secret, {crypto}) vs. (top secret, {nuclear})) according to ≤.
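A minimal Python sketch of the ≤ check, under the same assumed representation of a label as a (level, compartment set) pair; the two labels from the example above come out incomparable:

```python
# A sketch of the "no more restrictive than" relation on labels.
LEVELS = ["Unclassified", "Confidential", "Secret", "Top Secret"]
RANK = {level: i for i, level in enumerate(LEVELS)}

def leq(l1, l2):
    """True iff (s1, c1) <= (s2, c2): lower-or-equal level and a subset of compartments."""
    (s1, c1), (s2, c2) = l1, l2
    return RANK[s1] <= RANK[s2] and c1 <= c2  # <= on sets is the subset test

a = ("Secret", {"crypto"})
b = ("Top Secret", {"nuclear"})
print(leq(a, b), leq(b, a))  # False False: a and b are incomparable
```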

Let L(X) denote the label of an entity X, where an entity is either a subject or an object. The BLP security conditions are:

  1. A subject S may read an object O only if L(O) ≤ L(S).
  2. A subject S may write an object O only if L(S) ≤ L(O).

Do the BLP security conditions enforce the BLP policy? Yes. First, note that a subject can never directly read an object for which it is not cleared; the first condition guarantees this. Second, a subject must never be able to learn information about some highly-labeled object O by reading another, lower-labeled object O'. Note that this is possible only if some other subject S first reads O and then writes O'. By the two conditions, a read followed by a write by S entails L(O) ≤ L(S) ≤ L(O'). But then the label of O is actually no more restrictive than that of O', so no information has leaked to a subject not cleared for it.
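As a minimal sketch of how a reference monitor might mechanize the two conditions (again assuming the (level, compartment set) representation of labels), note that the write in the Trojan Horse example is denied by the second condition:

```python
# A sketch of a reference monitor enforcing the BLP security conditions.
LEVELS = ["Unclassified", "Confidential", "Secret", "Top Secret"]
RANK = {level: i for i, level in enumerate(LEVELS)}

def leq(l1, l2):
    (s1, c1), (s2, c2) = l1, l2
    return RANK[s1] <= RANK[s2] and c1 <= c2

def may_read(subject_label, object_label):
    # Condition 1 ("no read up"): L(O) <= L(S)
    return leq(object_label, subject_label)

def may_write(subject_label, object_label):
    # Condition 2 ("no write down"): L(S) <= L(O)
    return leq(subject_label, object_label)

# The Trojan Horse example: T runs with G's label and tries to copy BP
# into the low object O that the attacker can read.
G  = ("Top Secret", set())    # clearance of the subject running T
BP = ("Top Secret", set())    # the battle plans
O  = ("Unclassified", set())  # the attacker-readable object

print(may_read(G, BP))   # True: G is cleared to read BP
print(may_write(G, O))   # False: the write down into O is denied
```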

The above was considered a significant result when it was first proved. But there are several well-known problems with the BLP formulation of MLS.

Some real-world systems, including SELinux and TrustedBSD, combine MAC and DAC policies. In such systems, an operation is allowed only if both the MAC policy and the DAC policy permit it. RBAC and groups are also employed alongside MAC.
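Schematically, the combined decision is just the conjunction of the two checks. The following toy Python sketch is only illustrative; the policies and names are made up, not SELinux or TrustedBSD code:

```python
# A sketch of combining MAC and DAC: allow only if both policies permit.
def permitted(subject, op, obj, mac_allows, dac_allows):
    return mac_allows(subject, op, obj) and dac_allows(subject, op, obj)

# Toy policies: MAC compares numeric clearances and labels; DAC consults an ACL.
clearance = {"alice": 3, "bob": 1}
label     = {"plans.doc": 3}
acl       = {("alice", "read", "plans.doc"), ("bob", "read", "plans.doc")}

mac = lambda s, op, o: clearance[s] >= label[o]
dac = lambda s, op, o: (s, op, o) in acl

print(permitted("alice", "read", "plans.doc", mac, dac))  # True
print(permitted("bob", "read", "plans.doc", mac, dac))    # False: MAC denies
```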

MAC Example 2. Chinese Wall

MLS is appropriate for national security confidentiality policies, and it is sometimes appropriate for business confidentiality policies. Consider a microprocessor company's plans for its next-generation chip. The company might consider these plans Top Secret and desire an access control mechanism that can prevent leakage of this sensitive information.

Other business confidentiality policies do not exhibit such close correspondence to MLS. Consider an investment bank. It employs consultants who both advise and analyze companies. When advising, such consultants learn secret information about a company's finances that should not be shared with the public. The consultant could exploit this insider information while performing analysis, to profit either himself or other clients. Such abuse is prohibited by law.

Brewer and Nash (1989) developed a MAC policy for this scenario, calling it Chinese Wall by analogy to the Great Wall of China. The intuition is that an unbreachable wall is erected between different parts of the same company; no information may pass over or through the wall. In the Chinese Wall policy, we (as usual) have objects, subjects, and users. However, objects are now grouped into company datasets (CDs). For example, an object might be a file, and a company dataset would then be all of the files related to a single company. Company datasets are themselves grouped into conflict of interest classes (COIs). For example, one COI might be the set of all companies in the banking industry, and another COI might be all the companies in the oil industry.

The Brewer and Nash security conditions for the Chinese Wall policy are as follows. Note that these conditions require tracking the set of read objects for each user and subject.

  1. A subject S may read object O only if S has never read any object O' such that:

    a. COI(O) = COI(O'), and
    b. CD(O) ≠ CD(O').
  2. A subject S may write object O only if:

    a. S has never read an object O' such that CD(O) ≠ CD(O').

The first condition guarantees that a single user never breaches the wall by reading information from two different CDs within the same COI.

The second condition guarantees that two or more subjects never cooperatively breach the wall by performing a series of read and write operations. How would a cooperative breach work?

Cooperative breach: Suppose S1 has read from CD1, and S2 has read from CD2, where CD1 and CD2 are both in COI1. Then:

  1. S1 reads information from an object in CD1.
  2. S1 writes that information to object O in CD3 in COI2.
  3. S2 reads that information from O.

S2 now has read information about both CD1 and CD2, which violates the Chinese Wall policy.

Condition 2a prevents the write operation in step 2 above: because S1 has read an object in CD1, it may write only to objects in CD1, so the write into CD3 is forbidden. More generally, once a subject has read two objects from different CDs, that subject may never write any object at all.
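A minimal Python sketch of the two conditions, under the simplifying assumption that each subject's read history is tracked in an in-memory set; the write from step 2 of the cooperative breach is denied, and condition 1 blocks a direct breach as well:

```python
# A sketch of Chinese Wall checks with explicit read histories.
from collections import defaultdict

COI = {"o1": "banking", "o2": "banking", "o3": "oil"}    # conflict of interest class
CD  = {"o1": "BankA",   "o2": "BankB",   "o3": "OilCo"}  # company dataset

reads = defaultdict(set)  # objects each subject has read so far

def may_read(s, o):
    # Condition 1: never read two different CDs within the same COI.
    return all(not (COI[o] == COI[p] and CD[o] != CD[p]) for p in reads[s])

def may_write(s, o):
    # Condition 2a: may write O only if every object read so far lies in CD(O).
    return all(CD[o] == CD[p] for p in reads[s])

def read(s, o):
    if may_read(s, o):
        reads[s].add(o)

read("S1", "o1")              # S1 reads from CD1 (BankA, in the banking COI)
read("S2", "o2")              # S2 reads from CD2 (BankB, in the banking COI)
print(may_write("S1", "o3"))  # False: 2a stops S1 writing into CD3 (OilCo)
print(may_read("S2", "o1"))   # False: condition 1 stops S2 reading BankA too
```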

There's still a problem: all of this assumes the user can't store information outside the system (e.g., in their own memory). If users can, even condition 2a doesn't suffice. So this policy really isn't used to constrain the activity of humans so much as the activity of potentially malicious programs.

MAC Example 3. Clark-Wilson

The policies we have examined so far have been primarily concerned with maintaining secrecy of data. We are also interested in policies that ensure integrity of data. For instance, a bank is probably much more interested in ensuring that customers not be able to change account balances than it is in ensuring that account balances be kept secret. Although the latter is desirable, unauthorized data modification can usually cause much more harm to a business than inadvertent disclosure of information.

In 1987, Clark and Wilson examined how businesses manage the integrity of their data in the "real world" and how such management might be incorporated into a computer system. In the commercial world, we need security policies that maintain the integrity of data. In this environment, illegal data modification occurs due to fraud and error, and there are two classical techniques to prevent both.

The first technique is the principle of Well-formed Transactions. Users may not manipulate data arbitrarily, but only in constrained ways that preserve or establish its integrity. An example of this is double-entry bookkeeping, in which every transaction is recorded twice (as a debit in one account and a credit in another), so that the books must always balance. Another example is the use of logs: rather than erasing a mistake, the sequence of actions that reverses it is performed and recorded in the log, so a record of everything that has occurred is maintained. Using well-formed transactions makes it more difficult for someone to maliciously or inadvertently change data.
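As a small illustrative sketch (with a made-up post/reverse interface), here is a ledger whose only update operations are well-formed: every entry pairs a debit with a matching credit, and mistakes are corrected by appending a reversing entry rather than by editing history.

```python
# A sketch of well-formed transactions over an append-only ledger.
ledger = []  # append-only; entries are never edited or erased

def post(description, debit_account, credit_account, amount):
    """The only way to change the books: one debit with a matching credit."""
    if amount <= 0:
        raise ValueError("amounts must be positive")
    ledger.append({"desc": description, "debit": debit_account,
                   "credit": credit_account, "amount": amount})

def reverse(entry_index, reason):
    """Correct a mistake by posting the opposite entry, not by erasing it."""
    e = ledger[entry_index]
    post("REVERSAL: " + reason, e["credit"], e["debit"], e["amount"])

post("office supplies", "expenses", "cash", 120)
reverse(0, "wrong amount")
post("office supplies", "expenses", "cash", 210)
# The mistake and its correction both remain visible in `ledger`.
```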

Another technique to prevent fraud is the principle of Separation of Duty. Here, transactions are separated into subparts that must be done by independent parties. This works to maintain integrity as long as there is no collusion between agents working on different subparts. A typical transaction might look as follows:

  1. A purchasing clerk creates an order for a supply, sending copies to the supplier and to the receiving department.
  2. When the goods arrive, a receiving clerk checks the delivery against the order and signs a delivery document.
  3. The supplier sends an invoice, and an accounting clerk authorizes payment only if the order, the delivery document, and the invoice all agree.

In this scenario, no one person can cause a problem that will go unnoticed. The separation of duty rule being employed here is: A person who can create or certify a well-formed transaction may not execute it.

The question now becomes: How can we employ these commercial principles in computer systems? Clark and Wilson gave a set of rules for doing so. They postulate transformation procedures (TPs), which are programs that implement well-formed transactions. We summarize the Clark-Wilson rules as:

  1. All subjects must be authenticated.
  2. All TPs (and the operations they perform) must be logged—i.e., must be auditable.
  3. All TPs must be approved by a central authority.
  4. No data may be changed except by a TP.
  5. All subjects must be cleared to perform particular TPs by a central authority.

Note how these rules exemplify the gold standard: they address authentication, audit, and (the final three rules) authorization.
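As a schematic Python sketch of how the five rules might surface in a system (the names here are hypothetical, and a real system would need genuine authentication and persistent audit storage): data is changed only by running an approved TP, the caller must be authenticated and cleared for that TP by the central authority, and every invocation is logged.

```python
# A sketch of Clark-Wilson-style mediation: all changes go through TPs.
approved_tps  = {}                           # rule 3: TPs approved by a central authority
clearances    = {("alice", "post_payment")}  # rule 5: which users may run which TPs
authenticated = {"alice"}                    # rule 1: authenticated subjects
audit_log     = []                           # rule 2: every TP invocation is logged
accounts      = {"acme": 1000}               # the protected data

def run_tp(user, tp_name, *args):
    if user not in authenticated:
        raise PermissionError("unauthenticated user")           # rule 1
    if tp_name not in approved_tps:
        raise PermissionError("not an approved TP")             # rules 3 and 4
    if (user, tp_name) not in clearances:
        raise PermissionError("user not cleared for this TP")   # rule 5
    audit_log.append((user, tp_name, args))                     # rule 2
    return approved_tps[tp_name](*args)

def post_payment(account, amount):
    accounts[account] -= amount   # rule 4: data changed only from inside a TP

approved_tps["post_payment"] = post_payment
run_tp("alice", "post_payment", "acme", 250)
print(accounts, audit_log)
```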

Two other features of the Clark-Wilson model are: