# Logging

When we discussed the Principle of Accountability, we observed three kinds of mechanisms for accountability: authorization, authentication, and audit. We now take a deeper look at **audit**—mechanisms that record and review actions and the principals who take them.

### Audit

Audit has several uses in computer security:

- **Individual accountability:** The maintenance of a log serves as a deterrent against malfeasance.

- **Event reconstruction:** After an attack is detected, an audit can establish what events occurred during the attack and what damage was perpetrated.

- **Problem monitoring:** Real-time auditing can help monitor problems, security or otherwise, as they occur.

There are two main tasks that are necessary for audit:

- **Recording:** logging of events or statistics to provide information about system use and performance. Logging mechanisms should record sufficient information to infer (attempted) violations of security policies.

- **Reviewing:** analysis of log records to detect (attempted) violations of security policies. Review mechanisms should present information about the system in a clear and understandable manner.

In theory, a system could log every single instruction it executes, input it receives, and output it produces. Such a log would contain sufficient information but would be exceedingly difficult to review and might be difficult—if not impossible, in the face of continuous streams of big data—to implement. So the art of determining what to log depends on striking a balance between recording too much information and too little.

### What to log

Logging **states** means taking a snapshot of all the key data of a system, possibly including data in memory, data on disk, and data in transit on the network. With this kind of log, it becomes possible to recover when a system fails because of a power failure, crash, or attack. Taking a consistent snapshot of a distributed system is difficult. (Take CS 5412 to learn more.)

For security purposes, though, it's more common to log **events**. Any action taken by the system or by a user of the system is entered into the log. For example, the Windows security log records events corresponding to account login, access to operating system resources, changes to security policies, and exercise of heightened user privileges. With this kind of log, it becomes possible to review what actions were taken and whether those might be the cause of security policy violations.

But how do we decide which events? We can use what we've learned about software engineering, specifically security requirements engineering, to help here. Recall that a *security requirement* is a constraint on a functional requirement that specifies what should happen in a particular situation to accomplish a security goal. Also, recall that security requirements should be testable. Security requirements typically specify a condition that must be satisfied for an action to be permitted. For example, given the functional requirement "allow people to cash checks" and the security goal "prevent loss of revenue through bad checks", we might invent two security requirements:

- The check must be drawn on the bank that employs the teller, or

- The person cashing the check must be a customer of that bank and first deposit the check in an account.

Both requirements stipulate a testable condition that suffices for the action "check cashing" to be permitted. Such conditions are exactly the kind of information that should be logged for later review, because they identify why the system believed that an action was (or was not) **authorized**.
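To make this concrete, here is a minimal Python sketch of how such a decision might be tested and logged. The types and names (`Check`, `authorization_reason`, and so on) are invented for illustration, not taken from any real system:

```
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(name)s %(message)s")
audit = logging.getLogger("audit")

# Hypothetical record types for the check-cashing example.
@dataclass
class Check:
    id: str
    drawn_on: str    # bank the check is drawn on
    deposited: bool  # whether it was first deposited in an account

@dataclass
class Person:
    id: str
    bank: str        # bank where this person is a customer, if any

@dataclass
class Teller:
    id: str
    bank: str        # bank that employs the teller

def authorization_reason(check, person, teller):
    """Return the security-requirement condition that permits cashing
    the check, or None if neither condition holds."""
    if check.drawn_on == teller.bank:
        return "check drawn on the bank that employs the teller"
    if person.bank == teller.bank and check.deposited:
        return "customer first deposited the check in an account"
    return None

def cash_check(check, person, teller):
    reason = authorization_reason(check, person, teller)
    if reason is None:
        # Log the denial and the principals involved, so a reviewer can
        # later reconstruct why the action was not permitted.
        audit.warning("DENY cash_check person=%s teller=%s check=%s",
                      person.id, teller.id, check.id)
        return False
    # Log the condition that justified the authorization decision.
    audit.info("PERMIT cash_check person=%s teller=%s check=%s reason=%r",
               person.id, teller.id, check.id, reason)
    # ... disburse the funds ...
    return True

# Example: a teller at "CU Bank" cashes a check drawn on that bank.
cash_check(Check("c42", drawn_on="CU Bank", deposited=False),
           Person("alice", bank="Other Bank"),
           Teller("bob", bank="CU Bank"))
```

The point is that the log records not just the outcome but the condition that justified it, which is exactly what a later review needs.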
More generally, any event associated with authorization or authentication is a natural candidate for being logged. In the case of **authentication** events, logging is important because such events identify why a system believed that an action should (or should not) be associated with a principal. If you identify an event worth logging that does not correspond to a security requirement, ask yourself whether you might be missing a requirement.

As an example of authorization and authentication events worth logging, the [Orange Book][orange] stipulates the following minimal logging requirements for a C2 level certification:

> *The TCB shall be able to record the following types of events: use of
> identification and authentication mechanisms, introduction of objects
> into a user's address space (e.g., file open, program initiation),
> deletion of objects, and actions taken by computer operators and
> system administrators and/or system security officers, and other
> security relevant events. For each recorded event, the audit record
> shall identify: date and time of the event, user, type of event, and
> success or failure of the event. For identification/authentication
> events the origin of request (e.g., terminal ID) shall be included in
> the audit record. For events that introduce an object into a user's
> address space and for object deletion events the audit record shall
> include the name of the object. The [system] administrator shall be
> able to selectively audit the actions of any one or more users based
> on individual identity.*

[orange]: http://csrc.nist.gov/publications/history/dod85.pdf

### What not to log

Some information might be necessary to log for the sake of accountability, yet impossible to log for the sake of confidentiality. For example, including plaintext passwords in a log might seem desirable (to review the kinds of passwords that attackers guess during an online attack), but doing so would violate a security policy requiring that passwords never be stored in plaintext (a good policy). Real systems usually choose to preserve confidentiality at the expense of limited review capability, hence do not store plaintext passwords in a log.

In other cases, it might be reasonable to store confidential information in a log but *sanitize* the log before releasing it for review. Sanitization might occur before or after information is written to the log, depending on whether system administrators are trusted with log contents.

- When sanitization occurs **before** information is written to a log, the sanitizer processes an individual entry containing confidential information and attempts to remove that information as the entry is written to the log. Typically this scheme is used to protect users from their own system administrators, who have access to the (otherwise unsanitized) logs. For example, laws might permit administrators to monitor users only when administrators have probable cause to suspect the users of attacking the system or engaging in illegal activities.

- When sanitization occurs **after** information is written to a log, the sanitizer processes an existing log containing confidential information and attempts to remove that information. Then the log might be passed along to some external entity for review. For example, the log might contain filenames that reveal information about a company's proprietary projects; the sanitizer could remove those filenames before the log is passed to a third-party company that specializes in review of logs and detection of corporate espionage. At any time, system administrators (who are presumably trusted with those filenames) can review the unsanitized logs themselves.

A sanitizer might simply delete information from log entries, thus preventing reconstruction of that information in the future. Or the sanitizer might provide a means to recover that information. For example, the sanitizer could replace usernames with pseudonyms, and keep a separate file that maps pseudonyms back to usernames. If that file is stored in plaintext on the same system, administrators will be able to reconstruct the unsanitized log. If administrators are not trusted with log contents, some stronger mechanism (e.g., storage under a public key whose private key is secret shared amongst a group of administrators) is needed to store the pseudonym map.
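As a sketch of the recoverable approach, the following Python fragment replaces usernames with pseudonyms while sanitizing an existing log. The log format, file names, and the `user=` field pattern are assumptions made for illustration:

```
import json
import re
import secrets

# Hypothetical sanitizer: replaces usernames in log entries with
# pseudonyms, keeping a separate map from pseudonyms back to usernames.
pseudonyms = {}  # username -> pseudonym

def pseudonym_for(username):
    if username not in pseudonyms:
        pseudonyms[username] = "user-" + secrets.token_hex(4)
    return pseudonyms[username]

def sanitize_entry(entry):
    # Assumes entries contain fields like "user=alice"; the pattern is
    # an assumption about the log format, not a standard.
    return re.sub(r"user=(\w+)",
                  lambda m: "user=" + pseudonym_for(m.group(1)),
                  entry)

with open("audit.log") as raw, open("audit.sanitized.log", "w") as out:
    for line in raw:
        out.write(sanitize_entry(line))

# The pseudonym map permits later reconstruction. If administrators are
# not trusted with log contents, it should not be stored in plaintext
# (e.g., encrypt it under a public key whose private key is secret
# shared among a group of administrators).
with open("pseudonym-map.json", "w") as f:
    json.dump({p: u for u, p in pseudonyms.items()}, f)
```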
### How to log

Log entries should be unambiguous. The main principle from our discussion of cryptographic protocols is relevant: *Say what you mean.* A good entry contains enough context to determine what the information in the log entry means.

There is no widespread agreement on standard formats for logs—not even on whether logs should be kept in binary or in text format. Unfortunately, this makes correlation between logs difficult. One standard format for logging is [**syslog**][syslog], which originated with the Unix `sendmail` system. Each syslog entry is a plain text string containing (some of) the following fields:

- **facility:** the subsystem producing the entry—e.g., kernel, mail, printer, syslog itself
- **severity:** one of the following numeric codes:
  0. Emergency: system is unusable
  1. Alert: action must be taken immediately
  2. Critical: critical conditions
  3. Error: error conditions
  4. Warning: warning conditions
  5. Notice: normal but significant condition
  6. Informational: informational messages
  7. Debug: debug-level messages
- **timestamp:** the time on the local clock at which the entry is generated
- **hostname:** the name of the machine at which the entry is generated
- **application name:** the name of the device or application that originated the entry
- **process id:** an uninterpreted string, sometimes used for identifying groups of related messages from a particular application (e.g., a transaction)
- **message type:** an uninterpreted string, sometimes used for filtering
- **message:** an unstructured, free-form text string providing information about the event; or a structured key–value map

[syslog]: http://tools.ietf.org/html/rfc5424

Logs generated locally at a host might need to be transferred to *log servers* that aggregate and manage logs on behalf of an organization. Issues of concern here include

- how often log data should be transferred (real-time, every 5 minutes, every hour, etc.),
- what modes of transportation are permissible (TCP, SSL, sneaker-net), and
- how the confidentiality, integrity, and availability of log data should be protected while in transit.
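As a concrete example of shipping entries to a log server, Python's standard library can forward log records to a remote syslog daemon; in this sketch the server name and port are assumptions:

```
import logging
import logging.handlers
import socket

# Send log records to a remote syslog server. TCP gives reliable,
# in-order delivery; syslog traffic is often additionally wrapped in
# TLS to protect confidentiality and integrity in transit.
handler = logging.handlers.SysLogHandler(
    address=("logs.example.com", 514),  # assumed log server and port
    facility=logging.handlers.SysLogHandler.LOG_AUTH,
    socktype=socket.SOCK_STREAM)        # TCP instead of the default UDP

logger = logging.getLogger("auth")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.warning("login failure for user=%s from terminal=%s",
               "alice", "tty3")
```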
Logs typically have some size limit, even if only the free space on disk. What should a system do if the log overflows? There are generally two choices:

- **Halt**, so that no activities will take place while the log is full, or

- **Overwrite** part of the log, so that earlier events are replaced by newer events.

Both choices introduce new availability vulnerabilities. A third choice is to simply stop logging, but this is probably the least acceptable option.

### Tamperproof logs

Audit data must be protected from modification and destruction. Good practices include:

- **Limiting access to log files.** Users should not have any access to most log files unless some level of access is necessary for creating log entries. If so, users should have append-only privileges and no read access if possible. Users should not be able to rename, delete, or perform other file-level operations on log files.

- **Protecting archived log files.** This could include creating MACs for the files, encrypting log files, and providing adequate physical protection for archival media.

There are limits to how tamperproof a log can be made. Once an attacker has gained control over a host, no log on that host is completely safe from being read, modified, or deleted. Standard cryptographic techniques typically fall short, because there is nowhere to store keys that the attacker cannot access through the normal logging facility. It is possible, however, to protect log entries made **before** the attacker compromises the host:

- If write-once media is available (e.g., DVD-R, paper printouts), existing log entries cannot be modified by a *remote* attacker. Not so for an attacker who is physically present.

- If a trustworthy log server is available on the network, each log entry can be replicated on that server as the entry is made locally. Care must be taken that the log server has defenses independent of the local server; otherwise, an attacker who can compromise one will compromise the other.

By a technique called **iterated hashing**, it's also possible to protect logs on a host that has only periodic access to the network, such that attackers who compromise the host cannot read, modify, or insert past log entries. Deletion of some log entries remains possible, though. An *iterated hash* of a value v is the result of successively applying a hash function: H(... H(H(H(v))) ...). [Schneier and Kelsey (1999)][sk99] describe a tamperproof log protocol based on iterated hashing, which we summarize in a highly simplified form, next.

[sk99]: http://www.schneier.com/paper-auditlogs.pdf

Machine M maintains a local log that it periodically syncs over the network to a trustworthy log server. In addition to the log, M maintains an *authentication key* ak that is used to authenticate log entries. To initialize the protocol, M must secretly communicate the initial value of ak to the log server. The key must be randomly generated. To record message m in its log, M does the following:

```
ek = H("encrypt", ak)    // derive a one-time encryption key from ak
x = AuthEnc(m; ek; ak)   // encrypt and authenticate the message
record x in log
ak = H("iterate", ak)    // advance the key, discarding the old ak
```

If an attacker gains control of M, he learns the current authentication key. But because the hash function is one-way, the attacker cannot determine any of the previous authentication keys. Thus he cannot replace any previous entries with a forged entry, or insert a new entry between old entries. Further, since the encryption key for each entry is derived from the authentication key, he cannot determine any of the previous encryption keys, hence cannot read any of the previous entries. The attacker could successfully truncate the log, thus deleting messages. But any messages previously synced to the log server will not be lost. The attacker could not truncate then add new messages—again, because he does not know the previous authentication keys.
The attacker could also successfully add new messages to the log, but there's nothing that can be done to defend against that, since the attacker controls the host.
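To make the simplified protocol concrete, here is a Python sketch. We assume SHA-256 for H, and we stand in for AuthEnc with encrypt-then-MAC using a toy keystream cipher; a real implementation would use a standard authenticated-encryption scheme, and the class and function names here are ours, not the paper's:

```
import hashlib
import hmac
import secrets

def H(label: bytes, key: bytes) -> bytes:
    """The protocol's hash; we assume SHA-256 here."""
    return hashlib.sha256(label + key).digest()

def keystream_xor(data: bytes, ek: bytes) -> bytes:
    """Toy stream cipher for this sketch: XOR with a SHA-256-derived
    keystream. Stands in for the protocol's encryption."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(ek + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

class TamperEvidentLog:
    def __init__(self, ak: bytes):
        self.ak = ak       # current authentication key
        self.entries = []  # the log itself

    def record(self, m: bytes):
        ek = H(b"encrypt", self.ak)       # ek = H("encrypt", ak)
        c = keystream_xor(m, ek)          # encrypt m under ek
        tag = hmac.new(self.ak, c, hashlib.sha256).digest()  # authenticate
        self.entries.append((c, tag))     # record x in log
        self.ak = H(b"iterate", self.ak)  # ak = H("iterate", ak)

# Usage: the log server, which knows the initial ak, can re-derive every
# ek and verify every tag. An attacker who captures only the current ak
# can do neither for earlier entries.
log = TamperEvidentLog(secrets.token_bytes(32))
log.record(b"login failure for user alice")
log.record(b"password changed for user bob")
```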