# Review
Logging is pointless unless someone *reviews* the log. Auditors might
review logs manually or with automated tools.
### Manual review
In manual review, a human analyst uses tools to extract information from
a log file. Such tools range from simple `grep` searches to sophisticated
visual analysis engines. Building graphical tools is a
human–computer interaction problem: how can the tool enable the
human to easily formulate hypotheses and deduce consequences?
Some log browsing techniques include:
* flat text files viewed with text editors
* hypertext viewed with web browsers
* database tables viewed with the query tools provided by a DBMS
or interfaces built on top of them
* graphs, in which nodes represent entities and edges represent associations
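As a minimal sketch of the simplest kind of tool mentioned above, a `grep`-style filter can be expressed in a few lines. The log format here is a hypothetical example, not one prescribed by any particular system:

```python
import re

def filter_log(lines, pattern):
    """Return the log entries matching a regular expression (grep-style)."""
    regex = re.compile(pattern)
    return [line for line in lines if regex.search(line)]

# Hypothetical log entries; the format is an assumption for illustration.
log = [
    "2024-01-01 10:00:01 sshd: accepted login for alice",
    "2024-01-01 10:00:05 sshd: failed login for root",
    "2024-01-01 10:00:09 cron: job started",
]
print(filter_log(log, r"failed login"))  # only the failed-login entry
```

A real reviewer would iterate: each filtered view suggests a new hypothesis, which suggests a new pattern to search for.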
*Temporal replay* is a review mechanism that enables the reviewer to
visualize events as they progress over time. The [Visual Audit Browser][vab]
used this technique to animate graphs.
[vab]: http://seclab.cs.ucdavis.edu/papers/CSE-95-11.ps
*Slicing* is a review mechanism that enables the reviewer to see a minimal
set of entries that relate to an entry of interest. The idea comes
from *program slicing*, a technique used in program debugging.
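Slicing can be sketched as a transitive closure over dependencies between log entries. The entries and the dependency map below are invented for illustration; a real system would derive dependencies from the semantics of the logged events:

```python
def slice_log(entries, deps, target):
    """Return the minimal set of entries that the target entry
    (transitively) depends on, analogous to program slicing.

    entries maps entry ids to their text; deps maps an entry id to
    the ids of the entries it directly depends on.
    """
    result = set()
    worklist = [target]
    while worklist:
        e = worklist.pop()
        if e in result:
            continue
        result.add(e)
        worklist.extend(deps.get(e, []))
    return {e: entries[e] for e in sorted(result)}

# Hypothetical log: a login enables a file open, which enables a write;
# the cron entry is unrelated and is excluded from the slice.
entries = {1: "login alice", 2: "open /etc/passwd",
           3: "write /etc/passwd", 4: "cron job"}
deps = {3: [2], 2: [1]}
print(slice_log(entries, deps, 3))  # entries 1, 2, and 3 only
```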
### Automated review
An automated analyzer automatically processes a log and, if configured
to do so, notifies another system component of an event or problem. For
example, the analyzer might cause a system administrator to be paged.
Classic automated review used techniques from artificial intelligence
such as neural networks and expert systems. More recent research
applies machine-learning techniques to build classifiers.
The [Tripwire][tripwire] tool is an automated review engine that enforces
a policy that certain system files shouldn't change. It takes a snapshot
of those files and periodically compares the current files against their
snapshot. If any file changes, it notifies the system administrator.
[Tripwire]: https://github.com/Tripwire/tripwire-open-source
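The snapshot-and-compare idea can be sketched with cryptographic hashes. This is not Tripwire's actual implementation, just a minimal illustration of the technique:

```python
import hashlib
import os
import tempfile
from pathlib import Path

def snapshot(paths):
    """Record a cryptographic hash of each monitored file."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def changed_files(paths, baseline):
    """Return the files whose current hash differs from the baseline snapshot."""
    current = snapshot(paths)
    return [p for p in paths if current[p] != baseline[p]]

# Demonstration with a temporary file standing in for a monitored system file.
fd, name = tempfile.mkstemp()
os.write(fd, b"original contents")
os.close(fd)
baseline = snapshot([name])
with open(name, "wb") as f:
    f.write(b"tampered contents")
print(changed_files([name], baseline))  # the tampered file is reported
os.unlink(name)
```

The periodic comparison in a real deployment would run from a scheduler, and the baseline itself must be protected from tampering.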
### Intrusion detection
Intrusion detection is one part of the larger problem of *intrusion handling*,
which involves the following steps [Northcutt 1998]:
1. **Preparation:** establish procedures and mechanisms
2. **Identification:** detect attacks
3. **Containment:** limit ongoing damage
4. **Eradication:** stop the attack and any similar attacks
5. **Recovery:** restore system to good state
6. **Follow up:** take action against attacker, identify problems, record
lessons learned
An *intrusion detection system* (IDS) typically
participates in steps 2 and 3. An IDS has sensors, an analysis engine,
means for countermeasure deployment, and its own audit log. It responds
in nearly real time to identify and contain attacks. There are three
means commonly used to identify attacks:
* **Signature-based detection:** Characterize known attacks with *signatures*
that identify key properties of the attack. If the signature is ever observed,
signal an intrusion. The Network Flight Recorder
[Ranum et al. 1997] is an example that uses custom-written *packet filters*
to recognize bad sequences of packets. [Bro][bro] uses similar ideas.
* **Specification-based detection:** Characterize good behavior of system
with a specification. If behavior ever departs from the specification,
signal an intrusion. The Distributed Program Execution Monitor [Ko et al. 1997]
is an example based on *traces* of program execution; specifications
are written using a formal grammar.
* **Anomaly-based detection:** Characterize normal behavior of system. If
behavior ever departs from normal, signal an intrusion. Haystack [Smaha 1988]
is an example. It monitors a statistic of the system over time and signals
an intrusion if the statistic's value ever falls outside an adaptively
determined range.
[bro]: https://www.bro.org/
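Anomaly-based detection in the adaptive-range style described above can be sketched as follows. The window size, the threshold of three standard deviations, and the login-rate statistic are illustrative assumptions, not details of Haystack:

```python
from collections import deque
from statistics import mean, stdev

class AnomalyDetector:
    """Flag values that fall outside an adaptively determined range:
    the mean plus or minus k standard deviations over a sliding window."""

    def __init__(self, window=20, k=3.0):
        self.history = deque(maxlen=window)
        self.k = k

    def observe(self, value):
        anomalous = False
        if len(self.history) >= 2:
            m, s = mean(self.history), stdev(self.history)
            anomalous = abs(value - m) > self.k * max(s, 1e-9)
        self.history.append(value)  # range adapts as new values arrive
        return anomalous

det = AnomalyDetector(window=10, k=3.0)
normal = [100, 102, 99, 101, 100, 98, 103, 101]  # e.g., logins per hour
alarms = [v for v in normal + [500] if det.observe(v)]
print(alarms)  # only the outlier 500 trips the detector
```

Because the range adapts, a patient attacker could try to shift "normal" gradually; defending against that is one of the hard problems in anomaly detection.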
All these means must contend with *false positives* (raising an alarm
for non-attacks) and *false negatives* (failing to raise an alarm for
an attack). Trading off between these is challenging.
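One reason the tradeoff is challenging is that attacks are rare relative to normal events, so even a small false-positive rate can make most alarms false. A quick calculation with Bayes' rule shows the effect; the rates below are illustrative assumptions:

```python
def alarm_precision(tpr, fpr, attack_rate):
    """Fraction of alarms that correspond to real attacks (Bayes' rule)."""
    true_alarms = tpr * attack_rate          # attacks correctly flagged
    false_alarms = fpr * (1 - attack_rate)   # non-attacks incorrectly flagged
    return true_alarms / (true_alarms + false_alarms)

# With a 99% detection rate, a 1% false-positive rate, and attacks in
# only 0.1% of events, fewer than 1 in 10 alarms is a real attack.
print(alarm_precision(tpr=0.99, fpr=0.01, attack_rate=0.001))  # about 0.09
```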
An IDS can be deployed on a single host, or as its own dedicated device
on a network. In the latter case it is typically deployed in *stealth
mode* with two network interface cards: one quietly watches the network,
never giving away its presence; the other is used to report alarms.
Another means of deployment is a *honeypot*, a dedicated machine or network
that is made to look attractive to an attacker but is really a trap: it is
monitored to detect and surveil the attacker.
### Automated response
When an automated audit mechanism detects an attack, it has the opportunity
to respond. That response might include one or more of the following:
* **Monitor:** Increase the level of logging already taking place.
* **Protect:** Reduce the exposure of the system, perhaps by shutting off
network interfaces and file systems; or by degrading response times to
deflect attackers; or by *jailing* attackers inside a confined
area of the system.
* **Alert:** Call a human to intervene.
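A response policy might escalate through these actions as severity increases. The severity levels, thresholds, and action strings below are hypothetical, chosen only to illustrate the dispatch structure:

```python
from enum import Enum

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def respond(severity):
    """Map a detected event's severity to response actions.
    Thresholds and actions are illustrative assumptions."""
    actions = ["monitor: increase logging level"]  # always step up monitoring
    if severity.value >= Severity.MEDIUM.value:
        actions.append("protect: jail attacker in confined area")
    if severity.value >= Severity.HIGH.value:
        actions.append("alert: page system administrator")
    return actions

print(respond(Severity.HIGH))  # all three responses fire
```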
Another possibility is for the audit mechanism to **counterattack** by
causing damage to the attacker. This is fraught with danger: the counterattack
might cause harm to an innocent party, and it might expose the system owners
to legal liability.