Lecture 8: Abstraction Functions and Representation Invariants

We have observed that the most important use of the "comment" feature of programming languages is to provide specifications of the behavior of declared functions, so that program modules can be used without inspecting their code (modular programming). 

Let us now consider the use of comments in module implementations. The first question we must ask ourselves is who is going to read the comments written in module implementations. Because we are going to work hard to allow module users to program against the module while reading only its interface, clearly users are not the intended audience. Rather, the purpose of implementation comments is to explain the implementation to other implementers or maintainers of the module. This is done by writing comments that convince the reader that the implementation correctly implements its interface.

It is inappropriate to copy the specifications of functions found in the module interface into the module implementation. Copying runs the risk of introducing inconsistency as the program evolves, because programmers don't keep the copies in sync. Copying code and specifications is a major source (if not the major source) of program bugs.  In any case, implementers can always look at the interface for the specification.

Implementation comments fall into two categories. The first category arises because a module implementation may define new types and functions that are purely internal to the module. If their significance is not obvious, these types and functions should be documented in much the same style that we have suggested for documenting interfaces. Often, as the code is written, it becomes apparent that the new types and functions defined in the module form an internal data abstraction or at least a collection of functionality that makes sense as a module in its own right. This is a signal that the internal data abstraction might be moved to a separate module and manipulated only through its operations.

The second category of implementation comments is associated with the use of data abstraction. Suppose we are implementing an abstraction for sets of elements of an arbitrary type. The interface might look something like this:

module type SETSIG = sig
  type 'a set
  val empty : 'a set
  val add : 'a -> 'a set -> 'a set
  val mem : 'a -> 'a set -> bool
  val rem : 'a -> 'a set -> 'a set
  val size : 'a set -> int
  val union : 'a set -> 'a set -> 'a set
  val inter : 'a set -> 'a set -> 'a set
end

In a real signature for sets, we'd want operations such as map and fold as well, but let's omit them for simplicity. There are many ways to implement this abstraction. One easy way is as a list:

module Set : SETSIG = struct
  type 'a set = 'a list
  let empty = []
  let add x l = x :: l
  let mem x l = List.mem x l
  let rem x l = List.filter (fun h -> h<>x) l
  let rec size l =
    match l with
    | [] -> 0
    | h :: t -> size t + (if List.mem h t then 0 else 1)
  let union l1 l2 = l1 @ l2
  let inter l1 l2 = List.filter (fun h -> List.mem h l2) l1
end

This implementation has the advantage of simplicity. For small sets that tend not to have duplicate elements, it will be a fine choice. Its performance will be poor for large sets or for applications with many duplicates, but for some applications that's not an issue.
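For example, here is a small usage sketch (assuming the Set module above is in scope). The underlying list contains a duplicate, but no operation in the signature reveals it:

(* The underlying list for s is [1; 1; 3], but the duplicate is invisible
 * through the operations of SETSIG. *)
let s = Set.add 1 (Set.add 1 (Set.add 3 Set.empty))
let n = Set.size s    (* evaluates to 2 *)
let m = Set.mem 1 s   (* evaluates to true *)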

Notice that the types of the functions do not need to be written down in the implementation: they are already present in the signature, just as the specifications in the signature do not need to be replicated in the structure.

Here is another implementation of SETSIG that also represents a set as a list, but requires the list to contain no duplicates. This implementation is also correct (and also slow for large sets). Notice that we are using the same representation type, yet some important aspects of the implementation are quite different.

module Set : SETSIG = struct
  type 'a set = 'a list
  let empty = []
  let add x l = if List.mem x l then l else x :: l 
  let mem x l = List.mem x l
  let rem x l = List.filter (fun h -> h<>x) l
  let size l = List.length l 
  let union l1 l2 = 
    List.fold_left (fun a x -> if List.mem x l2 then a else x :: a) l2 l1
  let inter l1 l2 = List.filter (fun h -> List.mem h l2) l1
end

Another implementation might use some kind of tree structure (which we will cover later in the semester). You may be able to think of more complicated ways to implement sets that are (usually) better than any of these. We'll talk about issues of selecting good implementations in lectures coming up soon.

An important reason why we introduced the writing of function specifications was to enable local reasoning: once a function has a spec, we can judge whether the function does what it is supposed to without looking at the rest of the program. We can also judge whether the rest of the program works without looking at the code of the function. However, we cannot yet reason locally about the individual functions in the two module implementations just given. The problem is that we don't have enough information about the relationship between the concrete representation type (here, 'a list) and the corresponding abstract type ('a set). This lack of information can be addressed by adding two new kinds of comments to the implementation: the abstraction function and the representation invariant for the abstract data type.

Abstraction Functions

The user of any SETSIG implementation should not be able to distinguish it from any other implementation based on its functional behavior. As far as the user can tell, the operations act like operations on the mathematical ideal of a set. In the first implementation, the lists [3; 1], [1; 3], and [1; 1; 3] are distinguishable to the implementer, but not to the user. To the user, they all represent the abstract set {1, 3} and cannot be distinguished by any of the operations of the SETSIG signature. From the point of view of the user, the abstract data type describes a set of abstract values and associated operations. The implementer knows that these abstract values are represented by concrete values that may contain additional information invisible to the user. This loss of information is described by the abstraction function, which is a mapping from the space of concrete values to the abstract space. The abstraction function for the first implementation of Set maps each list to the set of the elements that appear in it; for example, AF([3; 1]) = AF([1; 1; 3]) = {1, 3}.

Notice that several concrete values may map to a single abstract value; that is, the abstraction function may be many-to-one. It is also possible that some concrete values do not map to any abstract value, so the abstraction function may be partial. For example, if we were representing sets of natural numbers, a list such as [-1; 1] would not map to any abstract value.
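The abstraction function itself is a mathematical mapping, not a function in the module. Still, for reasoning and testing it can be useful to approximate it in code. The sketch below (not part of the Set implementation) maps a concrete list to a canonical sorted, duplicate-free list that stands in for the abstract set it represents:

(* Test-only approximation of the abstraction function for the first
 * representation: a sorted list without duplicates stands in for the
 * abstract set. *)
let af (l : 'a list) : 'a list = List.sort_uniq compare l

(* af [1; 1; 3] = [1; 3] and af [3; 1] = [1; 3]: both concrete lists
 * represent the abstract set {1, 3}. *)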

The abstraction function is important for deciding whether an implementation is correct, so it belongs as a comment in the implementation of any abstract data type. For example, in the Set module, we could document the abstraction function as follows:

module Set : SETSIG = struct
  type 'a set = 'a list
  (* Abstraction function: the list [a1;...;an] represents the
   * smallest set containing all of a1;...;an. The list may
   * contain duplicates. The empty list represents the empty set.
   *)
  ...

This comment explicitly points out that the list may contain duplicates, which is a helpful reinforcement of the first sentence. Similarly, the case of the empty list is mentioned explicitly for clarity. The abstraction function for the second implementation, which does not allow duplicates, can be written a bit more simply because we know that the elements are distinct:

module Set : SETSIG = struct
  type 'a set = 'a list
  (* Abstraction function: the list [a1;...;an] represents the set
   * {a1;...;an}. [] represents the empty set.
   *)
  ...

In practice, the words "Abstraction function" are usually omitted when practitioners write such comments. However, we will ask you to include them, because they are a useful reminder of what you are writing in a comment like the ones above. Whenever you write code that implements what amounts to an abstract data type, you should write down the abstraction function explicitly, and certainly keep it in mind.

Commutative diagrams

Using the abstraction function, we can now talk about what it means for an implementation of an abstraction to be correct. It is correct exactly when every operation that takes place in the concrete space makes sense when mapped by the abstraction function into the abstract space. This can be visualized as a commutative diagram: concrete values and operations sit along the bottom, abstract values and operations sit along the top, and the abstraction function AF maps each concrete value up to the abstract value it represents.

A commutative diagram means that if we take the two paths around the diagram, we have to get to the same place. Suppose that we start from a concrete value and apply the actual implementation of some operation to it to obtain a new concrete value or values. When viewed abstractly, the concrete result should be an abstract value that is a possible result of applying the operation, as described in its specification, to the abstract view of the actual inputs. For example, consider the union function from the first implementation above, in which lists may contain repeated elements. Applying this function to the concrete pair ([1; 3], [2; 2]) corresponds to the lower-left corner of the diagram. The result of this operation is the list [1; 3; 2; 2], whose corresponding abstract value is the set {1, 2, 3}. If we instead apply the abstraction function AF to the lists [1; 3] and [2; 2], we get the sets {1, 3} and {2}. The commutative diagram requires that in this instance the union of {1, 3} and {2} be {1, 2, 3}, which is of course true.
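We can turn this instance of the diagram into a test. The sketch below is not part of the module; it uses the same canonical-form trick as before, with a sorted, duplicate-free list standing in for an abstract set:

(* Checking the commutative diagram for union on one input. *)
let af l = List.sort_uniq compare l         (* abstraction function *)
let union l1 l2 = l1 @ l2                   (* concrete operation, as in the module *)
let abstract_union s1 s2 = af (s1 @ s2)     (* abstract operation *)

let commutes =
  af (union [1; 3] [2; 2]) = abstract_union (af [1; 3]) (af [2; 2])
(* Both sides are [1; 2; 3], the canonical form of {1, 2, 3},
 * so commutes = true. *)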

Representation Invariants

Some missing information

The abstraction function explains how information within the module is viewed abstractly by module clients. However, it is not all we need in order to know that the implementation is correct.

Consider the size function in each of the two implementations. For the second implementation (in which the lists contain no duplicates), the size is just the length of the list:

let size = List.length

But for the first representation (which allows duplicates), we need to be sure not to double-count duplicate elements:

let rec size l =
  match l with
  | [] -> 0
  | h :: t -> size t + (if List.mem h t then 0 else 1)

How do we know that we don't need to do this check in the representation with no duplicates? Unless the code explicitly says that there are no duplicates, implementers cannot reason locally about whether functions like size are implemented correctly.

If we think about this in terms of the commutative diagram, we see that the abstraction function is not enough.

We can fix this by adding a second piece of information to the implementation: the representation invariant (or rep invariant, or RI). The rep invariant defines what concrete data values are valid representations (reps) of abstract values. For sets represented as lists without duplicates, we write this as part of the comment together with the abstraction function:

module Set : SETSIG = struct
  (* Simple implementation of sets as lists without duplicates 
   * Abstraction function:  the list [a1;...;an] represents the 
   * set {a1;...;an}. [] represents the empty set, {}.
   *
   * Representation invariant: the list contains no duplicate
   * elements.
   *)
  type 'a set = 'a list
  ...

The rep invariant must hold for every concrete value the implementation produces, so all concrete operations in the implementation must preserve the invariant. If this constraint is broken, functions such as size will not return the correct answer.
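As a sketch (not part of the module above) of how breaking the invariant shows up as a bug elsewhere, suppose add in the no-duplicates representation forgot the membership check:

let bad_add x l = x :: l         (* may create a list with duplicates *)
let size l = List.length l       (* correct only when the RI holds *)

let n = size (bad_add 1 (bad_add 1 []))
(* n = 2, even though the set being represented is {1}; the bug is in
 * bad_add, but it is size that appears to give the wrong answer. *)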

Rep invariant vs. abstraction function

We observed earlier that the abstraction function may be a partial function. In order to make sure that an implementation works—that it completes the commutative diagram—it had better be the case that the implementation never produces any concrete values that do not map to abstract values.

The role of the representation invariant is to restrict the domain of the abstraction function to those values on which the implementation is going to work properly. Concrete values that satisfy the rep invariant are mapped by the abstraction function to the abstract values they represent; concrete values that do not satisfy it are never produced by a correct implementation and need not be handled by it.

We can use the rep invariant and abstraction function to judge whether the implementation of a single operation is correct in isolation from the rest of the module. It is correct if, assuming that:

  1. the function's preconditions hold of the argument values
  2. the concrete representations of the arguments satisfy the rep invariant

we can show that

  1. all new representation values created satisfy the rep invariant, and
  2. the commutative diagram holds.

The rep invariant makes it easier to write code that is provably correct, because it means that we don't have to write code that works for all possible incoming concrete representations, only those that satisfy the rep invariant. For example, in the second implementation of SETSIG, the code does not have to work on lists that contain duplicate elements. On return, there is a corresponding responsibility to produce only values that satisfy the rep invariant. The rep invariant holds for all reps both before and after each operation, which is why we call it an invariant.
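For instance, the union function in the second implementation checks each element of l1 against l2, but never against the elements already taken from l1 itself; it silently relies on l1 containing no duplicates. A small sketch (outside the module) of what that reliance looks like:

let union l1 l2 =
  List.fold_left (fun a x -> if List.mem x l2 then a else x :: a) l2 l1

let ok  = union [1; 3] [2]   (* = [3; 1; 2], a valid rep of {1, 2, 3} *)
let bad = union [1; 1] [2]   (* = [1; 1; 2], which violates the RI; but
                              * [1; 1] was never a valid rep, so the
                              * implementation is not obliged to handle it *)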

repOK

When implementing a complex abstract data type, it is often helpful to write an internal function that checks that the rep invariant holds. By convention we will call this function repOK. If the module accepts values of the abstract type that were created outside the module, say because the signature exposes the implementation of the type, then repOK should be applied to them to ensure the representation invariant is satisfied. In addition, whenever the module creates a new value of the abstract type, repOK should be applied to it as a sanity check. With this approach, a bug in one function is less likely to create the appearance of a bug in another.

repOK as an identity function

A convenient way to write repOK is to make it an identity function that just returns the input value if the rep invariant holds and raises an exception if it fails.

(* Returns x if x satisfies the representation invariant;
 * raises an exception otherwise. *)
let repOK (x : 'a list) : 'a list = ...
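One possible body for the no-duplicates representation (a sketch; any check equivalent to the rep invariant will do):

let repOK (x : 'a list) : 'a list =
  let rec no_dups = function
    | [] -> true
    | h :: t -> not (List.mem h t) && no_dups t
  in
  if no_dups x then x else failwith "representation invariant violated"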

Here is an example of how we might use repOK for the implementation of sets given previously. Notice that repOK is applied to all sets that are created. This ensures that if a bad set representation is created, it will be detected immediately. In case we somehow missed a check on creation, we also apply repOK to incoming set arguments. If there is a bug, these checks will help us quickly figure out where the rep invariant is being broken.

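The original program is not reproduced here; the following is one possible version of the no-duplicates implementation with repOK calls added (it assumes the SETSIG signature from earlier in the lecture):

module Set : SETSIG = struct
  (* Sets as lists without duplicates.
   * Abstraction function: [a1;...;an] represents the set {a1;...;an}.
   * Representation invariant: the list contains no duplicate elements. *)
  type 'a set = 'a list

  (* Returns l if it satisfies the rep invariant; raises Failure otherwise. *)
  let repOK (l : 'a set) : 'a set =
    let rec no_dups = function
      | [] -> true
      | h :: t -> not (List.mem h t) && no_dups t
    in
    if no_dups l then l else failwith "Set: representation invariant violated"

  let empty = []                 (* trivially satisfies the RI *)

  (* repOK is applied to incoming set arguments and to every set created. *)
  let add x l = repOK (if List.mem x (repOK l) then l else x :: l)
  let mem x l = List.mem x (repOK l)
  let rem x l = repOK (List.filter (fun h -> h <> x) (repOK l))
  let size l = List.length (repOK l)
  let union l1 l2 =
    repOK
      (List.fold_left
         (fun a x -> if List.mem x l2 then a else x :: a)
         (repOK l2) (repOK l1))
  let inter l1 l2 =
    let l2 = repOK l2 in
    repOK (List.filter (fun h -> List.mem h l2) (repOK l1))
end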

Production vs. development code

Calling repOK on every argument can be too expensive for the production version of a program. The repOK above is quite expensive (though it could be implemented more cheaply). For production code, it may be more appropriate to use a version of repOK that checks only the parts of the rep invariant that are cheap to check. When there is a requirement that there be no run-time cost, repOK can be changed to an identity function (or macro) so the compiler optimizes away the calls to it. However, it is a good idea to keep the full code of repOK around (perhaps in a comment) so it can easily be reinstated during future debugging.
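A sketch of one way to do this (the checks_enabled flag and its name are invented for this example):

let checks_enabled = false   (* set to true during development *)

let repOK (l : 'a list) : 'a list =
  if checks_enabled
     && List.length (List.sort_uniq compare l) <> List.length l
  then failwith "representation invariant violated";
  l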

Module Invariants

Invariants on data are useful even when writing modules that are not easily viewed as providing abstract data types. Sometimes it is difficult to identify an abstract view of the data provided by a module, and there may not be any abstract type at all. Invariants are important even without an abstraction function, because they document the legal states and representations that the code is expected to handle correctly. In general, we use the term module invariant for an invariant enforced by a module. In the case of an ADT, the rep invariant is a module invariant. Module invariants are useful for understanding how the code works, and also for maintenance, because the maintainer can avoid changes to the code that violate the module invariant.

Module invariants and code evolution

A strong module invariant is not always the best choice, because it restricts future changes to the module. We described interface specifications as a contract between the implementer of a module and the user. A module invariant is a contract among the various implementers of the module, present and future. According to assumption 2, above, ADT operations may be implemented assuming that the rep invariant holds. If the rep invariant is ever weakened (made more permissive), some parts of the implementation may break. Thus, one of the most important purposes of the rep invariant is to document exactly what may and what may not be safely changed about a module implementation. A weak invariant forces the implementer to work harder to produce a correct implementation, because less can be assumed about concrete representation values, but conversely it gives maximum flexibility for future changes to the code.

Modularity and module invariants

A sign of good code design is that invariants on program data are enforced in a localized way, within modules, so that programmers can reason about whether the invariant is enforced without thinking about the rest of the program. To do this it is necessary to figure out just the right operations to be exposed by the various modules, so that useful functionality can be provided while also ensuring that invariants are maintained.

Conversely, a common design error is to break up a program into a set of modules that simply encapsulate data and provide low-level accessor operations, while putting all the interesting logic of the program in one main module. The problem with this design is that all the interesting (and hard!) code still lives in one place, and the main module is responsible for enforcing many complex invariants among the data. This kind of design does not break the program into simpler parts that can be reasoned about independently. It shows the big danger sign that the abstractions aren't right: all the code is either boring code, or overly complex code that is hard to reason about. It is a kind of fake modularity.

For example, suppose we are implementing a graphical chess game. The game state includes a board and a bunch of pieces. We might want to keep track of where each piece is, as well as what is on each board square. And there may be a good deal of state in the graphical display too. A good design would ensure that the board, the pieces, and the graphical display stay in sync with each other in code that is separate from that which handles the detailed rules of the game of chess.

Principles for modular design

Given a large programming task, we want to divide it into modules in an effective way. There are several goals. To make the user of the software happy, we want a program that is correct, secure, and has acceptable performance. But to keep the cost of development and maintenance low, and to increase the chances that the program is correct, we want a modular design that has loose coupling and permits local reasoning. These goals are in tension. We can roughly characterize design tradeoffs along an axis between loose and tight coupling:

Issue                 Loose coupling                Tight coupling
Size of interface     Narrow interface:             Wide interface:
                      few operations                many operations
Complexity            Simple specifications         Complex specifications
Invariants            Local                         Global
Pre/post-conditions   Weak, nondeterministic        Strong, deterministic
Correctness           Easier to get right           Harder to get right
Performance           May sacrifice performance     May expose optimizations

Thus, if we want software that is very modular and relatively easy to build correctly, we should design modules that have simple, narrow interfaces with relatively few operations with simple specifications. In some cases we may need more performance than an aggressively modular design offers, and may need to make our specification more complex or add new operations.

A good rule of thumb is to start with as modular and simple a design as possible. Interfaces should be narrow, exposing only as many operations as are necessary for clients to carry out their tasks. Invariants should be simple and enforced locally. Avoid premature optimization that results in complex, tightly coupled programs, because very often the performance bottlenecks are not what is expected. You don't want to pay the price for complexity that turns out to be unnecessary. If performance becomes a problem, a simple, clean design is usually a good starting point for refinements that improve performance.

In general, the right choice along this axis depends on the system being built, and engineering judgment is required. Software designers must balance issues of cost, performance, correctness, usability, and security. They are expected all the time to make judgment calls trading off among these issues. One key is to realize that you are making these tradeoffs and to know when to make them. Probably the most common errors of judgment involve choosing design points too far toward the tight-coupling side of the table above, resulting in systems that are unreliable and hard to maintain.