Lecture 7
Specifications for data abstractions

Nondeterministic specifications

Let's take a look at the function find again. Here is an attempt at a specification:

(* find lst x  is the index at which x
 *   is found in lst, starting from zero.
 * Requires: x is in lst *)
val find: string list -> string -> int

Notice that we have included a requires clause to ensure that x can be found in the list at all. However, this specification still has a problem. The phrase "the index" implies that x has a unique index. We could strengthen the precondition to require that there be exactly one copy of x in the list, but probably we'd like this function to do something useful in the case where x is duplicated. A good alternative is to fix the specification so that it doesn't say which index of x is found if x occurs more than once:

(* find lst x  is an index in lst at which x is
 * found; that is, List.nth lst (find lst x) = x.
 * Requires: x is in lst *)
val find: string list -> string -> int

This is an example of a nondeterministic specification. It states some useful properties of the result that is returned by the function, but does not fully define what that result should be. Nondeterministic specifications force the user of the abstraction to write code that works regardless of the way in which the function is implemented. They are sometimes called weak specifications because they avoid pinning down the implementations (and the implementers). The user cannot assume anything about the result beyond what is stated in the specification. This means that implementers have the freedom to change their implementation to return different results as long as the new implementation still satisfies the specification. Nondeterministic specifications make it easier to evolve implementations.
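
For instance, here are two sketches that both satisfy this weak specification (the names find_first and find_last are merely illustrative, and we assume the curried type given above); a client cannot tell which one is behind the interface:

let find_first lst x =
  let rec go i = function
    | [] -> raise Not_found  (* unreachable when the precondition holds *)
    | y :: rest -> if y = x then i else go (i + 1) rest
  in
  go 0 lst

let find_last lst x =
  let rec go i last = function
    | [] -> (match last with Some j -> j | None -> raise Not_found)
    | y :: rest -> go (i + 1) (if y = x then Some i else last) rest
  in
  go 0 None lst

Both return an index at which x occurs, so both satisfy the specification; they simply make different choices when x is duplicated.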

How much nondeterminism is appropriate? A good specification should restrict the behavior of the specified function enough that any implementation must provide the functionality needed by the clients of the function. On the other hand, it should be general (weak) enough that it is possible to find acceptable implementations. Clearly these two criteria are in tension. And of course, a good specification is brief, precise, and comprehensible.

Equational specifications

The specification of find actually says the same thing in two different ways: first in English, and second via the equational specification List.nth lst (find lst x) = x. Any implementation that always satisfies this equation behaves just as described by the informal language. Equational specifications are often a compact and clear tool for writing returns clauses, whether deterministic or nondeterministic.

Here we are relying on the convention given earlier: when an equation is given with unbound variables, the equation is meant to hold for all possible values of those variables. This is another way to write the returns clause that is given earlier, though probably the earlier version is more readable for most programmers.
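
Because the returns clause is just an equation, it can be turned directly into a runnable check. The helper below (check_find is an illustrative name, not part of the specification) tests the equation for one particular lst and x satisfying the precondition; a property-testing tool could then run it over many such inputs:

let check_find find lst x =
  (* the equational specification: List.nth lst (find lst x) = x *)
  List.nth lst (find lst x) = x

For example, check_find find_first ["a"; "b"; "a"] "a" and check_find find_last ["a"; "b"; "a"] "a" both evaluate to true.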

Refinement

Sometimes one specification is stronger than another specification. For example, consider two possible specifications for find:

A: (* find lst x  is an index at which x
    *   is found in lst; that is, List.nth lst (find lst x) = x
    * Requires: x is in lst *)
B: (* find lst x  is the first index at which x
    *   is found in lst, starting from zero
    * Requires: x is in lst *)

Here specification B is strictly stronger than specification A: for a particular input, the set of results or outcomes that B permits is smaller than the set that A permits. Compared to A, specification B reduces the amount of nondeterminism. In this case we say that specification B refines specification A. In general, specification B refines specification A if any implementation of B is a valid implementation of A.
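
The per-input check check_find from the earlier sketch captures what A requires of a result. A corresponding check for B is sketched below (check_B is an illustrative helper; find_first, from the sketch above, returns the first index and therefore implements B exactly):

let check_B impl lst x = impl lst x = find_first lst x

Any implementation that passes check_B on every valid input also passes check_find, but not the other way around: find_last passes check_find everywhere, yet fails check_B on any list in which x occurs more than once. That is what it means for B to refine A.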

The interaction between refinement and preconditions can be confusing at first. Suppose we strengthen the precondition of specification A: in the example above, we might require not only that x is in lst, but that it appears in the list exactly once. In this case B still refines A: B has a stronger (more restrictive) postcondition and a weaker (less restrictive) precondition, which means that an implementation of B satisfies the spec of A. Thinking about this from the viewpoint of a client who expects A but actually gets B may be helpful. The client makes calls that satisfy A's precondition, and therefore satisfy B's precondition too. The client gets back results that satisfy B's postcondition, and therefore satisfy A's postcondition too.

There are other ways to refine a specification. For example, if specification A contains a requires clause, and specification B is identical but changes the requires clause to a checks clause, B refines A: it more precisely describes the behavior of the specified function.
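
Under the convention that a checks clause obliges the implementation to test the condition and signal a violation, an implementation satisfying the checks version of find might look like the following sketch (the name find_checked and the choice of the Not_found exception are illustrative):

(* find_checked lst x  is an index in lst at which x is found.
 * Checks: x is in lst. *)
let find_checked lst x =
  let rec go i = function
    | [] -> raise Not_found  (* the checked condition was violated *)
    | y :: rest -> if y = x then i else go (i + 1) rest
  in
  go 0 lst

Here the check comes essentially for free: the same scan that searches for x also detects when x is absent.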

We can think of the actual code implementing the function as another specification of the computation to be performed. This implementation-derived specification must be at least as strong as the specification written in the comment; otherwise, the implementation may do things that the specification forbids. In other words, any correct implementation must refine its specification.

Automatic verification

We've been looking at how to write human-readable specifications. It is possible to write specifications in a formal language that permits the computer to read them. These machine-readable specifications can be used to perform formal verification of the program. Examples of systems that do this include ESC/Java and Larch/C. Using a formal specification, an automatic theorem prover can prove that the program as a whole actually does what it says it does. Formal program verification is an attractive technology because it can be used to guarantee that programs do not contain bugs! However, it has not proven popular among programmers, because it is difficult to formally prove programs correct and because the specifications are tedious to write in a form that the machine can understand. In practice, machine-readable type specifications are very useful, but it seems to work better to put the rest of the specification in human-readable form. We will see soon that the specifications we are writing can also be used to manually construct proofs that programs work.

Data abstractions

We have already talked about functional abstraction, in which a function hides the details of the computation it performs. Structures and signatures in OCaml provide a new kind of abstraction: data (or type) abstraction. A signature can declare a type without stating what that type is; such a type is known as an abstract type.

A data abstraction (or abstract data type, ADT) consists of an abstract type along with a set of operations and values. OCaml of course provides a number of built-in types with built-in operations; a data abstraction is in a sense a clean extension of the language to support a new kind of type. For example, a record type has built-in projection operators, and variant types have built-in constructors. For a data abstraction, its signature creates an abstraction barrier that prevents users from using its values in a way incompatible with the operations declared in the signature.

Signatures and Structures

To successfully develop large programs, we need more than the ability to group related operations together. We need to be able to use the compiler to enforce the separation between different modules, preventing clients from relying on details of a module's implementation. Signatures are the mechanism in OCaml that enforces this separation.

Signatures

A signature, or module type, declares a set of types and values that any module implementing it must provide. It consists of type, exception, and val specifications. These specifications are a bit different from the declarations we have seen so far: they give only names and types, not implementations.

A signature in OCaml specifies an interface that describes how clients can use a particular code module, rather than an implementation that defines exactly how the module works.  A signature for a stack might look something like the following:

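module type STACK = sig
  type 'a stack
  exception EmptyStack
  val empty : 'a stack
  val push : 'a * 'a stack -> 'a stack
  val pop : 'a stack -> 'a * 'a stack  (* raises EmptyStack on an empty stack *)
end

(Here push takes the new element and the stack as a pair, and pop returns both the top element and the remaining stack; the precise choice of operations and their types is a design decision, and other reasonable signatures are possible.)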

This signature defines a parameterized stack type, an exception called EmptyStack, a constant called empty, and two functions that operate on stacks.  Note that this example declares the polymorphic stack type 'a stack, but doesn't define what that type is. By convention signature names use all capital letters. A client programmer can use stacks based on these declarations without seeing the implementation.  Different possible data representations and corresponding code could implement this stack, for instance using lists or arrays.

Structures

A structure is used to implement a signature.  A structure must implement all the specifications in its signature.  It may implement more than what is in the signature, but those additional definitions are accessible only inside the structure definition itself, not to users of the structure.

Here is the simplest implementation of stacks that matches the above signature, using lists.

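module Stack : STACK = struct
  type 'a stack = 'a list  (* the head of the list is the top of the stack *)
  exception EmptyStack
  let empty = []
  let push (x, s) = x :: s
  let pop s =
    match s with
    | [] -> raise EmptyStack
    | x :: rest -> (x, rest)
end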

There are several ways to refer to the elements of a structure.  One is with fully qualified names: Stack.empty, Stack.push, Stack.pop, and so on.  Another is to use an open declaration, open Stack, which makes the names accessible without the need to specify the prefix.
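
For example, using the Stack structure above:

let s = Stack.push (1, Stack.empty)   (* fully qualified names *)

open Stack
let s' = push (2, s)                  (* after open Stack, no prefix is needed *)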

The abstraction barrier

The abstraction barrier prevents the clients of the Stack module from using their knowledge of how the type 'a stack is represented. In fact, the OCaml toplevel will not even print the representation of a value of an abstract type like stack. Without the signature ascription (that is, with module Stack = struct ... end), we can see what stacks really are:

# Stack.push (1, Stack.empty);;
- : int list = [1]

Once the module is sealed by its signature, as in the definition above, values of the abstract type stack are printed only as <abstr>:

# Stack.push (1, Stack.empty);;
- : int Stack.stack = <abstr>

Hence, we cannot see how int Stack.stack is implemented. The use of signatures with opaque implementations ensures that programmers cannot depend on the implementation inadvertently.  They are free to change the implementation later, without worrying about changing the code that uses it. Using a new implementation data structure cannot break the user's code, since the user could never access the internal implementation data structures.

There are two ways of ascribing a signature to a structure. In the stack example, we could leave the stack type abstract, writing "module Stack : STACK = struct ... end" as above, or we could expose the representation by constraining the signature, writing "module Stack : (STACK with type 'a stack = 'a list) = struct ... end". In the first case, the concrete implementation type defined in the structure is not visible or accessible outside of the structure itself. The users of this structure can only "see" the facts expressed in the signature; the actual implementation type is hidden from them. In this case, we say that the type abstracted by the structure is opaque.
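
To see the difference concretely, suppose the structure body is bound to a name first (ListStack, OpaqueStack, and ExposedStack are illustrative names, not part of the example above):

module ListStack = struct
  type 'a stack = 'a list
  exception EmptyStack
  let empty = []
  let push (x, s) = x :: s
  let pop = function
    | [] -> raise EmptyStack
    | x :: rest -> (x, rest)
end

(* Opaque: clients see only what STACK declares; the representation is hidden. *)
module OpaqueStack : STACK = ListStack

(* Transparent: the signature reveals that 'a stack is really 'a list. *)
module ExposedStack : (STACK with type 'a stack = 'a list) = ListStack

Clients of ExposedStack may treat stacks as lists, for example by pattern matching on them; clients of OpaqueStack cannot.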

It is often tempting to write code that takes advantage of knowledge of the concrete implementation, because accessing the concrete implementation directly is simpler or faster. In the long run, taking these shortcuts leads to disaster. Opaque data abstractions show their value in large programs (e.g., tens of thousands of lines of code). If the clients of a data abstraction write code that depends on the internal structure of the ADT, then changing the implementation turns into a time-consuming and error-prone task of manually changing each use of the ADT in the application. With opaque types, no changes outside the structure are required.

Designing interfaces

We have talked about what makes a specification good; a few comments about what makes an interface good are also in order. Obviously, an interface should contain good specifications of its components. In addition, a well designed interface strikes a balance between simplicity and completeness. Sometimes it is better not to offer every possible operation that the users might want, particularly if those users can efficiently construct the desired computation by using other operations. An interface that specifies many components is said to be wide; a small interface is narrow. Narrow interfaces are good because they provide a simpler, more flexible contract between the user and implementer. The user is less dependent on the details of the implementation, and the implementer has greater flexibility to change how the abstraction is implemented. Interfaces should be made as narrow as possible while still providing users with the operations they need to get the job done.

Modules in other languages

Modules and interfaces are supported in OCaml by structures and signatures, but they are also found in other modern programming languages in different forms. In Java, interfaces, classes, and packages facilitate modular programming. All three of these constructs can be thought of as providing interfaces in the more general sense that we are using in this course. The interface to a Java class or package consists of its public components. The Java approach is to use the javadoc tool to extract this interface into a readable form that is separate from the code. Because the interface consists of the public methods and classes, these are the program components that must be carefully specified.

The C language, on the other hand, works more like OCaml. Programmers write programs by writing source files (".c" files) and header files (".h" files).  Source files correspond to OCaml structures and header files correspond to signatures. Header files may declare abstract types and function prototypes, much as signatures do in OCaml. Therefore, the place to write function specifications in C (and in C++) is in header files.

Java-style interface extraction makes life a little easier for the implementer because a separate interface does not have to be written as in OCaml. However, automatic interface extraction is also dangerous, because changes to any public components of the class will implicitly change the interface and possibly break client code that depends on that interface. The discipline provided by explicit interfaces is useful in preventing these problems for larger programming projects.