Resource-bounded computation as access permissions to massive storage

Senior thesis work in mathematics

Ryan Williams

 

Informal outline

Our study is primarily concerned with the difficulty of comparing complexity classes, both common (e.g. P, NP, PSPACE) and uncommon (e.g. TIME[n^(log n)], SPACE[o(n)]). For example, we would very much like to determine whether there are computational problems solvable with O(n^c) working storage (and possibly exponential time) which cannot be solved in O(n^c) time. No known proof method can exhibit such a problem, though nearly everyone who studies this question comes away convinced that a problem of this type exists.

In reply, we ask a question more philosophical than mathematical: “what exactly makes separating two complexity classes so difficult?” We contend that the difficulty rests in the very different nature of the resources at hand, such as time versus space. (In the context of the P vs. NP question, such reasoning does not apply so readily; but as complexity theory stands today, we are as far from P = NP as we are from L = NP, in the sense that there is a proof of neither.) In our defense, we note that the time hierarchy and space hierarchy theorems were the initial foundation of complexity theory. These theorems are fundamentally more straightforward to prove because the resource under consideration is the same for both classes (e.g. linear time versus quadratic time). When space and time (or nondeterminism and determinism) are compared, the differences become far more difficult to quantify.

Thus, we wish to unify the intuitive notions of time, space, and nondeterminism under a single model, in which these notions are characterized in a more subtle manner. Our hypothesis is that under such a characterization, the alleged differences between the notions would become all the more pronounced. We believe this is the case with our current theory.

The “uniform model” described above will be, for the purposes of this presentation, a multihead two-way finite automaton (k-FA), in which two heads can sense when they are reading the same tape square. The k-FA also has an additional head attached to an auxiliary tape, called the reference tape.
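To make the multihead model concrete, here is a minimal sketch (not taken from the thesis, and omitting the reference tape) of a 2-head two-way finite automaton deciding palindromes: one head scans rightward, the other leftward, and the coincidence-sensing ability lets the machine detect when the heads meet. The function name and loop structure are illustrative assumptions, not the thesis's formalism.

```python
def two_head_palindrome(w: str) -> bool:
    """Sketch of a 2-head two-way finite automaton (2-FA) on input w.

    Head a starts at the left end, head b at the right end.  At each
    step the machine compares the symbols under the two heads, then
    moves a right and b left.  It accepts when the heads coincide or
    cross (which the model can sense) without ever seeing a mismatch.
    Only O(1) state is used beyond the head positions themselves.
    """
    a, b = 0, len(w) - 1
    while a < b:                  # heads have not yet met or crossed
        if w[a] != w[b]:          # mismatch under the two heads: reject
            return False
        a += 1                    # head a moves right
        b -= 1                    # head b moves left
    return True                   # heads coincided: accept
```

Note that a single-head finite automaton cannot decide palindromes at all; the second head (plus coincidence detection) is exactly what buys the extra power, mirroring the logspace equivalence discussed next.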

It is known that such machines are computationally equivalent to Turing machines with logarithmic working storage (Hartmanis, 1972). We will show that, depending on the size of the reference tape, its initial contents, and the access permissions the k-FA has to it, the k-FA can model NL, P, NP, and PSPACE in many different ways: some surprising, some intuitively obvious. In every case, the access permissions and sizes specified are “natural” measures.

One major advantage of these models is that they allow one to study complexity theory with different strategies in mind. Two examples illustrate this claim vividly.

The NL ?= NP question can be rephrased as follows: show that k-FAs with sequential (one-way) access to an arbitrary-content, polynomially long (in the input length) reference tape are strictly less powerful than the same machines with random (two-way) access. (Note that random access is not in general equivalent to two-way access, but under this particular model it is.) On an extremely informal level: if I am a very forgetful proof-checker who skims through the proof of a claim in one pass, can I emulate other very forgetful proof-checkers who save their current place and can look back at previous parts of the proof as they go?

The NP ?= PSPACE question has an equally interesting analogue: show that k-FAs with read-only random access to an arbitrary polynomially long reference tape are strictly less powerful than the same machines with the same access to an exponentially long (or arbitrarily long) reference tape. (By exponential, we mean 2^{n^{O(1)}}, where n is the input length.) This formulation makes it seem almost certain that the two classes are different: only one crucial parameter distinguishes the two models, and it differs exponentially!

Such syntheses of these open questions in complexity theory lead us to entirely new trains of thought about how they might be resolved. We study the number of accesses to each square of the reference tape that is necessary for characterizing NP (as opposed to NL, which, because its reference-tape access is one-way, requires only one access per reference-tape square in order to accept NL sets), and find that only two accesses to each reference-tape square are necessary for accepting all of NP when the model is restricted to random access rather than two-way head movement. Perhaps the NP ?= PSPACE question is amenable to counting or Kolmogorov complexity arguments, based on the single, exponentially differing parameter. (It is clear, however, that any such argument would have to be non-relativizing.)

Several new results about common complexity classes have arisen from studying complexity in terms of access permissions; among them, a slight generalization of Savitch’s theorem (somewhat disjoint from the one derived via alternation), and a theorem relating the NL vs. NP question to the relationship between one-way NL and AC^0.