ICFP 2017 Review #13A
Paper #13: Revisiting Parametricity: Inductives and Uniformity of Propositions

                  Overall merit: C. Weak paper, though I will not fight
                                    strongly against it
                     Confidence: Y. I am knowledgeable in this area,
                                    but not an expert

                     ===== Paper summary =====

It is a well-known fact that parametricity for System F gives us theorems for free. What is less well known is how to extend this result to richer languages. This paper presents a parametricity translation for a large fragment of Coq, outlines the considerations and limitations of such a translation, and even fixes a bug in an already published result.

                  ===== Comments to authors =====

My main concern about this work is that it occupies an uncomfortable middle ground: from a theoretical standpoint, it does not provide a proof of the abstraction theorem (even for a restricted set of terms); from a practical standpoint, it does not provide enough information to quickly test the ideas presented in another setting.

Regarding the second point, the paper mentions a translation that is otherwise not outlined in the paper and is not included in the supplementary material. Moreover, in the one example included (Section 5), it is not clear, at least to me, which free theorem is being used and how.

As a last relevant point, I found the presentation hard to follow. The reader is asked to care about problems that are not yet in scope.

Minor observations:

ICFP 2017 Review #13B
Paper #13: Revisiting Parametricity: Inductives and Uniformity of Propositions

                  Overall merit: C. Weak paper, though I will not fight
                                    strongly against it
                     Confidence: X. I am an expert in this area

                     ===== Paper summary =====

Various translations exist in the literature that take judgements of dependent type theory and translate them to their relationally parametric interpretation in the same dependent type theory.
These translations split every variable x : T of the original judgement into two (for binary parametricity; n, for n-ary) variables x : T and x_2 : T_2, plus a third variable x_r : [[T]] x x_2 which witnesses that x and x_2 are related according to the relation [[T]], the interpretation of the type T.
Terms t : T are interpreted to terms [[t]] : [[T]] t t_2. In particular, types T : Set are interpreted to [[T]] : T -> T_2 -> Sêt, where Sêt is some universe.
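To make the shape of the translation concrete, here is a small Lean sketch (my own rendering, not the paper's Coq development) of the binary interpretation for a base type and a function type:

```lean
-- Binary parametricity sketch: a type T is interpreted as a relation
-- ⟦T⟧ : T → T₂ → Prop between the two copies of T.

-- For a base type like Nat, a natural choice is equality.
def natR : Nat → Nat → Prop := Eq

-- For a function type A → B, two functions are related when they map
-- related arguments to related results.
def arrowR {A A₂ B B₂ : Type}
    (RA : A → A₂ → Prop) (RB : B → B₂ → Prop)
    (f : A → B) (f₂ : A₂ → B₂) : Prop :=
  ∀ a a₂, RA a a₂ → RB (f a) (f₂ a₂)

-- A term t : T is interpreted as a proof ⟦t⟧ : ⟦T⟧ t t₂;
-- e.g. Nat.succ is related to itself:
example : arrowR natR natR Nat.succ Nat.succ :=
  fun _ _ h => congrArg Nat.succ h
```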

The parametric interpretation can then be used as a shortcut to prove theorems that would otherwise be tedious to prove, such as (see §5 of the paper) alpha-equivalence.

A disappointing property of existing translations is that the translation of a proposition P : Prop is useless: it takes the form [[P]] : P -> P_2 -> Prop, which is a rather uninformative (proof-relevant) theorem to have.
Indeed, regardless of what P and P_2 are, we have λ p p_2.True : P -> P_2 -> Prop.
Of course, we would prefer to have instead a translation [[P]] witnessing that P and P_2 are in fact isomorphic (i.e. logically equivalent).
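The problem is easy to reproduce: for any two propositions whatsoever, the constantly-true relation already inhabits the type that the translation assigns to [[P]]. A sketch in Lean (my notation, not the paper's):

```lean
-- For ANY propositions P and P₂, the trivially-true relation has the
-- type that the naive translation assigns to ⟦P⟧ — so knowing that a
-- relation of this type exists tells us nothing about P and P₂.
def trivialRel (P P₂ : Prop) : P → P₂ → Prop :=
  fun _ _ => True

-- It even relates proofs of entirely unrelated propositions:
example (p : 1 = 1) (q : ∀ n : Nat, n + 0 = n) :
    trivialRel (1 = 1) (∀ n : Nat, n + 0 = n) p q :=
  True.intro
```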

The authors demonstrate with a simple counterexample that this is quite simply not the case if we make no additional assumptions, and so they investigate which additional assumptions allow us to automatically prove the logical equivalence of P and P_2.
They introduce two additional properties that we can assume for the relational interpretations [[T]] of the types T that P depends on: [[T]] may be assumed to be one-to-one (essentially an injective partial function T -> T_2) or total (every element of T is related to some element of T_2, and every element of T_2 to some element of T).
They then investigate, for Pi-types, inductive propositions, and inductive types, which assumptions one needs to impose on the input types.
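For concreteness, the two properties can be stated as follows; this is a hypothetical Lean rendering with names of my choosing, not the paper's definitions:

```lean
-- One-to-one: R relates each a to at most one b and each b to at most
-- one a (an injective partial function in both directions).
def OneToOne {A B : Type} (R : A → B → Prop) : Prop :=
  (∀ a b b', R a b → R a b' → b = b') ∧
  (∀ a a' b, R a b → R a' b → a = a')

-- Total: every element on each side is related to something on the other.
def Total {A B : Type} (R : A → B → Prop) : Prop :=
  (∀ a, ∃ b, R a b) ∧ (∀ b, ∃ a, R a b)

-- Equality on Nat is both one-to-one and total (the identity bijection):
example : OneToOne (Eq : Nat → Nat → Prop) ∧ Total (Eq : Nat → Nat → Prop) :=
  ⟨⟨fun _ _ _ h h' => h.symm.trans h',
    fun _ _ _ h h' => h.trans h'.symm⟩,
   ⟨fun a => ⟨a, rfl⟩, fun b => ⟨b, rfl⟩⟩⟩
```

A relation that is both one-to-one and total is exactly the graph of a bijection, which is why these assumptions are strong enough to recover logical equivalence.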

                  ===== Comments to authors =====

For this paper, I solicited a sub-review, included below (except for its summary, which is included above).
I also read the paper myself; my own assessment is included above.

Strong points:

Weak points:

Summary assessment:

This paper presents a solution to an important problem, that is already practically useful.
However, I think there is still some work left on (1) finding ways to better motivate and explain the design, (2) developing more insights into the big picture of what you are doing and (3) improving the structure of the paper and the comparison to related work.
As such, I think this paper should not be accepted although I believe that if the authors are able to improve the above-listed weaknesses, a future version of the paper will stand a much better chance.

Major remarks:

General remarks:

Typos and general nitpicking:

Begin sub-review

1 Global impression

In principle I am enthusiastic about this direction of research. The authors make the correct and important observation that the current interpretation of propositions is useless. (Also for multi-element types, we will sometimes want to find an isomorphism, rather than a mere relation, but it is legitimate not to consider that in a first approach to the problem.) This has, to my knowledge, not been adequately addressed so far in any publication. Homotopy type theory (HoTT) does automatically produce equivalences, but it also requires equivalences as input, which is sometimes more than necessary. The authors show that additional assumptions are needed and choose, I believe, the right direction when they investigate the structure of P : Prop to find out precisely what assumptions are needed. I find it surprising that considering the one-to-one and totality properties yields good results here; this approach is to my knowledge new not only in published work but also in the work in progress that I am aware of.

The reasons why I nonetheless give a C rating are:

In short, I think this is a relatively weak presentation of a valuable piece of work.

2 Lack of structure

While reading the paper, I repeatedly found myself wondering in puzzlement: what are we doing now and why? So I made an outline of the structure and found out that some section titles do not match the content of the section. I include the outline, with structure-related and other remarks, below.

3 Lack of readability

This is related to point 1, of course. My primary issue was that it is next to impossible to quickly reread something that you recall having read earlier.

Some key recommendations:

4 Aim for high-level understanding

At the level of proofs: your reader is not Coq

In general, a proof serves two purposes: to establish that a claim is true, and to allow the reader to understand why. I think an important virtue of proof assistants is that they make it possible to separate those concerns: you can convince Coq that your results are true, and dedicate your paper to giving the reader a feel for why they are true. This may mean that instead of giving a fully general and technical proof, you give an illuminating example that makes the general theorem credible.

It especially means that you should not talk to your reader the way you talk to Coq. A good example of where you do this is on p16, ln 9-10. Here, you explicitly 'introduce' a variable and explicitly invoke the substitution rule. In Coq, in order to prove a universally quantified theorem, you use the intro tactic. But when explaining to a human that something holds for all y, you say 'pick any y', not 'we introduce y'. Similarly, you need not tell a human reader when to substitute one thing for another; instead you can simply explain why the indices are equal and perhaps remind us of the equality when the substitution needs to happen.
The entire proof about IWP (which, by the way, could use a corresponding theorem), as well as many proofs in the paper, reads like a direct English-language transcript of a Coq proof. I think that instead of giving a solid but extremely technical proof, you should seize the opportunity to help your reader understand why you need all the assumptions, and only the assumptions, that you make.

The HoTT book may serve as a guide here: it does an excellent job of doing type theory without treating its reader like a proof assistant.

At the level of the paper

As mentioned before, I think you need to gather somewhere the specifics of the interaction between proposition structure and assumptions. But I suggest you moreover discuss these gathered results: what do they mean, how do they arise, what are the patterns, and how can we understand them? On rare occasions, truth is just ugly and there is nothing to say, but I am not convinced that this is the case here. In fact I think interesting further structure can be identified; see also my comments on §4.

One thing that I think may be illuminating, is to consider Church encoding. For types, this is problematic as Set is predicative, but indexed inductive propositions may (if I'm not mistaken) be Church-encoded using only Pi-types and Prop. E.g. we can encode le : nat -> nat -> Prop as

le m n = (P : nat -> nat -> Prop) -> ((k : nat) -> P k k) -> ((k l : nat) -> (P k l) -> (P k (S l))) -> P m n 

This may allow you to explain the assumptions made for indexed inductive propositions, in terms of the ones made for Pi-types. For inductive Sets, the same would be possible if Set were impredicative, see e.g. Robert Atkey, Neil Ghani, Patricia Johann, POPL 2014, 'A relationally parametric model of dependent type theory'.
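The suggested Church encoding does typecheck; here is a Lean rendering (my own, relying on the impredicativity of Prop as noted above), including the two definable "constructors":

```lean
-- Church-encoded ≤ on Nat, using only Pi-types and Prop, mirroring
-- the inductive definition's two constructors (reflexivity and step).
def leC (m n : Nat) : Prop :=
  ∀ P : Nat → Nat → Prop,
    (∀ k, P k k) →
    (∀ k l, P k l → P k (l + 1)) →
    P m n

-- The "reflexivity" constructor.
def leC_refl (m : Nat) : leC m m :=
  fun _ hrefl _ => hrefl m

-- The "step" constructor.
def leC_step {m n : Nat} (h : leC m n) : leC m (n + 1) :=
  fun P hrefl hstep => hstep m n (h P hrefl hstep)

-- e.g. a proof that 1 ≤ 3:
example : leC 1 3 := leC_step (leC_step (leC_refl 1))
```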

5 Section-by-section and line-by-line remarks

1 Introduction

1.1 Summary of contributions


2 AnyRel translation

2.1 Core calculus

2.2 Inductive types and propositions

2.2.1 The previous translations of inductives: comparison
My summary of the section:
- you review Keller&Lasson's and Bernardy et al.'s translations
- you show that deductive-style is more suitable for inductive types (and necessary with Sêt = Prop)
- you show that inductive style is necessary for inductive props

2.3 Deductive-style Translation of Indexed-Inductive Types

My summary of the section:
you fix the subtle flaw and add an additional equality constraint

2.4 Pattern Matching (deductive-style)

2.5 Fixpoints (recursive functions)

2.6 Summary

3 Uniformity of propositions

3.1 Universal quantification

My summary: how to assert IffProps and CompleteRel for universally quantified propositions

3.2 Inductively defined propositions

3.3 Properties of Relations of Types

My summary: We will require props to be total & one-to-one rather than equivalent & complete
In fact, you explicitly say so: "In this section, we consider the subtyping rule"

3.4 Dependent Function Types

My summary: When is a Pi-type one-to-one or total?

3.5 Inductive types

My summary: When is an indexed inductive type one-to-one or total?
I'll admit that I didn't work through the details here.

4 IsoRel translation

My summary: This section explains the implementation approach you took.

4.1 [[]]iso

4.2 Eliminating unused hypotheses

See my remarks above

4.3 Limitations of [[]]iso

5 Applications

7 Future work and conclusion

This can be omitted in my opinion.

ICFP 2017 Review #13C
Paper #13: Revisiting Parametricity: Inductives and Uniformity of Propositions

                  Overall merit: B. OK paper, but I will not champion
                     Confidence: X. I am an expert in this area

                     ===== Paper summary =====

Free theorems for functional programs are all very well, but one might
expect that they would really be useful when generating proofs in a
proof assistant. This paper continues a line of work initiated by
Bernardy et al. in the syntactic generation of proofs of Reynolds'
abstraction theorem for dependently typed proofs and programs. The
core of this paper is a discussion of a Coq tactic that generates
proofs given terms. The major contributions of this paper are:

  1. Extension of previous translations to handle a larger range of
    Coq's indexed inductive types. A detailed comparison between two
    methods of relational translation is made, with discussion of what
    works and what does not. A special, but representative case (W types)
    is formalised in Coq to show that the translation should always work
    (the actual tactic may fail during proof generation).

  2. A new translation (the "IsoRel" translation) that has the desirable
    property that two propositions (i.e. objects of the Coq sort 'Prop')
    are related only when they are equivalent propositions. This is in
    contrast to the original translation (Bernardy et al, Lasson and
    Keller), which stated that all propositions were related. This is a
    much weaker property, and makes the translation effectively useless
    when translating propositions. Unfortunately, the cost of this extra
    strength is that relations between normal types are restricted to
    isomorphisms. A slightly ad-hoc claim is made that some of these
    assumptions of isomorphism can often be dropped in practice.

An application of the generation of abstraction proofs is given,
demonstrating its use in a compiler correctness project. It is claimed
that this use (would have) saved many hours of work.

                  ===== Comments to authors =====

Things I liked:

  1. The new translation properly handles (indexed) inductive types, and
    there is a detailed discussion of the issues involved.

  2. The discussion in the paper is based on a working tool for
    constructing Coq proofs, which has been shown to work in practice.

  3. The ability to transfer proofs between situations is really
    powerful, and gives, concretely, some of the promised abilities of
    homotopy type theory. This is illustrated by an example in Section 5.

  4. The paper is generally quite clear, though I found the language a
    little too roundabout in places.

Things that could be improved:

  1. The custom equality types in section 2.3 (on page 10) seem to be
    solving the same problem as McBride's heterogeneous (or "John Major")
    equality, JMEq.


JMEq allows the statement of equalities between terms whose types are
not yet known to be definitionally equal. It is specifically designed
for this kind of case of telescoping dependent indices in inductive
types. Would using JMEq simplify your translation? JMEq presupposes
proof irrelevance, but you are already assuming this.

  2. The iso translation is justified essentially by: we tried to prove
    it without these assumptions and got stuck, so we added these
    assumptions (isomorphisms) and proved that they are preserved by all
    the type constructors. Is there a more high-level reason why
    isomorphisms are the right thing here?

Some possibly related work:

Edmund Robinson: Parametricity as
Isomorphism. Theor. Comput. Sci. 136(1): 163-181 (1994)

  3. The introduction keeps referring to "undecidable relations" when
    talking about relations that are expressed in Coq as functions into
    Prop. I understand that the point is that the 'AnyRel' translation
    does the right thing for relations with codomain 'bool', and that
    these are decidable. But the fact that the relations in Prop might be
    undecidable isn't what makes the translation difficult, right?

  4. The "Eliminating Unused Hypotheses" process described in Section
    4.2 seems very ad-hoc.

  5. The authors appear to have made everything fit the page limit by
    removing all vertical space around displayed pieces of Coq code. I
    found this quite difficult to read, despite the syntax highlighting.

  6. In general, the writing is a bit wordy in places and could really
    do with some editing. For example, page 2, lines 39 to 41 has a
    sentence that starts with "As a result, .." and ends with "... true
    and false" and contains two colons and several clauses. It could be
    simplified by (a) deleting the word "much"; (b) deleting "which is not
    a universe, and"; (c) deleting "they are mere ...".

I found that the prolix writing style made it quite difficult to
determine what was going on in some places. The constant back-and-forth
references to "explain below" (where?) and so on were also distracting.

Also, I think the sectioning could be improved. The discussion of the
translation of inductive types (Sections 2.2-2.6) should be split out
from the basic 'AnyRel' translation in 2.1 in order to cleanly
separate what is new from what is not.

PAGE LIMIT EXCEEDED: The submitted version of this paper exceeds the limit of 24 pages (not including references) that was specified in the call for papers. However, the main body of the paper does fit within the specified limit. For this reason, reviewers are instructed to treat the appendix to this paper as if it had been submitted as supplementary material. In particular, this means that reviewers may choose not to look at the material in the appendix.