In this section, we mention in passing a number of issues that arise when thinking about polytypic programming.
Also, can one grab hold of the ``original/default'' polytypic method? The recursive nature of polytypic functions makes this dubious.
There is also interplay with module abstraction. For example, one can write a polytypic function usesInt<t> :: t -> Bool that indicates whether or not the type t uses an Int in its internal representation. This would seem to violate some notion of abstraction, although I'm not quite sure which.
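To see that such a function is not far-fetched, here is a rough approximation of it in present-day Haskell using GHC.Generics; it only looks one constructor level deep, and the names GUsesInt and usesInt are invented for this sketch:

  {-# LANGUAGE DeriveGeneric, FlexibleInstances, KindSignatures,
               ScopedTypeVariables, TypeOperators #-}
  import GHC.Generics
  import Data.Proxy

  -- Walk the generic representation of a type, looking for Int fields.
  class GUsesInt (f :: * -> *) where
    gUsesInt :: Proxy f -> Bool

  instance GUsesInt U1 where gUsesInt _ = False
  instance GUsesInt V1 where gUsesInt _ = False
  instance (GUsesInt f, GUsesInt g) => GUsesInt (f :+: g) where
    gUsesInt _ = gUsesInt (Proxy :: Proxy f) || gUsesInt (Proxy :: Proxy g)
  instance (GUsesInt f, GUsesInt g) => GUsesInt (f :*: g) where
    gUsesInt _ = gUsesInt (Proxy :: Proxy f) || gUsesInt (Proxy :: Proxy g)
  instance GUsesInt f => GUsesInt (M1 i c f) where
    gUsesInt _ = gUsesInt (Proxy :: Proxy f)
  instance {-# OVERLAPPING #-} GUsesInt (K1 i Int) where gUsesInt _ = True
  instance GUsesInt (K1 i c) where gUsesInt _ = False   -- any non-Int field

  -- Only inspects the top level of the representation; a full version
  -- would recurse into the field types as well.
  usesInt :: forall t. (Generic t, GUsesInt (Rep t)) => Proxy t -> Bool
  usesInt _ = gUsesInt (Proxy :: Proxy (Rep t))

  data Point = Point Int Int deriving Generic  -- usesInt (Proxy :: Proxy Point) == True
  data Name  = Name String   deriving Generic  -- usesInt (Proxy :: Proxy Name)  == False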
Another canonical polytypic function is map, or more commonly, fmap:
fmap<t> :: (a -> b) -> t a -> t b
An easy extension is fmap2:
fmap2<t> :: ((a, b) -> c) -> (t a, t b) -> t c
which presumably has the semantics of ``zipping'' together the two
structures (discarding portions where the two don't have
the same structure) and mapping the function across the resulting
structure.
But, we can continue extending, all the way to:
fmap{n}<t> :: ((a1, ..., an) -> b) -> (t a1, ..., t an) -> t b
Now, all such functions can be easily written in terms of repeated zips and maps, but the uniformity suggests a more general and direct approach. In particular, we would like to simply write fmap{4}<L> to get a function of the appropriate type. (Furthermore, a direct approach might avoid the inefficiency of repeated zips and maps.) A connection with multi-stage languages and compilation suggests itself.
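To make the zip-and-map encoding concrete, here it is for the list constructor in ordinary Haskell (no polytypism involved; fmap2List and fmap3List are names invented for this sketch):

  -- fmap2 at lists: zip the two structures (zip already discards the
  -- mismatched tails) and then map the function over the result.
  fmap2List :: ((a, b) -> c) -> ([a], [b]) -> [c]
  fmap2List f (xs, ys) = map f (zip xs ys)

  -- fmap3 at lists: one more zip, plus a reassociating map.
  fmap3List :: ((a, b, c) -> d) -> ([a], [b], [c]) -> [d]
  fmap3List f (xs, ys, zs) = map (\(x, (y, z)) -> f (x, y, z)) (zip xs (zip ys zs))

Each additional argument costs another intermediate structure, which is the inefficiency alluded to above.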
Most polytypic functions we've seen don't have a reasonable
interpretation for the -> constructor. We would like
size<Int -> Int> (const 1) to be a static type error
(rather than the runtime error that its evaluation produces in
Generic Haskell).
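For contrast, a plain dictionary-style encoding in Haskell gets exactly this static behaviour, simply because no instance is provided at function types. This is only a sketch under that encoding (the Size class here is invented for illustration and is not Generic Haskell's size):

  class Size t where
    size :: t -> Int

  instance Size Int  where size _ = 1
  instance Size Char where size _ = 1
  instance Size a => Size [a] where size = sum . map size
  -- No instance at function types, so
  --   size (const 1 :: Int -> Int)
  -- is rejected by the type checker rather than failing at run time.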
How does polytypic programming interact with other extensions? In
particular, it should ``play nice'' with first-class polymorphism,
because its implementation requires higher-rank polymorphism. Can we
treat forall as just another primitive constructor?
Extensible datatypes would be useful in this setting. Polytypic versions of pattern matching and unification want term datatypes with a distinguished Var variant. It would be very nice for a user to write down a variable-free term datatype and then use extension and polytypism to obtain pattern matching and unification.
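To make the shape of the request concrete, here is what one would hope the system could generate, written out by hand in plain Haskell for one particular signature functor (TermF, Open, and match are invented for this sketch, and the matcher is deliberately naive):

  data TermF t = Lit Int | Add t t            -- the user's variable-free signature
  newtype Term = Term (TermF Term)            -- closed terms
  data Open v  = Var v | Op (TermF (Open v))  -- the same terms, extended with variables

  -- One-way matching: bind every Var it meets, with no consistency check
  -- on repeated variables.  The hope is that extension plus polytypism
  -- would produce this (and unification) for any TermF.
  match :: Open String -> Term -> Maybe [(String, Term)]
  match (Var v)        t                       = Just [(v, t)]
  match (Op (Lit i))   (Term (Lit j)) | i == j = Just []
  match (Op (Add p q)) (Term (Add s u))        = (++) <$> match p s <*> match q u
  match _              _                       = Nothing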
Finally, a little more ``intensionality''(?) wouldn't seem to be that
difficult. For example, in Standard ML, it would be nice to write
foreach{Var}<T> : (Var -> unit) -> T -> unit
and
foreach{Exp}<T> : (Exp -> unit) -> T -> unit
for functions that apply the given action to every Var (respectively, every Exp) occurring in a value of type T.
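For comparison, Haskell's ``scrap your boilerplate'' library (Data.Generics) already provides something close to this by keying the traversal on a runtime type representation rather than an explicit index; a rough sketch (the name foreach is just for this illustration):

  import Data.Typeable (Typeable)
  import Data.Generics (Data, listify)

  -- Apply an action to every value of type a occurring anywhere inside t;
  -- instantiating a selects whether we visit the Vars, the Exps, and so on.
  foreach :: (Typeable a, Data t) => (a -> IO ()) -> t -> IO ()
  foreach f t = mapM_ f (listify (const True) t)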