hans@cs.rug.nl

Josje Lodder, Faculty of Technology, Open university, NL

The Open university of the Netherlands (from now on abbreviated as Ou)
is a nation-wide institution that caters for several tens of thousands
of students. (It is roughly comparable to the British Open
University.) Several hundred of those are engaged in a computer
science programme, five years on a full-time basis, leading to something
comparable to an Anglo-Saxon Master's degree in computer science.
(Actually, several thousand are involved in minor computer science
programmes.) Two logic courses form part of this programme: *Logic* and
*Logic and computer science*. (Both are comparable in size to a
`semester course'.) The courses have been developed jointly in a
four-year project (1989 - 1993). The course *Logic* has been in use
since 1991. Altogether several hundred students have followed one
or both courses.

The course product consists of:

- the textbook *Logic for computer science* (*Logica voor informatici*) (J.F.A.K. van Benthem, H.P. van Ditmarsch, J. Ketting, W.P.M. Meyer-Viol) (336 pages B5)
- two workbooks *Logic* (*Logica*) (H.P. van Ditmarsch, J.S. Lodder, and others) (388 pages A4)
- two workbooks *Logic and computer science* (*Logica en informatica*) (H.P. van Ditmarsch, J.S. Lodder, and others) (596 pages A4)
- service items (formula and theory card, errata lists, sample exams)

All material is in Dutch. Both courses use the textbook *Logic for computer science*. Each course treats about half of that textbook, using two workbooks.

The course *Logic* contains propositional and predicate logic. This
includes systematic validity checking with semantic tableaux, proofs
by natural deduction, and some treatment of Hilbert-style axiomatic
proofs. Much attention is given to formal semantics. Completeness
proofs are treated informally. The course *Logic and computer science*
contains: many-valued logic, second-order logic, equational logic (not
extensively), lambda calculus, logic programming (including Herbrand
semantics and Skolemization), modal logic (the major part of that
course, including epistemic logic, dynamic logic, temporal logic),
temporal interval networks, and non-monotonic reasoning. The second
course contains several case studies.

In the rest of the paper we discuss, in this order, some strong points in the didactics used in both courses, some weak points, and some further developments such as experiences with other student populations.

The overall treatment of the subject matter is rather mathematical, though less rigorous than is usual in logic textbooks. The textbook is reasonably concise and contains few exercises. The workbooks are very elaborate, their style explanatory rather than formal. Aspects of the didactic presentation that are worth mentioning:

- standard Open university presentation and layout
- three different levels of explanation corresponding to three different study paths
- extensive use of graphical presentation
- explicit description of strategies for finding proofs
- treatment of non-monotonic reasoning at an undergraduate level
- case-studies

We discuss these points below.

The educational department of the Ou has developed a didactic model to facilitate self-study. Almost every Ou course is based on the components proposed by this model. As a result all Ou courses look more or less alike, in layout as well as in didactic presentation. One aspect of this Ou style is the three- or two-column page layout, with the first column used for structural information, in roman (such as `Exercise' and `Example'), and the second (or also the first) column used for other information, in italics (e.g. keywords such as Completeness theorem). An example is given in the next section. Another aspect of this standard format is the basic unit of 4 hours of study for structuring course material (not always strictly adhered to) and a standard structure for such a four-hour unit, with components (in this order):

- short introduction
- list of knowledge and abilities to be acquired
- unit-specific recommendations and information
- the unit's `kernel' (text and exercises)
- test to check mastery of the unit's content

A course starts with a unit with general and course-specific instructions for study. A course's last unit typically contains a sample exam. Quite common also are rehearsal units at the end of each `block' of a course (one course typically consisting of a few blocks of several units each).

Since all Ou courses are presented according to this standard format, students easily recognize the structure of the course material. Students are supported by these didactic components, and even those who prefer to structure their study themselves do not find the components disturbing (only 9% would prefer study material without these components). Notable are the different ways in which the didactic components are used by students. When asked, for example, for the most important function of keywords, 23% of the students mention `making subject matter accessible', 25% mention `remembering subject matter' and 15% mention `structuring of the study material'.

The material is presented in roughly three levels of increasing complexity called:

- extra (below average)
- standard (comprehensive)
- star (difficult)

`Extra' comes from `extra support'. The motivation for this three-level treatment is that, to the above-average and possibly even to the average student, distance education material tends to be lengthy and therefore tedious. This is unavoidable, since in the absence of teachers the explanation given should always be up to the level of below-average students. This dilemma has been solved by the three-level presentation. The standard level is comprehensive. Labels are attached to standard exercises that link to more extensive explanations and more elementary exercises: the extra support. Students who do not need it are not bothered by it. Much care has been given to this extra support. Several (not necessarily disjoint) functions of extra support are: teaching how to read the textbook, (more extensive) explanation of concepts and notation, training in simple exercises, prevention of common mistakes, and development of intuition and understanding. The `star' exercises are more difficult than standard ones. They go beyond the required level of comprehension.

The effects of this presentation have been tested informally by interviewing some 50 examination candidates, with the result that it is indeed a useful feature. Students do indeed use the three levels according to their needs, and they appreciate the extra support. A comparable three-level presentation is used in another Ou course: *Coding and Cryptology*.

For an example see figure 1 below [4, (1) 28-30]. Some comments:

A workbook chapter starts with a reading instruction for the textbook (in this case: `Read textbook paragraph 12.8.', in Dutch: `Lees paragraaf 12.8 van het tekstboek.') Following are some elaborations on that part of the book. As mentioned before, margin text in italics concerns the topic of the section so labelled, such as that it is about many-valued logic (`meerwaardige logica'), whereas margin text in roman concerns the role of that section, such as that it is a section with standard exercises (`Standaardopgaven'). In standard exercise 1.1 one has to prove that the law of excluded middle is not a tautology in three-valued logic. This exercise is also labelled with an `A' and with the numbers 1.4 and 1.5.

`A' means that a hint (in Dutch: *aanwijzing*) can be found in another part of the workbooks, for the student who cannot answer the question straight away but does not yet want to look up the answer. Hints are used on all three levels, independently of them. The hint given here is that, in order to prove that a formula is (or is not) a tautology, one has to compute truth tables for all different valuations.

The two numbers 1.4 and 1.5 refer to the extra support below, i.e. the exercises 1.4 and 1.5, where one is asked to compute some specific truth tables. One is advised to give attention to extra support (in this case: doing exercises 1.4 and 1.5) if and only if one stumbles on standard exercises (in this case: exercise 1.1). If one doesn't stumble, the extra support can be skipped.

Exercise 1.6 is a `star' exercise: given truth tables for three-valued logic, now formulate something similar for quantifiers. The idea is to generalize on minimizing and maximizing truth values, analogous to the way this is done for conjunction and disjunction. The subject of many-valued predicate logic is beyond the required level of comprehension.
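
To make the computation behind exercise 1.1 concrete, here is a small Python sketch using Kleene's strong three-valued connectives (the course's own truth tables may differ in detail): negation is 1 - v, disjunction is the maximum. The law of excluded middle gets value 0.5 for p = 0.5, so it is not a tautology.

```python
# Kleene's strong three-valued connectives: 0 = false, 0.5 = undetermined,
# 1 = true. (A sketch in our own notation, not the course material's.)
VALUES = (0.0, 0.5, 1.0)

def neg(v):
    return 1.0 - v          # negation flips around 0.5

def disj(v, w):
    return max(v, w)        # disjunction takes the maximum

def is_tautology(formula):
    """A one-variable formula is a tautology iff it has value 1
    under every valuation."""
    return all(formula(v) == 1.0 for v in VALUES)

# The law of excluded middle: p or not p.
excluded_middle = lambda p: disj(p, neg(p))

print(is_tautology(excluded_middle))   # False
print(excluded_middle(0.5))            # 0.5: the counterexample valuation
```

The `star' exercise 1.6 then amounts to extending this min/max idea from the binary connectives to quantifiers over a domain.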

Simple exercises in the extra support often consist of questions about form, elementary technical training, and details waived in mathematical proofs. As another example, here is an exercise about a detail in the proof of the adequacy of the semantic tableau method:

exercise 11.1.2 The following is part of the proof of proposition 11.2:

there is an i in D such that M, b[x|->i] |= not phi

=> not M, b |= forall x phi (predicate logic semantics)

Give a detailed proof of this. [3, (1) 130]
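
The detail asked for is a routine unfolding of the truth definitions; in our own notation (not necessarily the workbook's), a worked-out answer could run as follows:

```latex
\begin{array}{ll}
& \text{there is an } i \in D \text{ such that } M, b[x \mapsto i] \models \neg\varphi \\
\Rightarrow & M, b[x \mapsto i] \not\models \varphi
  \quad \text{(truth definition of } \neg\text{)} \\
\Rightarrow & \text{not for all } d \in D \colon\ M, b[x \mapsto d] \models \varphi \\
\Rightarrow & \text{not } M, b \models \forall x\, \varphi
  \quad \text{(truth definition of } \forall\text{)}
\end{array}
```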

This kind of exercise, `training in how to understand a text' is uncommon in a non-distance-education environment, where students have (at least in principle) direct feedback on their personal style of `understanding' (and writing) mathematics.

A natural deduction proof can be represented in different ways. It can
be written down as a *sequence* of steps, each step being either an
assumption or being derived from one or more earlier steps (earlier
assumptions possibly being withdrawn). This is a linear or
one-dimensional presentation. Such a proof can also be represented as
a *tree*, where in rule applications such as (and-I) and (or-E) two or more subtrees
are joined. This is a two-dimensional representation. (A full example
is given in the next section.) Similar options between one- and
two-dimensional representation exist for the semantic tableau method,
for resolution proofs, and for ways of proof in modal logic. Actually
it's hard to imagine *any* kind of logic `construct' (either a
structured proof or an inductively defined formula) that cannot be
visualized in one way or another.

This extensive use of `pictures' (actually: trees and graphs) seems to be definitely enjoyed by the students: instead of dry proofs, it becomes more like playing chess (this also relates to the next item: strategies).

Given a purely formal system of proof, proofs have to be found according to a strategy. Theorem provers have an explicit strategy, humans mostly an implicit one. Relevant for these courses are strategies for finding natural deduction proofs, axiomatic proofs, and semantic tableaux (and less so for resolution proofs, where the strategy is mostly fixed). The importance of good strategies for building proofs seems underrated in logic textbooks. A notable exception is chapter 3 in [6].

Also in courseware (e.g. Hyperproof and Tarski's World) one cannot follow (nor be guided in) backward strategies. One has to build up a proof starting from the assumptions and working towards the conclusion. Possibly this is for technical reasons.

One reason for skipping strategy seems to be that a good logician finds some proof anyway within seconds; another reason is that logic teachers give away useful though implicit strategies while writing out proofs on the blackboard. As Ou students don't have lecturers (unlike the British OU, the Ou provides no television lessons), we had to make strategy explicit. We give some (by no means complete) strategies for natural deduction and for semantic tableaux. In our experience, for a topic like natural deduction this makes all the difference between seeing it as a bag of tricks or as a systematic procedure. A bag of tricks is never fun, whereas a game (i.e. a systematic procedure) is. Satisfactory examination results seem to prove us right (an informal observation, no reference).

As an example, the gradual construction of the natural deduction proof of (r=>p) or (r=>q) from assumption (p or q) is shown in figure 2 [3, (2) 58 -- 59].

Again we refrain from translating the corresponding text. Do notice that dotted lines stand for as yet unknown subtrees. The last of the six different trees in the example is the entire proof. This tree is constructed in the other pictures working backward from the conclusion to the assumption. The most striking feature might be this bottom-up instead of top-down construction. A combined bottom-up and top-down strategy is advocated for all natural deduction proofs. Secondly, it may be surprising how elaborately the answer has been worked out. Every separate step is given and motivated. We emphasize that there often is only one obvious rule to choose; in this way we stress the natural character of natural deduction. Of course, we also pay attention to situations where this choice is less obvious, for example when reductio ad absurdum is needed.
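
Though the workbook works with proof trees, a quick semantic sanity check of the same sequent is easy to script: by soundness, the natural deduction proof guarantees the entailment that this brute-force Python check (ours, not part of the course) confirms.

```python
from itertools import product

def implies(a, b):
    # material implication on booleans
    return (not a) or b

def entails():
    """Check that every valuation satisfying (p or q)
    also satisfies (r=>p) or (r=>q)."""
    for p, q, r in product([False, True], repeat=3):
        if (p or q) and not (implies(r, p) or implies(r, q)):
            return False
    return True

print(entails())   # True
```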

At the time of completion of the second course (1993), slow and easy introductions to non-monotonic reasoning, nevertheless resulting in some mastery of formal techniques, were, as far as we know, nonexistent. We have chosen to treat three different subjects: minimal models, CWA (closed world assumption), and circumscription. Circumscription isn't treated in the textbook. The rough setup for the workbook extension has been proposed by Catholijn Jonker and was further written out by Josje Lodder.

Minimal models are introduced as a generalization of Herbrand models, already known from the part on logic programming. In addition to the theory in the textbook, the workbook contains an exhaustive example (Tweety, indeed). The example starts by explaining why one needs the unique name assumption. Then several predicate-minimal models for this example are discussed (all actually drawn). We stimulate the student to compare the different technical results from minimizing on different predicates with his/her intuition on the preferred model for this theory, the conclusion being that in this particular case the preferred model can only be obtained by minimizing according to a preference order.
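
The minimization at work in the Tweety example can be mimicked in a few lines of Python (our own encoding, not the workbook's): models are sets of true ground atoms over the single constant tweety, and a model is ab-minimal if no other model has a strictly smaller extension of ab.

```python
from itertools import combinations

# Herbrand base of the toy Tweety theory: the fact bird(tweety), and
# the default rule bird(x) & not ab(x) -> flies(x).
ATOMS = ['bird(tweety)', 'ab(tweety)', 'flies(tweety)']

def is_model(m):
    if 'bird(tweety)' not in m:                     # the fact must hold
        return False
    if 'ab(tweety)' not in m and 'flies(tweety)' not in m:
        return False                                # default rule violated
    return True

models = [frozenset(c) for n in range(len(ATOMS) + 1)
          for c in combinations(ATOMS, n) if is_model(set(c))]

def minimal_in(pred, models):
    """Models whose extension of pred is not strictly included
    in that of another model."""
    ext = lambda m: {a for a in m if a.startswith(pred)}
    return [m for m in models
            if not any(ext(n) < ext(m) for n in models)]

print([sorted(m) for m in minimal_in('ab', models)])
# The only ab-minimal model makes Tweety fly.
```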

The part on CWA starts with naive CWA as a kind of negation by failure, known from Prolog. Then generalized CWA is introduced. Extensive attention is paid to the somewhat complex technique of deciding which formulas are negation-free. As a third variant, careful CWA is introduced.
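
Naive CWA is easy to demonstrate on a definite program (a sketch of ours, not the workbook's): derive all atoms by forward chaining, then assume the negation of every atom not derived.

```python
# Rules are (body, head) pairs over ground atoms; facts have empty bodies.
program = [
    (set(), 'bird(tweety)'),
    ({'bird(tweety)'}, 'flies(tweety)'),
]
universe = {'bird(tweety)', 'flies(tweety)', 'penguin(tweety)'}

def derivable(program):
    """Least fixpoint of forward chaining."""
    known, changed = set(), True
    while changed:
        changed = False
        for body, head in program:
            if body <= known and head not in known:
                known.add(head)
                changed = True
    return known

pos = derivable(program)
# Naive CWA: everything not derivable is assumed false.
cwa = pos | {'not ' + a for a in universe - pos}
print(sorted(cwa))   # bird and flies hold; penguin is negated
```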

Circumscription is introduced as a way of formalizing the theory of minimal models in second order logic. The line of explanation is as follows:

One can't directly translate the idea of minimal models into second
order logic, since one can't express that two models differ in the
extension of one predicate P. P can, however, be compared with other
predicates that have the same `properties'; and that a predicate X is
contained in P *can* be expressed. In this way we spend more than one A4
page on introducing the circumscription formula, very different from
standard texts on the subject. The reason for this is that Ou students
(and not only those) lack the maturity to read second order formulas,
a prerequisite for understanding and reading most texts on
circumscription. As a strategy for deducing results from the
circumscription formula we recommend the student to use minimal
models.
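
For reference, the standard second-order circumscription formula (notation ours; the workbook builds it up far more gradually) expresses exactly the comparison just described: any predicate X satisfying the same theory and contained in P must already coincide with P.

```latex
\mathrm{Circ}(T; P) \;=\;
T(P) \;\wedge\;
\forall X \Bigl[ \bigl( T(X) \wedge \forall x\,(X(x) \rightarrow P(x)) \bigr)
\rightarrow \forall x\,(P(x) \rightarrow X(x)) \Bigr]
```

Here T(X) is the theory T with every occurrence of P replaced by the predicate variable X.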

In spite of the effort put into the presentation, students still find this subject difficult to understand. But they also find the effort rewarding: they feel they `really learn' something (as repeatedly communicated by students to both authors).

As mentioned before, the second course, *Logic and computer
science*, contains five case studies. One of the functions of
these case studies is to teach the student how to apply logic to
computer science problems. A second goal is to motivate the students
to study logic. The case studies are about:

- treatment of many-valued logic in a KBS shell reference manual
- relation Pascal -- Hoare calculus
- relation Prolog -- logic programming
- production planning with temporal interval networks
- protocol verification with epistemic logic

We give some details on the case study on three-valued logic. It is based on an object-oriented knowledge-based system shell called KAN. KAN is used in an introductory Ou course on KBS and AI, which most students have already followed when starting this course. Although the shell is `real', the actual application, i.e. the expert system, is not a realistic application but a toy problem: it is about the different consequences of Snow White eating an apple. This is so we can waive domain theory, in order to reduce complexity. The central question `What kind of many-valued logic is used in KAN?' is split up into several secondary questions, such as `How to model KAN expressions in predicate logic?' and `Does the reference manual render the many-valued system correctly?'. The answer to the last question is `no', which is why it is a perfect case study. The complication over which the reference manual apparently stumbled is that in KAN (as usual in shells) one alternately encounters `unknown' as a value, as in questions to the user, and as a syntactic construct used in conditions of expert system rules. This case study is therefore also quite useful to make students realize the difference between syntax and semantics, a central topic in logic instruction.
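
The two roles of `unknown' can be made vivid with a schematic fragment (our own illustration, not KAN's actual syntax): as a truth *value* it is one of three possible answers to a query, while as a *syntactic construct* in a rule condition it is a test that itself evaluates to plain true or false.

```python
UNKNOWN = 'unknown'

def query(facts, atom):
    """Three-valued answer to a query: True, False, or `unknown'."""
    return facts.get(atom, UNKNOWN)

def condition_is_unknown(facts, atom):
    """Two-valued test, as used in conditions of expert system rules."""
    return query(facts, atom) == UNKNOWN

facts = {'ate_apple(snowwhite)': True}
print(query(facts, 'poisoned(snowwhite)'))                 # unknown (a value)
print(condition_is_unknown(facts, 'poisoned(snowwhite)'))  # True (a test)
```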

It is not easy to find good cases. Some requirements to be met:

- not too complex
- realistic
- several solutions

The case study on many-valued logic meets the first and third requirements, but not the second. Unfortunately some students therefore regard this case as childish. In general, though, most students highly appreciate case studies.

Having discussed didactic aspects that are worth mentioning, we now proceed with some critical notes. Several aspects might qualify as shortcomings:

- no courseware
- mathematical style
- no training in predicate calculus
- not enough logical modelling

We discuss these points below.

A major shortcoming might be called the absence of logic courseware
(such as Tarski's World), especially for distance education material.
Still, as the examination results for *Logic* (with a considerable
number of candidates) are rather good, and students are also generally
satisfied with the quality and amount of instruction, this does not
seem to be cause for worry. Available courseware was overlooked
at the time of development; a major problem anyway would have been the
rather low requirements for student hardware: in 1989, an IBM-compatible
with 720K. Rather severe hardware limitations are realistic, since
distance education students study at home and use their own computers,
not their university's. By now, however, the standard has been raised to
4 MB and Windows.

What is mathematical style? Or rather: when is using mathematical style a presentational hazard? Two examples.

In the textbook accompanying both courses the logical equivalence of (forall x. not phi) to not (exists x . phi) is introduced (and at the same time proven) in the following way [1, 124]:

A logical correspondence exists between the quantifiers forall and exists. We can show with predicate logic semantics that the formula forall x . not phi always has the same value as the formula not (exists x . phi) (given a particular model and an assignment):

V(forall x . not phi) = 1

<=> for all d in D holds that ... (omitted)

<=> for all d in D holds that ... (omitted)

<=> there is no d in D such that ... (omitted)

<=> V(exists x . phi) = 0

<=> V(not (exists x . phi)) = 1

In other words: forall x . not phi and not (exists x . phi) are logically equivalent. Apparently we may now (...)

Although this equivalence is, naturally, widely used afterwards, we think this kind of argument with models and assignments somewhat cumbersome when, after all, we have to resort to our logical intuition anyway (when moving from the third to the fourth line in the proof).
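
The equivalence itself can of course also be verified mechanically over any finite domain, without the models-and-assignments bookkeeping; a brute-force Python check (ours, not the textbook's) over a three-element domain:

```python
from itertools import product

D = [0, 1, 2]   # a small domain

def check():
    """For every extension of a unary predicate phi over D, the formulas
    forall x . not phi and not (exists x . phi) get the same value."""
    for extension in product([False, True], repeat=len(D)):
        phi = dict(zip(D, extension))
        lhs = all(not phi[d] for d in D)    # forall x . not phi
        rhs = not any(phi[d] for d in D)    # not (exists x . phi)
        if lhs != rhs:
            return False
    return True

print(check())   # True
```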

The mathematical style is also too dominant in reasoning such as `since every valuation is a model of the empty set, we can test whether phi is a tautology by making a tableau for the sequent ∅ => phi'. Most mathematicians are fond of this kind of argument; students can't make head or tail of it, or, once they understand it, consider it tricky.

The other extreme in presentation is found in Dijkstra and Feijen [2, 163], who state:

The following holds:

(40) not (E x: P: Q) == (A x: P: not Q)

From which, to parallel the result above, not (E x: true: Q) == (A x: true: not Q) follows directly. No explanation of *why* such a theorem holds is given; it is just presented as a (very useful) computation rule.

We will refrain from general considerations for choosing either of these approaches, but mention some possibly relevant aspects. When `applying' logic to computer science, one can:

- treat logical aspects of computer science applications
- adapt the language of proof (and instruction)
- adapt the logical formalism (starting with: symbols and conventions)

In the Ou logic courses the first solution is chosen. Given mathematical logic as the basic subject matter, this also means that the style of treatment will be mathematical. Also, all course team members, most notably Johan van Benthem, are trained in a mathematical style of writing about logic. This `mathematical style' apparently includes a strict distinction between formal logic and the (natural) language of mathematical proof. And possibly this also means that many implicit assumptions are made about the sorts and names of symbols, such as that x is a variable whereas c is a constant, and that n is typically a natural number.

Within computer science (and in logic textbooks), attention to procedural aspects has naturally grown, not only for the formal systems under consideration, but also for the language of mathematical proof and instruction.

Another procedure that has become standard in logic textbooks is to illustrate logic constructs by functions, declarations and procedures. A good example is [7]. Of course one then assumes programming instruction to have taken place before logic instruction.

Extensive training in standard equivalences, the so-called predicate
calculus (De Morgan, absorption, ...), has been somewhat overlooked
during development, in favour of e.g. training in natural
deduction and understanding theoretical results, such as those on
completeness, decidability and correctness. The goal of an
introductory logic course should be aptitude as well as understanding:
students need to *use* logic, in argumentation, in mathematical
proofs, in correctness proofs, etc.

It is customary and good practice to include in introductory logic
courses training in the formalization of expressions in natural
language. At the time of development this was considered less
interesting for computer science students. Maybe this was not the right
decision. Also interpreting logic in `ordinary life' situations as
described by such natural language examples has been overlooked.
Instead, the `preferred' models for predicate logic modelling
in the course *Logic* are finite graphs, the natural numbers
and family relations.

Contrariwise, the second course, *Logic and computer science*,
very much concentrates on modelling information in *modal*
logic. Axiomatic proofs are generally skipped in favour of questions
concerning interpretation (epistemic logic for process communication,
temporal logic for expressing properties of parallel processes,
interval networks for production planning).

We now pay attention to some experiences with other student populations, and to the plans concerning revision of the courses.

The textbook has been in use for introductory logic instruction at a majority of Dutch universities. A second, slightly revised, printing was published in 1993.

Hans van Ditmarsch, the first author, has taught from this textbook at four different universities (apart from the Ou, at the University of Amsterdam, the Vrije Universiteit Amsterdam and the University of Groningen), and for different student populations (computer science, cognitive science, mathematics), sometimes with, and sometimes without, the Open university workbooks. Apart from the Ou, these were standard lecture-type courses at `ordinary' universities. Some personal observations:

- A lecture-type course typically consists of a *selection* of the topics treated in an Ou course of comparable size. There are two explanations for this. The first: according to Hans van Ditmarsch there seems to be a general rule that lecturers run out of time, whereas in a distance education course one stuffs in everything one always wanted to teach about the subject, so that now it is the students who run out of time. Even testing on 10 students (a standard Ou procedure during course development) whether the estimated course time is realistic does not seem to prevent this. A second reason might be that in a lecture situation one is inclined to spend much time on technicalities, especially if the students are actively involved in the lecture, which slows down the speed of transfer.
- Students underestimate the complexity of the subject matter in an Ou course, because of the elaborate explanations and the written-out answers to exercises (understanding the answer to an exercise does not replace making it).
- Students measure time investment for a course according to the number of contact hours in it, *not* according to the credits given for it (and so, since for regular courses based on distance education material one needs fewer lectures, students do not spend enough time on them).
- One's lectures have a tendency to deviate more and more from the standard given and required in the course material. This dynamic is more difficult to implement in a distance education environment, where subjects and treatment are fixed for years in a row.

The first course, *Logic*, has a more central position in
the Ou computer science curriculum than the second. Only the first
course is compulsory. This results in few individuals registering for
the second course. One reason for this might be that students choose
their subjects according to their specialization. Another reason might
be that they are somewhat discouraged by the rather abstract character
of the first course, *Logic*. (This course being too abstract
is the outcome of an evaluation of the Ou CS courses.) This is
unfortunate, since the second course treats computer science
applications of logic more than the first one does.

Also because the second course turns out to be somewhat overloaded (judging by comparison with the contents of lecture-type courses, and by some more or less informal assessment among Ou students), both courses are currently being revised. The following subjects disappear from the first course:

- natural deduction
- completeness
- model theory
- semantic tableaux for predicate logic

Some subjects more directly related to computer science are taken from the second course and added to the first: logic programming, Hoare calculus, and the two corresponding case studies on Prolog and on Pascal. Also the topic of many-valued logic is included, as is the already mentioned case study on the KBS shell KAN.

The credits for the second course will be doubled. The second course will begin with a large section on proof theory (mostly the parts skipped from the first course). A large case study (as yet undecided) will be added to the second course.

Apart from the revision of these two courses, an introductory mathematics course preceding the logic courses now contains considerable training in predicate calculus.

1. J.F.A.K. van Benthem, H.P. van Ditmarsch, J. Ketting, W.P.M. Meyer-Viol, *Logica voor informatici* (Logic for computer science), 2nd edition, Addison-Wesley, Amsterdam, 1994.
2. E.W. Dijkstra, W.H.J. Feijen, *Een methode van programmeren* (A method of programming), Academic Service, 's-Gravenhage, 1984.
3. H.P. van Ditmarsch, J. Ketting, J.S. Lodder, *Logica* (Logic), Open university, Heerlen, 1991.
4. H.P. van Ditmarsch, W. van der Hoek, C. Jonker, J. Ketting, J.S. Lodder, M. de Rijke, *Logica en informatica* (Logic and computer science), Open university, Heerlen, 1993.
5. H.P. van Ditmarsch, E.M. van de Vrie, *Didactische Modellen voor Logica en Discrete Wiskunde vergeleken* (A comparison of teaching models for the Logic and Discrete mathematics courses), TW-report, Open university, Heerlen, 1994.
6. D. Gries, *The science of programming*, Springer, Berlin, 1981.
7. S. Reeves, M. Clarke, *Logic for computer science*, Addison-Wesley, 1990.
8. A. van Staa, C. van Meurs, *Didactische hulpmiddelen in Ou-cursussen* (Didactic resources used in Open university courses), Report JOO/1992-III, Open university, Heerlen, 1992.