A3: Adventure Augmented
In this assignment you will finish developing the text adventure game (TAG) that you began in A2. Although this handout provides some requirements regarding the functionality you should implement, many of the details are up to you. If we left something unspecified in this handout, you are empowered to make your own choices about what to do. We hope you have fun making the game your own!
Grading will work very differently for this assignment than previous assignments, because of its open-ended nature. Your grade will primarily be based on a successful, in-person, working demo of your game to a grader. With that in mind, you should concentrate your efforts on making sure that the code you submit will not exhibit any errors during the demo.
Make sure to read this entire handout before starting. You will especially want to be aware of the grading rubric at the end of the handout before you start coding.
This assignment is just a little more difficult than A2. On a similar assignment last year, the mean hours worked was 11.4, and the standard deviation was 4.9. Please get started right away and make steady progress each day. Please track the time that you spend. We will ask you to report it.
Do not assume anything about this assignment based on previous years’ iterations of it. The requirements have changed. Be wary of course staff telling you that you have to implement something the way they did in the past: not only have the requirements changed, but you are free to make different choices than they did.
Table of contents:
- Getting Started
- Step 1: Finish A2
- Step 2: Gamify
- Step 3: Sample Adventures
- Step 4: Items
- Step 5: Augmentation
There is no starter code provided for this assignment. Rather, you should begin with your A2 code.
This assignment will not be autograded. You will demo your game in person to a grader. Hence there is no new makefile, and `make check` is not a requirement for this assignment.
Instead, your submission:
- must successfully compile,
- must pass your own test suite with `make test`,
- must successfully generate documentation with `make docs`, and most importantly,
- `make play` must successfully launch the game.
You are welcome to change any and all interfaces, to add new code, to revise your A2 implementation, to link against libraries of your choice, to change the JSON schema for adventures, etc. If you do add new compilation units, you will need to list them at the top of the `Makefile` to ensure they (and their documentation) are built correctly. If you link against new libraries, you will need to add them to `.merlin`. Both new compilation units and new libraries could require you to modify `.ocamlinit`. To be clear, there is no need at all to link against new libraries, and most people will not. But it is allowed.
A few more notes:
The prohibition of imperative features still holds.
Your A3 game engine does not have to be backwards-compatible with A2 adventure files.
You still may not print inside `State.go`, because that violates a design principle for games.
Step 1: Finish A2
If there’s anything you didn’t finish in A2, begin A3 by finishing it now. In particular, your A3 solution must provide the “go” and “quit” verbs, and a user interface, and be capable of loading adventures from data files.
Here are a couple definitions to recall from A2 for the rest of this handout:
Verbs, objects, and commands: A player command is of the form “VERB OBJECT”, where the verb is a single word, and the object might contain several words.
Data driven: The responsibility for implementing the game is factored between the game engine, which is the OCaml code that defines the verbs, and the adventure file, which is the JSON file that defines the objects.
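As a reminder, parsing "VERB OBJECT" commands can be sketched as follows. This is illustrative only; the type and function names are our own invention, not something the assignment requires.

```ocaml
(* A parsed player command: a one-word verb plus a possibly
   multiword object phrase. Names here are illustrative only. *)
type raw_command = {
  verb : string;
  obj : string;  (* "" if the command has no object *)
}

(* Split input on spaces; the first word is the verb, the rest
   (rejoined) is the object phrase. *)
let parse (input : string) : raw_command =
  match String.split_on_char ' ' (String.trim input) with
  | [] -> { verb = ""; obj = "" }
  | verb :: rest -> { verb; obj = String.concat " " rest }
```

For example, `parse "take brass lantern"` yields the verb `"take"` and the object phrase `"brass lantern"`.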
Step 2: Gamify
Arguably, what you built in A2 is not yet a game, because there is no notion of winning or losing, or of comparison to other players. Add functionality to your software to make it a game, as follows.
Introduce a notion of score to your game. A player’s score should be based on which rooms they have visited, and possibly other factors (which we leave up to your choice). Each room should be worth some number of points for visiting. Moreover, the adventure file should specify the number of points for each room, meaning that each room can be worth a different number of points from all other rooms. In other words, the number of points any room is worth should be data driven, not hardcoded. You should provide a “score” verb to display to the player their current score.
Implementing this will require:
extending adventure files to incorporate additional data per room, and
extending your program with a new verb, “score”, as well as functionality to compute the player’s current score.
We leave the exact rules for scoring up to you, as long as they satisfy the above requirements.
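One plausible way to make per-room points data driven is to add a points field to each room in the adventure file. The field names below are hypothetical, not a required schema:

```json
{
  "rooms": [
    { "id": "lobby", "description": "A bright lobby.", "points": 10,
      "exits": [ { "direction": "north", "room": "lab" } ] },
    { "id": "lab", "description": "A cluttered lab.", "points": 25,
      "exits": [ { "direction": "south", "room": "lobby" } ] }
  ],
  "start_room": "lobby"
}
```

With a schema like this, the engine reads each room's point value from the file rather than hardcoding it.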
Step 3: Sample Adventures
Create your own adventure by constructing your own JSON file. It may not be based on any sample files we have already given you. We encourage you to create an interesting and creative adventure! But your grade won’t be based on that. Instead, we simply require that it have at least five rooms. If you are stuck for ideas, consider trying to model Gates Hall, or a dorm at Cornell, or a quad.
Then create a second sample adventure. The point of this step is to demonstrate that your game engine is data driven and does not hardcode objects (i.e., rooms or their points). It must also have at least five rooms. We encourage you to make it a second “level” of the game, somehow related to your first adventure. But, that is not required.
The names of both sample adventure files must end with `.json`.
This is the stopping point for a satisfactory solution. See the rubric at the end of this handout for how you will be graded.
Step 4: Items
Adventure games usually involve items that the adventurer can move between rooms. For example, in the Colossal Cave adventure, the player’s goal was to collect all the items and put all of them in a designated room, making it a kind of treasure hunt. In Myst, the player’s goal was to collect missing pages of books, and to put those pages into the books. While moving items between rooms, the adventurer carries the items in their inventory.
Extend your game engine and your sample adventures with the notion of items and inventory. Your main sample adventure should contain at least three items. It should be possible for the player to issue commands that cause the adventurer to move items between rooms and their inventory. The interface should display helpful acknowledgments and error messages for those commands.
The player’s current score should depend upon which room each item is currently located in. (“Room” here means the same as in A2. The player’s inventory is not a room.) Each item should potentially be worth a different number of points than all other items. It isn’t sufficient to simply change the player’s score depending only on whether the player has picked up an item; rather, the score should depend on the room location of the item.
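An item-location-sensitive score might be computed along these lines. This is a sketch using hypothetical types; your representation will differ.

```ocaml
(* Illustrative types only; your actual representation will differ. *)
type item = { name : string; points : int; location : string }

(* Points earned from items: an item scores only while it sits in a
   designated scoring room (here, a single treasure room). *)
let items_score (treasure_room : string) (items : item list) : int =
  List.fold_left
    (fun acc it ->
       if it.location = treasure_room then acc + it.points else acc)
    0 items

(* Total score: room-visit points plus item-location points. *)
let total_score (visited_points : int) (treasure_room : string)
    (items : item list) : int =
  visited_points + items_score treasure_room items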
Finally, there should be some winning condition based on the items and their room locations that causes the engine to notify the player that they have won the game. The game could automatically end at that point, or let the adventurer keep exploring. We leave the exact design of these commands and rules up to you.
The items in your game should be data driven—that is, every item should be defined by the (JSON) adventure file, not by your (OCaml) source code. The (OCaml) types representing items, of course, will be defined in your source code.
On the other hand, the new commands that you introduce will involve new verbs, and those verbs—like “go” and “quit” and “score”—will need to be hard coded. (It is possible to make even verbs be data-driven; but, it requires more sophisticated techniques than we contemplate here.)
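Hardcoded verbs are often represented as an OCaml variant type, with one constructor per verb and the object phrase kept as data. A sketch (the constructor names are our choice):

```ocaml
(* Each hardcoded verb becomes a constructor; objects stay as data. *)
type command =
  | Go of string          (* "go <exit name>" *)
  | Take of string        (* "take <item name>" *)
  | Drop of string        (* "drop <item name>" *)
  | Inventory
  | Score
  | Quit

(* Map a (verb, object) pair to a command; None means the verb is
   unrecognized and can be reported to the player as an error. *)
let command_of (verb : string) (obj : string) : command option =
  match verb with
  | "go" -> Some (Go obj)
  | "take" -> Some (Take obj)
  | "drop" -> Some (Drop obj)
  | "inventory" -> Some Inventory
  | "score" -> Some Score
  | "quit" -> Some Quit
  | _ -> None
```

Returning an option keeps error handling in the interface layer, where a helpful message can be printed.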
Here is a recommended way to satisfy the above requirements:
Extend adventure files to have an array (that’s a JSON array, not OCaml) of items, each of which has a name, starting room, and number of points that it is worth.
Modify the interface to display the items currently located in a room whenever the room’s description is printed.
Add “take”, “drop”, and “inventory” commands. The command “take <item name>” would transfer an item from a room to the adventurer’s inventory, and “drop <item name>” would do the opposite. The “inventory” command would display the items currently carried by the adventurer.
Extend adventure files to designate a treasure room.
Modify the computation of the player’s score to add points whenever an item is in the treasure room. Of course, the player shouldn’t keep getting extra points for repeatedly picking up and dropping the same item.
Modify the interface to print a “win message” when all items have been dropped in the treasure room. The win message comes from the adventure file.
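Put together, the extensions recommended above might look like this in an adventure file. The field names are hypothetical, not a required schema:

```json
{
  "items": [
    { "name": "black hat", "start_room": "lobby", "points": 100 },
    { "name": "white hat", "start_room": "lab",   "points": 100 },
    { "name": "red hat",   "start_room": "attic", "points": 250 }
  ],
  "treasure_room": "vault",
  "win_message": "You gathered every hat in the vault. You win!"
}
```

Each item carries its own point value, and the treasure room and win message are both data driven.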
This is the stopping point for a good solution. See the rubric at the end of this handout for how you will be graded. As usual, the excellent scope will be worth very few points but will require substantial effort. So if you’d prefer to opt out at this point, no worries!
Step 5: Augmentation
Thus far, the gameplay of our adventures does not involve very much adaptation to what the player does. Let’s make the gameplay more interesting by making it dependent upon the state of the game.
Implement these augmentations, and extend your sample adventures to demonstrate your work:
Doors, locks, and keys: The exits from rooms can be blocked by doors, which can be locked, unlocked, and relocked. Items (in the same technical sense of the word “items” in the Good scope above) can be used as keys to unlock doors and pass through. Each door can have its own individual key. This adds additional challenge and interest for players.
Note that we are asking for keys, not proximity cards nor passphrases. Keys are items that must somehow be actively used in conjunction with a door. Proximity cards passively provide access when attempting to pass through a door, just by virtue of having them in your inventory. Passphrases are strings a player could type in to pass through a door. To be clear: you are required to implement keys, not proximity cards nor passphrases. You will not get credit for the latter two.
Dynamic descriptions: The description of a room depends upon what items are located inside it, and what items are in the adventurer’s inventory. That enables the game to adapt to player actions.
Simply printing the player’s inventory as part of each room does not suffice to satisfy this requirement: you must make the base description itself potentially customizable. For example, if the player had a lamp in their inventory, perhaps the room description would change from “It is dark” to “You see a dragon.” Likewise, simply printing “you’ve been here before” (on a second visit) is not a change to the base description.
As before, verbs will be hardcoded, but objects should be data driven. For example, which key opens which lock would be determined by the adventure file, but verbs such as “unlock” or “use” would be hardcoded.
You might consider commands that have indirect objects, such as “use <object> with <other object>”.
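One way to handle such indirect objects is to split the object phrase on a keyword like "with". A sketch (the helper name is ours):

```ocaml
(* Split an object phrase such as "brass key with oak door" into
   the direct object and the indirect object, using the word "with"
   as a marker. Returns None if no "with" is present. *)
let split_on_with (obj : string) : (string * string) option =
  let words = String.split_on_char ' ' (String.trim obj) in
  let rec go before = function
    | [] -> None
    | "with" :: after ->
        Some (String.concat " " (List.rev before),
              String.concat " " after)
    | w :: rest -> go (w :: before) rest
  in
  go [] words
```

For example, applying it to `"brass key with oak door"` yields the pair `("brass key", "oak door")`, which the engine can then look up in the data-driven key-to-lock mapping.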
Make sure your NetID is in `author.mli`, and update the hours-worked variable at the end of `author.ml`. That should be the hours you spent after submitting A2; please do not include the time you worked during A2. That is, the sum of the hours reported in your A2 and A3 submissions should represent the total time you spent working on the entire game.
Ensure that your solution compiles and passes your own test suite. Run `make zip` to create a zipfile to upload to CMS. Your zipfile must contain all your OCaml source files (both `.ml` and `.mli`) and your two sample adventure files. The `make zip` command we provided in A2 will automatically do that for you, so use it rather than any graphical tools.
Submit your zipfile on CMS. Double-check before the deadline that you have submitted the intended version of your file.
Your section has a grading team composed of your section TAs and some consultants. Your section TAs will give you directions on how to schedule a one hour meeting with a member of your grading team to demo your finished game.
Based on past experience, we know there are various failure modes for demo scheduling. So, we regretfully need to impose the following policies:
The deadline for scheduling your demo is the same as the deadline for late submissions on this assignment (Friday, Oct 4, at 5 pm), though we encourage you to contact the grader earlier than that. If you have not scheduled a demo by that deadline, there will be a mandatory deduction of 10 points.
We recommend having the demo over the weekend or on Monday. The deadline for having the demo is Wednesday, Oct 9, at 5 pm, exactly one week after the original deadline. If you have not had a demo by then, there will be a mandatory deduction of 25 points.
If you fail to show up for your demo, or if you fail to ever schedule one, then the grader will proceed with grading in your absence, with the appropriate deductions. No regrades will be accepted on the grounds that the grader misunderstood something about your software—you missed the chance to be there to explain it.
At the demo, the grader will run through the rubric below with you. When that is over, congratulations! Your adventure is complete.
- 25 points: submitted and runs
- 35 points: satisfactory scope
- 35 points: good scope
- 5 points: excellent scope
We are not going to assess testing and code quality on this assignment. That’s not because they are unimportant; rather, it’s because the demo grading will take so much time that the graders won’t be able to spend extra time on testing and code quality. Don’t worry—we will return to assessing them on A4.
The rest of this rubric is written as instructions to the grader to follow during the demo, but you should read them before you begin coding on the assignment.
Submitted and Runs:
Download the student submission (if any) from CMS. Run `make play` on the submission. If the game engine fails to compile and launch (or if there was no submission), the submission loses 25 points. No exceptions whatsoever will be made to that.
But, if the student wants to substitute any newer version of the code, they may do so, at the loss of the 25 points. That version must be uploaded into CMS by you at that moment as the definitive record of what was graded, in part for Academic Integrity purposes. Do not wait until later to do this.
[5 points] Ask the student to load their main sample adventure and to demonstrate that the “go” command works: attempting to go through an existing exit succeeds.
[5 points] Attempting to go through a non-existent exit fails with a helpful error message from the engine.
[5 points] Each room can be worth a different number of points in the adventure file.
[10 points] After moving around through some rooms in the sample adventure, the “score” command seems to work correctly: visiting rooms earns the player points.
[5 points] The main sample adventure has at least 5 rooms, and you can successfully move through them.
[5 points] The secondary adventure file exists, and you can load and play it, and it also has at least 5 rooms.
[5 points] Each item in the adventure file can be worth a different number of points.
[10 points] It is possible to move items between rooms.
[10 points] The player’s score depends upon which room each item is located within.
[5 points] There is a winning condition, and the engine notifies the player when they achieve it.
[5 points] Both adventure files have at least 3 items.
Doors, locks, and keys:
[1 point] Exits can be blocked by doors, which can be unlocked and relocked with keys.
[1 point] Door state (locked vs. unlocked) persists.
[1 point] Each door can have its own key.
Dynamic descriptions:
[1 point] The items in a room can change its description.
[1 point] The items in the adventurer’s inventory can change the room’s description.
If you discover that any objects (as in “verbs” vs. “objects” in player commands, not as in Java objects) are hard coded instead of data driven, give zero points for the scope in which they are discovered.
For any of the rubric items above, you might encounter a situation in which the submission has only partially implemented the functionality, or the implementation is buggy. In that case, use the following deduction scheme:
-25% of the points: it’s a minor quirk. Something noticeably goes wrong, but the player can continue playing the game without having to restart, and it doesn’t affect whether the player can win the game by accumulating all the points.
-50% of the points: it’s a moderate bug. The game continues, but gameplay becomes significantly diminished in quality. Perhaps parts of the map become unvisitable, or the game becomes unwinnable.
-75% of the points: it’s a show stopper. The game crashes or becomes unplayable.