A3: Adventure Augmented


In this assignment you will finish developing the text adventure game (TAG) that you began in A2. Although this handout provides some requirements regarding the functionality you should implement, many of the details are up to you. If we left something unspecified in this handout, you are empowered to make your own choices about what to do. We hope you have fun making the game your own!

Grading will work very differently for this assignment than for previous assignments, because of its open-ended nature. Your grade will primarily be based on a successful, in-person, working demo of your game to a grader. With that in mind, you should concentrate your efforts on making sure that the code you submit will not exhibit any errors during the demo.

Make sure to read this entire handout before starting. You will especially want to be aware of the grading rubric at the end of the handout before you start coding.

This assignment is just a little more difficult than A2. On a similar assignment last year, the mean time worked was 11.4 hours, with a standard deviation of 4.9 hours. Please get started right away and make steady progress each day. Please track the time that you spend; we will ask you to report it.

Do not assume anything about this assignment based on previous years’ iterations of it. The requirements have changed. Be wary of course staff telling you that you have to implement something the way they did in the past: not only have the requirements changed, but you are free to make different choices than they did.


Getting Started

There is no starter code provided for this assignment. Rather, you should begin with your A2 code.

This assignment will not be autograded. You will demo your game in person to a grader. Hence there is no new makefile or make check. Passing make check is not a requirement for this assignment. Instead, your submission:

You are welcome to change any and all interfaces, to add new code, to revise your A2 implementation, to link against libraries of your choice, to change the JSON schema for adventures, etc. If you do add new compilation units, you will need to list them at the top of the Makefile to ensure they (and their documentation) are built correctly.

If you link against new libraries, you will need to add them to _tags and .merlin. And both new compilation units and new libraries could require you to modify .ocamlinit. To be clear, there is no need at all to link against new libraries, and most people will not. But it is a possibility.
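
To be concrete, if you did choose to link a new library, the .ocamlinit addition might be as small as the sketch below. This is only an illustration: the str library here is just an example, and any matching _tags (and possibly Makefile) changes depend on your build setup.

    (* Only needed if you link a new library; "str" is just an example. *)
    #use "topfind";;   (* omit if topfind is already loaded in your .ocamlinit *)
    #require "str";;   (* replace "str" with the package you actually use *)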

A few more notes:

Step 1: Finish A2

If there’s anything you didn’t finish in A2, begin A3 by finishing it now. In particular, your A3 solution must provide the “go” and “quit” verbs and a user interface, and it must be capable of loading adventures from data files.

Here are a couple of definitions to recall from A2 that are used in the rest of this handout:

Step 2: Gamify

Arguably, what you built in A2 is not yet a game, because there is no notion of winning or losing, or of comparison to other players. Add functionality to your software to make it a game, as follows.

Introduce a notion of score to your game. A player’s score should be based on which rooms they have visited, and possibly on other factors (which we leave up to you). Each room should be worth some number of points for visiting. Moreover, the adventure file should specify the number of points for each room, so that each room can be worth a different number of points than every other room. In other words, the number of points any room is worth should be data driven, not hardcoded. You should provide a “score” verb that displays the player’s current score.

Implementing this will require:

We leave the exact rules for scoring up to you, as long as they satisfy the above requirements.
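
As an illustration only, here is a minimal sketch of one way visited-room scoring could look in OCaml. The type and field names are hypothetical; adapt them to however your A2 code already represents rooms and game state.

    (* Hypothetical types; adapt to your own A2 representation. *)
    type room = {
      id : string;
      points : int;           (* read from the adventure file, not hardcoded *)
      (* ... plus exits, descriptions, etc. *)
    }

    type state = {
      rooms : room list;
      visited : string list;  (* ids of the rooms visited so far *)
      (* ... plus the current room, and later an inventory *)
    }

    (* The player's score: the sum of the points of every visited room. *)
    let score (st : state) : int =
      st.rooms
      |> List.filter (fun r -> List.mem r.id st.visited)
      |> List.fold_left (fun acc r -> acc + r.points) 0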

Step 3: Sample Adventures

Create your own adventure by constructing your own JSON file. It may not be based on any sample files we have already given you. We encourage you to create an interesting and creative adventure! But your grade won’t be based on that. Instead, we simply require that it have at least five rooms. If you are stuck for ideas, consider trying to model Gates Hall, or a dorm at Cornell, or a quad.

Then create a second sample adventure. The point of this step is to demonstrate that your game engine is data driven and does not hardcode objects (i.e., rooms or their points). It must also have at least five rooms. We encourage you to make it a second “level” of the game, somehow related to your first adventure. But that is not required.

The names of both sample adventure files must end with .json.
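
If you want a quick way to confirm that a new adventure file parses and meets the five-room minimum, a small check along the following lines can be loaded into utop. It assumes a Yojson-based parser and a top-level “rooms” array, which may not match your schema, and the filename is just a placeholder.

    (* Hypothetical sanity check; adjust the field name to match your schema. *)
    let room_count (filename : string) : int =
      let open Yojson.Basic.Util in
      Yojson.Basic.from_file filename |> member "rooms" |> to_list |> List.length

    let () =
      let n = room_count "my_adventure.json" in
      if n < 5 then print_endline "Fewer than five rooms!"
      else Printf.printf "OK: %d rooms.\n" n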

This is the stopping point for a satisfactory solution. See the rubric at the end of this handout for how you will be graded.

Step 4: Items

Adventure games usually involve items that the adventurer can move between rooms. For example, in the Colossal Cave adventure, the player’s goal was to collect all the items and put them in a designated room, making it a kind of treasure hunt. In Myst, the player’s goal was to collect missing pages of books and to put those pages into the books. While moving items between rooms, the adventurer carries them in their inventory.

Extend your game engine and your sample adventures with the notion of items and inventory. Your main sample adventure should contain at least three items. It should be possible for the player to issue commands that cause the adventurer to move items between rooms and their inventory. The interface should display helpful acknowledgments and error messages for those commands.

The player’s current score should depend upon which room each item is currently located in. (“Room” here means the same as in A2; the player’s inventory is not a room.) Each item should be able to be worth a different number of points than every other item. It isn’t sufficient simply to change the player’s score based on whether the player has picked up an item; rather, the score should depend on the item’s room location.

Finally, there should be some winning condition based on the items and their room locations that causes the engine to notify the player that they have won the game. The game could automatically end at that point, or let the adventurer keep exploring. We leave the exact design of these commands and rules up to you.
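
To make that concrete, here is one hedged sketch of how items, location-based scoring, and a winning check might fit together, in the treasure-hunt style described above. Every name below (item_points, target_room, and so on) is hypothetical, and the scoring rule shown is only one of many acceptable designs.

    type item = {
      item_id : string;
      item_points : int;      (* specified by the adventure file *)
      target_room : string;   (* the room in which this item scores its points *)
    }

    type location =
      | InRoom of string      (* a room id *)
      | InInventory

    (* Award an item's points only while it sits in its designated room. *)
    let item_score (it : item) (loc : location) : int =
      match loc with
      | InRoom r when r = it.target_room -> it.item_points
      | _ -> 0

    let items_score (items : (item * location) list) : int =
      List.fold_left (fun acc (it, loc) -> acc + item_score it loc) 0 items

    (* One possible winning condition: every item is in its target room. *)
    let won (items : (item * location) list) : bool =
      List.for_all (fun (it, loc) -> loc = InRoom it.target_room) items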

The items in your game should be data driven—that is, every item should be defined by the (JSON) adventure file, not by your (OCaml) source code. The (OCaml) types representing items, of course, will be defined in your source code.

On the other hand, the new commands that you introduce will involve new verbs, and those verbs, like “go” and “quit” and “score”, will need to be hardcoded. (It is possible to make even verbs data driven, but that requires more sophisticated techniques than we contemplate here.)
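
For instance, hardcoding two new verbs in a command parser might look roughly like the sketch below. The command type, the exception, and the verb words themselves are placeholders; keep whatever parser design you already have from A2.

    type command =
      | Go of string
      | Quit
      | Score
      | Take of string
      | Drop of string

    exception Malformed

    (* Split the input on spaces and dispatch on the hardcoded verb word. *)
    let parse (input : string) : command =
      let words =
        input |> String.trim |> String.lowercase_ascii
        |> String.split_on_char ' '
        |> List.filter (fun w -> w <> "")
      in
      match words with
      | ["quit"] -> Quit
      | ["score"] -> Score
      | "go" :: obj when obj <> [] -> Go (String.concat " " obj)
      | "take" :: obj when obj <> [] -> Take (String.concat " " obj)
      | "drop" :: obj when obj <> [] -> Drop (String.concat " " obj)
      | _ -> raise Malformed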

Here is a recommended way to satisfy the above requirements:

This is the stopping point for a good solution. See the rubric at the end of this handout for how you will be graded. As usual, the excellent scope will be worth very few points but will require substantial effort. So if you’d prefer to opt out at this point, no worries!

Step 5: Augmentation

Thus far, the gameplay of our adventures does not involve very much adaptation to what the player does. Let’s make the gameplay more interesting by making it dependent upon the state of the game.

Implement these augmentations, and extend your sample adventures to demonstrate your work:

As before, verbs will be hardcoded, but objects should be data driven. For example, which key opens which lock would be determined by the adventure file, but verbs such as “unlock” or “use” would be hardcoded.

You might consider commands that have indirect objects, such as “use <object> with <other object>”.
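
If you do go that route, the underlying check might look roughly like this sketch. All the names are hypothetical; the point is only that which key opens which lock comes from data, not from code.

    type lock = {
      locked_exit : string;   (* an exit or door id, from the adventure file *)
      opens_with : string;    (* the item id of the key that opens it *)
    }

    (* "unlock <exit> with <key>" succeeds only if the adventurer is carrying
       the named key and that key matches the exit's lock. *)
    let can_unlock (locks : lock list) (inventory : string list)
        (exit_id : string) (key : string) : bool =
      List.mem key inventory
      && List.exists (fun l -> l.locked_exit = exit_id && l.opens_with = key) locks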

Submission

Make sure your NetID is in author.mli, and update the hours_worked variable at the end of author.ml. That should be the hours you spent after submitting A2. Please do not include the time you worked during A2. That is, the sum of your hours worked in your A2 and A3 submissions should represent the total time you spent working on the entire game.

Ensure that your solution compiles and passes your own test suite. Run make zip to create a zipfile to upload to CMS. Your zipfile must contain all your OCaml source files (both .ml and .mli), your two sample adventures, and your _tags file and Makefile. The make zip command we provided in A2 will automatically do that for you, so use it rather than any graphical tools.

Submit your zipfile on CMS. Double-check before the deadline that you have submitted the intended version of your file.

Demo

Your section has a grading team composed of your section TAs and some consultants. Your section TAs will give you directions on how to schedule a one-hour meeting with a member of your grading team to demo your finished game.

Based on past experience, we know there are various failure modes for demo scheduling. So, we regretfully need to impose the following policies:

At the demo, the grader will run through the rubric below with you. When that is over, congratulations! Your adventure is complete.

Rubric

We are not going to assess testing and code quality on this assignment. That’s not because they are unimportant; rather, it’s because the demo grading will take so much time that the graders won’t be able to spend extra time on testing and code quality. Don’t worry—we will return to assessing them on A4.

The rest of this rubric is written as instructions to the grader to follow during the demo, but you should read them before you begin coding on the assignment.

Submitted and Runs

Satisfactory Scope

Good Scope

Excellent Scope

Deductions