Group Project 3 - Lord of the Cache

CS3410 Spring 2015

Final Due: 11:59pm, Friday, April 24

Late Policy: Start early, don't be late.

Reminder: you must work in a group of two for this project. You don't need to have the same partner as the earlier projects.

This assignment will help you and your partner further hone your skills as elite programmers. Your goal is to write a program (a wizard) that must become the Lord of the Cache. This game simultaneously tests your knowledge of computer systems, your programming skills, and your ability to protect programs from being hacked. Writing a top notch bot will require all the skills you have been developing so far in the course and, hopefully, be a fun and interesting challenge. At the end of the assignment, we will hold a tournament pitting all your wizards against each other and the winner will receive not only bragging rights but actual bonus points on the assignment! Plus we will be holding a TA-student banquet (read: pizza, cookies, and soda) for all while we watch your wizards battle!

Prelude: A Long-expected Cache-Hit

Listen to this while reading

The Lord of the Cache is a dark and evil entity known as Sauron. Long ago, twenty rings of power existed - 3 for elves, 7 for dwarves, 9 for men, and one for Sauron. However, he lost the One Ring - a powerful device that contains much of his power. It grants anyone who uses it vast abilities, including invisibility, an ideal cache-line eviction policy, and an impossibly perfect compression algorithm. But now, he is seeking the ring once more.

It is now the 3018th year of the Third Age. You, a great and powerful wizard, have discovered the One Ring and are determined to destroy it to prevent Sauron from enslaving the digital world. With 4 of your trusty hobbit companions, you form a fellowship and set out on your journey to dispose of the ring forever.

However, Sauron has sent another evil wizard with 4 hobbits to sabotage you and obtain the ring for himself. You must fend this opposing wizard off and race to complete your mission to destroy the ring and become the new, benevolent Lord of the Cache!

Game Overview

The game is played by two wizards with four hobbits each. All eight hobbits are placed in a single shared memory space, running on a simulated 8-core MIPS machine with a very small, partitioned, direct-mapped data memory cache. The hobbits are executed by the same simulator we have used previously, but modified to simulate a simple data cache and to simulate 8 MIPS CPUs simultaneously, round-robin: one instruction of core 0, then one instruction of core 1, and so forth.

Each team of hobbits is assigned a payload, a particular byte of data. The goal of the contest is to fill memory with their team's payload as much as possible (capturing the location in memory), while simultaneously trying to impede the progress of the other team. All access to data memory goes through a very small cache, and there is a large penalty for missing the cache, so a clever wizard will need to carefully optimize its algorithms for filling memory. Each core can normally only read and write its own team's half of the memory space using its own half of the data cache. However, a core can Equip_Ring, which enables the wizard to both read or write the opponent's half of memory space and to take advantage of the opponent's half of the data cache.

Each wizard has several dangerous tools at their disposal to aid their quest to become the Lord of the Cache. Each team's hobbits can use a Nazgul screech, which will deafen the opponents and stall them for a short period of time. Each wizard can also summon a Balrog that, if encountered, will stall the opponent's hobbits for a long period of time. The operation of these attacks is described further below.

Each team supplies a single MIPS executable program. At the beginning of the game, the simulator starts each wizard by calling the function

 void __start(int core_id, int num_crashes, unsigned char payload);

Each of the four cores devoted to one team will invoke this function when it first boots, passing its core ID (a number from 0 to 3), the number of times it has crashed, and the team's payload (a byte value from 0 to 239). If a core ever crashes, it will simply reset and invoke the function again. The num_crashes parameter is 0 when a core first boots, and is incremented each time the core crashes.

A bot can Equip_Ring by calling

 void Equip_Ring();

and can leave Equip_Ring mode by calling

 void Unequip_Ring();

Equip_Ring mode can potentially let a wizard fill memory more quickly, because it can take advantage of the opponent's memory cache; it can also be used to impede the opponent's progress by interfering with the opponent's use of that same cache. However, this comes with risks. Whenever a bot is in Equip_Ring mode, its core ID is written into a taunt[] array that is visible to its opponent. Additionally, it is possible to detect the presence of an invading wizard by looking at cache usage statistics and timing information. A core can accuse one of its opponent's cores of being in Equip_Ring mode by calling:

 int Eye_Of_Sauron(int opponent_core_id);

If the opponent's core was caught by the Eye while in Equip_Ring mode, that core is captured by Sauron and removed from the quest - essentially locking up the core for the rest of the game. However, the Eye of Sauron is not a perfect tool. If a wizard attempts to use the Eye of Sauron on a core that is not in Equip_Ring mode, the accusing core stalls for 100,000 (STALL_ON_DISTRACTED) cycles, because it takes time to set up the Eye.

A wizard can summon a Balrog by writing 0xF0 to a location in memory. Balrogs are set up immediately once 0xF0 is written to a memory address, but due to the intense work of setting one up, the hobbit will rest for 100 (STALL_TO_SUMMON_BALROG) cycles before executing the next instruction. Balrogs are triggered when any core attempts to write to a memory location that currently holds the Balrog byte. Such an attempt causes the write to fail and the writing core to be stalled for 1,000,000 (FLY_YOU_FOOLS) cycles. Since the write fails, multiple cores can be trapped at the same memory location. Balrogs are not triggered by reading memory. Due to the immense mana required to summon a Balrog, each team can set up at most 15 (BALROGS_PER_TEAM) Balrogs in a game of Lord of the Cache.

To prevent accidental self-Balrogging by programs that happen to write the byte 0xF0, the upper 4 MB (4 * STACK_SIZE) of memory is protected from Balrogs.

A wizard can neutralize a Balrog by summoning Gandalf to clear it. This is done by writing 0xF2 to the Balrog's memory location. However, Gandalf takes time to arrive when needed, and clearing a Balrog takes 100 (WAIT_FOR_GANDALF) cycles. This will not retroactively un-stall any core that has already been trapped at that location, but it will of course save you from being trapped there in the future.

Wizards can also blast their opponents by using a Nazgul_Screech. This is done by writing the byte 0xF1 into a location in memory. When a Nazgul screech is set in one half of memory, any enemy hobbits who try to write to that half of memory during the next 100 (NAZGUL_SCREECH_DURATION) cycles will be stunned for 300 (STALL_WHEN_STUNNED) cycles. Due to weak hobbit vocal cords, each team's hobbits can perform at most 30 (NAZGUL_SCREECH_PER_TEAM) Nazgul screeches before they tire out.

The game ends when one team manages to fill the equivalent of half of its data memory with its payload, or after a fixed number of cycles, whichever comes first. The team having the most memory filled with its payload wins the match and the rights to proceed in the tournament to become the Lord of the Cache. But beware...for if you do win the tournament, a great challenge awaits.

Lord of the Cache is interesting because the hobbits on each team must cooperate to make effective use of the limited data memory cache, and be resilient to interference from the opponent's attacks. A successful team will use clever strategies to fill memory quickly and try to slow the opponent's progress, while also ensuring that it is not vulnerable to the same types of interference.

Game Details: Memory mapping and layout

The simulator makes it seem like all four of your team's hobbits live in the lower half of the memory address space (addresses below 0x80000000). The first 1 MB of this memory is read-only, and is where the text segment of your MIPS executable is loaded. The next 31 MB is read-write memory, where global variables and stacks are placed. Global variables are typically placed near the bottom of this segment by the compiler, and the stacks are initialized to near the top of the segment and grow down. All four of the wizards on a team share this same space, and share the same global variables. Your opponent is mapped into the upper half of the memory address space (addresses starting at 0x80000000), with an identical layout.

Program stacks: The four program stacks for the four hobbits on a team are initially spaced evenly near the top of the 31 MB read-write segment (though your code can set $sp to anything you like once it starts executing). The stacks are spaced 1 MB apart; this is plenty of space for most programs, but if you invoke a deeply recursive function, the call stacks can overwrite each other.

Status and cores: The simulator keeps track of information about each team and each core, and maps this data into part of the 1 MB read-only memory region, where your bots can access it, and where your opponent can access it when in Equip_Ring mode. This includes your current score (how many bytes of memory currently contain your team's payload), how many cycles remain in the game, the contents of all four of your wizards' register files, and statistics about cache misses and stalls for each of your four wizards. Some of this information is available in other places as well: your team's payload is passed to __start(), for example, and cache statistics are also accessible via a system call.

Physical addresses: Both wizards view the lower half of memory as their own, and both access their own code, data, and stacks using addresses below 0x80000000. These, of course, are virtual addresses. In actuality, the first player's virtual addresses correspond directly to physical memory addresses, while the second player's virtual addresses are mapped in reverse: virtual address 0x00001234, for example, is mapped to physical address 0x80001234, and virtual address 0x80001234 is mapped to physical address 0x00001234. This virtual-to-physical mapping is completely transparent to the program.

Caches: Instruction fetch is assumed to use an infinitely large cache with no miss penalty. All loads and stores to memory done by the program use a data cache. Cache hits are serviced in a single cycle, with no processor stalls. Cache misses cause roughly 100 cycles of processor stalling. The data cache is direct-mapped and physically tagged, with 4 cache lines, and a 256 byte block size, for a total of 1024 bytes of cache. However, the data cache is partitioned, with the first 2 lines used to cache the lower half of physical memory (the first player's half), and the last 2 lines used to cache the upper half of physical memory (the second player's half).

Cache Management: The version of MIPS we are using (MIPS-I) lacks instructions to manage the cache, but we have provided system calls instead. A wizard can call

 void invalidate(void *ptr);

to remove from the cache any block containing the data pointed to by ptr. This only works if the wizard has access to the memory in question at the time the call is made. Similarly, when a wizard calls

 void prefetch(void *ptr);

the cache will start to fetch the block containing the given address into the cache. The call returns immediately with no stalling, and roughly 100 cycles later the data will arrive in the cache and be available for use. This call only works if the wizard actually has access to the specified block of memory. For each cache line, there can be at most one outstanding fetch in progress. Prefetch requests are ignored if a fetch is already in progress for that line (in this context, "fetch" means either an earlier prefetch, or a core waiting on the cache line to fill because of a cache miss), and actual data fetch requests (e.g. from load/store instructions, rather than prefetch requests) supersede any prefetch requests that are in progress. There are additional system calls to read the current cache tags and the tags for lines that are being fetched. This same information is also available in the memory mapped data-structures.

We have provided a header file (described below) containing various useful macros for accessing your own code and data segments, and those of your opponent. The header also describes in detail the memory layout, cache organization, and available system calls.

Pointer arithmetic in C: When performing pointer arithmetic in C, the compiler implicitly multiplies by the size of the data type that is pointed to. For instance:

  char *byteptr = (char *)0x1000;
  byteptr[3] = 0xFF; // store one byte three BYTES after byteptr, so address 0x1000+3 = 0x1003
  byteptr += 5; // increments byteptr by five BYTES, to address 0x1005

  int *wordptr = (int *)0x1000;
  wordptr[3] = 0xFFFFFFFF; // store one word three WORDS after wordptr, so address 0x1000+12 = 0x100c
  wordptr += 5; // increments wordptr by five WORDS, to address 0x1000+20 = 0x1014

From this, we can see that the following code is almost certainly a bug (two bugs, in fact), since the first statement advances by 12 cache lines instead of 3 cache lines, and the second statement advances by 8 cache lines instead of 2 cache lines:

  int *ptr = (int *)HOME_DATA_START + 3*CACHE_LINE;
  ptr += 2*CACHE_LINE;

To advance by 3 cache lines (first statement) or 2 cache lines (second statement) when using integer pointers, one could instead write either of the following:

  int *ptr = (int *)(HOME_DATA_START + 3*CACHE_LINE);
  ptr = (int *)((int)ptr + 2*CACHE_LINE);


  int *ptr = (int *)HOME_DATA_START + (3*CACHE_LINE)/4;
  ptr += (2*CACHE_LINE)/4;

Basic Strategy

What can be said? Design code that makes efficient use of the data cache to fill memory quickly, and find clever ways to beat your opponents, stop them from posting their payload, and interfere with your opponent's cache usage. And although you cannot directly modify your opponent's registers, you can modify their call stack and global variables. Coupled with what you know about exploiting programs, you may even be able to get your opponent to work for you. At the same time, you need to be on the lookout for hobbits from your opponent's team. Because all of these tasks require processing time and access to memory, you will need to carefully plan how to coordinate use of these scarce resources.

To get started, we will supply several example teams that employ some basic strategies. Their source code can be found in the same directory as the Lord of the Cache simulator. The example teams are:

Note: these teams do not do a particularly good job of making effective use of the cache. For instance, most write a single byte at a time to memory, and take only a little care to ensure that their bots don't compete with each other for cache lines.

Playing the Game

The Lord of the Cache simulator and example teams can be found on the CSUG machines at:


Copy the whole directory to your home directory to get started.

 $ cp -R /courses/cs3410/pa3 ~

If you will be using the VM rather than CSUG, then follow the instructions on this page.

Each team should create a single program, written in C and compiled into a MIPS binary executable. The four wizards on a team all share the same code (though your program can obviously do different things depending on the core_id parameter it receives from the simulator). The wizards directory contains all the teams listed above, and a Makefile to help compile them. Changing to that directory and typing make will compile them all. When you write your own wizard, you can add it to the wizards/src directory and run make to compile it.

In the main directory you will also find lordofthecache, the simulator used to play the game. It takes two arguments, the names of the two teams in the match, for example:

 $ ./lordofthecache wizards/bin/greedy wizards/bin/fbi

You will also find a simple shell script, cachevall, which takes the name of a single team and pits it against every other team in turn. This lets you compare how one team does against all the others:

 $ ./cachevall wizards/bin/greedy wizards/bin

Compilers don't always produce the code you expect them to, so you will likely want to read the compiled assembly. As you have seen before, you can disassemble compiled code using objdump:

 $ /courses/cs3410/mipsel-linux/bin/mipsel-linux-objdump -xdl wizards/bin/greedy

Finally, you have two tools at your disposal. First, you will want to read the lordofthecache.h header file in the wizards/src directory. It is the only header file you should include in your wizard programs, besides any you might write yourself, but it is designed to be all you will need. It contains the basic functions Equip_Ring, Unequip_Ring, Eye_Of_Sauron, and so on, along with various other system calls like printf() and rand(). More importantly, its comments describe in detail how your program is laid out in memory and how simulator statistics and other data are mapped into your address space.

Your second major tool is the lordofthecache simulator which has several useful command line options. Type lordofthecache with no arguments to see a list.


Your grade for this assignment will have three parts.

What to submit

Note the due dates

By 11:59pm, April 24, 2015

As you're coding, keep this in mind:

One Ring to rule them all,
One Ring to find them,
One Ring to bring them all and in the darkness bind them.