Genetically Optimized N-Dimensional TicTacToe

CS473 - Introduction to Artificial Intelligence Practicum

Ernest M. Post

March 22, 1999


 

Abstract

Introduction

            A genetic algorithm allows a population of simulated automatons to each attempt to solve a problem and uses the characteristics of the most successful one(s) to create subsequent generations of more "skilled" automatons. This approach should provide a model that allows a computer to learn to play TicTacToe because the rules of the game are simple, but the playing space is enormous once the "board" is expanded beyond two dimensions. This project was designed to create a simulation wherein automatons are genetically evolved to play TicTacToe optimally with any number of dimensions and any edge length.

Methods

            The first part of the project was to develop the infrastructure for playing the game, determining when a player has won and implementing a playing strategy. The infrastructure had to be designed to be independent of the number of dimensions and edge size of the board. The playing strategy I chose was evaluating each potential move using the number of marks for each player in any line containing that cell (the "features").

            The second part of the project was to implement genetic algorithms. The "genetic material" ("genes") for the automatons ("Genies") were the weights to apply to each feature before summing them to choose the best move. The results of a set of games determined which Genie would "die" and which would contribute genes to a new Genie. "Mutations" were also simulated by adding randomness to the determination of genes when a new Genie was created.

            The third part of the project was to determine the effects of varying the number of dimensions and the edge size on the playing time and on the resulting genes. I performed experiments stretching the parameters as far as possible with the available resources.

Results

            The infrastructure was made possible with recursive algorithms. The machine's limit (playing time too long to allow analysis of multiple games) was reached at 10 dimensions for a board with 4 cells on an edge and at about 15 cells on an edge for a three-dimensional board. Running time increased exponentially for increasing dimensions and increasing edge length. The initial plans for Genie replacement, the starting point for genes, and the method for gene mutation failed to evolve a successful population. Subsequent adjustments resulted in the development of a fairly reproducible set of genes. Changing the goal from winning to drawing resulted in a distinctly different gene pattern.

Discussion

            This model could be explored further in several ways. For example, the evolutionary approaches that were abandoned might be successful if the population was larger or was allowed to evolve for a longer time. These questions are interesting because the original approaches more closely resemble biologic evolution. It would also be interesting to develop a MinMax algorithm for this model to compare its results and running time to the genetic algorithms.

Conclusions

            Developing Genetically Optimized N-Dimensional TicTacToe was a challenging and enjoyable experience. The model could be applied to the development of other genetic programs, to other problems involving a multidimensional integer space, or to reinforcement learning of an evaluation function.

Introduction

            Genetic algorithms are a common means of applying computer resources to problem solving. They work by allowing a population of simulated automatons to each attempt to solve the problem and then using the characteristics of the most successful one(s) to evolve subsequent generations of more "skilled" automatons. This approach should be useful in building a model that allows a computer to learn to play TicTacToe because the rules of the game are simple, but the number of different ways in which it can be played becomes enormous once the playing "board" is expanded beyond two dimensions. The playing space is too large to be readily analyzed using any deterministic approach, such as MinMax.

            The purpose of this project was to create a simulation wherein automatons would play TicTacToe and be genetically evolved to play it optimally. The goal was to create the simulation in a way that would generalize the play of the game to a board with any number of dimensions and any edge length. This allowed me to investigate the effects of those generalizations on the playing strategy that evolved and on the time required to play the games.

Methods

            The first part of the project was to develop the infrastructure for playing the game, determining when a player has won, and implementing a playing strategy. The infrastructure had to be designed so that it was independent of the number of dimensions in which the game was played and of the edge length of the board. The playing strategy that was chosen was to evaluate each potential cell in which a mark could be placed by determining the number of your own marks and the number of the opponent's marks in every line containing that cell. Any line that contained both your own and your opponent's marks was not considered because it was no longer a potential winning line for either player. The counts of potential winning lines included those with no markers of either type. For coding simplicity, these empty lines were counted twice, once for each player, and combined when analyzing the genes that evolved. For example, if the final genes had a weight of -3 for none of my markers and +3 for none of the opponent's markers, the combined weight was zero.
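
            As an illustration of this feature counting, the sketch below tallies the lines through one candidate cell into 2E counts (twice the edge length), indexed by how many of your own or your opponent's marks each line contains. The structure and function names are illustrative and are not taken from the project code; it assumes each line has already been reduced to its two mark counts.

#include <vector>

// One line through a candidate cell, reduced to the two counts that matter here.
struct LineCounts {
    int mine;    // my marks already in this line
    int theirs;  // the opponent's marks already in this line
};

// Illustrative sketch only: build the feature vector for one candidate cell.
// features[0..edge-1] counts lines with that many of my marks (and none of the
// opponent's); features[edge..2*edge-1] does the same for the opponent's marks.
// Lines containing marks of both players are skipped, and completely empty
// lines are counted once on each side, as described in the text.
std::vector<int> CellFeatures(const std::vector<LineCounts>& lines, int edge)
{
    std::vector<int> features(2 * edge, 0);
    for (const LineCounts& ln : lines) {
        if (ln.mine > 0 && ln.theirs > 0)
            continue;                      // blocked line: no one can complete it
        if (ln.theirs == 0)
            ++features[ln.mine];           // open (or empty) line counted for me
        if (ln.mine == 0)
            ++features[edge + ln.theirs];  // open (or empty) line counted for the opponent
    }
    return features;
}

            For a corner cell of a three-dimensional board, for example, this vector would be built from the seven lines that pass through that cell.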

            The second part of the project was to apply the principles of genetic algorithms. The features for each potential next cell in which to place your marker were the numbers of lines containing that cell with each possible count of your own or your opponent's markers. For example, on a three-dimensional board a corner cell was included in seven lines, so there might have been one line with two of your markers, two lines with one of the opponent's, etc. The "genetic material" ("genes") for the automatons was the set of weights to apply to each of the features before linearly summing them to choose the best move. For each set of games played, the win, lose, and draw scores were tabulated for each automaton ("Genie"). The scores were combined into a "lifeScore" and used to determine which Genie(s) would "die" (be discontinued) and which would contribute the genes to the creation of a new Genie. "Mutations" were also simulated by adding randomness to the determination of the genes when a new Genie was created. The Genies were divided into two families, one that was allowed to evolve and one with a fixed set of genes. That allowed me to assess the success of the evolution by comparing the lifeScores of the two families. Confirmation of the validity of the algorithms was attempted by evaluating the effect of changing the goal from winning to reaching a draw.
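
            A minimal sketch of the weighted evaluation itself is shown below. The function names are illustrative and this is not the project's actual GetMove4() implementation: each candidate cell's features are multiplied by the corresponding genes and summed, and the highest-scoring cell is chosen, with ties going to the lower linear index as described below.

#include <cstddef>
#include <vector>

// Illustrative sketch only: score one candidate cell as the weighted sum of
// its features (genes.size() == features.size() == 2 * edge length).
long ScoreMove(const std::vector<int>& features, const std::vector<int>& genes)
{
    long score = 0;
    for (std::size_t i = 0; i < features.size(); ++i)
        score += static_cast<long>(features[i]) * genes[i];
    return score;
}

// Choose the best empty cell; an empty feature vector marks an occupied cell.
int BestMove(const std::vector<std::vector<int>>& featuresPerCell,
             const std::vector<int>& genes)
{
    int best = -1;
    long bestScore = 0;
    for (std::size_t cell = 0; cell < featuresPerCell.size(); ++cell) {
        if (featuresPerCell[cell].empty())
            continue;                          // cell already occupied
        long s = ScoreMove(featuresPerCell[cell], genes);
        if (best == -1 || s > bestScore) {     // strict '>' keeps the earlier of tied moves
            best = static_cast<int>(cell);
            bestScore = s;
        }
    }
    return best;                               // index into the linear grid
}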

            The third part of the project was to determine the effects of varying the number of dimensions and the edge size on the playing time and on the resulting genes. The code was written so that almost all of the parameters are determined in the Main() function (Appendix C, Main.cpp), including the following: dimensions, edge length, number of Genies, strategies for each of the two families of Genies, number of Genies in the first family, number of games per Genie, mutation rate, and number of generations per run. Table 1 shows the parameters used in each experiment.

            The initial genetic methods that were used included weighted random choices for Genie replacement, a fixed "smart" starting point for the genes, equal numbers of invariant and evolving Genies, and random weights for mutated genes. Those methods were subsequently changed to a deterministic Genie replacement policy (the worst Genie dies and the best Genie provides the genes for the new Genie), randomized starting genes, only one invariant Genie, and only small changes (+1 or -1) in mutated genes. Four playing strategies were tried: 1. choosing randomly, 2. choosing the first available cell in the linear grid representing the N-dimensional board, 3. focusing only on winning or blocking a line that is about to be completed, and 4. evaluating every possible move by multiplying the genes (weights) by the numbers of lines that have each possible number of your or your opponent's markers. If two moves had equal weighted sums, the first of the two in the linear array was chosen.
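
            The replacement step that eventually worked can be sketched roughly as follows. This is only an illustration of the policy just described, not the project's LeDorVaDor() code, and the Genie structure and names here are assumptions: the evolving Genie with the lowest lifeScore is given a copy of the genes of the Genie with the highest lifeScore, with each gene given a small chance of drifting by one.

#include <cstdlib>
#include <vector>

struct Genie {
    std::vector<int> genes;   // the weights used by the evaluation function
    int lifeScore = 0;
};

// Illustrative sketch of the deterministic replacement policy: the worst
// evolving Genie is rebuilt from a possibly mutated copy of the best one.
// Genies with indices below firstEvolving are the fixed, non-evolving family.
void ReplaceWorst(std::vector<Genie>& pop, int firstEvolving, int mutatePercent)
{
    int best = firstEvolving, worst = firstEvolving;
    for (int i = firstEvolving; i < static_cast<int>(pop.size()); ++i) {
        if (pop[i].lifeScore > pop[best].lifeScore)  best = i;
        if (pop[i].lifeScore < pop[worst].lifeScore) worst = i;
    }
    pop[worst].genes = pop[best].genes;            // inherit the winning genes
    for (int& g : pop[worst].genes)
        if (std::rand() % 100 < mutatePercent)     // chance of an attempted mutation
            g += (std::rand() % 3) - 1;            // drift of -1, 0, or +1
    pop[worst].lifeScore = 0;                      // the new Genie starts fresh
}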

Results

            The infrastructure for playing in any number of dimensions (N) and any edge size (E) was achieved using recursive algorithms. I inductively found a method for determining the number of possible rectilinear and diagonal winning lines and confirmed the derived numbers empirically (for N=2-4, E=2-4) by sampling all possible combinations of E cells and counting those which formed a line. The playing space grows rapidly with increasing dimensions or edge size (Table 2). The necessary functions were implemented in four classes (Appendix C). The algorithm for mapping a cell to the lines that contain it is stated in the comments with the code (Appendix D, Board.cpp). Running time grew exponentially with increasing dimensions (Figure 1) and increasing edge length (Figure 2). The machine's limit (playing time over 12 hours per game or going beyond allowable memory) was reached at 10 dimensions for a board with 4 cells on an edge and at about 15 cells on an edge for a three-dimensional board. Analysis of evolutionary outcomes (using 10 Genies, 30 games per generation, 1000 or 5000 generations) was not attempted beyond 5 dimensions with an edge length of 4.
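
            Although the counting method used in the project was found inductively and checked empirically, the derived totals can also be checked against a standard closed form: a board with edge length E in N dimensions has ((E+2)^N - E^N)/2 winning lines, because each line corresponds to choosing, per dimension, either a fixed coordinate (E choices) or an increasing or decreasing traversal, with the all-fixed cases removed and each line counted twice (once per direction). A minimal check, assuming only this identity:

#include <cstdio>

// Integer power helper for the check below.
static long IPow(long base, int exp)
{
    long result = 1;
    while (exp-- > 0) result *= base;
    return result;
}

// Number of winning lines on a board with edge length e in n dimensions.
static long WinningLines(int e, int n)
{
    return (IPow(e + 2, n) - IPow(e, n)) / 2;
}

int main()
{
    std::printf("3x3 board:   %ld winning lines\n", WinningLines(3, 2));  // 8
    std::printf("4x4x4 board: %ld winning lines\n", WinningLines(4, 3));  // 76
    return 0;
}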

            The initial plans for the evolutionary rules failed to evolve a successful population. The "lifeScore" was initially calculated using normalized lifetime win, lose, and draw scores with a penalty for aging (living through generations). Despite the aging penalty, the initial approach often resulted in a static population because once a Genie was successful for a while, a new Genie could never accumulate enough winning games to outscore the older one. The calculation that was eventually used was:

lifeScore = 100 × (2 × win - 2 × lose - draw) / (2 × win + 2 × lose + draw) + 101
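
As a worked check of the formula (the numbers here are illustrative, not taken from an actual run): a Genie that only wins scores 100 × 1 + 101 = 201, a Genie that only loses, or only draws, scores 100 × (-1) + 101 = 1, and a record of 5 wins, 3 losses, and 2 draws gives 100 × (10 - 6 - 2)/(10 + 6 + 2) + 101 ≈ 112.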

The formula gives a range of possible scores from 1 to 201. The success of the Genies was assessed from the average of the final lifeScores of the family that was allowed to evolve. To retain some intergenerational stability, the win, lose, and draw scores from the previous generation of play were halved (with integer arithmetic) and carried into the next generation of games. Some of the experiments were done using 5 non-evolving and 5 evolving Genies; others used 1 and 9, respectively. After some initial experimentation (data not shown), the non-evolving Genies always used strategy 3 and the evolving ones strategy 4. Since evolving Genies were always playing against each other as well as against the static Genie(s), their average lifeScore could not exceed a ceiling of about 125 with 9 evolving Genies (Figures 3, 4, 5) or 160 with 5 evolving Genies (Figures 6, 7, 8). The evolution of successful Genies was usually fairly abrupt, as can be seen in the figures. The considerable intergenerational variation in the average lifeScores was due to the small genetic changes being made and to the variation in the number of drawn games played between two members of the evolving family. The randomness in the genetic drift and in the choice of playing opponents resulted in considerable variation in the number of generations required to reach a winning set of genes. Figures 7 and 8 show the evolution of two separate runs using identical parameters: one abruptly achieved winning genes after about 500 generations (Figure 7), while the other reached that goal only after about 1100 generations, including a 200-generation "pause" at mediocre scores (Figure 8). One run of 4-dimensional games did not achieve successful genes until about 2600 generations (Figure 4), but another run of 5-dimensional games did so at about 300 generations (Figure 5).

            To achieve the above results, I had to make other changes in the original plans for the genetic rules. Originally, the choice of which Genie(s) to replace and whose genes to copy was made using both the lifeScore and a random probability factor. When that failed to evolve successful Genies in a reasonable number of generations, I switched to a deterministic policy of replacing the Genie with the lowest score using the genes (with possible mutations) from the Genie with the highest score. Similarly, the initial approach to gene mutations was to allow a mutated gene (weight) to be replaced by a weight randomly chosen from a reasonable range. That was replaced by a policy that just allowed a small "drift" in the weights of +1 or -1 from the inherited genes. Mutation rates of 10 and 20 percent (likelihood of a gene changing in one generation) were tried and the latter was found to evolve successful genes faster.

            I also experimented with several different starting points for the genes that were to evolve. Initially I chose a set that matched the weights used for strategy 3, i.e., non-zero weights only for lines that were one turn away from winning or losing the game. I also tried starting from all zero weights and from two ranges of random weights, 0 to 9 and -5 to +5. Most of the results matched what intuition would suggest (Figure 9, Table 3). In 3 dimensions with an edge length of 4, starting from random genes (0 to 9) and running for 1000 generations, the average genes at the end (solid bars) placed the greatest weight on completing a line that already had 3 of your own markers ("+3") and the next greatest weight on blocking a line of 3 of your opponent's markers ("-3"). The weights for -2, 0, +1, and +2 were also significantly different from 0 (p < 0.05). The weight for 0 would be expected to be non-zero because that gene gives preference to cells that are included in the most empty lines, which seems as though it should be particularly important at the beginning of a game. The mean of the final lifeScores for these sets of Genies was 121.

            In Figure 9, the second set of bars (shaded) shows the weights that evolved after starting from all zero weights. The pattern was the same as the previous set, with the highest two bars at +3 and -3, but none of the other bar heights were significantly different from zero. In 4 of the 5 runs, the final lifeScore of the non-evolving Genie was 1, and in the fifth run it was only 35, so the Genies that evolved were successful. However, their average final lifeScore was only 112, which indicates that they had more draws among them. Since they were mostly playing against each other, it is hard to say whether that indicates they were doing better or worse than the previous set.

            The next set of bars (dotted) in Figure 9 shows the results for runs with 5 evolving and 5 non-evolving Genies. Despite running for 5000 generations, only 3 of the 5 runs reached a successful set of genes. The genes showed basically the same pattern, but only those at the extremes (3 in a line of either marker) reached significance. The runs using 4 dimensions and only 1000 generations also failed to win consistently; the gene for completing a winning line was the only one that was significantly different from zero. Table 3 also shows the results for experiments using an edge length of 6 (Figure 10). The runs with two dimensions were often unsuccessful, and the resulting genes did not fit any pattern. However, using three dimensions resulted in a set of genes that fit the expected pattern of placing the greatest importance on the lines that were nearest completion.

            The results seemed to confirm that using the numbers of markers in lines as the evaluation features was valid, but I also tried to confirm that the whole system worked by changing the goal from winning to reaching a draw. I exchanged the "win" and "draw" variables in the lifeScore determination and did two runs of 1000 generations each. Both runs "succeeded" in achieving better lifeScores for the evolving Genies than for their opponent (Table 3). Both gave higher weights to defensive moves (blocking the opponent's lines) than to offensive ones (Figure 9, diagonal bars). The weight for marking lines containing no markers of either player was the greatest, as would be expected for a strategy designed to reach a draw. More runs might have allowed the results to reach statistical significance.

Discussion

            The timing runs for 4 cells on an edge and 2-10 dimensions displayed an almost perfectly logarithmic relationship. However, the results of timing runs of various edge lengths for a fixed number of dimensions displayed much more variability in the individual test results, and repeated trials continued to produce variable results (data not shown). This variability might have been due to the operating system unpredictably slowing the progress of the games or to the random starting genes of the Genies. The randomness would affect the way the Genies played and therefore the length of the game.

            The 3-dimensional games resulted in a winning set of genes in fewer than 1000 generations whenever nine evolving Genies played against each other and just one static Genie. When five evolving Genies played against five static Genies, only three of the five runs were successful even though the runs included 5000 generations. These results may have been due to chance, but were more likely due to the evolving Genies having fewer opportunities to "fine tune" their genes by playing against each other. Additional trials would be needed to evaluate this, possibly including several trials with 10,000 generations to determine if success can always be achieved and if so, how many generations it usually takes.

            The results of the 4-dimensional trials were inconclusive. Only two of the five trials seemed to reach successful lifeScores in 1000 generations. When the sequence of scores was examined for another trial, it reached successful lifeScores after 2600 generations. Determining the reliability of reaching successful lifeScores in 4-dimensional play would probably require playing five sets of 5000 generations.

            This model could be explored further in several ways. The code could be optimized for faster playing time and lower memory utilization. These changes and/or running for a longer time or on a larger computer would allow several possible investigations. For example, the original evolutionary approaches that were abandoned might be successful if the population was larger or was allowed to evolve for a longer time. These questions are interesting because the original approaches more closely resemble biologic evolution. Another enhancement working towards a biologic model would be to allow "sexual" reproduction with cross-linking of genes and/or two copies of the genes in each Genie. It would also be interesting to develop a MinMax algorithm for this model to compare its results and running time to the genetic algorithms.

Conclusions

            Developing Genetically Optimized N-Dimensional TicTacToe was a challenging and enjoyable experience. Conceptually, the most difficult tasks were determining the number of winning diagonal lines and finding methods for mapping each cell to the diagonal lines that contained it. These tasks were accomplished in part by changing from a geometric mental model to one of considering each cell abstractly as a set of coordinates. It was rewarding to demonstrate that using the intuitively correct features usually resulted in a successful playing strategy. The exponential rise in playing time with increasing board space was expected. The number of parameters that can be varied in this model is large, so there are many additional interesting experiments that could be undertaken. The Genetically Optimized N-Dimensional TicTacToe model might also be applicable to the development of other genetic programs, other problems involving a multidimensional integer space, or reinforcement learning of an evaluation function.

 

 

Appendix A. Electronic Documentation

The following files are included in the Project17 folder:

1. The Code folder contains a file for Main and files for each of the three classes and their headers.

2. The Results folder contains files with all the raw data, each named for the date on which it was generated. It also contains the Quattro Pro spreadsheet used to generate the Tables and Figures.

3. This report (final.doc).

 

Appendix B. Original Proposal

Educational Goals

  1. Create a computerized game that follows a set of rules
  2. Develop a class of automatons that use the rules to play
  3. Provide a "genetic" mechanism for the automatons
     a. Allow the genetics to determine how the automaton plays
     b. Allow the "genes" to evolve with generations
  4. Determine the limitations as the game grows in size and number of dimensions

Phase 0 – Establishing the infrastructure

Write programs to instantiate the game, two classes of automata to play it (against each other), rule-based strategies for winning, rules of life (living and reproduction), display mechanisms, and an accounting/evaluation system. The accounting system will maintain a family tree to track the genetics and win/loss record of each automaton. It will use the tree to determine how well each class is evolving. Each class of automata will be allowed to evolve from a single parent automaton, but a finite upper limit on the combined population will be set. Reproduction will be allowed to take place either sexually or asexually.

Phase 1 – three-dimensional, 4 X 4 X 4 playing space, linear evaluation function strategy with a limited set of features

The "DNA" will be the set of weights used in the linear evaluation function. Initially I will use a three pronged evaluation function. It will first consider the number of potential winning combinations that placing a piece would allow. For example, placing the first piece on a 4 X 4 X 4 space in one of its corners allows 7 possible winning directions (lines). The evaluation function would determine the number of such possible winning lines at the time the piece is placed. The second part of the evaluation function will be to consider how important a move is to block the opponent's lines-in-the-making. The third part will be determining the value of a move in its ability to create longer lines for the current player. Each of the possible values generated for each of the three parts will be weighted according to the genes in the automata.

Each automaton plays another randomly selected automaton and probably (p) gets to reproduce asexually (allowing mutations) or sexually with another winner, allowing crossover and mutations. The loser probably (possibly using 1-p) does not get to reproduce. Automata will age. After the maximum population is reached, one of the older and most frequent losers will be deleted from the population each time a new automaton is bred.

One measure of success of the project will be the evolution of a race of automatons each of which can consistently beat any naïve automaton (one with no evaluation functions). This will be determined by having automata periodically play "unofficial" games against a naïve automaton. They will be unofficial in the sense that they are not used for life/death/reproductive determinations. Different evolutionary schema will be tried, such as different mutation and crossover rates, to determine their effect on how rapidly an optimal population can evolve. Similarly, the effect of allowing or not allowing inter-class pairing will be examined.

Phase 2 - Expand one or more of the following:

a. number of dimensions of the playing space

b. size of each dimension of the playing space

c. number of features included in the evaluation function, possibly including pattern matching or features determined by the automatons if I can achieve that level of generalization.

I will determine the effect of those changes on the time to play each game and on the generations it takes to get to an optimal population.

Phase 3 (if time allows)

Compare the speed and success of the optimized automatons with a MINMAX approach. Once the board size and/or dimensions become non-trivial, this will also entail developing heuristics to evaluate intermediate board positions.

 

Implementation

  1. Language – I plan to implement this in C++. That should provide rapid execution, which will be important as the project progresses. Java might provide a nicer interface, but I am just beginning to learn it.
  2. Phase 0 should be achieved by 10/26.
  3. Phase 1 should be achieved by November 27. While that is near the end of the semester, the programs will be written so that expansion to phase 2 and possibly phase 3 can be fairly easily achieved.

 

 

Appendix C. Description of the Code

General Comments

1. Throughout the program, I used arrays to possibly give a faster run time and to allow easier conversion to Java if that becomes desirable.

2. My effort was focused on writing code that runs and I did not spend a lot of time trying to improve the running time or space utilization of the code.

3. I also did not focus on maintaining the privacy of class members and may have left some members public that could be private.

4. I wrote my own Pow(int,int) function because it is likely to be more efficient than the floating-point pow() in math.h.

5. Many functions were used during the development and validation of the code, but not in the final product. Such use is indicated in the function descriptions that follow.

6. Internal documentation in the code needs improvement, but is fairly complete here.

Board.h & Board.cpp

I created a Location class to allow a board position to be specified by the site[ ] integer array which has one element for each dimension of the board. The only other member of the class is gridSize (=edge length), which is set when the constructor is called and should always be the same as the gridSize of the board. (I could not think of a convenient implementation that shared the gridSize member with the Board class while allowing multiple Location instances.)

I created the Board class as the playing board. Its cells are each an enumerated Marker that has the values X, O, or _ (empty). The grid[ ] array for the board is linear with one element for every board position. The size of the array, gridLen, is the size of an edge (gridSize) raised to a power equal to the dimensions of the board (dimensions). The linear representation of an N-dimensional board facilitates the construction of the board and much of the analysis. Board also includes a lines[ ] array with two elements for each possible winning line on the board. One element tracks the X markers and the other the O markers. I derived algorithms, described below, to map each cell on the board to all of the lines in which it is contained. The third array in this class is the markerCnt[ ] array, which keeps the count of the features used to evaluate each potential move, i.e., the number of lines with each number of your or your opponent's markers.
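
For orientation, the description above corresponds roughly to a declaration like the following. This is only a reader's sketch reconstructed from the prose; the authoritative declarations are in board.h (Appendix D), and details such as member types, argument lists, and additional members will differ.

// Reader's sketch only -- see board.h in Appendix D for the real declarations.
enum Marker { X, O, _ };        // "_" denotes an empty cell

class Location {
public:
    int  gridSize;              // edge length; should match the Board's gridSize
    int* site;                  // one coordinate per dimension of the board
};

class Board {
public:
    int     dimensions;         // N, the number of dimensions
    int     gridSize;           // E, the number of cells on an edge
    int     gridLen;            // gridSize^dimensions, the length of grid[ ]
    Marker* grid;               // linear array with one element per cell
    int*    lines;              // two counters per winning line (X count, O count)
    int*    markerCnt;          // feature counts used to evaluate a potential move

    int  Site(Location& loc);              // coordinates -> index into grid[ ]
    void Unsite(int index, Location& loc); // index into grid[ ] -> coordinates
};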

Board Functions (alphabetically)

BoardClear() clears all the markers off the board and sets the elements of the lines[ ] and markerCnt[ ] arrays to zero.

Display() calls the recursive DisplayRec() function.

DisplayRec() displays the markers currently on the board using a recursive function to allow the display of a board of any edge size and any number of dimensions. If the board has more than two dimensions, the third dimension is shown across the page (if it fits). If there are more dimensions, they are shown with descriptive headers for layers showing the last three (or two) dimensions.

FindLines() was used during program development to validate the process of marking winning lines. It marks every cell on the board to fill all places in all lines.

IncrLine() is called by MarkLines() and MarkDiag() to actually increment the marker count in each of the appropriate lines. It also determines if there is a winner yet.

Mark() allows the insertion of a Marker on the Board at a specified Site. A second function with the same name retrieves the Marker from any specified Site. (The latter was used during development only.)

MarkDiag() is called by MarkDiagonals() as part of the process of mapping a cell to the diagonal lines in which it is included. This function could probably be included in MarkDiagonals() rather than existing separately.

MarkDiagonals() is called by MarkLines() to recursively find and mark all the diagonal lines that contain a specific cell of the board.

MarkLines() determines all of the lines that include any specific cell of the board. When the marker parameter is X or O it increments the appropriate counts of elements in each line. When the function encounters a line that is full (a winning line), it returns true.

MinMaxRegr() is the beginning of an attempt to implement MinMax; the function has not been completed.

Show() generates a character representing the Marker at any Site.

Site() uses a Location instance to find the index in the linear grid[ ] array from the indices specified in each dimension. Site() is recursive so that it remains valid for any number of dimensions.

Unsite() performs the complementary function of Site(). It uses the grid[ ] position to determine a set of coordinates that it stores in a Location instance.
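
The underlying index arithmetic can be illustrated with the small self-contained sketch below. This is not the project's recursive implementation, just the same idea written iteratively: the linear index is a mixed-radix number whose digits are the coordinates, with the edge length as the radix.

#include <cstdio>

// Sketch of a coordinates <-> linear index mapping for an N-dimensional board
// with 'edge' cells per edge (not the project's actual Site()/Unsite() code).
int SiteIndex(const int* coords, int dims, int edge)
{
    int index = 0;
    for (int d = dims - 1; d >= 0; --d)   // treat coordinates as base-'edge' digits
        index = index * edge + coords[d];
    return index;
}

void UnsiteIndex(int index, int* coords, int dims, int edge)
{
    for (int d = 0; d < dims; ++d) {      // peel the digits back off
        coords[d] = index % edge;
        index /= edge;
    }
}

int main()
{
    const int edge = 4, dims = 3;
    int corner[dims] = {3, 3, 3};         // the "last" corner of a 4x4x4 board
    int idx = SiteIndex(corner, dims, edge);
    std::printf("index of (3,3,3) = %d\n", idx);      // 63 on a 4x4x4 board
    int back[dims];
    UnsiteIndex(idx, back, dims, edge);
    std::printf("back to (%d,%d,%d)\n", back[0], back[1], back[2]);
    return 0;
}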

WinCoord() was used during development to determine the coordinates of the cells included in a winning line by "decomposing" the lineNo.

WinCount() was used during development to recursively enumerate winning lines. It works by checking all possible combinations of the minimum number of markers needed to complete a line.
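
The test at the heart of that enumeration (deciding whether a given set of E cells forms a line) can be sketched as follows; this is a generic version of the idea, not the project's code. After sorting the cells, every coordinate must change by the same amount (-1, 0, or +1) from one cell to the next, and at least one coordinate must actually change.

#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch only (not the project's code): given E cells of an N-dimensional board
// with edge length E, each as a vector of coordinates, decide whether they form
// a winning line. The caller is assumed to pass exactly E distinct, in-bounds cells.
bool IsWinningLine(std::vector<std::vector<int>> cells)
{
    if (cells.size() < 2) return false;
    std::sort(cells.begin(), cells.end());            // lexicographic order
    bool varies = false;
    for (std::size_t d = 0; d < cells[0].size(); ++d) {
        int delta = cells[1][d] - cells[0][d];         // step that must repeat
        if (delta < -1 || delta > 1) return false;
        for (std::size_t i = 1; i + 1 < cells.size(); ++i)
            if (cells[i + 1][d] - cells[i][d] != delta) return false;
        if (delta != 0) varies = true;
    }
    return varies;   // at least one coordinate has to move along the line
}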

WinFindCoordRegr() is a recursive function called by WinCoord() to allow finding winning coordinates for a board of any size.

WinDiagonals() is called by WinLines() and recursively calculates the number of winning diagonal lines.

WinLines() calculates the number of possible winning lines for a board.

Other functions in Board.cpp that are not part of the Board class

Combinations(a,b) gives a Choose b.

Pow(a,b) gives a^b.
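
Minimal illustrative versions of these two helpers are shown below; the project's own implementations in Board.cpp may differ.

// Illustrative versions only; the implementations in Board.cpp may differ.
int Pow(int a, int b)                // a^b using integer arithmetic only
{
    int result = 1;
    while (b-- > 0) result *= a;
    return result;
}

long Combinations(int a, int b)      // a choose b
{
    if (b < 0 || b > a) return 0;
    long result = 1;
    for (int i = 1; i <= b; ++i)
        result = result * (a - b + i) / i;   // exact at every step
    return result;
}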

Genie.h & Genie.cpp

I created the Genie class as the repository of the information about each automaton (Genie). Its data members include the following, most of which are self-explanatory: age, win, lose, draw, strategy, color, lifeScore, genes[ ].

"color" is actually used as a Genie number with a unique number assigned as each Genie is created.

genes[ ] is the array of weights for each Genie that are multiplied by the features to determine the desirability of a potential move. The genes[ ] for any Genie remain constant for its lifetime, but the genes for the whole population evolve.

lifeScore is the evaluation function based on win, lose and draw that is used to determine which Genies will die and which will be used as the gene models for newly created Genies.

strategy is the playing strategy being used by this Genie. (See GeniePop::GetMove1-GetMove4 below.)

Genie Functions

Init() initializes all the values for the Genie. During evolution, Genies are reinitialized rather than being deleted and "new"ed.

LifeScore() calculates the lifeScore as described above.

 

GeniePop.h & GeniePop.cpp

I created the GeniePop class to contain the information about the population of Genies and the functions for playing the game, evolving the population and recording the results.

geneCount is the number of genes per Genie, which is twice the number of cells per edge.

gPop[ ] is an array of all the Genies.

lastBlack is the highest (positive) Genie number.

lastRed is the most negative Genie number. Negative numbers are used for the first class of Genies.

nGenies is the number of Genies in the population.

pWeighting is no longer used, but was the likelihood of being chosen to play if a non-linear algorithm is used.

GeniePop functions

Chosen() randomly chooses two Genies to play each game. Code for a non-linear choice is written, but not used.

GenieDisp() displays (and optionally prints to a file) the current status of all the Genies including for each their "color" (Genie number), strategy, win, lose, draw, and genes (weights).

GetMove1() finds the next move using strategy 1, which is random.

GetMove2() finds the next move using strategy 2, which is linear, i.e., the cell in which to play is the first available one in the linear grid[ ] array representing the N-dimensional board.

GetMove3() finds the next move using strategy 3, which focuses only on winning or blocking a line that is about to be completed.

GetMove4() finds the best move using strategy 4, which evaluates every possible move by multiplying the genes (weights) by the numbers of lines that have each possible number of your or your opponent's markers. If two moves have equal weighted sums, the first of the two in the linear array is chosen.

LeDorVaDor() finds the Genies with the best and worst lifeScores. The one with the worst score is replaced by a new Genie whose genes are copied from the one with the best score, with possible mutations.

Maybe() is not presently used, but was used to map a lifeScore to a boolean non-deterministically, introducing some randomness into the life and death choices.

Mutate() creates the genes for a new Genie by either copying each gene from the parent or changing that gene (weight) by -1, 0, or +1. The mutate parameter (divided by 100) gives the chance of attempting to alter a gene, but because one of the three equally likely changes is 0, the actual chance of changing a gene by +1 or -1 is only 2/3 × mutate/100.
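
For example (the numbers are illustrative only), with mutate set to 20 each gene has a 20/100 chance of an attempted mutation, and two of the three equally likely drift values are non-zero, so roughly 0.20 × 2/3 ≈ 13% of genes actually change when a new Genie is created.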

TicTacToe() plays the game.

 

Main.cpp

The main function is used to vary the parameters and to display and save most of the results. It also was used to implement testing of each of the algorithms as it was coded so the file includes several segments of code that have been "commented out."

a(E, dim) is the playing board, with an edge length of E in each of its dim dimensions.

dim is the number of dimensions of the board (referred to as N in the body of the text).

gamesPerGenie is multiplied by noGenies to get the number of games played in each generation. That makes the average number of games played in a generation by each Genie = 2 * gamesPerGenie, since 2 Genies play every game.

generations is the number of generations of Genies that play and evolve in each run.

geniePop is the instantiation of the GeniePop class.

mutate is the mutation rate * 100 used during the creation of a new Genie. However, a "mutation" can be a change in a gene (weight) of -1, 0, or +1, so the true mutation rate is 2/3 of mutate.

noGenies is the number of Genies in the population.

noGenes is the number of genes for each Genie, which is twice E, the edge length.

repl is the index of the first of the Genies who evolve, the second family. Those below repl are immortal and retain their starting genes.

strat1 is the playing strategy (1-4) of the first family of Genies.

strat2 is the playing strategy (1-4) of the second family of Genies.

Functions in Main.cpp

JulianDate() returns the Julian Date as a character array to use as the filename for saving data.

 

 

Appendix D. Code

board.h

genie.h

geniePop.h

board.cpp

genie.cpp

geniePop.cpp

main.cpp