Concordia DEC Lab


Ayo, the Awari Player,

or How Better Representation Trumps Deeper Search

Mohammed Daoud, Nawwaf Kharma, Ali Haidar, Julius Popoola

 

I.  Introduction

Awari is a very old game that seems to have originated in Ethiopia and gradually spread across Africa and beyond. The Masai say that Awale was invented by Sindillo, the son of the first man, Maitoumbe, and was originally called Geshe. The game is believed to be about 3500 years old [1].

Awari is a game that requires calculation and strategy, with the aim of capturing seeds while keeping to the rules agreed by the players [13, 16]. Due to the number of strategies and the amount of calculation involved, Awari has captured the attention of many Artificial Intelligence researchers and computer scientists [3, 4].

About ten different Awari shareware programs are currently available. Probably the most popular is "Awale", developed by Didier et al. of Myriad Software [5].

The aim of this paper is to illustrate the advantage of using more features within the evaluation function, rather than increasing the search depth, in a game such as Awari. Most of the previous work on Awari focuses on implementing improved search techniques, which are sometimes augmented with endgame databases [12].

A Genetic Algorithm (GA) was used to evolve a new Awari player called "Ayo", which uses a mini-max search. A special tournament selection technique made it possible to construct a fitness evaluation mechanism that measures the ability of one player relative to a pool of other players in the population. Ayo produced better results with depth-5 searches than the player of Davis et al., which used a depth-7 search [5]. The principle established in this paper is applicable to other games, such as chess and checkers.

 

Figure 1.  The Awari Board

A. What is Awari?

Awari is a game played by two players. It is traditionally played on a plank of wood with twelve pits, arranged into two rows (north and south) of six pits each, and 48 seeds, as shown in Fig. 1 [1, 14].

The objective of the game is to capture as many seeds as possible. Each seed is worth one victory point, and the first player to capture at least 25 seeds wins.

To start, four seeds are placed in each of the twelve pits on the board. The player on the north side must then choose one of the pits on his side of the board. The player scoops up all four seeds and sows them in a counter-clockwise fashion around the board: one seed is placed in the pit to the right of the original pit, then one seed in the pit to the right of that one, and so on until there are no more seeds left in the player's hand. When a player gets to the last pit on his side, he simply places a seed in his opponent's pit and keeps going counter-clockwise. If a player has enough seeds to go all the way around the board (generally known as a kroo move), he skips the pit he started with.

Seeds are captured when the last seed sown lands in an opponent's pit that then holds two or three seeds. In addition, if the previous pit holds two or three seeds after sowing, those seeds are captured as well, and so on, for up to five pits; it is therefore possible to wipe out an opponent's entire side of seeds in one move. During the game, players alternate turns; a player never takes two turns in a row.
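The sowing and capture rules above can be sketched in code. This is only an illustration: the pit numbering (0-5 for one player, 6-11 for the other), the function names, and the explicit five-pit cap on the capture chain are our own conventions, not taken from any of the programs described in this paper.

```python
# Sketch of one Awari move: sow counter-clockwise, skipping the starting
# pit on a full lap (the kroo rule), then capture backwards from the
# last pit sown while each pit holds 2 or 3 seeds (up to five pits).

def sow(board, pit):
    """Sow the seeds from `pit`; return (new_board, index_of_last_pit)."""
    board = board[:]                      # work on a copy
    seeds = board[pit]
    board[pit] = 0
    pos = pit
    while seeds > 0:
        pos = (pos + 1) % 12
        if pos == pit:                    # kroo: skip the emptied pit
            continue
        board[pos] += 1
        seeds -= 1
    return board, pos

def capture(board, last_pit, opponent_pits):
    """Capture backwards from last_pit while pits hold 2 or 3 seeds."""
    board = board[:]
    captured = 0
    pos = last_pit
    taken = 0
    while pos in opponent_pits and board[pos] in (2, 3) and taken < 5:
        captured += board[pos]
        board[pos] = 0
        taken += 1
        pos = (pos - 1) % 12
    return board, captured
```

For example, sowing two seeds from pit 5 into opponent pits holding 1 and 2 seeds leaves those pits holding 2 and 3 seeds, all of which are then captured.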

B. Related Work

Awari is one of the games that were played at the Computer Olympiad [1, 5] until 2000, when the Olympiad was discontinued due to a lack of interested participants. The Computer Olympiad is a multi-game event in which all of the participants are computer programs; its purpose is to find the most competitive programs in a number of categories, including chess.

In 1990, van der Meulen et al. constructed an artificial Awari player called Lithindon [2]. Lithindon used an alpha-beta search algorithm and an endgame database containing the game-theoretic values of all positions with 13 seeds or fewer.

In 1991, Lithindon's performance was improved with a pn-search and a larger database containing the game-theoretic values of all positions with 17 seeds or fewer [1]. Van der Meulen et al. improved Lithindon again in 1992 with the use of an Opening Book. These enhancements enabled Lithindon to win and defend the gold medal title at the Computer Olympiad in 1990, 1991 and 1992. After 1992 it was retired.

Lincke developed an Awari player known as "Marvin" [17], which automatically constructs opening books.

Marvin uses a hybrid method called drop-out expansion that can mix depth-first and breadth-first search techniques according to the setting of a single parameter.

Van der Goot designed an Awari player known as "Softwari", along with an algorithm to construct a large endgame database for the game [1]. The algorithm does not employ reverse moves, but instead takes multiple passes over the database until internal consistency is achieved. The resulting database contained 52 billion positions, representing 4.5% of the total state space of the game.

Marvin and Softwari played against each other in the Awari competition held in the 2000 Computer Olympiad [13, 15, 18]. In an eight game tournament, Marvin won the first game, while Softwari won the second game. The next five games were played so perfectly that they resulted in draws. In the final game Marvin made 13 errors but still won the game and the tournament.

All the work reported thus far focuses on search techniques and database utilization. Generally speaking, these previous players gave little importance to the evaluation function. In contrast, van Rijswijck proposed a GA-based technique that mines endgame databases for relevant features useful in the construction of evaluation functions [3]. The evaluation function was applied to a decision tree in which each node represents a binary decision based on atomic features, where the atomic features describe the current board position. Each board position is statistically mapped to one unique class whose evaluation values are not learned by game playing but extracted directly from the database. The class mapping is done using the evolved decision tree, which allows the evaluation value to be computed quickly.

Davis et al. designed an Awari player, which uses a simple evaluation function [5]. The evaluation function is a linear combination of features that represent the current game position. Each feature has a weight associated with it. This weight is evolved using an Evolution Strategy. The output of the evaluation function is used in a mini-max search to determine the next move to be made by the player.

Davis et al. evolved the weights of the evaluation function for a 7-level-deep mini-max search tree. They played each member of the population against every member of the population for 250 generations. The evaluation function used for deciding the next move took into account the number of pits vulnerable to having 2 or 3 seeds captured in the next move, and the current scores of both players. The performance of this program was measured by playing against "Awale". The program beat Awale 5-0 at the initiation level, 5-0 at the beginner level, and 3-2 at the amateur level, but lost 0-5 at the master level.

II.  Methodology

A. Overview

The name of our Awari player is "Ayo". This name was chosen based on what the game is generally known as in Nigeria. Ayo is designed to be able to compete and win against "Awale".

Ayo uses a mini-max search, which employs an evaluation function to assess all possible future moves. Our aim is to show that increasing the number of features in the evaluation function allows a reduction in the mini-max search depth, thereby reducing the response time, the CPU usage, and the amount of memory required during evaluation. The evaluation function used by Ayo is:

f = w1·a1 + w2·a2 + … + w12·a12

 

where:

w1..w12: The weights of each term of f. They range between [0, 1].

a1: The number of pits that the opponent can use to capture 2 seeds. Range: 0 - 6.

a2: The number of pits that the opponent can use to capture 3 seeds. Range: 0 - 6.

a3: The number of pits that Ayo can use to capture 2 seeds. Range: 0 - 6.

a4: The number of pits that Ayo can use to capture 3 seeds. Range: 0 - 6.

a5: The number of pits on the opponent's side with enough seeds to reach Ayo's side. Range: 0 - 6.

a6: The number of pits on Ayo's side with enough seeds to reach the opponent's side. Range: 0 - 6.

a7: The number of pits with more than 12 seeds on the opponent's side. Range: 0 - 6.

a8: The number of pits with more than 12 seeds on Ayo's side. Range: 0 - 6.

a9: The current score of the opponent. Range: 0 - 48.

a10: The current score of Ayo. Range: 0 - 48.

a11: The number of empty pits on the opponent's side. Range: 0 - 6.

a12: The number of empty pits on Ayo's side. Range: 0 - 6.

 

The evaluation function has twelve features; the first six (a1 - a6) are identical to those used by Davis et al. [5], and the remaining six were added to enhance Ayo's performance.
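Computing f is a straightforward weighted sum. In the sketch below, extracting the twelve feature values a1..a12 from a board position is abstracted away; only the combination step is shown.

```python
# Minimal sketch of the evaluation function f = w1*a1 + ... + w12*a12.
# `weights` are the twelve evolved weights in [0, 1]; `features` are the
# twelve feature values (a1..a12) measured on a board position.

def evaluate(weights, features):
    """Return the weighted sum of the twelve board features."""
    assert len(weights) == len(features) == 12
    return sum(w * a for w, a in zip(weights, features))
```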

Using a mini-max search tree, all possible moves are expanded to the required depth, and the evaluation function then scores each terminal node.
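A plain (unpruned) mini-max search of this kind can be sketched as follows. The helper functions `legal_moves`, `apply_move`, and `evaluate` are hypothetical stand-ins for Ayo's move generator and weighted-feature evaluation; none of these names come from the paper's implementation.

```python
# Mini-max sketch: expand moves to `depth`, score leaves with `evaluate`,
# and back values up the tree, maximizing on our turns and minimizing
# on the opponent's turns.

def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
    moves = legal_moves(state)
    if depth == 0 or not moves:          # terminal node: score it
        return evaluate(state), None
    best_move = None
    if maximizing:
        best = float("-inf")
        for m in moves:
            value, _ = minimax(apply_move(state, m), depth - 1, False,
                               legal_moves, apply_move, evaluate)
            if value > best:
                best, best_move = value, m
    else:
        best = float("inf")
        for m in moves:
            value, _ = minimax(apply_move(state, m), depth - 1, True,
                               legal_moves, apply_move, evaluate)
            if value < best:
                best, best_move = value, m
    return best, best_move
```

On a toy two-level tree with leaf values {LL: 3, LR: 5, RL: 2, RR: 9}, the maximizer at the root picks "L", since min(3, 5) = 3 beats min(2, 9) = 2.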

Each feature has a degree of importance, reflected in the value of its associated weight. These feature weights are unknown and there is no analytical method for calculating them, so we use a GA to evolve them.

B. Genetic Algorithm Operation

The primary function of our genetic algorithm is to evolve the feature weights of the evaluation function. Figure 2 provides an outline of the operation of the GA.

1) Problem Encoding:  A binary encoding scheme was selected for the chromosomes. Each chromosome consists of 48 bits, 4 bits for each weight (see Figure 3).

Fifty chromosomes are randomly created to construct the first population. The population size is constant throughout the run.
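The encoding and initialization steps above can be sketched as follows. The exact bits-to-weight mapping is our assumption (here, each 4-bit group is read as an integer 0..15 and scaled by 1/15); the paper only specifies 4 bits per weight and weights in [0, 1].

```python
import random

# Sketch of the 48-bit chromosome: 4 bits per weight, 12 weights.
# Mapping each 4-bit group to value/15 places every weight in [0, 1].

def random_chromosome(rng=random):
    return [rng.randint(0, 1) for _ in range(48)]

def decode_weights(chromosome):
    weights = []
    for i in range(0, 48, 4):
        bits = chromosome[i:i + 4]
        value = int("".join(map(str, bits)), 2)  # integer in 0..15
        weights.append(value / 15.0)             # scaled into [0, 1]
    return weights

# Fifty random chromosomes form the initial population.
population = [random_chromosome() for _ in range(50)]
```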

2) Fitness Evaluation:  The fitness of each chromosome is evaluated using a special form of tournament selection, which assesses the fitness of each individual relative to its peers. More specifically, twenty percent of the population (ten chromosomes) is chosen randomly to form a fitness set. Next, each chromosome in the population plays twice against every chromosome in the fitness set, once as north and once as south. A player is awarded 2 points for a win, 1 point for a draw and zero points for a loss. The sum of the points awarded to a chromosome over those twenty games is the fitness of that chromosome.
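This relative-fitness tournament might be sketched as below. The function `play(a, b)` is a hypothetical stand-in for a full game of Awari between two decoded players, returning "a", "b", or "draw"; the 20% sample size and 2/1/0 scoring follow the description above.

```python
import random

# Sketch of the tournament fitness evaluation: draw a fitness set of
# 20% of the population, then score every chromosome over two games
# (as north and as south) against each member of that set.

def evaluate_fitness(population, play, rng=random):
    fitness_set = rng.sample(population, len(population) // 5)
    scores = []
    for chrom in population:
        points = 0
        for opponent in fitness_set:
            for first, second in ((chrom, opponent), (opponent, chrom)):
                result = play(first, second)
                if (result == "a" and first is chrom) or \
                   (result == "b" and second is chrom):
                    points += 2              # win
                elif result == "draw":
                    points += 1              # draw
        scores.append(points)
    return scores
```

With a 50-chromosome population this gives a 10-member fitness set and twenty games per chromosome, for a maximum fitness of 40.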

3) Selection and Elitism:  Copies of the best ten percent of the population are placed without changes in the elitism set. Elitism ensures that the best chromosomes will not be destroyed during crossover and mutation.

The selection process is then implemented. Fitness-proportional selection (with replacement) is used to select chromosomes for the mating pool. The size of the mating pool equals ninety percent of the population size.
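Elitism and fitness-proportional (roulette-wheel) selection together can be sketched as follows; the handling of an all-zero-fitness population is our own defensive addition, not something the paper discusses.

```python
import random

# Sketch of elitism plus fitness-proportional selection with replacement:
# the top 10% are copied unchanged into the elitism set, and a mating
# pool of the remaining 90% is drawn with probability proportional
# to fitness.

def select(population, fitnesses, rng=random):
    n = len(population)
    ranked = sorted(zip(fitnesses, range(n)), reverse=True)
    elite = [population[i] for _, i in ranked[: max(1, n // 10)]]
    pool_size = n - len(elite)
    if sum(fitnesses) == 0:              # degenerate case: uniform choice
        pool = [rng.choice(population) for _ in range(pool_size)]
    else:
        pool = rng.choices(population, weights=fitnesses, k=pool_size)
    return elite, pool
```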

Figure 2.  GA Operation

 

4) Crossover: Single-point crossover is used with a probability of 0.5.

    

 

Figure 3.  The Chromosome Structure

 

5) Mutation: Bit-flip mutation was used with a probability of 0.001.

The chromosomes resulting from crossover and mutation are then combined with the elitism set to construct the next generation.
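The variation step, combining single-point crossover (probability 0.5), bit-flip mutation (rate 0.001 per bit), and the elitism set, might be sketched as below; the pairing of mating-pool members and the handling of an odd-sized pool are our own assumptions.

```python
import random

P_CROSSOVER = 0.5    # probability of applying single-point crossover
P_MUTATION = 0.001   # per-bit probability of a bit flip

def crossover(parent_a, parent_b, rng=random):
    """Single-point crossover: swap tails after a random cut point."""
    if rng.random() < P_CROSSOVER:
        point = rng.randrange(1, len(parent_a))
        return (parent_a[:point] + parent_b[point:],
                parent_b[:point] + parent_a[point:])
    return parent_a[:], parent_b[:]

def mutate(chromosome, rng=random):
    """Flip each bit independently with probability P_MUTATION."""
    return [bit ^ 1 if rng.random() < P_MUTATION else bit
            for bit in chromosome]

def next_generation(elite, mating_pool, rng=random):
    """Cross and mutate the mating pool, then add the elite copies."""
    offspring = []
    for i in range(0, len(mating_pool) - 1, 2):
        a, b = crossover(mating_pool[i], mating_pool[i + 1], rng)
        offspring.extend([mutate(a, rng), mutate(b, rng)])
    if len(mating_pool) % 2:             # odd pool: carry last member over
        offspring.append(mutate(mating_pool[-1], rng))
    return elite + offspring
```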

6) Termination Criteria:  The GA runs for one hundred generations. The weights extracted from the best chromosome of the final generation are then used in the evaluation function employed by Ayo.

                                                 


Last Updated April 6, 2004