# AI for Soul of Eden

AI for Rayark's “Soul of Eden”
GPU 2017 Final Project
Team 1

## Introduction to Soul of Eden

#### ． Two main groups and over 150 cards.

(including heroes, soldiers, buildings, and magic)

#### ． Players adapt their strategies to different situations to win the battle.

For more details, please refer to Soul of Eden.

## Basic Game Rules

．Simplify the game
The original game is complicated. Since our main goal is to demonstrate our AI's feasibility, we simplify the game as follows.

．Destroy the opponent's castle
Within the given time, players need to destroy the opponent's castle. The player who destroys it first wins the game. If neither castle is destroyed within the time limit, the game is considered a draw.

．Game Cards
There are 3 kinds of cards in the game.
Every 4 seconds, each player chooses one kind of card to play.
After playing 30 cards, the players stop playing cards and wait for the final result.

### Time Limit

240 seconds

### Number of Cards

30 (each player can play in a round)

### Castle

2800 HP
Medium attack power
Fast attack speed

### Types of Card

#### Tank

621 HP
Low attack damage
Blocks attacks from the enemy's archers

#### Archer

162 HP
High attack damage
The main attacker in the game, usually protected by tanks

#### Gunner

162 HP
Medium attack damage
Buffs nearby teammates

### During the game

Each player chooses a type of card to play every 4 seconds.
Each player stops playing cards after 120 seconds.
The game continues until one of the castles is destroyed or the time limit is reached.

### Goal

Destroy the enemy’s castle.
If neither castle is destroyed within the time limit, the game is considered a draw.
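The timing rules above can be sketched in a few lines. This is a minimal illustration with constants taken from the rules; the helper names are ours, not from the project's code.

```python
# Constants from the simplified game rules above.
CARD_INTERVAL = 4      # seconds between card plays
CARDS_PER_PLAYER = 30  # cards each player plays per game
TIME_LIMIT = 240       # seconds until the game ends in a draw

def card_phase_length():
    """Seconds during which players are still playing cards."""
    return CARD_INTERVAL * CARDS_PER_PLAYER  # 4 s * 30 cards = 120 s

def game_over(time_elapsed, castle_hp_a, castle_hp_b):
    """A game ends when a castle falls or the time limit is reached."""
    return castle_hp_a <= 0 or castle_hp_b <= 0 or time_elapsed >= TIME_LIMIT
```

Note that the card phase (120 s) ends well before the time limit (240 s), which matches the rule that players stop playing cards after 120 seconds.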

## Data Preprocessing

There are 3 kinds of cards we can use and 25 positions where we can place them.

-> Therefore, a behavior $b$ is determined by the card we use and the position where we place it.

Denote $c \in \{0,1,2\}$ as the card and $p \in \{0,1,\dots,24\}$ as the position.

$b = 25 \times c + p$, $b \in \{0,1,\dots,74\}$, so each (card, position) pair maps to a unique behavior.

We then use $b$ as the format of our training data.
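The encoding is a simple base-25 index and is easy to invert. A minimal sketch (the function names are ours):

```python
def encode_behavior(c, p):
    """Map a card c in {0,1,2} and a position p in {0,...,24}
    to a single behavior id b in {0,...,74}."""
    assert c in range(3) and p in range(25)
    return 25 * c + p

def decode_behavior(b):
    """Invert the encoding: recover (card, position) from b."""
    assert b in range(75)
    return divmod(b, 25)  # (c, p)
```

For example, `encode_behavior(2, 24)` gives 74, the largest behavior id, and `decode_behavior` recovers the original pair.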

## Results

#### ． Statistics Model vs. Random Model

We let the statistics model and the random model play over 3,000 games; the statistics model won more than 80% of them, showing that it is clearly superior to the random model.

#### ． RNN Models vs. Statistics Model

We let the RNN models play against the statistics model. Both models always choose the behavior they consider best, so each game is deterministic. All of the RNN models defeated the statistics model, showing that the RNN models are stronger.

#### ． Two Versions of the RNN Model

We also let the two versions of the RNN model play against each other. Their winning rates are nearly equal.

### Different Parameter Sets of RNN Models

In the RNN model, we can decide the number of layers of stacked cells. Here we compare 2 and 4 layers. The training curves are shown in the graph.
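To make "stacking cells" concrete, here is an illustrative sketch, not the project's actual model: a stack of simple tanh RNN cells over one-hot behavior inputs, where `num_layers` plays the role of the 2-vs-4 parameter compared above. All names and dimensions are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_stack(num_layers, input_dim=75, hidden_dim=32):
    """Build num_layers simple tanh RNN cells stacked on top of each other."""
    layers = []
    dim = input_dim
    for _ in range(num_layers):
        layers.append({
            "Wx": rng.normal(scale=0.1, size=(dim, hidden_dim)),  # input weights
            "Wh": rng.normal(scale=0.1, size=(hidden_dim, hidden_dim)),  # recurrent weights
            "b": np.zeros(hidden_dim),
        })
        dim = hidden_dim  # each layer feeds the next
    return layers

def forward(layers, seq):
    """Run a sequence of one-hot behavior vectors through the stacked cells."""
    states = [np.zeros(l["Wh"].shape[0]) for l in layers]
    for x in seq:
        for i, l in enumerate(layers):
            x = np.tanh(x @ l["Wx"] + states[i] @ l["Wh"] + l["b"])
            states[i] = x
    return x  # top-layer state, to be fed into an output layer over 75 behaviors

seq = np.eye(75)[[3, 27, 51]]     # a 3-step behavior sequence, one-hot encoded
h2 = forward(make_stack(2), seq)  # 2 stacked layers
h4 = forward(make_stack(4), seq)  # 4 stacked layers
```

A deeper stack adds capacity (more parameters, more nonlinear steps per time step), which is one plausible reason the 4-layer model reaches a lower training loss.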

## Conclusions

#### ． RNN > statistics > random

The statistics model performs better than the random model, and the RNN models outperform the statistics model.

#### ． RNN v1 ≈ RNN v2

Version 1 and version 2 RNN models have similar performance.

#### ． 4-layer RNN loss < 2-layer RNN loss

The model that uses 4 layers of cells reaches a lower training loss than the model that uses 2 layers.

## Future Work

#### 1. High quality training data

Since we now have some basic models, we can use them to generate higher-quality training data. We believe the models would perform better if trained on such data.

#### 2. Mapping (embedding) method in RNN models

In version 2 of the RNN model, we could try a more complicated mapping method to map the 2 inputs into a vector. Version 2 may not work better simply because the current mapping is too simple.
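One richer mapping would be to give each card and each position its own learned embedding and concatenate them, instead of collapsing the pair into a single id. This is our suggestion, not the project's implementation; the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
card_emb = rng.normal(size=(3, 8))   # one 8-d vector per card type (placeholder for learned weights)
pos_emb = rng.normal(size=(25, 8))   # one 8-d vector per board position

def embed(c, p):
    """Map (card, position) to a 16-d vector by concatenating two embeddings,
    so the model sees card and position as separate features."""
    return np.concatenate([card_emb[c], pos_emb[p]])
```

In a trained model, `card_emb` and `pos_emb` would be learned jointly with the RNN rather than fixed random tables.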

## Team Members

Wei-Chung Liao
B02902105@ntu.edu.tw

Chun-Wei Chen
B02902063@ntu.edu.tw

Po-Yao Chen
B02902071@ntu.edu.tw

NTU CSIE
2017.6.29