February 2014 blog archive

This page contains all the blog posts from February 2014. To read the most recent blog posts, click here.

23/02 - preparing next steps

The past week was rather quiet. I had to reinstall my PC, along with all my programming-related software, which always takes a bit of time. But the last few core elements of the AI have been added: the AI can now interpret the strength of its opening hand, and it can choose what to discard during discard phases. The AI is now also capable of interpreting the results of a simulation of its own combat phase, which allows it to compare that outcome with the scores of card plays. As in HDx, if the AI runs out of cards to play, it will end the turn and start the combat phase. But in HDx, it would always run a quick calculation before playing any cards to see if it could defeat the opponent simply by attacking. This still happens, but the AI now needs to take into account that it might have multiple opponents: if the AI has cards that give it an advantage over one opponent, it will often choose to play those cards even if it could already defeat another opponent by attacking.
The score of the combat phase simulation also serves as a cut-off for low-scoring cards. In HDx and HDs, the AI played rather aggressively, as it would always play everything it had energy for, even if those cards had relatively low scores (i.e. they weren't very efficient or useful). With the setup in HD3, if the AI has cards it can play or activate, but their scores are lower than that of the combat phase, they simply don't get played: the AI ends its turn so that the combat phase can start. This is of course something that will have to be monitored closely, to avoid situations where the AI stops playing cards and goes straight for the combat phase for several turns in a row.
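The cut-off described above can be sketched as a simple comparison. This is a hypothetical illustration, not the actual HD3 code; names like choose_action and the (card, score) tuples are invented for the example.

```python
# Hypothetical sketch of the combat-phase cut-off: the AI only plays a
# card if its score beats the score of simply starting combat now.

def choose_action(playable_cards, combat_score):
    """Return the best card to play, or None to end the turn and attack.

    playable_cards: list of (card_name, score) tuples, already scored.
    combat_score:   score of simulating the combat phase right now.
    """
    if not playable_cards:
        return None  # nothing to play: end turn, start combat
    best = max(playable_cards, key=lambda c: c[1])
    # Low-scoring cards are cut off: if even the best play scores below
    # the combat simulation, the AI prefers to end its turn and attack.
    if best[1] <= combat_score:
        return None
    return best[0]
```

The risk noted above shows up directly here: if combat_score keeps winning this comparison, the AI will skip its plays turn after turn.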

From here onwards, the AI is ready to be updated so it can handle the other abilities in the game. Similar to how I added the mechanics for a default duel, I've started with a basic but fundamental set of elements for the AI; that way I can test both the AI and a normal duel before adding more complex things.
One of the first things I'll have to add next is a proper game over detection system. Right now, a duel just keeps on going. This helps when I'm testing the AI, but it means there's no way to end a duel and start a new one, unless I close the game completely and restart it. Having a game over check that leads to the game over screen will allow me to quickly start a new duel when needed. For now, the game over screen will be very basic, as I haven't made any final decisions yet on card and item drop rates, scoring, achievements and so on.

The very basic elements of the game are now ready, which is an important step in the development of HD3. From here onwards I can start adding new mechanics and content, and gradually work towards a first alpha release. At this point I don't know yet which mechanics I'll work on next, nor how long it will take to have an alpha ready. The main purpose of a first alpha is to test the basic mechanics of the game, so it's likely that the new mechanics for HD3 won't be part of it: these require the core of the game to be as bug-free as possible, so it's more important that the core has been tested thoroughly first.
Content, for now, means only cards (other things are planned, but they require new mechanics to be present first). Adding cards means I need to add code (the ability handler) so that the game can correctly apply the effects of abilities on those cards to a duel in progress, but also code so that the AI can interpret the results of a simulation involving cards with such abilities.
In HDx, I added the entire set of cards in one go, which was possible since both the ability handler and the AI code didn't differ much from those in HDs, so I knew what to expect. This wasn't a very fun way to work, however: for a relatively long time, I had to focus on a single aspect of the game, and coding is more fun when there's some variation. The plan with HD3 is to add card sets gradually, in between adding other elements of the game. I've already mentioned that HD3 currently supports a small set of cards, so I'll probably work on a new mechanic first before adding another set of cards, writing the code for their abilities and then testing that set. This means an alpha is unlikely to contain all the cards currently in HDx (some of these need changes first, due to the new ability system in HD3, before they can be added).

16/02 - AI testing

All abilities currently in the game now have code that allows the AI to interpret their usefulness and impact on the board. All of this needs to be tested, of course. The first and second layers had already gone through a testing phase where all cards and abilities were simply assigned random scores. This allowed me to make sure that this part of the AI was working as intended and that the AI could play or activate cards. These tests were relatively simple, as the code of these two layers is not that complex, but with the ability code it's different.

One of the negative elements of having a full simulation setup (read about the pros and cons here) is that it's quite hard to get feedback from these simulations, as they run entirely in memory and have no visual component at all. I can add code that prints out various things for me to read while the game is running, but there are so many things going on in a single AI turn that this often amounts to lots and lots of text getting printed to the screen. In a turn where the AI plays 3 or 4 cards and contemplates activating a card or two, this can easily result in almost 1000 lines of text. This varies greatly from card to card, as some cards only need one or two simple simulations, while others might require multiple simultaneous simulations. So when something happens that doesn't look right, I have to go through all this text feedback and try to recreate the entire process the AI went through to see where things went wrong.

Another thing is that this final layer of the AI needs to be tested in two ways. There's pure mechanical testing, which looks for bugs in the code. If the AI targets one of its own ships with a cruise missile, that's a mechanical bug, since the AI has been set up so that it shouldn't do such things. If the AI makes a wrong or sub-optimal decision when playing or activating cards, this could be a mechanical bug (an error in a calculation, perhaps), but it could also be a content bug, and those are hard to track down and even harder to fix.
For now, the focus is purely on the mechanical, though I do deal with obvious content problems. Content bugs in the AI often only get revealed in very specific cases; this has been true since HD Spectrum. The AI might make correct decisions for certain cards except in a few rare gameboard states. If I want to fix something like this, I really must be able to see it myself. While player reports have been extremely helpful in the past when it came to AI problems, some problems are the result of elements on the board that one might not expect to be related to the actual problem. If a player tells me a certain card should have activated its ability on a certain ship, but it didn't, it's likely that the AI did so not because it didn't see this ship, but because other cards on the board made the AI choose another target, or even activate or play an entirely different card. With the large number of cards and abilities in the game, tracking down specific problems can take a very long time, especially when they occur very rarely. During development of HDx I played several thousand games; in a lot of these, the AI was given decks with potentially troublesome cards, in the hope that I would be able to spot those scenarios where the AI made a poor or wrong decision.
In HD3, there are enough cards for me to build two or three different decks. So I could spend a few days playing games against the AI with these decks. But at the same time, the game only has 15 different cards, and the decks I can build with them aren't the most powerful: most players could easily come up with stronger decks, if only the game had all Human cards instead of these 15. Also, as the game gets worked on and new elements get added, these elements often need to be tested by starting a duel against the AI, so I'll be playing plenty of duels anyway where I'll have a chance to see if the AI is working as intended.

The focus has to be on the mechanical first, as mechanical bugs can cause the AI to act strangely: this kind of bug can give rise to content bugs. So if I see something that looks like a content bug, I first try to figure out if it could be the result of a mechanical bug instead. There's little point in testing the AI for content bugs while there are still unsolved mechanical problems.
The fact that the AI now works primarily with simulations to try to predict the outcome of playing or activating cards gives rise to a new kind of bug that didn't exist at all in HDs or HDx: simulation leaking. This happens when a simulation the AI runs ends up interfering with the actual board. It might be obvious to see when this occurs, but tracking it down is a different thing.

At one point during testing I noticed that some of the AI's structures would suddenly leave play. This only happened to Solar Harvesters, which have a self-destruct acti that allows them to deal damage to a ship. When this occurred, however, all Solar Harvesters would be destroyed almost instantly and no ships were being damaged. First of all, the AI isn't supposed to be able to activate multiple actis in short succession: there's always a small pause between different card plays or activations, unless the AI is running a simulation. The other strange thing was that only one part of the ability seemed to be working: the self-destruct, but not the damaging of ships. The simulation itself looked fine, so at first it seemed that sim leaking wasn't the problem.
It turned out the problem was in the code that applies the effects of abilities to the board. Every ability has its own block of code that makes sure the ability can actually do the things it's supposed to do, and this code has been set up so that it can target either the actual board or any simulation. The bug here was a simple oversight: the damage-dealing portion of this ability worked fine and could target any board, but the self-destruct part always targeted the actual board. So if the AI ran a sim of this acti being activated, the structure wouldn't die in the sim; instead it died on the actual board. Even if the AI ended up deciding that it didn't want to activate this ability, it still lost the card.
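A minimal illustration of that oversight, with invented names rather than the real HD3 code: every part of an ability's effect must target the board it was handed, never the actual board directly.

```python
# Sketch of the Solar Harvester bug: the buggy version deliberately
# hard-wires the self-destruct to the actual board, so running it in a
# sim "leaks" - the card dies in reality instead of in the simulation.

class Board:
    def __init__(self, cards):
        self.cards = set(cards)

actual_board = Board({"Solar Harvester"})

def self_destruct_buggy(board):
    # The damage half targeted `board` correctly (omitted here), but the
    # self-destruct half always hit the actual board.
    actual_board.cards.discard("Solar Harvester")

def self_destruct_fixed(board):
    # Fixed: the self-destruct targets whichever board was passed in,
    # whether that's the actual board or a simulation.
    board.cards.discard("Solar Harvester")
```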

I talked before about static scoring, which is a simpler way of scoring abilities that rarely requires simulations to be run. Auto abilities are often scored this way. For some abilities (such as the Akata's repair abilities, both the play and acti versions), the game needs to run a full turn simulation, so that it can see whether the Akata is capable of letting allied ships survive an attack from the opponent. A full turn simulation starts with the attack phase of the AI, then the draw and upkeep phases of the opponent, and then the combat phase of the opponent. When the AI's opponent had Vectors in play, it appeared as if these Vectors activated their auto abilities on the actual board. The problem here was that the AI referenced the wrong opponent when scoring the Vector's auto ability; as a result, the ability was activated for a player on the actual board instead of for a player in a sim.
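The phase order of that full turn simulation can be written down explicitly. This is a sketch with invented names; only the sequence of phases comes from the description above.

```python
# Hypothetical sketch of a "full turn" simulation: the AI's attack phase,
# then the opponent's draw, upkeep and combat phases, all run on a
# simulated board.

FULL_TURN_PHASES = [
    ("ai", "attack"),
    ("opponent", "draw"),
    ("opponent", "upkeep"),   # auto abilities trigger here
    ("opponent", "combat"),
]

def run_full_turn_sim(sim_board, run_phase):
    """Apply each phase to the simulated board, in order.
    run_phase(board, player, phase) applies a single phase."""
    for player, phase in FULL_TURN_PHASES:
        run_phase(sim_board, player, phase)
    return sim_board
```

The Vector bug above corresponds to run_phase being handed the wrong player object: one belonging to the actual board rather than to sim_board.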

I do expect there will be more bugs like these two examples. This entire system I've made for the AI is still pretty new, and as I add more and more abilities, I'll have to figure out which things need special attention, as they're likely to cause bugs.
I'll continue working on this for a while longer; the next step is finishing the last few AI functions, which involve giving the AI the capability to interpret its hand so it can make decisions when picking an opening hand or discarding cards. Things have been going a bit slower recently due to PC problems. I might have to reinstall the PC, which will likely bring some more delays with it, but at this point the state of my computer is interfering with my work. I do hope that a reinstall will fix things and it's not a hardware problem.

09/02 - AI ability scoring

The code that scores individual abilities on cards is gradually being added in for the small set of abilities (24) the game currently supports. About half of these abilities can be scored in two ways (static and active, see last week's devblog), which means they need two blocks of code.

This is going relatively slowly, as these are the first abilities under the new AI system and I'm still getting used to the 3-layer setup I made for it. Going from HD Spectrum to HD Xyth didn't introduce changes in the way the AI works nearly as large as those between HD Xyth and HD3. So far, though, things are looking good. I've managed to separate out the more common simulations that I expect many abilities will need, so that these abilities generally only need one line of code to start a new simulation, to play a new card on a board that exists in a certain simulation, or even to run a full turn simulation. The major advantage here is that the amount of code needed to score individual abilities - especially the more complex ones - is a lot smaller compared to HDx. In the long run, as more and more abilities get added to the game, this should really speed up creating AI code for these new additions.

In the meantime, I had to increase the number of simultaneous simulations the game must be able to handle to three. Each sim has 4 players, just as the actual board of a game in progress does, so technically the game now supports 16 players. This does make the management of the different sims a bit more complex, as expected.
The reason there must currently be 3 sims is that some abilities require two sims to be compared. If you have a card with a certain play ability that has the potential to make a large impact on the board, but this impact is not immediately visible, two sims need to be run: one where this card enters play and one where it doesn't. The AI can then run additional sims on top of these, for instance a combat sim. These two sims can then be compared, so the AI gets an idea of the ways in which this particular play ability would influence the outcome of a combat phase. This uses up two of the three available sims; the 3rd one is used when this same ability needs to calculate the static scores of cards. For instance, in the sim above, the AI realizes that when it plays the card with this play ability, a ship of the opponent ends up being destroyed. In general this is a good thing, but the ship being destroyed might not be much of a threat to the AI, so it receives a score, allowing the AI to compare the outcome of this ability with the abilities of any other cards it might play (perhaps one of those other cards is capable of destroying a ship that is a lot more dangerous to the AI).
The static score of an entire card is the sum of the static scores of all the abilities on the card, plus the intrinsic stats of the card (attack and defense in the case of ships). In general, static scores of abilities are relatively simple to calculate and rely mainly on the ability itself as well as its strength (a damage ability that does 5 damage is going to receive a higher score than one that does 2 damage). Since I can't rule out that a static score might need to run a sim itself, it needs a separate board to run that sim on, as the two sims that were created earlier might need to remain intact while the AI is still comparing them.
This 3rd sim is temporary, so it's always available whenever the AI quickly needs to calculate the static score of a card.
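The three-sim arrangement can be sketched as a small pool: two persistent sims for the "card played" vs "card not played" comparison, plus a temporary third that is rebuilt on demand. None of these names come from the actual HD3 code.

```python
# Hypothetical sketch of the three-simulation setup described above.

class SimPool:
    def __init__(self, make_board):
        self._make_board = make_board
        self.with_card = make_board()      # sim where the card enters play
        self.without_card = make_board()   # sim where it doesn't

    def compare(self, evaluate):
        # Both sims may have had extra sims run on top (e.g. a combat
        # sim); the score is the difference the card made.
        return evaluate(self.with_card) - evaluate(self.without_card)

    def temp_sim(self):
        # The 3rd sim: always freshly built, so static scores can run
        # their own calculations without disturbing the two sims that
        # are still being compared.
        return self._make_board()
```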

I don't foresee (but don't rule out) a need for a 4th simulation. Technically, most parts of the game can currently handle any number of simulations, and the parts that can't generally won't be needed in simulations and only ever interact with the normal game board (these are mostly things related to rendering the board).

02/02 - AI layers

The second layer of the AI is now pretty much complete. The first layer is the AI loop I talked about a few weeks ago (click here). In short, that loop is repeated until the AI decides to end its turn. In each pass, the AI looks at all cards it can play or activate and calculates scores for them. At the end of each pass, the AI then looks at these scores to see what it wants to do (play a card, activate a card, recycle a structure, or start the combat phase). The actual calculation of these scores doesn't happen in this layer - it doesn't even happen in the second layer - but the point of having layers is to separate these steps so that a single layer doesn't get too complex to handle (both for me and for the game).

The second layer contains a much larger amount of code than the first one. In general, this code contains the steps the AI needs to go through for each of the global actions the AI is capable of: playing a ship or structure, recycling a structure, playing an action, or activating a card in play. Calculations for determining the outcome of a combat phase, and for making decisions during the discard phase, are also part of the second layer, but are not considered global actions.
In the first layer, the AI moves through a maximum of 21 steps: one step where it calculates the outcome of the combat phase, 8 steps to score up to 8 cards in hand, 5 steps to score up to 5 structures in play and 7 steps to score up to 7 ships in play. Every time such a step comes up, the first layer checks whether the card tied to this step exists at all and is playable; if so, the second layer gets called upon, which then does the preparations needed so that this card can be scored.
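Those 21 steps can be sketched as a flat list that the first layer walks through each pass. The names and helper functions are hypothetical; only the step counts come from the text.

```python
# Hypothetical sketch of one first-layer pass: 1 combat step + 8 hand
# slots + 5 structure slots + 7 ship slots = 21 steps maximum.

STEPS = ([("combat", 0)]
         + [("hand", i) for i in range(8)]
         + [("structure", i) for i in range(5)]
         + [("ship", i) for i in range(7)])

def first_layer_pass(exists_and_playable, score_step):
    """Call the second layer (score_step) for every step whose card
    exists and is playable; return the scores found this pass."""
    scores = {}
    for kind, index in STEPS:
        if exists_and_playable(kind, index):
            scores[(kind, index)] = score_step(kind, index)
    return scores
```

The first layer would then pick the highest score from the returned dict (or end the turn if the combat step wins), and repeat.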

Having these different global actions is important, since there's quite some difference in how each of them gets scored. The actual setup for each of these global actions is what ended up taking the most time in creating the second layer. Things had to be rewritten or updated several times to deal with the way simulations work. While I consider this second layer to be complete right now, it's not unlikely it will go through more changes as I start work on the 3rd and final layer (where the actual scoring happens), as I can't predict whether the current setup is capable of dealing with all the abilities the game might have in the future.

As an example of how the second layer works, here is the setup for playing a new structure from hand:
The second layer only gets called when a card is playable, so at this point the AI has enough energy to play this structure, there are enough open slots in play, and there are no other elements active that would keep the AI from playing cards.
The AI starts by making a list of all open slots where it can play a structure. For each of these slots it creates a simulation in which it does play this structure in that slot. This simulation won't be used to calculate a score for the structure; rather, it's used to see which elements of the 3rd layer (where the scoring does happen) are going to be needed for this particular structure. This simulation is important, since the act of a card entering play is capable of modifying the card or even destroying it. HDx almost never took such elements into account (except, for example, when playing Prismatic Demons, which may be destroyed the moment they enter play). But since it's likely that HD3 will at some point support abilities that trigger when another card enters play, there's a chance that such triggers may modify the newly entered card.
The AI checks two things in this simulation: it wants to know if the card is still in play, and it wants a list of all abilities on the card (as this list might differ from the one in reality, where the card is still in the AI's hand).
Abilities on cards are the primary source of a card's score. In the case of structures and actions, they're its only score, since these cards are useless if they don't have any abilities, unlike ships, which still have an attack and defense value. Even if the card was destroyed the moment it entered play, some of its abilities might have influenced the board, though for now only play abilities (those that trigger when a card enters play) can do this. The AI loops through all abilities on the card and calculates a score for each, based on the impact that ability had on the board. If the card was destroyed the moment it entered play, the list of abilities that gets scored will be much smaller. So if the AI can play a structure in two slots and it dies in one of them, the AI will clearly see the slot where the structure remained in play as more valuable - unless the structure only had play abilities, in which case the scores of the two slots will tie.
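The slot-by-slot setup above can be sketched as follows. Everything here is hypothetical scaffolding; simulate_entry stands in for the per-slot simulation, and score_ability for the 3rd layer.

```python
# Hypothetical sketch of the second-layer setup for playing a structure:
# simulate the structure entering each open slot, then score whatever
# abilities the card still has in that simulation.

def score_structure_play(open_slots, simulate_entry, score_ability):
    """For each open slot, simulate the structure entering play there.

    simulate_entry(slot) -> (still_in_play, abilities_after_entry)
    Returns {slot: total_score}.
    """
    slot_scores = {}
    for slot in open_slots:
        in_play, abilities = simulate_entry(slot)
        # Even if the card died on entry, its play abilities may already
        # have influenced the board and still contribute score.
        slot_scores[slot] = sum(score_ability(a) for a in abilities)
    return slot_scores
```

As in the text: a slot where the structure survives keeps more scoreable abilities, so it naturally wins the comparison, unless only play abilities existed, in which case the slots tie.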

Besides these steps, part of the work that went into creating the 2nd layer was deciding how abilities should be scored, and it turned out that abilities often need to be scored in different ways, mainly depending on the current global action.
In the example above, if the structure had passive abilities, the AI will call upon a set of code (yet to be added for the most part, but the outline of it already exists in the game) that calculates the score of a single passive ability at a time. When setting up the code that will do this, I had to figure out the importance of passive abilities, to see if they would ever need simulations, for instance, or other complex calculations. Passive abilities are things like resistance and retaliation: they can have an effect on the board, but that effect is not present all the time and is mostly isolated or local. Example: the resistance of a ship only matters when that ship takes damage, and for a ship owned by the AI, that's generally only during the combat phase. So if the AI is scoring a ship it has in hand, the fact that it has resistance needs to be taken into account, but the point at which this ability starts becoming useful is relatively far in the future (the next time the AI gets attacked). While it's possible to simulate an attack of the opponent on the AI, such a simulation is not very accurate, as it completely ignores (and simply can't predict) what both the AI and this opponent might play in their main phases, which have to be completed before the combat phase can start. On top of that, to get the impact of the resistance on a ship, the AI would have to run two such simulations - one where the ship has its resistance and one where it doesn't - and then compare the outcomes. There is very little point in going through all this trouble when both simulations aren't 100% accurate to begin with. Instead, the AI simply looks at how much resistance the ship has and gives this ability a score based only on that amount. If the ship has other abilities, the score of the resistance passive will generally be relatively low, but it can make the difference between two ships that are otherwise very similar.
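A static score like that fits in a couple of lines. The weight is an invented tuning constant, and the function is a sketch, not the real scoring code.

```python
# Hypothetical sketch of static scoring for the resistance passive: no
# simulation at all, just a score based on the resistance amount, added
# on top of the ship's intrinsic stats.

RESISTANCE_WEIGHT = 3  # invented tuning constant

def static_ship_score(attack, defense, resistance=0):
    # The resistance bonus is usually small next to the other scores,
    # but it breaks ties between otherwise very similar ships.
    return attack + defense + RESISTANCE_WEIGHT * resistance
```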

This type of scoring is called 'static' scoring, and it's what happened almost exclusively in HDx and HDs. Some ability types can still be scored in a static way in HD3 (like passives), as it provides a reliable way of comparing the importance of an ability with the other abilities on the card, even though it's not as accurate as running a full simulation. And as the example above showed, running a simulation is no guarantee of a more accurate picture of an ability's worth.
In the case of new cards being played, some abilities must of course never be scored in a static way - play abilities, for example - and should get a full simulation instead. While this makes sense for new cards being played, imagine the scenario where the AI has an action that allows it to look at the opponent's hand and then choose a card to be discarded. The AI will inspect all cards and apply scores to them, to find out which one might be most valuable to the opponent. Since these cards are in the opponent's hand, they may be played relatively soon (as soon as next turn), so the AI needs to put some weight on the play abilities of these cards. Instead of running a simulation of each of these cards entering play, the AI will here opt to score the play abilities on these cards in a static, or semi-static, way. In some cases, play abilities will not receive scores at all: when the AI is looking for targets in play for an action or acti, it will determine the potential worth of targets by scoring the abilities on these cards. For cards in play, their play abilities have long since triggered and no longer have an effect on the board, so there's no point in taking them into account.

To explain 'semi-static', auto abilities are probably best suited. If the AI has an action that targets one of its own ships, the AI will once again try to figure out which of its ships is best suited to be targeted. This depends on what the action does, of course, but in most cases the AI will need to look at the abilities on the ships it has in play and score them to find the optimal target. Auto abilities can have a big impact on the board - after all, they trigger once every turn. When the AI is in its main phase, however, these abilities have already triggered, and it will be quite a while before they can trigger again (more than one turn from now). So the AI applies a full static score to these abilities - a score that's mainly based on the strength of the ability. Now, if the AI has an action that requires a ship target owned by the opponent, auto abilities on those ships will trigger relatively soon (at the start of the next turn, so basically as soon as the AI decides to end its turn). In this case, the AI opts to run a semi-static score calculation, which might include a simulation. Auto abilities trigger during the upkeep phase, so to simulate them, you'd technically have to first simulate your own attack phase before you can get to the opponent's upkeep phase. During the upkeep phase, all auto abilities go off, not just the one on the particular ship you want to score, so the effect of this single ship's ability might be hard to isolate from the result of the simulation. Instead, the AI runs a sim in which there is no attack phase and where only the auto ability of this one ship gets triggered. This way the AI can figure out the impact on the board of just this one ability, which makes it easier to score compared to doing so in a full simulation.
There is some inaccuracy in this, however: the combat phase could, for instance, result in this particular ship being destroyed, or other auto abilities might affect it as well. But even if the AI were to simulate the combat phase and the full upkeep phase, it would still not be an accurate simulation, as the AI can't predict what else it might play before ending its turn.
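That isolated-sim idea can be sketched in a few lines. The names and the board shape are invented; the point is that only this one ability triggers, on a copy of the board, so nothing leaks back.

```python
import copy

# Hypothetical sketch of the semi-static calculation: instead of
# simulating the attack phase plus the opponent's full upkeep, run an
# isolated sim where only this one ship's auto ability triggers, and
# score the difference it made.

def semi_static_auto_score(board, trigger_auto, evaluate):
    """trigger_auto(board) applies just this one auto ability;
    evaluate(board) reduces a board to a threat/value number."""
    sim = copy.deepcopy(board)   # the actual board stays untouched
    trigger_auto(sim)            # no attack phase, no other autos
    return evaluate(sim) - evaluate(board)
```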

Basically, the detail with which individual abilities get scored depends on the type of the ability (passive, auto, play, etc.) and the overall state of the game (whose turn it is, who owns the card). The longer it takes before an ability will trigger or have an effect on the board, the less accurate simulations become. So at some point there's no longer a benefit in running full simulations, and the AI resorts to simpler ways of scoring abilities, such as full-static setups or semi-static setups with occasional local simulations.