The OpenTafl Computer Tafl Open - An Abstract Strategy AI Tournament

Hello there! If you follow the AI forum here, you may have seen ags1’s thread about Hnefatafl, a Norse board game popular from the British Isles to Northern Europe from roughly the 5th century through the 11th. It has seen something of a renaissance lately, and as a student of the game and a hobbyist student of games and artificial intelligence generally, I thought it was high time that tafl games joined the club of abstract strategy games with AI tournaments.

You can read my (long) intro/rules/FAQ post over at my website, or I can sum up here.

  1. What’s tafl?
    It’s an asymmetric board game. The king and his defenders, at the center of the board, are outnumbered by the attackers, who start around the edges. The king must reach a corner space, and the attackers must stop him from doing so. (There are a large number of variations on the rules, since no single version survived. The most common variants are played on 7x7, 9x9, and 11x11 boards.)

  2. How does the tournament work?
    You write an AI to plug into OpenTafl, the tafl engine I’m working on, using this protocol, which in turn uses this notation.

Your AI is due by December 15 of this year, and the tournament will play out over the following few weeks. AIs will play two-game matches, one game as each side, to account for the imbalance between the sides. There will be a round-robin stage to establish ratings, then a single-elimination tournament seeded by those ratings. If I can convince a high-level player to take part, there may be an AI-vs-human exhibition match at some point after the AI tournament.

The rules are a slight modification of a well-known tafl variant. They’re on the page linked above.

There are prizes.

  3. Anything else?
    Nothing I feel like I have to reproduce here—hopefully, this is enough to pique your interest. Thanks for your time. I hope to see you on the field of battle!

Shall we post alphas of our AIs on this thread?

Please do. I’m excited to see what everyone comes up with.

I have something resembling intelligence:


  | a b c d e f g h i j k
 -+----------------------
 1| H - - X X X X X - - # 
 2| - - - - - X - - - - X 
 3| - - - - - - - - - - - 
 4| - - - - - X O - - O - 
 5| X O - X - - - - - - - 
 6| X X - O O T O O - - X 
 7| X - - - O O O - - X - 
 8| X - - - - O - - - - X 
 9| - - - - - - - - - X - 
10| - - - - - X - - - - - 
11| # - X - X X X X - - # 

King home? true
Last move: c1-a1
Killed pieces: 1
Enter move (eg: e1-c1:
DEFENDERS WIN!
Victory at move: 16

I was playing the defenders, and I had to think a little. The computer currently has a strategy of preferring moves that restrict the king’s freedom, but the AI does not look ahead to future moves yet.

Also, I have a minor bug in my move generator: pieces are unable to cross an empty throne. (The throne is the central square; only the king may stop there, but other pieces should be able to pass through it when it is empty.)
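For reference, the intended behavior looks something like the sketch below: a rook-style slide that treats an empty throne as passable but not stoppable for ordinary pieces. The board representation and names (EMPTY, KING, the (row, col) throne coordinate) are assumptions for illustration, not my actual generator, and corner restrictions and captures are left out.

    # Sketch only: sliding-move generation where ordinary pieces may pass over an
    # empty throne but may not stop on it. Representation is assumed, not OpenTafl's.
    EMPTY, KING = ".", "K"

    def sliding_moves(board, start, piece, throne=(5, 5)):
        """Return rook-style destinations for the piece at `start` on a square grid."""
        rows, cols = len(board), len(board[0])
        moves = []
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = start
            while True:
                r, c = r + dr, c + dc
                if not (0 <= r < rows and 0 <= c < cols):
                    break                              # off the board
                if board[r][c] != EMPTY:
                    break                              # blocked by any piece
                if (r, c) == throne and piece != KING:
                    continue                           # pass over the empty throne, don't stop
                moves.append((r, c))
        return moves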

@Fishbreath, can you extend your protocol description with an ordered list of examples, starting with the literal hello and rules setup, followed by a couple of moves and a resignation or victory claim? Then I can use that list as a test of my protocol implementation, with the test players mocked to return the responses from your examples. In fact, a few “recordings” of games delivered as XML/JSON, covering all the edge cases, would greatly simplify integration with OpenTafl.

You also mention the need for an ini file. Can you provide a template?

What happens when an AI gives an illegal move? Does OpenTafl validate the moves, and if it does, what happens?

For invalid moves, OpenTafl will send an error command, an opponent-move command (which contains a board state), and a play command. As a general architectural rule, when OpenTafl is hosting an external engine, OpenTafl maintains its own internal game state, which is considered the authoritative version.
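On the engine side, the shape of that interaction is roughly the sketch below. It is not a reference implementation: the command names are the ones that appear in this thread, the exact argument formats are defined in the protocol document, and choose_move is a placeholder for your own search.

    # Minimal external-engine skeleton talking to OpenTafl over stdin/stdout.
    # Schematic only; consult the protocol document for the exact formats.
    import sys

    def send(line):
        """Write one protocol line and flush; buffering matters over pipes."""
        sys.stdout.write(line + "\n")
        sys.stdout.flush()

    def choose_move(state):
        """Placeholder: run your search and return a move record such as 'a3-a1'."""
        raise NotImplementedError

    def main():
        state = {}                              # engine-side copy of rules, clock, position
        send("hello")                           # announce ourselves to the host
        for raw in sys.stdin:
            parts = raw.strip().split()
            if not parts:
                continue
            command, args = parts[0], parts[1:]
            if command == "rules":
                state["rules"] = args           # dim:11, starting position, etc.
            elif command == "clock":
                state["clock"] = args           # main time and overtime state
            elif command == "opponent-move":
                state["position"] = args[-1]    # the host's board state is authoritative
            elif command == "error":
                pass                            # our last move was rejected; an
                                                # opponent-move and play will follow
            elif command == "play":
                send("move " + choose_move(state))  # args[0] names the side to move

    if __name__ == "__main__":
        main()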

Here’s an example ini file. Note that OpenTafl’s working directory for engines is /engines, so you’ll probably want to put your engine under the engines directory.

In the saved-games/replays directory, there are a few annotated game replays. The file format contains human-readable move lists, so that’s a place to start for test games. I’ll see about making a few games which explore some rules edge cases, too.

If I have time at lunch, I’ll whip up a version of OpenTafl which saves all of the engine interactions during a game, and put that file up somewhere.

ags1, here’s a link to a full game’s engine interactions. This is from the host’s point of view: lines labeled received are things external AIs send to OpenTafl, and lines labeled sent are things OpenTafl sends to external AIs.

Both my sides have got game now! The defender AI and the attacker AI each win seemingly at random.

I don’t see any mention of how many seconds I will have per turn. I don’t care about overtimes or other fancy rules; I just want to know how many seconds I have per turn.

In this interaction:

Received: hello
Sent: rules dim:11 name:Computer_Fetlar atkf:y tfr:w start:/3ttttt3/5t5/11/t4T4t/t3TTT3t/tt1TTKTT1tt/t3TTT3t/t4T4t/11/5t5/3ttttt3/
Sent: clock 202798 200000 15 4 4
Sent: play attackers

… I’m not sure what the clock command is telling me, probably because I don’t understand how competition clocks work.

Also, time is not very meaningful for a computer without a performance benchmark. Do you have any benchmark numbers for the AWS instance you will use?

I’m a bit paranoid about time, so I’m concerned my AI might send a command late. Is there any way to send a preliminary move and then send refinements until the clock runs out for that move?

Can I make my AI networked so that it can offload MCTS playouts to a set of client computers? I guess that’s not allowed for the AI-vs-AI competition, but would it be allowed for the possible exhibition match against a strong player?

Competitive board games are played with a time limit for the game, not per move. In the example interaction, you, as the attackers, have ~203 seconds of main time to make all of your moves, plus overtimes.

Overtimes are as in go: so long as you don’t expend an overtime, it resets to its full time when your turn starts. (So if I have three 10-second overtimes, and I use 9 seconds on my turn, I start my next turn with three 10-second overtimes—I didn’t ‘use’ one. If I use 15 seconds on my next turn, my third turn begins with two 10-second overtimes. I used all of one, and half of another.)

For the tournament, you’ll have three ten-second overtimes and no main time. You’d probably be fine just using ten seconds per turn.
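In code terms, the bookkeeping for the no-main-time case works out to something like the sketch below. It reflects my reading of the rule above, not OpenTafl’s clock implementation.

    # Go-style overtime bookkeeping, no main time: periods you run completely
    # through are lost, and the period you actually move within resets in full.
    def overtimes_after_turn(overtimes_left, overtime_seconds, seconds_used):
        periods_lost = int(seconds_used // overtime_seconds)
        if periods_lost >= overtimes_left:
            return 0                                  # flag fell: loss on time
        return overtimes_left - periods_lost

    # The worked example above, with three 10-second overtimes:
    assert overtimes_after_turn(3, 10, 9) == 3        # moved inside the first period
    assert overtimes_after_turn(3, 10, 15) == 2       # burned one period, moved within the next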

I don’t have benchmarks handy, but the c4.large EC2 instances I’ll be using have a pair of modern Xeon cores available.

Preliminary moves aren’t in the protocol, and since there isn’t a strict time-per-move measure in most cases, it would be difficult to put them in. I’d encourage you to build in some facility for fast stopping and move extraction from partial results—it’s a broadly-used feature in a lot of board game AIs.
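The usual shape of that facility is iterative deepening against a deadline, always keeping the best move from the last completed depth. A sketch follows; search_to_depth stands in for your own search, and the comms reserve is whatever margin you trust your I/O with.

    import time

    def search_to_depth(position, depth, deadline):
        """Placeholder for a fixed-depth search; return the best move found,
        or None if the deadline forced it to stop partway through."""
        raise NotImplementedError

    def choose_move_with_deadline(position, budget_seconds, comms_reserve=0.3):
        """Deepen until the budget (minus a comms reserve) runs out, keeping the
        best move from the last fully completed depth."""
        deadline = time.monotonic() + budget_seconds - comms_reserve
        best_move, depth = None, 1
        while time.monotonic() < deadline:
            result = search_to_depth(position, depth, deadline)
            if result is None:
                break                       # cut off mid-depth; keep the previous answer
            best_move, depth = result, depth + 1
        return best_move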

For the inaugural potential exhibition match, I think I lean toward one computer against one human, to establish a benchmark between strong humans and tournament-winning computers. (If the human stomps the AI, I can always scale up the match for next year; if a networked AI stomps the human, it’s hard to push for scaling back the computers.)

I was thinking you could change the move command from:

a3-a1

to:

a3-a1 1000

where 1000 is the number of milliseconds to wait for a better move from the AI. If the current overtime would otherwise expire, play this move rather than letting the overtime run out.

Otherwise, millisecond-level mismatches (quite possible with comms over stdin and stdout) can cause an AI to run out of its overtimes.
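On the host side the parsing would be trivial; something like this sketch of the proposed extension (not anything in OpenTafl today):

    def parse_provisional_move(line):
        """'a3-a1 1000' -> ('a3-a1', 1000); a plain 'a3-a1' -> ('a3-a1', 0).
        The second value is how long to wait for a better replacement move."""
        parts = line.split()
        wait_ms = int(parts[1]) if len(parts) > 1 else 0
        return parts[0], wait_ms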

I’ll look into that. It’s a backwards-compatible change, which is nice—don’t have to bump the protocol version or anything.

In the interim, I’ve just been reserving 250-500ms for comms overhead and extracting the principal variation after search stops.

I’m currently just naively doing dozens of playouts to victory per position and giving each possible move a score based on the victory (victory in 10 moves gets a high score, victory in a hundred moves gets a low score, anything over 150 moves is a draw, and so on). The AI does get better in proportion to the number of playouts, but even with a crazy number of playouts I find it does not play smart. I’m going to try another approach where I only play out a couple of dozen moves into the future; in cases where the game is still unresolved, I’ll evaluate the board position based on:

  • number of squares that have a path to a corner
  • number of moves the king has
  • whether the king has a path to a corner
  • number of defenders
  • number of attackers

Weights for the various factors can be derived genetically.

This will let me radically increase the number of playouts while still identifying the imminent victories.
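Concretely, the evaluation I have in mind is just a weighted sum over those factors, from the defenders’ point of view. A minimal sketch with made-up weights, which the genetic tuning would eventually replace:

    # Made-up weights for illustration; positive favors the defenders.
    WEIGHTS = {
        "corner_reachable_squares": 1.0,   # squares with an open path to a corner
        "king_mobility": 0.5,              # legal moves available to the king
        "king_has_corner_path": 10.0,      # 1 if the king can reach a corner, else 0
        "defender_count": 2.0,
        "attacker_count": -1.0,
    }

    def evaluate(features, weights=WEIGHTS):
        """Linear score for an unresolved position; higher is better for the defenders."""
        return sum(weights[name] * features[name] for name in weights)

    # A hypothetical unresolved position:
    example = {
        "corner_reachable_squares": 14,
        "king_mobility": 6,
        "king_has_corner_path": 0,
        "defender_count": 9,
        "attacker_count": 18,
    }
    print(evaluate(example))               # 14 + 3 + 0 + 18 - 18 = 17.0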

Fishbreath, is this the AWS server for the competition?

http://www.headline-benchmark.com/results/4cd09b76-b8dc-4cf8-be6f-7bf68bb1ed44

No, that’s just my web server VPS—I was playing around with the benchmark tool. Expect an AWS benchmark and a working network play implementation sometime in July, probably.

I think I’ll probably enter this. Thanks for the early heads-up; free time is hard to come by, but I ought to be able to find a few days before December. :)

That’s precisely why I chose to announce it so far in advance. Looking forward to seeing your entry!

Thanks to Jono for a bug report—the fix is up now, if I got the cause right.

In other news, lunch hours and evenings have brought me to the cusp of a public release of OpenTafl-with-networking. (It turns out my application model, which is already built to accommodate asynchronous, unpredictable players like humans and external engines, is also well-suited to accommodating networked players.) I just played my first networked game against myself, although I have a lot to do before the initial release: reorganize the settings menu, add in-game chat, add configuration for server address and other network parameters, send time settings over the network and handle clock updates, and tidy up the UI and add some fault tolerance. All that considered, though, it should be a week or two at most before the initial public release, and spectator support should follow soon after.

Longer term, I might like to try building a headless AI client, so people can run AIs on the server on a semi-permanent basis, and some sort of server management interface, with the eventual aim of a tournament infrastructure that can automate a lot of what I’ll be doing by hand this time around.

I’ve encountered a bug with the latest build (the one after 0.2.5b). When playing human (attackers) vs. AI (defenders), the board state sent with opponent-move commands stops including the king after the first capture.

E.g.

move /4tttt3/5t5/3t7/t4T4t/t3TTT3t/tt1TTKTT1tt/t1T3T3t/t4T4t/4Tt5/11/3ttttt3/
opponent-move d11-d9 /4tttt3/5t5/3t7/t4T4t/t3TTT3t/tt1TT1TT1tt/t1T3T3t/t4T4t/3t1t5/11/4tttt3/

The opponent-move captured a piece at d10 (not the king), but the king still disappeared from the next board state.
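In the meantime, a quick sanity check along these lines catches the bad state as soon as it arrives. It reads the position string the way the examples above suggest (ranks separated by ‘/’, digits for runs of empty squares, possibly multi-digit, ‘t’ for attackers, ‘T’ for defenders, ‘K’ for the king); that is my reading of the notation rather than its specification.

    import re

    def check_position(record, dim=11):
        """Raise if a board record has the wrong shape or lacks exactly one king."""
        ranks = record.strip("/").split("/")
        assert len(ranks) == dim, f"expected {dim} ranks, got {len(ranks)}"
        kings = 0
        for rank in ranks:
            width = 0
            for token in re.findall(r"\d+|[A-Za-z]", rank):
                if token.isdigit():
                    width += int(token)     # run of empty squares
                else:
                    width += 1              # one piece
                    if token == "K":
                        kings += 1
            assert width == dim, f"rank '{rank}' spans {width} squares, not {dim}"
        assert kings == 1, f"expected exactly one king, found {kings}"

    # Run on every board received; the opponent-move record above fails with "found 0".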

Drat, I thought I’d solved that one. Must have reintroduced it somewhere.

I’m on the road today, but I’ll take a look at it tonight (and add a test for it).

Edit: fixed it! Newly-released 0.2.5.3b has it.

I made a thread in the WIP section for general discussion, and also moved the source over to Bitbucket for easier pull requests and issue tracking.