Hnefatafl

Different from traditional = modernized. It scans for me.

I disagree. “Modernized” carries the sense that whatever it replaced was outdated, and that’s simply not possible in an abstract game.

I’m at the point at which I can start teaching the AI to play well. As I don’t know how best to prune the tree, I think I will use genetic algorithms to discover a pruning function.
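One way to set that up is a plain generational GA over the weights of a pruning/evaluation function. The sketch below is illustrative, not anything from the post: the genome size, the rates, and especially the fitness function are stand-ins. A real run would score each weight vector by its win rate in self-play; here fitness just rewards proximity to an arbitrary target so the code is runnable.

```java
import java.util.*;

// Minimal sketch of a genetic algorithm evolving weights for a
// hypothetical pruning/evaluation function. The fitness function is a
// placeholder; in practice it would be the win rate of an AI playing
// games with these weights.
public class PruningEvolver {
    static final int POP = 20, GENES = 4, GENERATIONS = 50;
    static final Random rng = new Random(42);

    // Placeholder fitness: distance to an arbitrary target vector.
    // Swap in "win rate over N self-play games" for the real thing.
    public static double fitness(double[] w) {
        double[] target = {1.0, -0.5, 0.25, 2.0};
        double d = 0;
        for (int i = 0; i < GENES; i++) d += Math.abs(w[i] - target[i]);
        return -d; // higher is better
    }

    static double[] randomGenome() {
        double[] g = new double[GENES];
        for (int i = 0; i < GENES; i++) g[i] = rng.nextGaussian();
        return g;
    }

    // Uniform crossover with a small chance of Gaussian mutation per gene.
    static double[] crossover(double[] a, double[] b) {
        double[] child = new double[GENES];
        for (int i = 0; i < GENES; i++) {
            child[i] = rng.nextBoolean() ? a[i] : b[i];
            if (rng.nextDouble() < 0.1) child[i] += 0.2 * rng.nextGaussian();
        }
        return child;
    }

    public static double[] evolve() {
        List<double[]> pop = new ArrayList<>();
        for (int i = 0; i < POP; i++) pop.add(randomGenome());
        for (int gen = 0; gen < GENERATIONS; gen++) {
            pop.sort((x, y) -> Double.compare(fitness(y), fitness(x)));
            // elitism: carry the best quarter over unchanged
            List<double[]> next = new ArrayList<>(pop.subList(0, POP / 4));
            while (next.size() < POP) {
                double[] a = pop.get(rng.nextInt(POP / 2)); // parents from top half
                double[] b = pop.get(rng.nextInt(POP / 2));
                next.add(crossover(a, b));
            }
            pop = next;
        }
        pop.sort((x, y) -> Double.compare(fitness(y), fitness(x)));
        return pop.get(0);
    }

    public static void main(String[] args) {
        double[] best = evolve();
        System.out.println(Arrays.toString(best) + " fitness=" + fitness(best));
    }
}
```

The elitism step guarantees the best fitness never regresses between generations, which matters when fitness is a noisy quantity like a self-play win rate.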

I downloaded some Tablut games for Android, and I think I can do better. Anyways, I will definitely enter Fishbreath’s competition.

Delighted to have you! I look forward to seeing your entry.

Fishbreath, how do I integrate with OpenTafl? I don’t see instructions on your site. I’d like to see a minimal AI Java implementation to bootstrap my effort.

Also, what resources does OpenTafl provide? Can it give me board state and available-move info, or do I have to figure this out in my own AI (so every AI might have different ideas about what a legal move is)?

Also, it might be worthwhile to post your competition on the competitions board on JGO, rather than burying mention of it in my thread.

Here’s a link to the engine protocol. You can get board state (and OpenTafl sends external engines a position record when the board state changes), but you’ll need your own move generator.

OpenTafl communicates in a specific human-readable notation. OpenTafl’s code for generating and parsing OpenTafl notation lives in this package, and is Apache-licensed, so you can crib from it directly.

This is the source file where the engine client implementation lives, and should get you started.
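Since every engine brings its own move generator, here is a minimal sketch of one, assuming only the movement rule shared by tafl variants: pieces slide orthogonally like rooks and cannot jump. Throne and corner restrictions and capture rules vary by variant and are omitted, and the board representation is my own, not OpenTafl's.

```java
import java.util.*;

// Rough sketch of a tafl move generator: every piece slides any
// distance along a rank or file until it hits the board edge or
// another piece. Variant-specific restricted squares are not modeled.
public class MoveGen {
    public static final char EMPTY = '.';

    // board[row][col]; any non-EMPTY char is treated as a blocking piece.
    // Each returned move is {fromRow, fromCol, toRow, toCol}.
    public static List<int[]> movesFrom(char[][] board, int r, int c) {
        List<int[]> moves = new ArrayList<>();
        int[][] dirs = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        for (int[] d : dirs) {
            int nr = r + d[0], nc = c + d[1];
            // slide until the edge of the board or a blocking piece
            while (nr >= 0 && nr < board.length
                    && nc >= 0 && nc < board[0].length
                    && board[nr][nc] == EMPTY) {
                moves.add(new int[]{r, c, nr, nc});
                nr += d[0];
                nc += d[1];
            }
        }
        return moves;
    }

    public static void main(String[] args) {
        char[][] board = new char[9][9];
        for (char[] row : board) Arrays.fill(row, EMPTY);
        board[4][4] = 'K'; // lone king in the center of a 9x9 board
        board[4][6] = 't'; // an attacker blocking part of the king's rank
        List<int[]> moves = movesFrom(board, 4, 4);
        System.out.println(moves.size() + " moves for the king");
    }
}
```

In the example the king gets four squares in three directions but only one toward the blocking attacker, for 13 moves total.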

Thanks for the tip about the contests page—I didn’t realize it was for community-run contests, too. I’ll post over there.

Well, my dwarves and elves are tinkering away on my hnefatafl automaton, and we have (1) a fast AI that prunes but does not do playouts and (2) an MCTS AI. The MCTS AI uses the fast AI for the playouts. I have been measuring how increasing the number of playouts improves the results for the MCTS AI, and I see a clear progression. Here the MCTS AI is playing defenders and the fast AI is attacking. In each iteration I double the number of playouts per position.

dwins = number of games won by defender : average turns to win
awins = number of games won by attacker : average turns to win
draws = games that went over 250 turns without a winner

2 playouts for defender: dwins=10:100, awins=6:107, draws=4
4 playouts for defender: dwins=15:66, awins=3:103, draws=2
8 playouts for defender: dwins=19:60, awins=1:87, draws=0
16 playouts for defender: dwins=19:40, awins=1:39, draws=0
32 playouts for defender: dwins=18:31, awins=0:0, draws=2

EDIT: I did the reverse case, with the MCTS AI playing attackers, which is a computationally harder problem:

2 playouts for attacker: dwins=5:524, awins=12:150, draws=23, time=1192668
4 playouts for attacker: dwins=9:1100, awins=18:125, draws=13, time=1877313
8 playouts for attacker: dwins=0:0, awins=37:111, draws=3, time=2911840
16 playouts for attacker: dwins=0:0, awins=40:80, draws=0, time=4858704
32 playouts for attacker: dwins=0:0, awins=40:67, draws=0, time=8562867
64 playouts for attacker: dwins=0:0, awins=40:62, draws=0, time=17018320

It seems like attackers hit a wall…
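The playout-based selection described above — run N playouts per candidate move and keep the move with the best estimated win rate — can be sketched roughly like this. The tafl rules and the fast playout AI are stood in for by a toy Nim-like game and a uniformly random policy; only the shape of the selection loop is meant to carry over.

```java
import java.util.*;

// Sketch of flat playout-based move selection. Stand-in game: Nim with
// one heap, take 1-3 stones per turn, whoever takes the last stone wins.
// A random policy plays the role of the "fast AI" used for playouts.
public class PlayoutAI {
    static final Random rng = new Random(1);

    public static int[] legalMoves(int heap) {
        int max = Math.min(3, heap);
        int[] m = new int[max];
        for (int i = 0; i < max; i++) m[i] = i + 1;
        return m;
    }

    // Random playout; returns true if the player to move at `heap` wins.
    public static boolean playout(int heap) {
        boolean myTurn = true;
        while (heap > 0) {
            int[] moves = legalMoves(heap);
            heap -= moves[rng.nextInt(moves.length)];
            if (heap == 0) return myTurn; // mover took the last stone
            myTurn = !myTurn;
        }
        return false; // only reached if called with an empty heap
    }

    // For each legal move, run n playouts from the resulting position
    // and keep the move with the best estimated win rate.
    public static int bestMove(int heap, int n) {
        int best = legalMoves(heap)[0];
        double bestRate = -1;
        for (int move : legalMoves(heap)) {
            int wins = 0;
            for (int i = 0; i < n; i++) {
                // after our move the opponent moves; we win if they lose
                if (heap - move == 0 || !playout(heap - move)) wins++;
            }
            double rate = wins / (double) n;
            if (rate > bestRate) { bestRate = rate; best = move; }
        }
        return best;
    }

    public static void main(String[] args) {
        // From a heap of 5, taking 1 leaves the opponent a losing position.
        System.out.println("best move from 5: " + bestMove(5, 1000));
    }
}
```

Doubling n, as in the experiments above, tightens the win-rate estimates per move, which is why the progression in the tables flattens out once the estimates are already good enough to rank the moves correctly.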