An AI challenge

The Scenario:
A bot is released in a viable location on an unknown map (but of a known size).
The bot must locate and visit five randomly located targets as quickly as possible.
The bot can cast sight-rays but has no other information.
The map is full of large, mobile obstacles.

The Question:
What is the best way to achieve this?

I’ve set up the experiment with random bots here (targets are yellow dots).

I’ve a couple of ideas myself (click on ‘show debug data’ for hints) but I wonder what other approaches there might be?

Could you provide an interface and a test space so that we can write our own robots?

My first idea would be to use a 2D array.
Each step, the bot would cast a ray towards the direction with the most unknown area (using the 2D array to find that direction) and update the array; a path finder would then help in exploring the whole surface.
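
A minimal sketch of the ‘most unknown direction’ part, assuming a plain int occupancy grid (all names here are hypothetical, not from the simulator):

```java
// Hypothetical sketch: pick the heading whose sight-ray would cross the most
// unknown cells. Grid values: 0 = unknown, anything else = already seen.
public class DirectionPicker {

    static final int UNKNOWN = 0;

    /** Returns the angle (radians) whose ray crosses the most unknown cells. */
    public static double bestHeading(int[][] grid, int botX, int botY, int rayLength) {
        double bestAngle = 0;
        int bestCount = -1;
        for (int step = 0; step < 36; step++) {            // try 36 candidate headings
            double angle = step * Math.PI * 2 / 36;
            int count = 0;
            for (int d = 1; d <= rayLength; d++) {         // walk along the ray
                int x = botX + (int) Math.round(Math.cos(angle) * d);
                int y = botY + (int) Math.round(Math.sin(angle) * d);
                if (x < 0 || y < 0 || x >= grid.length || y >= grid[0].length) break;
                if (grid[x][y] == UNKNOWN) count++;
            }
            if (count > bestCount) { bestCount = count; bestAngle = angle; }
        }
        return bestAngle;
    }
}
```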

Does the robot know the difference between a wall and a moving object?

Yeah Simon, do it - sounds like fun! ;D

No! That’s what makes this so tricky!

The full source code is here. It should compile ok… :slight_smile:

Look at src/RandomAI.java for implementation instructions.
VectorAI.java, GridAI.java and FuzzyAI.java also give example code (I’d ignore the other *AI.java files for now).
If you post/send AI code I can add it to the webpage for all to see - showing-off encouraged!

It’s all a bit scrappy - it wasn’t really intended for distribution - but I hope it’s reasonably clear.


...
...

src\MyAppletStub.java:7: <identifier> expected
  private Hashtable<String,String> _properties;
                   ^
src\MyAppletStub.java:26: '(' or '[' expected
    _properties = new Hashtable<String,String>();
                               ^
src\Player.java:111: <identifier> expected
        public Vector<Player> visibleEnemies=new Vector<Player>();
                     ^
src\Player.java:112: <identifier> expected
        public Vector<Player> visibleFriends=new Vector<Player>();
                     ^
src\Player.java:113: <identifier> expected
        public Vector<Waymark> visibleTargets=new Vector<Waymark>();
                     ^
src\Player.java:115: <identifier> expected
        public Vector<Waymark> visitedTargets=new Vector<Waymark>();
                     ^
src\Player.java:415: '(' or '[' expected
                Vector<String> keywords=new Vector<String>();
                                                  ^
src\PlayerManager.java:13: <identifier> expected
        public Vector<Player> players=new Vector<Player>();
                     ^
src\PlayerManager.java:146: '(' or '[' expected
        Vector<Player> tempPlayers=new Vector<Player>();
                                             ^
src\PlayerManager.java:147: '(' or '[' expected
        Vector<Player> shuffledPlayers=new Vector<Player>();
                                                 ^
src\PlayerManager.java:275: '(' or '[' expected
        Vector<Player> tempPlayers=new Vector<Player>();
                                             ^
src\PlayerManager.java:276: '(' or '[' expected
        Vector<Player> shuffledPlayers=new Vector<Player>();
                                                 ^
src\VectorAI.java:94: <identifier> expected
        Vector<Point> points=new Vector<Point>();
...
...

Can’t compile due to the <…> syntax.

Erm, use Java 5 language level then!

[quote=“DzzD,post:5,topic:32038”]
Um, sorry! Should have said it was Java 5+

I’d say divide the portion of the map that the bot can see into increasingly smaller grid segments using a quad tree. Then, when they are of a certain size, use A* to find the best path at a certain time step. Because objects are moving, re-evaluate the path every time step.

Then, when the goal is out of sight range, use the point that is closest by Euclidean (or Manhattan) distance as the waypoint.

On top of that, each bot needs to have a memory, so that if the closest point leads into a dead end or something, it will know not to try that location again.

So, in the end, you need:

  • A 2D array of booleans representing the entire map - visited or not visited.
  • A temporary 2D array representing the bot’s current vision.
  • Find the point within the temporary array that is closest to the goal and hasn’t been visited.
  • Solve the path to that point using A*.

That should work, and won’t be too costly.
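
A rough sketch of the bookkeeping half of that recipe - the A* call itself is left out, and every name here is a placeholder rather than the simulator’s API:

```java
import java.awt.Point;

// Hypothetical sketch of the memory described above: a visited map plus
// a "closest unvisited visible point" picker that A* would then path to.
public class ExplorationMemory {

    private final boolean[][] visited;              // whole map: visited or not

    public ExplorationMemory(int width, int height) {
        visited = new boolean[width][height];
    }

    /** Mark everything in the bot's current field of view as visited. */
    public void markSeen(Iterable<Point> currentlyVisible) {
        for (Point p : currentlyVisible) visited[p.x][p.y] = true;
    }

    /** Of the cells visible right now, pick the unvisited one closest to the goal. */
    public Point pickWaypoint(Iterable<Point> currentlyVisible, Point goal) {
        Point best = null;
        double bestDist = Double.MAX_VALUE;
        for (Point p : currentlyVisible) {
            if (visited[p.x][p.y]) continue;
            double d = p.distance(goal);            // Euclidean; Manhattan works too
            if (d < bestDist) { bestDist = d; best = p; }
        }
        return best;                                // hand this to A*, re-plan every time step
    }
}
```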

Latest version here (source here) with shiny new bot: AStarAI!

@Demonpants

That’s pretty close to what I’ve done, but with no quad tree, and the grid is ints indicating ‘visitedness’ - the lower the value, the less that square has been seen and the further away it was when it was seen (this is cheap to do as I can use the g values from the A* search).

I’ve time-sliced the A* because (did I forget to say?) the idea is to get each game turn done in <=2ms…
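
The pattern is roughly the one sketched below - a generic time-budgeted search skeleton, not the actual AStarAI code:

```java
import java.util.PriorityQueue;

// Hypothetical sketch of time-slicing: expand as many search nodes as fit in the
// per-turn budget (~2 ms), then return and resume from the same open set next turn.
public class TimeSlicedSearch {

    static class Node implements Comparable<Node> {
        final int x, y;
        final double f;                             // g + heuristic, as in normal A*
        Node(int x, int y, double f) { this.x = x; this.y = y; this.f = f; }
        public int compareTo(Node other) { return Double.compare(f, other.f); }
    }

    private final PriorityQueue<Node> open = new PriorityQueue<Node>();
    private boolean finished = false;

    /** Do at most budgetNanos worth of work; call again next game turn if not finished. */
    public void step(long budgetNanos) {
        long deadline = System.nanoTime() + budgetNanos;
        while (!finished && System.nanoTime() < deadline) {
            Node node = open.poll();
            if (node == null) { finished = true; break; }
            expand(node);                           // problem-specific: push successors, detect goal
        }
    }

    private void expand(Node node) {
        // Successor generation omitted in this sketch.
    }
}
```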

AStarAI looks for targets and if it finds one it tries to path to it. If it can it goes there, if not it looks for the least visited square it can see and goes there. If it sees a new target it immediately repaths to try and reach it. There’s a bit of fuzzy logic obstacle avoidance thrown in to stop the bot getting stuck on corners. It uses full A* each time (to find the shortest path).

This strategy works but it doesn’t look very ‘natural’; if you turn off the debug data it’s hard to work out what the bot is up to…
It’s pretty rubbish on the maze map too. :frowning:

Next experiment: partial AI (first path found).

Oh, very cool.

Maybe you could try a Q Learner or a neural net? Could be fun. ;D

New Bots added: PartialAStarAI & NoAI + a new circular map. Applet here, source here.

They’d both work OK; trouble is that they’re both evolutionary-type methods so they need a learning phase. The idea of these experiments is to ‘get it right first time, and fast’. It’s like looking for casualties after an air crash in unknown terrain; no second chances…

What I’d really like is alternatives to grid-based methods - maybe using vectors or point-source influence maps or something…
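
For the point-source idea I picture something like the sketch below - purely hypothetical, nothing like it exists in the simulator yet. Every location the bot has seen becomes a point source whose ‘influence’ decays with distance, and the bot steers towards the candidate point with the least influence, i.e. the area it knows least about - no grid required:

```java
import java.awt.geom.Point2D;
import java.util.ArrayList;
import java.util.List;

// Hypothetical point-source influence map sketch.
public class InfluenceMap {

    private final List<Point2D> sources = new ArrayList<Point2D>();

    public void addSeenPoint(Point2D p) {
        sources.add(p);
    }

    /** Total influence at a candidate point: higher means better explored. */
    public double influenceAt(Point2D candidate) {
        double total = 0;
        for (Point2D s : sources) {
            double d = s.distance(candidate);
            total += 1.0 / (1.0 + d * d);           // simple inverse-square falloff
        }
        return total;
    }

    /** Pick the candidate with the lowest influence - the least explored spot. */
    public Point2D leastExplored(List<Point2D> candidates) {
        Point2D best = null;
        double bestScore = Double.MAX_VALUE;
        for (Point2D c : candidates) {
            double score = influenceAt(c);
            if (score < bestScore) { bestScore = score; best = c; }
        }
        return best;
    }
}
```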

Anyone got any suggestions?

The AStarAI and PartialAStarAI seem to spend too much time going to places where there is nothing new to see.

I think it would be a good strategy to locate the walls and move to areas from which you can see places which you have not yet seen. Something like this:

Each grid cell has one of three states: UNKNOWN, EMPTY, WALL (let’s assume that the bot can tell the difference between a wall and a moving obstacle - for example by measuring the distance to the obstacle twice and checking whether it moved or not). In the beginning all cells are UNKNOWN, except that the bot’s startup location is EMPTY. When the bot sees a cell, it is marked WALL or EMPTY depending on whether there is a non-movable wall there or not.

The bot will locate the closest UNKNOWN cell which is next to an EMPTY cell - let’s call this cell X. Then the bot will locate the closest EMPTY cell from which it can see X - let’s call this cell Y. (Or even better, locate the closest EMPTY cell from which you can see any UNKNOWN cell.) Then the bot will move to Y and look in the direction of X as far as possible. As a result, X and the UNKNOWN cells around it are marked EMPTY or WALL. Repeat.
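
The core of that loop might look something like the sketch below (grid and names invented for illustration; finding cell Y and the actual path-finding are left out):

```java
import java.awt.Point;

// Hypothetical sketch of the explore step described above.
public class FrontierExplorer {

    enum Cell { UNKNOWN, EMPTY, WALL }

    private final Cell[][] map;

    public FrontierExplorer(int w, int h) {
        map = new Cell[w][h];
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
                map[x][y] = Cell.UNKNOWN;
    }

    /** Cell X: the closest UNKNOWN cell that touches an EMPTY cell (a frontier cell). */
    public Point closestFrontier(Point bot) {
        Point best = null;
        double bestDist = Double.MAX_VALUE;
        for (int x = 0; x < map.length; x++) {
            for (int y = 0; y < map[0].length; y++) {
                if (map[x][y] != Cell.UNKNOWN || !touchesEmpty(x, y)) continue;
                double d = bot.distance(x, y);
                if (d < bestDist) { bestDist = d; best = new Point(x, y); }
            }
        }
        return best;    // next: find the closest EMPTY cell Y that can see this, and path to Y
    }

    private boolean touchesEmpty(int x, int y) {
        int[][] neighbours = { {1, 0}, {-1, 0}, {0, 1}, {0, -1} };
        for (int[] n : neighbours) {
            int nx = x + n[0], ny = y + n[1];
            if (nx >= 0 && ny >= 0 && nx < map.length && ny < map[0].length
                    && map[nx][ny] == Cell.EMPTY) return true;
        }
        return false;
    }
}
```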

If the bot sees a target, reaching this target may or may not take priority over looking at UNKNOWN cells. It might be smart to look at UNKNOWN cells which can be seen when moving along the path to the target. For example, if the target is straight ahead, instead of moving directly to the target, stop a couple of times to look left or right (or take a detour) if there are UNKNOWN cells nearby. Of course, if you know that it is the last available target, then there is no need to look any further.

Well, basically you would teach them and then write the resulting “brain” to an AI file. A typical Q Learner brain for that case will be only like 100kb at the most.

[quote=“SimonH,post:11,topic:32038”]

Aren’t you cheating here? You’re looking around corners for flags when the AI has already visited that spot. In reality it might remember whether grid cells were accessible, but it can’t really see whether there is a flag around the corner… It would actually have to analyse the environment and walk a bit to truly look around the corner, which would require quite a lot of intelligence.

Would this work even on entirely new maps, I wonder? I suppose it might…

[quote=“Riven,post:14,topic:32038”]
Not sure what you mean - bots can only see in straight lines…

That’s the challenge! I want bots that do the least work for the most effect. I’m currently looking at ‘frontier-based exploration’ which is used by robots, but it’s not ideal for use with mobile obstacles…

I implemented a bot using the strategy I described in my previous post. It still tends to get stuck, especially in the roundmaze level, but in the other levels it clearly beats the other bots. Here is one test run:


- forest:

mapsize=800,880
config loaded - 4 reds and 0 blues
JackalsAI done 497 turns 30878ms
NoAI done 1019 turns 833ms
PartialAStarAI done 1063 turns 3699ms
AStarAI done 2003 turns 22724ms

- maze:

mapsize=800,759
config loaded - 3 reds and 0 blues
JackalsAI done 657 turns 10064ms
AStarAI done 5400 turns 14515ms
PartialAStarAI done 11473 turns 15299ms

- square:

mapsize=800,800
config loaded - 4 reds and 0 blues
JackalsAI done 304 turns 17811ms
PartialAStarAI done 493 turns 2640ms
NoAI done 1048 turns 633ms
AStarAI done 1490 turns 20162ms

Right now the bot’s sources are 905 SLOC and its tests are 593 SLOC. You can get the sources and compile/run them from http://www.orfjackal.net/temp/fow_orfjackal_2008-06-17.zip

P.S. You should reorganize the design of the simulator. Now the bots have free access to all information about the game world (especially through the FOW object and its fields such as “public Vector targets”). You should give the AIs only a very restricted interface through which to communicate. I wasn’t sure what was allowed (because nothing was prevented), so I used only those methods/fields which were mentioned in RandomAI and created a wrapper for accessing only those (JackalsAI.PlayerSensors).
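
Something along the lines of the sketch below would do - if a bot’s constructor only ever receives this, nothing else is reachable (the names are invented; JackalsAI.PlayerSensors in the zip is the real wrapper):

```java
import java.awt.Point;
import java.util.List;

// Illustrative only: a deliberately narrow view of the world that the framework
// would hand each bot instead of the full FOW object. Method names are made up.
public interface BotSensors {

    /** Distance reported by a sight-ray cast at the given angle (radians). */
    double castRay(double angle);

    /** Positions of targets currently in the bot's line of sight. */
    List<Point> visibleTargets();

    /** The bot's own pose. */
    double getX();
    double getY();
    double getHeading();
}
```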

I am impressed - now that’s showing off! (I’ve added your AI to the simulator). Officially the best AI yet…

The simulator is meant to be as flexible as possible for different scenarios (hence all the public code). Anyway, what would cheating prove? No glory in that…

I’ve added a (primitive) fuzzy navigation method to the Player class (fuzzyNavigate()) which you could use locally to smooth the sticking a bit…

Today I added some simple collision detection - if the last move failed (apparently because of hitting a wall) then strafe randomly left/right or rotate to a random direction. With the help of that the bot can finish roundmaze in a new record time. But it still sometimes gets stuck because of changing the target (unseen cell) too rapidly (the bot will rapidly try to move in two opposite directions). It’s also quite a CPU hog. For example it finds the closest target (with breadth-first search) and calculates the path to it (with A*) on every step, which causes a visibly slow frame rate when the target is far away.
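
The recovery step itself is tiny - roughly the sketch below, with BotControls standing in for whatever move API the bot actually uses:

```java
import java.util.Random;

// Simplified sketch of the "unstick" heuristic: if the last move failed (we probably
// hit a wall), strafe randomly left/right or turn to a random heading.
public class UnstickHelper {

    public interface BotControls {
        void strafe(int direction);                 // -1 = left, +1 = right
        void setHeading(double radians);
    }

    private final Random random = new Random();

    public void onMoveFailed(BotControls controls) {
        if (random.nextBoolean()) {
            controls.strafe(random.nextBoolean() ? -1 : +1);
        } else {
            controls.setHeading(random.nextDouble() * Math.PI * 2);
        }
    }
}
```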

Next I’ll improve the movement, add target priority management and do some profiling in order to optimize the worst bottlenecks. I’ll release a new version of the bot when I get these done.

Then you could provide a different set of interfaces for different scenarios. For example one interface has those methods which tell only what the bot itself can see. Then another interface tells information about the whole map’s structure. Again another tells the exact locations of all targets and enemies etc.

You could improve the framework code by having an interface which all bots need to implement. Now you are needlessly using reflection in Player.reloadAI() to call the gameTurn() and other bot methods, which is very error prone. Also please move all classes away from the default package. It’s not possible to refer to classes in the default package from any other package (this was one of the reasons why I created a wrapper for the Player object in my code).
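
For example, something like this - only gameTurn() is known from this thread, anything else a bot needs would be added to the same interface:

```java
// Illustrative sketch: with a common interface the framework can drop the reflection
// in Player.reloadAI() and call the bot directly.
public interface BotAI {

    /** Called once per game turn; must stay within the per-turn time budget. */
    void gameTurn();
}
```

The framework then keeps a BotAI reference per player, and a missing or misspelled method becomes a compile error instead of a runtime reflection failure.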

Hi Simon,

I’m afraid I can’t resist challenges like these :slight_smile: My bot is just one file (TimsAI.java). The algorithm is pretty much the same as the one outlined by Jackal. But my bot is generally slightly slower to get to a solution. There are still a few bugs and inefficiencies, but here it is, warts and all.

Perhaps I’ll even get around to improving it a bit.

Cool. I was already using your fuzzyNavigation code; this just makes it easier! :slight_smile:

Cheers, Tim.

Cool! TimsAI looks pretty smart - that fuzzy avoidance gives it the edge…

I’ve updated the source and put it in the package edu.sph.fow (sheer laziness before!).
I’ve also added a build.xml file so you can compile using ANT if you have it…
These 2 changes should mean that you can use your own packages more easily.

Please remember though, I have to check the code for malware by hand and I won’t bother if there are hundreds of files (no jars accepted - java files only). Also, could people compile to Java 5 as a standard? Not all JREs are 6 yet.

Oo! This is getting exciting now! I have my own cunning plan which should be a killer - watch this space!

EDIT: Forgot to say there’s now also a global config.txt file to save having to modify the individual map config files.