An AI challenge

And then some! In fact I think that both Jackal and I have cheated somewhat in this regard. In my bot I check that walls are still walls on a periodic basis. This applies to walls that my bot can’t currently see… it’s working from its memory of what walls it found before. Jackal’s bot additionally appears to check for walls in places the bot has never seen.

Both of these procedures make it easier to write the robot.

If we couldn’t do this, then when the robot runs out of places to look for targets, it’d have to go round and look at all the walls again, perhaps for a long time, before finding one that has moved.

So Simon, what’s your feeling about doing this kind of wall checking… is it allowed? :slight_smile:

Of course we can’t find the targets with this kind of method. The robots have still got to go around and look at all the possible places they could be.

Cheers, Tim.

Um, No. :stuck_out_tongue: The bots should only use data that they’ve actually collected from observation - as if they were humans put in the maze for the first time. You should assume that your ONLY sources of new data are from lineOfSight() and visibleTargets. What you haven’t actually seen you can’t know about… I said it was tricky!

BTW can anyone suggest a good single-figure scoring system? The best bots should have the lowest turns and the lowest CPU usage - what’s a good way of combining these numbers to give an overall ‘fitness’ score?

  1. My bot only ADDs walls by observation. It’s only the removal of SOME walls it does “by ESP” :slight_smile: So it’s not so bad.
  2. lineOfSight unfortunately takes 2 points as parameters. So just specifying that lineOfSight should be used does not restrict the robot from testing any points, even if it can’t see them. I agree with Jackal in this regard… an interface that only allows the operations that are considered legal would help to produce bots that abide by the rules. :slight_smile:
  3. You can remove this “cheat” functionality from TimsAI by deleting the if statement that begins like this: if( parent.gameTurn % PERIOD_OBJECTS == 0. TimsAI still works without this, but it’s no longer guaranteed to find all the targets. It requires a whole extra strategy to deal with this situation.

Cheers, Tim.

OK, point taken! I’ve added 3 new methods to Player (new source):

int castRay(x,y) - returns distance from bot to nearest wall or to x,y if no wall hit
int castRay(dir) - returns distance from bot to nearest wall in direction dir
boolean canSee(x,y) - returns true if x,y is visible to bot

So instead of parent.lineOfSight(player.x,player.y,x,y) you can now use player.castRay(x,y).

These are now the only calls that a bot can legally use…
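
For example (purely illustrative – tx, ty here just stand for some target position you’re interested in), a visibility test could now look like either of these; the castRay version assumes the documented behaviour that the returned length reaches (x,y) when nothing blocks it:

    // Simplest form, using the new canSee() call:
    boolean targetVisible(Player player, int tx, int ty) {
        return player.canSee(tx, ty);
    }

    // The same check via castRay(): if no wall is hit the ray reaches (tx, ty),
    // so its length matches the straight-line distance to the target.
    boolean targetVisibleByRay(Player player, int tx, int ty) {
        int rayLength = player.castRay(tx, ty);
        return rayLength >= (int) Math.hypot(tx - player.x, ty - player.y);
    }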

Erm… the source isn’t in the .jar file… it’s in the .zip file: fow.zip

Thanks for that. It certainly makes it a lot clearer. :slight_smile:

However… (you knew that there was a however didn’t you? :slight_smile: )

(1)
These calls don’t restrict their ability to see to the direction the bot is facing. Is this what you intended?

(2)
Being restricted to only these calls for sensing makes it extremely difficult to create any kind of grid where it’s possible to test the emptiness, or otherwise, of a grid cell with exactitude (i.e. to repeatedly get the same answer, even if the bot has moved somewhere else but can still “see” the cell).

I’ve spent quite some time trying to come up with a reliable method for this, but so far I’ve failed. With every test I’ve come up with so far for identifying walls (the ones that never move), I haven’t been able to consistently confirm that a cell is still occupied.

So eventually I thought… hold on, Simon’s AStar based bots define a grid… what do they do?

… and I find that they test emptiness by making calls to parent.canMoveTo(x,y); – so is this allowed? :wink:

UPDATE: OK, I’ve got a modified version of my bot (TimsAI.java) that ONLY uses the functions specified for its sensing of the environment. It doesn’t use parent.canMoveTo(x,y). It still manages to finish most of the time, but its efficiency is hampered by its inability to be certain about the status of the walls. :-\ I think I’ve now figured out a method to do wall sensing a bit more accurately (a verification scheme – but still not 100%), but it’d be a lot easier just to use parent.canMoveTo()!

Cheers, Tim.

(0) it’s in the .zip file: D’oh!

(1) I originally intended the bots to stick to their FOVs but on consideration changed it. It seems fair to allow a bot to look in any direction and move all in one turn. Rabbits have near-360° vision…

(2) Hmm… This is a close call - The purist in me is saying ‘No, only rays can be used!’ but the idle git in me is saying ‘aw, come on! if you can see the centre of a square then it’s OK to canMoveTo() the corners!’
reflective pause
I think no canMoveTo(). It’s up to the bots to refine their knowledge (maybe do a single ray on distant squares and a multi-ray on closer squares?). You can tell if a bot is stuck if player.lastMoveSuceeded==false.
I’ve modified the AStarAIs so they do a canSee() on the 4 corners and centre of each square - seems to work ok…
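
For reference, that check is roughly this shape (the cell-to-pixel mapping below is just an assumption for illustration, not the actual AStarAI code):

    // Treat a square as reliably known only if its centre and all four corners are visible.
    boolean squareFullyVisible(Player player, int cellX, int cellY, int cellSize) {
        int x0 = cellX * cellSize,      y0 = cellY * cellSize;
        int x1 = x0 + cellSize - 1,     y1 = y0 + cellSize - 1;
        int cx = x0 + cellSize / 2,     cy = y0 + cellSize / 2;
        return player.canSee(cx, cy)
            && player.canSee(x0, y0) && player.canSee(x1, y0)
            && player.canSee(x0, y1) && player.canSee(x1, y1);
    }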

I’ve now updated the simulator and I’ve added a ‘solo’ option for switching off the other bots during testing.

The current fow.zip doesn’t seem to contain the source for these changes… :slight_smile:

There’s now an updated TimsAI.java

On a separate note, I think I’ve spotted a couple of minor bugs in your Bresenham (not Bresham) line algorithm code (in FOW.java). These bugs show up in both the canSeeBetween and lineOfSight functions.

(1) The horizontal code says “if (error > dx)” when it should say “if (error >= dx)”. This results in horizontal lines not doing the last Y coord increment (in combination with the error=0 discussed below).

(2) The error component is normally initialised to half of its full range (rather than 0). This is so that the line is as symmetrical as possible, instead of lop-sided. Need to take care with this as you’ve got the vertical parts subtracting from the error instead of adding.
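
To make those two fixes concrete, here’s a minimal sketch of the shallow-slope branch (|dy| <= |dx|) with both corrections in place – this is only an illustration of the pattern, not the actual FOW.java code:

    // Shallow-slope Bresenham sketch, NOT the real FOW.java code, showing the two
    // fixes: the error term starts at dx/2 and the comparison uses '>='.
    static java.util.List<int[]> bresenhamLine(int x0, int y0, int x1, int y1) {
        java.util.List<int[]> cells = new java.util.ArrayList<int[]>();
        int dx = Math.abs(x1 - x0), dy = Math.abs(y1 - y0);
        int sx = x0 < x1 ? 1 : -1,  sy = y0 < y1 ? 1 : -1;
        int error = dx / 2;                    // mid-range start keeps the line symmetrical
        int y = y0;
        for (int x = x0; x != x1 + sx; x += sx) {
            cells.add(new int[] { x, y });
            error += dy;
            if (error >= dx) {                 // '>=' (not '>') so the final y step isn't lost
                error -= dx;
                y += sy;
            }
        }
        return cells;
    }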

Cheers, Tim.

Latest version here (source here).

New map added.

Oops! That’ll teach me to ‘borrow’ code! Can you tidy it up for me Tim?

Here is my bot’s latest version: http://www.orfjackal.net/temp/fow_orfjackal_2008-06-19.zip

It still relies on canMoveTo as I haven’t yet had time to change it. I’ll change it to castRay when I have more time.

Cool. I’ve added it in.
It looks really smooth but it eats CPU - on the openspace map the average peaked at 100ms per turn on my machine!

OK, I’ve made the changes. As I said, they were fairly minor bugs really. The code is in http://www.7sun.com/temp/fow/FOW.java – EDIT: this is the FOW from the previous release… I’ll just get the new one now and see if it’s the same. UPDATE: OK, I merged my changes into the new version of FOW.java now, and uploaded it to the same address.

While I was looking at making these changes, I found code that I believe can be made a lot more efficient.

For example, the lineOfSight routine calls canMoveTo(x,y) for each point tested along the line, but canMoveTo(x,y) tests for hitting a pillar each time. This means that it’s doing a pillar hit test for each pixel in the map along the line being tested. I think lineOfSight would be faster if it found the closest pillar once, and then used this info to limit its search in the map.
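
Roughly what I have in mind is the shape below – all the names and structures (the pillar list, the wall grid, circular pillars) are stand-ins for illustration, not the real FOW.java internals:

    // Intersect the ray with every pillar ONCE, then walk the line testing only the
    // wall grid, stopping early if the walk passes the nearest pillar hit.
    static boolean lineOfSightSketch(int x0, int y0, int x1, int y1,
                                     boolean[][] wall, java.util.List<double[]> pillars) {
        double len = Math.hypot(x1 - x0, y1 - y0);
        double limit = len;
        for (double[] p : pillars) {                      // p = {centreX, centreY, radius}
            double d = segmentCircleHit(x0, y0, x1, y1, p[0], p[1], p[2]);
            if (d >= 0 && d < limit) limit = d;           // nearest pillar along the ray
        }
        int steps = (int) Math.ceil(len);
        for (int i = 0; i <= steps; i++) {                // per-pixel walk: wall grid only
            double t = (steps == 0) ? 0 : (double) i / steps;
            if (t * len > limit) return false;            // blocked by the nearest pillar
            int x = (int) Math.round(x0 + t * (x1 - x0));
            int y = (int) Math.round(y0 + t * (y1 - y0));
            if (wall[y][x]) return false;                 // blocked by a wall cell
        }
        return true;
    }

    // Distance along the segment to its first intersection with a circle, or -1 for a miss.
    static double segmentCircleHit(double x0, double y0, double x1, double y1,
                                   double cx, double cy, double r) {
        double dx = x1 - x0, dy = y1 - y0;
        double fx = x0 - cx, fy = y0 - cy;
        double a = dx * dx + dy * dy;
        double b = 2 * (fx * dx + fy * dy);
        double c = fx * fx + fy * fy - r * r;
        double disc = b * b - 4 * a * c;
        if (a == 0 || disc < 0) return -1;
        double sq = Math.sqrt(disc);
        double t1 = (-b - sq) / (2 * a), t2 = (-b + sq) / (2 * a);
        double t = (t1 >= 0 && t1 <= 1) ? t1 : ((t2 >= 0 && t2 <= 1) ? t2 : -1);
        return (t < 0) ? -1 : t * Math.sqrt(a);
    }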

Would you like me to have a go at fixing this too?

Cheers, Tim.

Thanks!

Would you mind? I can’t believe I missed that!

Optimizing the canMoveTo and canSeeBetween methods would improve the performance of my bot, because those are its hotspots. But when I switch to using castRay, it should use much less CPU anyway. Casting some 100 rays each step should be faster than calling canMoveTo 10000-50000 times each step. :slight_smile:

That’s true, but it’ll also improve the performance of everyone else’s…
The advantage of optimising the ray casting is that it means tests will run faster. castRay() will always have a cost so the fewer (and shorter) rays a bot casts, the better!

OK, there’s a new version of FOW.java available. It has some of the changes to make the lineOfSight and canSeeBetween routines faster. It seems to make my bot about 2 to 3 times quicker. I’ve got some more changes planned though, so please bear with me.

Cheers, Tim.

Good changes, Tim. My bot’s CPU usage has dropped by 50-75%. Now 75% of my CPU time goes into checking all cells on the map with the forbidden canMoveTo method. I should be able to drop the CPU usage by at least 50% by changing the way that I look for obstacles, unless I manage to come up with a very CPU-intensive way of doing it. I have some ideas on how to recognize those pillars even when they are not moving… let’s see how much CPU it will require. :slight_smile:

But first I’ll need to take a pause and concentrate on work. I’ll come back in a couple of weeks.

Here’s another iteration of FOW.java with some more speed improvements.

In my timings (4 runs, square maze) the previous version has an average of 0.524 ms/turn.
This new version has an average of 0.353 ms/turn for the same maze.

This is not such a large improvement as the previous iteration, but I think it’s worthwhile nonetheless. I don’t think I’ll be looking at any more speed improvements at this time. :slight_smile:

Cheers, Tim.


A quick note about timings… I had to change to nanoTime() to get the accuracy shown here. It would seem like a good idea to do this for all bots, and further, to take the timing code out of the bots completely.

Thanks Tim! That’s given it a huge performance boost. (what a klutz I am!)

BTW I originally used nanoTime() but found it totally inaccurate (2 identical bots’ timings differing by a factor of 10).
Is there some trick to using nanoTime?

I’ve never had any problems with using nanoTime() instead of currentTimeMillis(). I’ve only tried it on Windows machines, but other posts I’ve read that mention it are from people running Linux machines. I can’t remember anything about its MacOS implementation.

It is necessary to check that the time hasn’t gone backwards (which it does when the counter wraps… the documentation is a bit misleading there – it wraps quicker than the documentation implies). But your code already does this for the currentTimeMillis() version anyway. The only other point is that of course the value returned is in nanoseconds not milliseconds.

Oh, one other thing… there seems to be a minor problem with it on multi-core AMD processors. nanoTime is based off the CPU’s high-resolution timer, and on multi-core AMD processors each core has its own timer, and it seems that they can get a little out of sync. But apparently just rejecting times that go backwards tends to fix the problem fairly well.
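
For what it’s worth, the kind of guard I mean is roughly this (a sketch, not the simulator’s actual timing code):

    // Per-turn timer using System.nanoTime(); samples where the clock appears to run
    // backwards (e.g. out-of-sync cores) are reported as -1 so callers can discard them.
    class TurnTimer {
        private long last = System.nanoTime();

        double elapsedMillis() {
            long now = System.nanoTime();
            long delta = now - last;
            last = now;
            if (delta < 0) {
                return -1;                    // time went backwards: reject this sample
            }
            return delta / 1000000.0;         // nanoseconds -> milliseconds
        }
    }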

UPDATE:

see Game Timer

– doesn’t sound good does it? :slight_smile:

also see NanoTime for Dual Core AMD systems
– this one has some sample code for dealing with the problem.

Cheers, Tim.

A few minor changes (source here).

Collision detection is now accurate (per-pixel), which has broken my bots :frowning: (not Tim’s or Jackal’s though)

I’ve added a new (nightmare) map which breaks all the bots except NoAI (evil chuckle!)

[quote]I wouldn’t suggest switching to System.nanotime
[/quote]
Doesn’t sound so good…