The AStarAI and PartialAStarAI seem to spend too much time going to places where there is nothing new to see.
I think it would be a good strategy to locate the walls and move to areas from which you can see places you have not yet seen. Something like this:
Each grid cell has one of three states: UNKNOWN, EMPTY, WALL (let’s assume that the bot can tell the difference between a wall and a moving obstacle - for example by measuring the distance to the obstacle twice and checking whether it moved or not). In the beginning all cells are UNKNOWN, except that the bot’s starting location is EMPTY. When the bot sees a cell, it is marked WALL or EMPTY depending on whether it contains a non-movable wall or not.
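Here is a rough sketch of that bookkeeping in Java. The class and method names are made up for illustration, and I’m assuming the vision code can tell the bot which cells it currently sees and whether each one is a wall:

```java
// Minimal sketch of the knowledge grid, assuming a fixed-size map.
enum CellState { UNKNOWN, EMPTY, WALL }

class KnowledgeGrid {
    private final CellState[][] cells;

    KnowledgeGrid(int width, int height, int startX, int startY) {
        cells = new CellState[width][height];
        for (CellState[] column : cells) {
            java.util.Arrays.fill(column, CellState.UNKNOWN);
        }
        // The bot's starting cell is the only cell known to be free at first.
        cells[startX][startY] = CellState.EMPTY;
    }

    CellState get(int x, int y) {
        return cells[x][y];
    }

    // Called for every cell the bot can currently see. 'isWall' should be true
    // only for non-movable walls (e.g. measure the distance twice and check
    // that it did not change, so moving obstacles are not recorded as walls).
    void observe(int x, int y, boolean isWall) {
        cells[x][y] = isWall ? CellState.WALL : CellState.EMPTY;
    }
}
```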
The bot will locate the closest UNKNOWN cell which is next to an EMPTY cell - let’s call this cell X. Then the bot will locate the closest EMPTY cell from which it can see X - let’s call this cell Y. (Or even better, locate the closest EMPTY cell from which you can see any UNKNOWN cell.) Then the bot will move to Y and look in the direction of X as far as possible. As a result, X and the UNKNOWN cells around it are marked EMPTY or WALL. Repeat.
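For picking Y, something like a breadth-first search over known EMPTY cells should do - starting from the bot and returning the closest cell from which at least one UNKNOWN cell is visible (the "even better" variant above). This is only a sketch; the line-of-sight test is left abstract because it depends on how vision works in the game, and none of this is taken from the existing AIs:

```java
// Returns the coordinates {x, y} of the closest EMPTY cell from which some
// UNKNOWN cell is visible, or null if the whole reachable area is mapped.
static int[] findLookoutCell(KnowledgeGrid grid, int width, int height,
                             int botX, int botY,
                             java.util.function.BiPredicate<int[], int[]> canSee) {
    boolean[][] visited = new boolean[width][height];
    java.util.ArrayDeque<int[]> queue = new java.util.ArrayDeque<>();
    queue.add(new int[] { botX, botY });
    visited[botX][botY] = true;

    int[][] steps = { { 1, 0 }, { -1, 0 }, { 0, 1 }, { 0, -1 } };
    while (!queue.isEmpty()) {
        int[] cell = queue.poll();
        // Does any UNKNOWN cell have line of sight from here? Then this is Y.
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                if (grid.get(x, y) == CellState.UNKNOWN
                        && canSee.test(cell, new int[] { x, y })) {
                    return cell;
                }
            }
        }
        // Otherwise expand the search through known EMPTY neighbours.
        for (int[] d : steps) {
            int nx = cell[0] + d[0], ny = cell[1] + d[1];
            if (nx >= 0 && nx < width && ny >= 0 && ny < height
                    && !visited[nx][ny] && grid.get(nx, ny) == CellState.EMPTY) {
                visited[nx][ny] = true;
                queue.add(new int[] { nx, ny });
            }
        }
    }
    return null; // nothing left to explore
}
```

Scanning every UNKNOWN cell at each step of the search is wasteful - it would be enough to check only the frontier cells (UNKNOWN cells next to an EMPTY cell, like X above) - but I kept it simple here.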
If the bot sees a target, reaching this target may or may not take priority over looking at UNKNOWN cells. It might be smart to look at UNKNOWN cells which can be seen when moving along the path to the target. For example, if the target is straight ahead, instead of moving directly to the target, stop a couple of times to look left or right (or take a detour) if there are UNKNOWN cells nearby. Of course, if you know that it is the last available target, then there is no need to look any further.
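One hedged way to mix target-chasing with exploration: while following the path to a target, pause and look around whenever an UNKNOWN cell is within some small radius of the current path cell, unless the target is known to be the last one. The radius and the "last target" flag are my assumptions, not something the current AIs track:

```java
// Decide whether to interrupt the walk to the target at the path cell
// (pathX, pathY) in order to look at nearby UNKNOWN cells.
static boolean shouldPauseAndLook(KnowledgeGrid grid, int width, int height,
                                  int pathX, int pathY, int radius,
                                  boolean lastTargetRemaining) {
    if (lastTargetRemaining) {
        return false; // last target: no point in mapping the rest of the level
    }
    for (int x = Math.max(0, pathX - radius); x <= Math.min(width - 1, pathX + radius); x++) {
        for (int y = Math.max(0, pathY - radius); y <= Math.min(height - 1, pathY + radius); y++) {
            if (grid.get(x, y) == CellState.UNKNOWN) {
                return true; // something nearby is still unmapped, worth a glance
            }
        }
    }
    return false;
}
```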