I think that’s ok - the diagonal moves actually are slightly cheaper: a cost of 14 instead of 10*sqrt(2) ≈ 14.14. Edit: I see what you mean now, down in the bottom right of the map. Hmm… even without reducing the key, shouldn’t this still work? :-\ Something wrong with the priority queue?
Edit2: I can’t access the files anymore. Is it using a Manhattan distance heuristic (which overestimates the cost when diagonal moves are allowed) instead of Euclidean distance? That could explain it.
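For what it’s worth, with 10/14 move costs the matching admissible (and consistent) heuristic is octile distance rather than Manhattan. A rough sketch in Python – the function name and the (x, y) node representation are my guesses, not anything from your code:

```python
def octile_h(a, b):
    # Octile distance for an 8-connected grid with straight cost 10
    # and diagonal cost 14; a and b are (x, y) tuples.
    dx = abs(a[0] - b[0])
    dy = abs(a[1] - b[1])
    # Take as many diagonal steps as possible (14 each), then
    # straight steps (10 each) for whatever distance remains.
    return 14 * min(dx, dy) + 10 * abs(dx - dy)
```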
Yeah, I wasn’t very clear. When you add a neighbour to the open list, it may already be in there, but with a different f-value. This happens pretty frequently in grid-based search. There are two ways of handling it:
- Just add neighbours to the list and don’t worry about it. The priority queue will sort them, so the best one is used first. Downside: if there are lots of duplicate neighbours, the open list gets much larger, slowing every operation. (See the first sketch after this list.)
- Find the duplicate in the open list (usually via an extra hashtable) and compare the two f-values. If the f-value of the new neighbour is lower, you want it in the open list instead. Removing the old one and adding the new one is a duplication of effort; instead you can update the existing node in the open list (including changing its f-value) and have the priority queue re-sort itself. This is the decrease_key operation (it decreases the f-value).
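Here’s a rough sketch of the first option using Python’s heapq (which has no decrease_key at all): duplicates get pushed freely and stale entries are skipped when they come off the heap. The neighbours/heuristic callables and node representation are placeholders, not anything from your code:

```python
import heapq

def a_star(start, goal, neighbours, heuristic):
    # Option 1: push duplicates freely, skip stale entries when popped.
    # `neighbours(node)` yields (next_node, move_cost); nodes are assumed
    # to be comparable tuples like (x, y) so ties in f break cleanly.
    g = {start: 0}
    parent = {start: None}
    open_list = [(heuristic(start, goal), start)]   # entries are (f, node)
    closed = set()

    while open_list:
        f, node = heapq.heappop(open_list)
        if node in closed:
            continue            # stale duplicate; a better copy was already expanded
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        closed.add(node)

        for nxt, cost in neighbours(node):
            new_g = g[node] + cost
            if nxt not in g or new_g < g[nxt]:
                g[nxt] = new_g
                parent[nxt] = node
                # Just push; don't bother hunting down the old copy in the heap.
                heapq.heappush(open_list, (new_g + heuristic(nxt, goal), nxt))
    return None                 # goal unreachable
```

The trade-off is exactly the one described above: the heap can hold several entries for the same node, but every push and pop stays simple.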
Binary heaps have a fairly inefficient decrease_key (still O(log n), but you need some way to find the entry in the heap first). Fancier heaps like Fibonacci heaps are more efficient (O(1) amortized). The thing is, when the amount the heuristic decreases after one step exactly equals the move cost added to the g-value, a neighbour that is already on the open list will never get a lower f-value, so decrease_key never needs to be used.
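For comparison, here’s a sketch of what the second option can look like: a binary min-heap plus a node → index hashtable, so decrease_key can find the existing entry and bubble it up in O(log n). Illustrative only; the class and method names are mine:

```python
class OpenList:
    # Option 2: binary heap with a position hashtable so decrease_key can
    # locate and re-sort an existing entry instead of pushing a duplicate.

    def __init__(self):
        self.heap = []      # list of [f, node]
        self.pos = {}       # node -> index in self.heap

    def push(self, node, f):
        self.heap.append([f, node])
        self.pos[node] = len(self.heap) - 1
        self._sift_up(len(self.heap) - 1)

    def decrease_key(self, node, new_f):
        i = self.pos[node]            # hashtable lookup instead of a scan
        self.heap[i][0] = new_f
        self._sift_up(i)              # new f is smaller, so only bubble up

    def pop(self):
        top = self.heap[0]
        last = self.heap.pop()
        del self.pos[top[1]]
        if self.heap:
            self.heap[0] = last
            self.pos[last[1]] = 0
            self._sift_down(0)
        return top[1], top[0]         # (node, f)

    def _swap(self, i, j):
        self.heap[i], self.heap[j] = self.heap[j], self.heap[i]
        self.pos[self.heap[i][1]] = i
        self.pos[self.heap[j][1]] = j

    def _sift_up(self, i):
        while i > 0:
            parent = (i - 1) // 2
            if self.heap[i][0] < self.heap[parent][0]:
                self._swap(i, parent)
                i = parent
            else:
                break

    def _sift_down(self, i):
        n = len(self.heap)
        while True:
            left, right, smallest = 2 * i + 1, 2 * i + 2, i
            if left < n and self.heap[left][0] < self.heap[smallest][0]:
                smallest = left
            if right < n and self.heap[right][0] < self.heap[smallest][0]:
                smallest = right
            if smallest == i:
                break
            self._swap(i, smallest)
            i = smallest
```

The `pos` dictionary is the “extra hashtable” mentioned above; without it you’d have to scan the whole heap to find the entry before you could update it.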
In the grid example here that’s not quite the case, but it’s close enough that I think duplicates that actually need updating will still be very rare. If the cost to move between grid squares varies from place to place (like in any game with variable terrain), this will come up a lot more.
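For example, a neighbour generator along these lines (hypothetical – terrain weights scaling the 10/14 base costs) is enough to make the 10/14 heuristic drop by much less than the actual move cost in expensive terrain, so lower-f duplicates start showing up regularly:

```python
def grid_neighbours(node, cost_grid):
    # Hypothetical 8-connected neighbour generator with variable terrain:
    # base cost 10 (straight) / 14 (diagonal), scaled by the destination
    # square's terrain weight. Wrap it (e.g. with functools.partial) to
    # match the one-argument neighbours() used in the earlier sketch.
    x, y = node
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            nx, ny = x + dx, y + dy
            if 0 <= ny < len(cost_grid) and 0 <= nx < len(cost_grid[0]):
                base = 14 if dx != 0 and dy != 0 else 10
                yield (nx, ny), base * cost_grid[ny][nx]
```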