Pathfinder

I think that’s ok - the diagonal moves actually are slightly cheaper: cost 14 instead of 10*sqrt(2) ≈ 14.14. Edit: I see what you mean now, at the bottom right of the map. Hmm… even without reducing the key, shouldn’t this still work? :-\ Something wrong with the priority queue?
Edit2: I can’t access the files anymore. Is it using a Manhattan distance heuristic (which overestimates the cost of diagonal paths) instead of Euclidean distance? That could explain it.
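For the record, on an 8-connected grid with straight/diagonal costs of 10/14, Manhattan distance does overestimate diagonal paths (so it’s inadmissible), while the octile distance stays admissible. A quick sketch - the class and method names here are just illustrative:

```java
public class HeuristicCheck {
    static final int STRAIGHT = 10, DIAGONAL = 14;

    // Manhattan distance, scaled by the straight-move cost.
    static int manhattan(int dx, int dy) {
        return STRAIGHT * (dx + dy);
    }

    // Octile distance: a diagonal move covers one step on each axis.
    static int octile(int dx, int dy) {
        int diag = Math.min(dx, dy);
        return DIAGONAL * diag + STRAIGHT * Math.abs(dx - dy);
    }

    public static void main(String[] args) {
        // Goal 5 squares away diagonally: the true cost is 5 diagonal moves.
        System.out.println(5 * DIAGONAL);     // 70  (true cost)
        System.out.println(manhattan(5, 5));  // 100 (overestimates -> inadmissible)
        System.out.println(octile(5, 5));     // 70  (matches the true cost)
    }
}
```

An overestimating heuristic makes A* greedy toward the goal and can produce exactly the kind of non-optimal paths in the screenshot.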

Yeah, I wasn’t very clear. When you add a neighbour to the open list, it may already be in there but with a different f-value. This will happen pretty frequently in grid-based search. There are two ways of handling it:

  1. Just add neighbours to the list and don’t worry about it. The priority queue will sort them so the best one is used first. Downside: if there are lots of duplicate neighbours, the open list will be much larger, slowing each operation.

  2. Find the duplicate in the open list (usually via an extra hashtable) and compare the two f-values. If the new neighbour’s f-value is lower, you want it in the open list instead. Removing the old node and adding the new one duplicates effort; instead you can update the existing node in place (including changing its f-value) and have the priority queue re-sort itself. This is the decrease_key operation (it decreases the f-value).
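Option 1 can be sketched with java.util.PriorityQueue, which has no decrease_key: always push, remember the best known f per node in a hashtable, and discard stale entries as they come off the heap. All names here are illustrative, not from the project above:

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

public class LazyOpenList {
    private record Entry(int node, double f) {}

    private final PriorityQueue<Entry> heap =
            new PriorityQueue<>(Comparator.comparingDouble(Entry::f));
    private final Map<Integer, Double> bestF = new HashMap<>();

    // Push unconditionally; a cheaper duplicate just shadows the older entry.
    public void add(int node, double f) {
        Double known = bestF.get(node);
        if (known == null || f < known) {
            bestF.put(node, f);
            heap.add(new Entry(node, f));
        }
    }

    // Pop the best node, skipping entries made stale by a later, cheaper add.
    public Integer pop() {
        while (!heap.isEmpty()) {
            Entry e = heap.poll();
            Double current = bestF.get(e.node());
            if (current != null && current == e.f()) {
                bestF.remove(e.node());
                return e.node();
            }
        }
        return null;  // open list exhausted
    }
}
```

The hashtable doubles as the duplicate check, so the only extra cost of option 1 is the dead entries left sitting in the heap.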

Binary heaps have reasonably inefficient decrease_key operations (still log(n), though). Fancier heaps like Fibonacci heaps are more efficient. The catch is that when the amount the heuristic decreases after one step equals the move cost added to the g-value, a neighbour that is already on the open list will never have a lower f-value, so decrease_key never needs to be used.

In the grid example here that’s not quite the case, but it is close enough that I think duplicates needing an update will still be very rare. If the cost to move between grid squares varies by location (as in any game with variable terrain), this should turn up a lot more.
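To make that concrete: in a uniform-cost corridor where the heuristic drops by exactly the move cost each step, the f-value is constant along the path, so a duplicate can never beat the entry already in the queue. A tiny sketch under those assumptions:

```java
public class ConstantF {
    public static void main(String[] args) {
        // Straight corridor: goal is 5 squares away, each move costs 10,
        // and the heuristic is h = 10 * (squares remaining).
        int goal = 5;
        for (int x = 0; x <= goal; x++) {
            int g = 10 * x;            // cost accumulated so far
            int h = 10 * (goal - x);   // drops by exactly the move cost
            System.out.println("f = " + (g + h));  // prints "f = 50" every time
        }
    }
}
```

Whenever a move’s cost exceeds the heuristic’s drop (variable terrain, diagonals costing 14 against a heuristic step of 10*sqrt(2), etc.), f rises along the path instead, and that’s when decrease_key, or a stale duplicate, actually gets exercised.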

Hmm. You have a point there. That would make the result incorrect, but the specific problem observed in the screenshot still shouldn’t happen if the priority queue is working. The actual result looks like plain breadth-first search.

Raft,
I really, really appreciate you sharing that. I made a 50x50 map with random blocks spread throughout, used the same starting points (opposite corners), and this is what it showed…

Your Javascript page showed the path found in 74 ms on a comparable map with 30 rows x 50 cols.

Of course I am pretty happy with that. I really appreciate it.

PJT33,
Most of the stuff you’re saying is way above my head. Rotation matrices? I’ve never seen or heard of them. I still haven’t grasped binary heaps. I will have to reread what you posted a few times to try to understand it. I really appreciate the tips; it may just take a while to sink in before I can put it into practice.

Thanks a lot for the explanation, pretty in-depth stuff. I think I understand what you mean. In my mesh-based A* path finder (http://www.java-gaming.org/index.php/topic,19539.0.html) you must be referring to the percolateUp call on the last line of this code block:


// add reachable nodes to the openList if they're not already there.
for (int i = 0; i < currentNode.getReachableNodes().size(); i++){
	KNode reachableNode = currentNode.getReachableNodes().get(i);
	if (closedList.get(reachableNode) == null){
		//if (openList.contains(reachableNode) == false){
		if (openMap.get(reachableNode) == null){
			reachableNode.setParent(currentNode);
			reachableNode.calcHCost(endNode);
			reachableNode.calcGCost();
			openList.add(reachableNode);
			openMap.put(reachableNode, reachableNode);
		}else{
			assert reachableNode.getGCost() != reachableNode.G_COST_NOT_CALCULATED_FLAG;
			double currentGCost = reachableNode.getGCost();
			double newGCost = currentNode.getGCost() + currentNode.getPoint().distance(reachableNode.getPoint());
			if (newGCost < currentGCost){
				reachableNode.setParent(currentNode);
				reachableNode.calcGCost();
				// Since the g-cost of the node has changed, 
				// must re-sort the list to reflect this.
				int index = openList.indexOf(reachableNode);
				openList.percolateUp(index);
			}
		}
	}
}

I haven’t found that this method takes much CPU time yet, but when it does I’ll know it’s time to swap the BinaryHeap for a Fibonacci heap. Thanks Jono :slight_smile:

np, it’s just a demo ;D

interesting, it obviously depends on the browser and the computer itself. on my old laptop (ubuntu + firefox) it’s around 10-15 ms.

12 ms, not bad, but i still think it can be better :wink:

this is a java application running the same demo. run it a few times to give the JVM a chance to warm up. it runs in 0-3 ms on my machine. the jar also contains the source code, except AStar itself, which is JKilavuz’s generic AStar implementation

r a f t

It did show 12 ms the first time, then 0 ms every time afterwards running the application. I’m sure it definitely depends on the browser and PC running the applets. Mine is a bit slow and outdated, which is why I love attempting to develop things with it: other people shouldn’t have any problems running them.

I do know that the way I find the adjacent cells and set their values is horribly unoptimized and causes major slowdowns, but until I understand what pjt33 alluded to, it will have to do. Of course I will keep poking and prodding at it until then, hehehe.

Thanks again for the insights.

Hahahaha, oh. Well that makes a lot more sense! I was wondering how you would manage to solve it so slowly. XD