# Faster A* Pathfinding

So I finally (5 days ago) got A* pathfinding working. The only problem is that it's slow: a path of 180 steps takes 25 ms to compute, and I'd like to make that faster.

Pathfinder class:

``````
package game.util;

import java.util.ArrayList;

public class Pathfinder {

private TileMap map;

public Pathfinder(TileMap map) {
this.map = map;
}

private ArrayList<ITile> open = new ArrayList<ITile>();
private ArrayList<ITile> closed = new ArrayList<ITile>();
private ArrayList<ITile> possible = new ArrayList<ITile>();

private ArrayList<int[]> oldTiles = new ArrayList<int[]>();

public Path getPath(int tx, int ty, int sx, int sy) {
long before = System.currentTimeMillis();
int ax = sx, ay = sy; // active tiles
Path path = new Path();
ITile active = map.getTile(ax, ay);
while (!(ax == tx && ay == ty)) {	// continue until the path has been completed or there are no possible tiles left
active = path.last();
// Get the surrounding tiles
ITile topleft = map.getTile(ax - 1, ay - 1);
if (topleft != null) topleft.setParent(active);
ITile top = map.getTile(ax, ay - 1);
if (top != null) top.setParent(active);
ITile topright = map.getTile(ax + 1, ay - 1);
if (topright != null) topright.setParent(active);
ITile left = map.getTile(ax - 1, ay);
if (left != null) left.setParent(active);
ITile right = map.getTile(ax + 1, ay);
if (right != null) right.setParent(active);
ITile bottomleft = map.getTile(ax - 1, ay + 1);
if (bottomleft != null) bottomleft.setParent(active);
ITile bottom = map.getTile(ax, ay + 1);
if (bottom != null) bottom.setParent(active);
ITile bottomright = map.getTile(ax + 1, ay + 1);
if (bottomright != null) bottomright.setParent(active);
// Swap out the tiles
if (topleft != null && topleft.weight < 100 && available(topleft)) {
}
if (top != null && top.weight < 100 && good(top)) {
}
if (topright != null && topright.weight < 100 && good(topright)) {
}
if (left != null && left.weight < 100 && good(left)) {
}
if (right != null && right.weight < 100 && good(right)) {
}
if (bottomleft != null && bottomleft.weight < 100 && good(bottomleft)) {
}
if (bottom != null && bottom.weight < 100 && good(bottom)) {
}
if (bottomright != null && bottomright.weight < 100 && good(bottomright)) {
}
// Get their values
double topleftF = F(sx, sy, tx, ty, ax, ay);
double topF = F(sx, sy, tx, ty, ax, ay);
double toprightF = F(sx, sy, tx, ty, ax, ay);
double leftF = F(sx, sy, tx, ty, ax, ay);
double rightF = F(sx, sy, tx, ty, ax, ay);
double bottomleftF = F(sx, sy, tx, ty, ax, ay);
double bottomF = F(sx, sy, tx, ty, ax, ay);
double bottomrightF = F(sx, sy, tx, ty, ax, ay);
// Determine the lowest step
possible.clear();
double min = min(topleftF, topF, toprightF, leftF, rightF, bottomleftF, bottomF, bottomrightF);
if (check(min, topleftF) && available(topleft)) {
}
if (check(min, topF) && available(top)) {
}
if (check(min, toprightF) && available(topright)) {
}
if (check(min, leftF) && available(left)) {
}
if (check(min, rightF) && available(right)) {
}
if (check(min, bottomleftF) && available(bottomleft)) {
}
if (check(min, bottomF) && available(bottom)) {
}
if (check(min, bottomrightF) && available(bottomright)) {
}
ITile possibleTile = null;
for (ITile tile : possible) {
if (tile.tx == tx && tile.ty == ty) {
possibleTile = tile;
break;
}
if (possibleTile == null) {
possibleTile = tile;
} else {
if (G(tx, ty, tile) < G(tx, ty, possibleTile)) {
if (oldTiles.size() >= 2) {
int[] twoAgo = oldTiles.get(1);
int[] oneAgo = oldTiles.get(0);
// Check for paths that go bad diagonally
if (twoAgo[0] == oneAgo[0] && tile.ty == oneAgo[1]) {
path.removeLast();
}
if (twoAgo[1] == oneAgo[1] && tile.tx == oneAgo[0]) {
path.removeLast();
}
}
possibleTile = tile;
}
}
}
active = possibleTile;
// Remove the tiles
ax = active.tx;
ay = active.ty;
oldTiles.add(0, new int[]{ ax, ay });
swap(active);
Log.print(ax + ", " + ay);
if ((ax == tx && ay == ty)) {
break;
}
}
long after = System.currentTimeMillis();
path.setOverall(map.getTile(sx, sy), map.getTile(tx, ty));
path.setMillis(after - before);
return path;
}

// -------------[ UTILITIES ]-------------

private void swap(ITile tile) {
if (tile == null) return;
open.remove(tile);
}

private boolean good(ITile tile) {
if (tile == null) return false;
return !closed.contains(tile);
}

private boolean available(ITile tile) {
if (tile == null) return false;
if (!open.contains(tile)) {
return false;
}
if (closed.contains(tile)) {
return false;
}
return open.contains(tile) && !closed.contains(tile);
}

private boolean check(double min, double F) {
return min == F;
}

private double min(double... doubles) {
double min = 1000000;
for (double double_ : doubles) {
min = Math.min(double_, min);
}
return min;
}

private double G(int tx, int ty, ITile tile) {
double dx = tile.tx - tx, dy = tile.ty - ty;
double G = Math.sqrt(dx * dx + dy * dy);
return G;
}
private double F(int sx, int sy, int tx, int ty, int ax, int ay) {
if (ax < 0 || ay < 0 || ay >= map.getHeight() || ax >= map.getWidth()) {
return 1000;
}
int dx = tx - ax, dy = ty - ay;
double G = Math.sqrt(dx * dx + dy * dy);
double H = Math.abs(dx + dy);
double F = G + H;
if (dx == dy) {
F = Math.max(0, F - 2); // diagonal
}
return map.getTile(ax, ay).weight + F; // Add to the weight of the tile
}

}

``````

Any help would be appreciated

CopyableCougar4

Have you profiled?

That said, jump point search is a stupid fast extension of A* if your map meets its preconditions (a uniform-cost grid).

I take the ms before and after and print the difference.

CopyableCougar4

The trick to faster A* is abusing data structures and proper heuristics.

You’re only using ArrayList; that’s pretty poor performance.

That’s not profiling, that’s very loose benchmarking.

Take a look at JVisualVM (it ships with the JDK, in the bin directory); if you use NetBeans, it’s embedded there.
Profile both CPU and memory use. You’re allocating objects in a (fairly) tight loop, which is usually not a good idea.

E.g. min() should be replaced by just the expression it represents:

``````
Math.min(Math.min(Math.min(topleftF, topF), Math.min(toprightF, leftF)), Math.min(Math.min(rightF, bottomleftF), Math.min(bottomF, bottomrightF)));
``````

or similar. Varargs and the enhanced for loop both allocate memory needlessly in this case.

EDIT: also, why are you logging inside the loop?
EDIT2: I’m betting good(), available() and swap() turn up pretty high in the profile. All of them are O(n) and called frequently. Consider a HashMap for `closed` at least.
Also, good god, why is available() so redundant? Each of those contains() calls is expensive for a List, man! Both of the if statements can be eliminated, which will likely double the speed of that method.
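To illustrate the `closed` suggestion: a minimal sketch of a hash-based closed set. `Tile` here is a hypothetical stand-in for the thread's `ITile`, relying on default identity `equals`/`hashCode`, which is fine when every tile on the map is a unique object:

``````java
import java.util.HashSet;
import java.util.Set;

public class ClosedSetDemo {

    // Hypothetical stand-in for ITile; the default identity-based
    // equals/hashCode is enough as long as each tile is one unique object.
    static class Tile {
        final int x, y;
        Tile(int x, int y) { this.x = x; this.y = y; }
    }

    // Same check as the posted good(), but HashSet.contains() is O(1)
    // on average instead of ArrayList.contains()'s O(n) linear scan.
    static boolean good(Set<Tile> closed, Tile tile) {
        return tile != null && !closed.contains(tile);
    }
}
``````

Swapping the `ArrayList` for a `HashSet` needs no other changes, since `add`, `remove` and `contains` all exist on both.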

At first I had a buggy pathfinding loop (i.e. it sucked more than it does now :P) and I needed to know how long it was taking.

CopyableCougar4

Well I feel stupid. I took out a silly method and the per-tile printing, and it now clocks in at 8 or 9 ms for a ~180 tile path…

Code:

``````
package testing.ext;

import java.util.ArrayList;

public class Pathfinder {

private TileMap map;

public Pathfinder(TileMap map){
this.map = map;
}

private ArrayList<ITile> open = new ArrayList<ITile>();
private ArrayList<ITile> closed = new ArrayList<ITile>();
private ArrayList<ITile> possible = new ArrayList<ITile>();

private ArrayList<int[]> oldTiles = new ArrayList<int[]>();
private ITile topleft, top, topright, left, right, bottomleft, bottom, bottomright;

public Path getPath(int tx, int ty, int sx, int sy){
long before = System.currentTimeMillis();
int ax = sx, ay = sy; // active tiles
Path path = new Path();
ITile active = map.getTile(ax, ay);
while(!(ax == tx && ay == ty)){	// continue until the path has been completed or there are no possible tiles left
active = path.last();
// Get the surrounding tiles
topleft = map.getTile(ax - 1, ay - 1);
if(topleft != null) topleft.setParent(active);
top = map.getTile(ax, ay - 1);
if(top != null) top.setParent(active);
topright = map.getTile(ax + 1, ay - 1);
if(topright != null) topright.setParent(active);
left = map.getTile(ax - 1, ay);
if(left != null) left.setParent(active);
right = map.getTile(ax + 1, ay);
if(right != null) right.setParent(active);
bottomleft = map.getTile(ax - 1, ay + 1);
if(bottomleft != null) bottomleft.setParent(active);
bottom = map.getTile(ax, ay + 1);
if(bottom != null) bottom.setParent(active);
bottomright = map.getTile(ax + 1, ay + 1);
if(bottomright != null) bottomright.setParent(active);
// Swap out the tiles
if(good(topleft)){
}
if(good(top)){
}
if(good(topright)){
}
if(good(left)){
}
if(good(right)){
}
if(good(bottomleft)){
}
if(good(bottom)){
}
if(good(bottomright)){
}
// Get their values
double topleftF = F(sx, sy, tx, ty, ax, ay);
double topF = F(sx, sy, tx, ty, ax, ay);
double toprightF = F(sx, sy, tx, ty, ax, ay);
double leftF = F(sx, sy, tx, ty, ax, ay);
double rightF = F(sx, sy, tx, ty, ax, ay);
double bottomleftF = F(sx, sy, tx, ty, ax, ay);
double bottomF = F(sx, sy, tx, ty, ax, ay);
double bottomrightF = F(sx, sy, tx, ty, ax, ay);
// Determine the lowest step
possible.clear();
double min = Math.min(Math.min(Math.min(topleftF, topF), Math.min(toprightF, leftF)), Math.min(Math.min(rightF, bottomleftF), Math.min(bottomF, bottomrightF)));
if(check(min, topleftF) && available(topleft)){
}
if(check(min, topF) && available(top)){
}
if(check(min, toprightF) && available(topright)){
}
if(check(min, leftF) && available(left)){
}
if(check(min, rightF) && available(right)){
}
if(check(min, bottomleftF) && available(bottomleft)){
}
if(check(min, bottomF) && available(bottom)){
}
if(check(min, bottomrightF) && available(bottomright)){
}
ITile possibleTile = null;
for(ITile tile : possible){
if(tile.tx == tx && tile.ty == ty){
possibleTile = tile;
break;
}
if(possibleTile == null){
possibleTile = tile;
} else {
if(G(tx, ty, tile) < G(tx, ty, possibleTile)){
if(oldTiles.size() >= 2){
int[] twoAgo = oldTiles.get(1);
int[] oneAgo = oldTiles.get(0);
// Check for paths that go bad diagonally
if(twoAgo[0] == oneAgo[0] && tile.ty == oneAgo[1]){
path.removeLast();
}
if(twoAgo[1] == oneAgo[1] && tile.tx == oneAgo[0]){
path.removeLast();
}
}
possibleTile = tile;
}
}
}
active = possibleTile;
// Remove the tiles
ax = active.tx;
ay = active.ty;
swap(topleft);
swap(top);
swap(topright);
swap(left);
swap(right);
swap(bottomleft);
swap(bottom);
swap(bottomright);
if((ax == tx && ay == ty)){
break;
}
}
long after = System.currentTimeMillis();
path.setOverall(map.getTile(sx, sy), map.getTile(tx, ty));
path.setMillis(after - before);
return path;
}

// -------------[ UTILITIES ]-------------

private void swap(ITile tile){
if(tile == null) return;
open.remove(tile);
}

private boolean good(ITile tile){
if(tile == null) return false;
return !closed.contains(tile) && tile.weight < 100;
}

private boolean available(ITile tile){
if(tile == null) return false;
return open.contains(tile) && !closed.contains(tile);
}

private boolean check(double min, double F){
return min == F;
}

private double G(int tx, int ty, ITile tile){
double dx = tile.tx - tx, dy = tile.ty - ty;
double G = (dx * dx + dy * dy);
return G;
}
private double F(int sx, int sy, int tx, int ty, int ax, int ay){
if(ax < 0 || ay < 0 || ay >= map.getHeight() || ax >= map.getWidth()){
return 1000;
}
int dx = tx - ax, dy = ty - ay;
double G = (dx * dx + dy * dy);
double H = Math.abs(dx + dy);
double F = G + H;
if(dx == dy){
F = Math.max(0, F - 2); // diagonal
}
return map.getTile(ax, ay).weight + F; // Add to the weight of the tile
}

}

``````

Any more optimizations would be nice

CopyableCougar4

Look at my edits above for some. Also note the time complexity of various ArrayList methods: http://stackoverflow.com/a/322742/3517265

E.g. remove(Object o) (used in swap()) is O(n) and requires a potentially large memory copy, but if the list doesn’t have to maintain ordering, it can be made faster:

``````
private static <T> void remove(List<T> list, T obj) {
int index = list.indexOf(obj); // still O(n), can't get around that
if (index >= 0)
list.set(index, list.remove(list.size() - 1)); // replaces element we are removing with last element, no large copy like in remove(Obj o)
}

``````

EDIT:

That’s why you profile. It would have told you that.

``````
private void swap(ITile tile){
if(tile == null) return;
remove(open, tile);
}

private void remove(ArrayList<ITile> list, ITile obj) {
int index = list.indexOf(obj);
if (index >= 0)
list.set(index, list.remove(list.size() - 1)); // replaces element we are removing with last element, no large copy like in remove(Obj o)
}

``````

But I just get IndexOutOfBounds exceptions. Is there something else that I have to change?

Stacktrace:

``````
Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 4, Size: 4
at java.util.ArrayList.rangeCheck(Unknown Source)
at java.util.ArrayList.set(Unknown Source)
at testing.ext.Pathfinder.remove(Pathfinder.java:172)
``````

Line 172 is highlighted.

CopyableCougar4

Derp.

``````
private static <T> void remove(List<T> list, T obj) {
int index = list.indexOf(obj);
if (index >= 0) {
if (index == list.size() - 1)
list.remove(index);
else
list.set(index, list.remove(list.size() - 1));
}
}

``````

Overall it has fluctuated from 9-16 ms (it’s at 16 ms now), but the remove() method didn’t seem to help the overall time.

CopyableCougar4

A pair of currentTimeMillis() means very little. JMH is virtually a requirement for benchmarks. Don’t trust those timings, esp. if you aren’t doing a warmup or anything. It also varies greatly depending on the path.
I guess it may not make much difference since the swapped tiles will always be near the end of `open`, but it should be slightly faster.
That said, you could replace the indexOf() with a manual search starting from the end of the list (instead of the front, like indexOf) and take advantage of that fact.
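That reverse scan could look like this; a sketch, assuming a hypothetical generic helper with swap-remove semantics (so list order is not preserved):

``````java
import java.util.List;

public class ReverseRemove {

    /** Scan from the tail (where recently-added tiles tend to live), then
        swap-remove: overwrite the found slot with the last element and drop
        the tail. No System.arraycopy shuffle; list order is NOT preserved. */
    static <T> boolean removeFromEnd(List<T> list, T obj) {
        for (int i = list.size() - 1; i >= 0; i--) {
            if (list.get(i).equals(obj)) {
                T last = list.remove(list.size() - 1); // removing the tail is O(1)
                if (i < list.size()) list.set(i, last); // skip when obj WAS the tail
                return true;
            }
        }
        return false;
    }
}
``````

If the target really does sit near the end, the scan terminates after a handful of iterations instead of walking the whole list from the front.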

But if you still haven’t profiled, then you don’t know where potential speedups are, so do that.

Also, now that I actually think about it, this isn’t A*. It seems similar to greedy best-first search, but possibly worse. How does it handle a situation like this?

``````
s = start, e = end

e
xxxxxxxx
       x
       x
       x
s xxxxxxxxx
``````

An algorithm based on breadth-first will do this:
``````
/------------->e
| xxxxxxxx
|        x
|        x
|        x
s xxxxxxxxx
``````

But greedy will do something like this:
``````
 /----------->e
|xxxxxxxx
 \--_|   x
    _/   x
   /     x
s xxxxxxxxx
``````

If I’m thinking correctly, since your algorithm just appends to the current end of the path, it may create even worse paths. (I can’t tell, but it may not even terminate, due to the oldTiles shenanigans.) You pretty much need some kind of queue, especially a priority queue if you want to perform well. You have the `open` list, but you aren’t using it for much, and certainly not for candidate selection, which is the most important and defining difference between these algorithms.
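For comparison, here is what priority-queue candidate selection looks like in a minimal textbook A* (4-directional grid, unit step cost, Manhattan heuristic). The class and layout are illustrative, not a drop-in for the posted code:

``````java
import java.util.Arrays;
import java.util.Comparator;
import java.util.PriorityQueue;

public class AStarSketch {

    static final int[][] DIRS = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};

    /** Returns the length of a shortest path, or -1 if unreachable.
        blocked[x][y] == true means the tile is a wall. */
    static int pathLength(boolean[][] blocked, int sx, int sy, int tx, int ty) {
        int w = blocked.length, h = blocked[0].length;
        int[][] g = new int[w][h]; // best known cost from start to each tile
        for (int[] row : g) Arrays.fill(row, Integer.MAX_VALUE);
        boolean[][] closed = new boolean[w][h]; // O(1) membership test

        // The open "list" is a priority queue ordered by f = g + h.
        // Each entry is {x, y, f}; candidate selection is just poll().
        PriorityQueue<int[]> open =
                new PriorityQueue<>(Comparator.comparingInt((int[] n) -> n[2]));
        g[sx][sy] = 0;
        open.add(new int[]{sx, sy, Math.abs(tx - sx) + Math.abs(ty - sy)});

        while (!open.isEmpty()) {
            int[] cur = open.poll();
            int x = cur[0], y = cur[1];
            if (closed[x][y]) continue; // stale duplicate entry, skip it
            closed[x][y] = true;
            if (x == tx && y == ty) return g[x][y];
            for (int[] d : DIRS) {
                int nx = x + d[0], ny = y + d[1];
                if (nx < 0 || ny < 0 || nx >= w || ny >= h || blocked[nx][ny]) continue;
                int ng = g[x][y] + 1;
                if (ng < g[nx][ny]) {
                    g[nx][ny] = ng;
                    // Manhattan is admissible here because moves are 4-directional
                    open.add(new int[]{nx, ny, ng + Math.abs(tx - nx) + Math.abs(ty - ny)});
                }
            }
        }
        return -1; // open set exhausted: no path
    }
}
``````

Note the duplicate-entry idiom: `PriorityQueue` has no decrease-key, so improved tiles are re-inserted and stale entries are discarded when polled against the `closed` grid.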

Replying to BurntPizza (post #5, topic 51112):

I would be more direct and say “use a `HashMap` for `closed`”. The usage pattern is to add objects occasionally and look them up a lot. It should be a hash-based structure, period.

I would also say `open` should almost certainly be scanned backwards with an iterator, although that’s more dependent on the algorithm, which is hard to follow here. The open list is THE major data structure in A* and gets hammered the most! ArrayList is almost certainly not the best choice and will hurt performance. FWIW I settled on a linked list in mine.

The candidate heuristic used in your code is the Manhattan distance: `double H = Math.abs(dx + dy)` (strictly, that’s the absolute value of the sum; correct Manhattan is `Math.abs(dx) + Math.abs(dy)`). Since the code appears to allow diagonal travel, Manhattan will usually be an overestimate, which is known to slow down A* drastically (it investigates far too many paths). It’s hard to be sure here, though, because in the most recent version G is no longer square-rooted, so it no longer compares to H in the same units. Thinking clearly about the H and G calculations (and not overestimating H) may well improve the performance.
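To make the overestimation concrete, here is a sketch of the octile distance (the usual admissible heuristic for 8-directional grids with unit straight cost and sqrt(2) diagonal cost) next to correct Manhattan. Class and method names are illustrative:

``````java
public class Heuristics {

    static final double SQRT2 = Math.sqrt(2);

    /** Octile distance: admissible for 8-directional movement with unit
        straight cost and sqrt(2) diagonal cost; it never overestimates. */
    static double octile(int x1, int y1, int x2, int y2) {
        int dx = Math.abs(x2 - x1), dy = Math.abs(y2 - y1);
        return Math.max(dx, dy) + (SQRT2 - 1) * Math.min(dx, dy);
    }

    /** Correct Manhattan distance, for comparison. Once diagonals are
        allowed it overestimates: (0,0) to (3,3) really costs 3*sqrt(2)
        (about 4.24), but Manhattan claims 6. */
    static double manhattan(int x1, int y1, int x2, int y2) {
        return Math.abs(x2 - x1) + Math.abs(y2 - y1);
    }
}
``````

Whichever heuristic is chosen, G and H must be in the same units (both true distances, not one squared), or the f = G + H ordering becomes meaningless.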

If you can, I also suggest setting up a map GUI that lets you watch the pathfinder progressing (testing candidates, improving its route, etc.). It’s the only way to be really sure what’s going on, IMO. I saw in your other thread that there were some visuals; can you slow it down and watch to see if it’s doing silly stuff? It’s not always the best approach to just hack in some code, then run the profiler and tune the bottlenecks it shows.

tl;dr: (1) Use appropriate data structures; (2) don’t overestimate H; (3) build a visualiser for the algorithm.

Aside: Cougar, I see you didn’t think much of the suggestions I made last week for improving and streamlining this class. I am mortally offended!! :’(