Curious about loopy loops

While I'm not new to programming in general, I am relatively new to game dev, and I have a question about loops.
With everything I've learnt about game dev, there's one part of the design that has always had me curious.

Say you have a tiled, top-down 2D game where a player character runs around and shoots at zombies.
In order to draw the map you obviously need to loop over every tile and draw it to the canvas; that part makes sense to me.
But then you have to loop over all entities in the map to run their update functions, where they have player-tracking code. This part seems less efficient.

Let's say there are 10 zombies and 2 players, so 12 entities in total.
step 1: the level controller updates all entities in a loop iterating 12 times.
step 2: the zombies will then ask the level class for a list of entities in its range
step 3: the level class will then again iterate over all entities getting their co-ords and seeing if they are close
step 4: the zombie will use the list to iterate and see if there are players in the list
step 5: the zombie will then find the closest player in the list and target it

Assuming all entities are within range of each other (the worst-case scenario), this basically runs hundreds of iterations every update cycle just for this one function.
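The naive approach described in the steps above can be sketched roughly like this (the class and field names here are made up for illustration, not from any particular engine):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the naive approach: every zombie asks the level
// for in-range entities, and the level scans the full list each time,
// giving O(n^2) distance checks per update in the worst case.
class NaiveLevel {
    static class Entity {
        float x, y;
        boolean isPlayer;
        Entity(float x, float y, boolean isPlayer) { this.x = x; this.y = y; this.isPlayer = isPlayer; }
    }

    final List<Entity> entities = new ArrayList<>();

    // Step 3: scan every entity and keep the ones within range.
    List<Entity> entitiesInRange(Entity self, float range) {
        List<Entity> result = new ArrayList<>();
        for (Entity e : entities) {
            if (e == self) continue;
            float dx = e.x - self.x, dy = e.y - self.y;
            if (dx * dx + dy * dy <= range * range) result.add(e);
        }
        return result;
    }

    // Steps 4-5: filter the in-range list for players and pick the closest.
    Entity closestPlayer(Entity self, float range) {
        Entity best = null;
        float bestDist2 = Float.MAX_VALUE;
        for (Entity e : entitiesInRange(self, range)) {
            if (!e.isPlayer) continue;
            float dx = e.x - self.x, dy = e.y - self.y;
            float d2 = dx * dx + dy * dy;
            if (d2 < bestDist2) { bestDist2 = d2; best = e; }
        }
        return best;
    }
}
```

Every zombie calling `closestPlayer` each frame is exactly the iteration blow-up being asked about.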
I know you can maybe optimize the specific scenario by having 2 lists, one for the zombies and one for the players.
But some scenarios need the zombies to interact with other zombies to form a horde, so they need to find the 2 closest zombies and move towards those zombies.

So basically, what I am wondering is: is this the right approach to doing this kind of thing?
I mean, if you add just 1 zombie, that adds almost 20 extra iterations… what if you add 50 zombies? 100 zombies?
I'm sure there must be a better way of finding the closest 2 zombies for each zombie?

If I’ve worded anything wrong or you need more info, please let me know.
thanks in advance for any help :slight_smile:

That’s pretty much how I do it. I don’t think it takes a computer very long to calculate the distance between two entities, so I’ve never found it to be a problem. You can always try and optimize it in various ways, e.g. only check for a new “target” every few seconds rather than every frame.
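The "only check for a new target every few seconds" idea can be sketched with a simple cooldown timer (the names and the interval below are made up for illustration):

```java
// Minimal sketch of retargeting on a cooldown instead of every frame,
// assuming the game passes in a per-frame deltaTime in seconds.
class Retargeter {
    float cooldown = 0f;
    static final float RETARGET_INTERVAL = 2f; // seconds between target searches

    // Returns true only when the (expensive) target search should run.
    boolean shouldRetarget(float deltaTime) {
        cooldown -= deltaTime;
        if (cooldown <= 0f) {
            cooldown = RETARGET_INTERVAL;
            return true;
        }
        return false;
    }
}
```

With this in place, the O(n) scan runs once every couple of seconds per zombie instead of 60 times a second.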

As I said, I'm just curious about it at this stage.

Running an update every few seconds actually seems like a great plan to visibly improve performance, but take it to an extreme level, say 10,000+ entities running around the place: surely you'll get a massive stutter during the update cycle that does the calculation? To put it this way: I ran a stress test on an Android device with 1,000 entities using the GdxAi library, and there was plenty of lag. That's essentially what sparked this curiosity. I then saw a video on YouTube where 100,000 entities were path-finding to a single enemy with no visible lag; I think it was the "Totally Accurate Battle Simulator" game, but I'm not sure. I get that they probably custom-built a lot of the path-finding, but it got me thinking about basic loops that could be improved in simpler ways.

Another example is my current attempt at a game: on my Android phone it runs at a solid 50 or so frames per second. It's by no means the beefiest of phones, but it's no pushover either, and currently I have only 8 entities in a Box2D world. I'm only assuming here, but if I continue to add more and more entities and features, it will get progressively slower with each one. On the same device, though, 250 GdxAi entities only started to have a visible performance impact, around a 5-10 fps drop.

There could be plenty of reasons for those results, though, which is why I'm not leaning on them too heavily. I'm more looking for a basic understanding of the strategies people use to get the loop iteration counts down for a performance boost. Whether it's a major or minor boost, I'm just looking for understanding.

Simple answer: Don’t use lists (i.e. linear iteration).

Use specialized data structures to answer concrete questions, such as “what are the nearest entities for entity X?” You don’t want to be running through all of your entities and comparing their distance to find the closest to a given entity X. There are much better ways to do this.
Google: “k-nearest neighbor search”, “2D uniform grids”, “spatial acceleration data structures”, “kd-tree”, “binary space partitioning”.

All of those data structures make a trade-off between these aspects:

  • memory requirements (how much memory do I need to allocate in order to hold N entities?)
  • query performance (how fast is it to answer a question for one entity?)
  • maintenance performance (how much do I need to do when one entity’s position changes?)
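As one concrete example of those trade-offs, here is a minimal uniform-grid sketch (one of the structures mentioned above; all names are illustrative, not from any particular library). It spends extra memory on the cell map, answers queries quickly, and maintenance is just rehashing an entity when it crosses a cell boundary:

```java
import java.util.*;

// Entities are hashed into square cells; a neighbour query only scans the
// 3x3 block of cells around the query point instead of the whole level.
class UniformGrid {
    final float cellSize;
    final Map<Long, List<float[]>> cells = new HashMap<>();

    UniformGrid(float cellSize) { this.cellSize = cellSize; }

    long key(float x, float y) {
        long cx = (long) Math.floor(x / cellSize);
        long cy = (long) Math.floor(y / cellSize);
        return (cx << 32) ^ (cy & 0xffffffffL);
    }

    void insert(float x, float y) {
        cells.computeIfAbsent(key(x, y), k -> new ArrayList<>()).add(new float[]{x, y});
    }

    // Collect candidates from the 9 cells around (x, y); callers still
    // need an exact distance check, but most entities are never touched.
    List<float[]> nearby(float x, float y) {
        List<float[]> result = new ArrayList<>();
        long cx = (long) Math.floor(x / cellSize);
        long cy = (long) Math.floor(y / cellSize);
        for (long i = cx - 1; i <= cx + 1; i++)
            for (long j = cy - 1; j <= cy + 1; j++) {
                List<float[]> cell = cells.get((i << 32) ^ (j & 0xffffffffL));
                if (cell != null) result.addAll(cell);
            }
        return result;
    }
}
```

Picking `cellSize` close to the typical query range keeps the candidate lists small.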

I think it comes down to lots of little optimisations. Regarding 100,000 entities all pathfinding, I don’t know what kind of pathfinding it was, but (off the top of my head):

  • They could share parts of paths. e.g. if one entity has found a path, then if another entity is close to that entity, it only needs to find a path to the close entity, and then append the other entity’s path to its own.

  • Run the pathfinding for each entity in a thread, and don’t run too many concurrent threads. (In fact, even without threads, don’t perform the pathfinding on all entities at the same time. Also, maybe some entities don’t need updating as often as others?)

  • If it seems to be using A*, you could “cheat” and just see if a direct route is available first.

  • Group the entities in buckets based on their location, so only other entities in that same bucket need to be checked for collisions. It’s computationally quicker to move an entity from bucket to bucket instead of checking collisions against every other entity.
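The "don't update everything at once" point above can be sketched as a simple round-robin slicing scheme (names and the slice count are made up for illustration):

```java
// Spread expensive AI updates over frames: only a slice of the entities
// recomputes pathfinding each tick, so the per-frame cost stays roughly
// constant as the entity count grows.
class StaggeredUpdater {
    static final int SLICES = 4; // each entity does heavy work every 4th frame
    int frame = 0;

    // Returns the entity indices that should run their expensive update
    // this frame; over SLICES consecutive frames every entity is covered.
    java.util.List<Integer> dueThisFrame(int entityCount) {
        java.util.List<Integer> due = new java.util.ArrayList<>();
        for (int i = frame % SLICES; i < entityCount; i += SLICES) due.add(i);
        frame++;
        return due;
    }
}
```

Entities not "due" this frame just keep following their previously computed path.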

It’s difficult to be more specific without seeing the game. I’d be interested to see the video though.

Steps 2-5 can easily be combined into one which iterates, filters for players and finds the closest.
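That combined single pass might look something like this (the `Entity` fields are assumed for illustration):

```java
// One loop over all entities that filters for players and tracks the
// closest, instead of building intermediate in-range lists first.
class ClosestPlayerFinder {
    static class Entity {
        float x, y;
        boolean isPlayer;
        Entity(float x, float y, boolean isPlayer) { this.x = x; this.y = y; this.isPlayer = isPlayer; }
    }

    static Entity closestPlayer(Entity self, java.util.List<Entity> all, float range) {
        Entity best = null;
        float bestD2 = range * range; // doubles as the range check
        for (Entity e : all) {
            if (e == self || !e.isPlayer) continue;
            float dx = e.x - self.x, dy = e.y - self.y;
            float d2 = dx * dx + dy * dy;
            if (d2 <= bestD2) { best = e; bestD2 = d2; }
        }
        return best; // null if no player is in range
    }
}
```

It is still O(n) per zombie, but there is no intermediate list allocation and only one pass over the entities.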

Use the -Xprof JVM profiling option to see what’s taking too long. As people often say, premature optimisation is a trap.
If you identify that the path finding is taking too long, there are some handy shortcuts you can use, like comparing squared distances rather than actual distances, which need slow square roots.
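The squared-distance trick works because square root is monotonic for non-negative numbers, so comparing squared values gives the same ordering. A minimal sketch:

```java
// Squared-distance helpers: no Math.sqrt anywhere. For a range test,
// square the radius once instead of taking a root per entity.
class Dist {
    static float dist2(float ax, float ay, float bx, float by) {
        float dx = bx - ax, dy = by - ay;
        return dx * dx + dy * dy;
    }

    static boolean inRange(float ax, float ay, float bx, float by, float radius) {
        return dist2(ax, ay, bx, by) <= radius * radius;
    }
}
```

(If you're using libGDX, `Vector2.dst2` does the same thing.)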

Have you come across the term “Big O notation”?
That’s a conceptual framework that is very helpful for discussing and analyzing exactly these sorts of issues.