Systematic removal from a LinkedList

I’m trying to remove items from a linked list while only going through the list once, so I came up with a method of doing so. But the values being removed aren’t the correct ones :confused: and I don’t know why (the first one is removed correctly).
Could someone please locate the logic error?
http://www.java-gaming.org/?action=pastebin&id=306

The purpose of this is to make more efficient culling in my particle system :slight_smile:
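The pastebin link may not survive, so for reference: the standard way to remove elements in a single pass is the list Iterator’s own remove(); a classic logic error is removing by index mid-walk, which shifts every later element down by one. A minimal sketch with a made-up cull condition:

```java
import java.util.Iterator;
import java.util.LinkedList;

public class SinglePassRemove {
    public static void main(String[] args) {
        LinkedList<Integer> list = new LinkedList<>();
        for (int i = 0; i < 10; i++) list.add(i);

        // Iterator.remove() deletes the element last returned by next()
        // without breaking the traversal, so the list is walked exactly once.
        Iterator<Integer> it = list.iterator();
        while (it.hasNext()) {
            if (it.next() % 3 == 0) { // example condition: cull multiples of 3
                it.remove();
            }
        }
        System.out.println(list); // [1, 2, 4, 5, 7, 8]
    }
}
```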

:cranky:

Cas :slight_smile:

Yeah, it’s probably a bad idea to use a LinkedList for particle effects.

~33.3% Smiley madness…

I’m happy even when I’m cranky.

Cas :slight_smile:

But an ArrayList has all that array resizing.

If you like none of those, try GapList, which is supposed to be a very fast list; I’ve had good experiences with this kind of list.
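For reference, GapList comes from the Brownies Collections library (the package below is my assumption of its exact coordinates) and implements java.util.List, so it can be dropped in without touching the surrounding code:

```java
import java.util.List;
// Assumed package; GapList ships with the Brownies Collections library.
import org.magicwerk.brownies.collections.GapList;

public class GapListDemo {
    public static void main(String[] args) {
        // GapList keeps a movable "gap" in its backing array, which makes
        // repeated inserts/removes near the same spot cheap.
        List<Integer> particles = new GapList<>();
        for (int i = 0; i < 10; i++) particles.add(i);
        particles.remove(3); // plain java.util.List API
        System.out.println(particles);
    }
}
```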

You can also just use an array with a simple fixed maximum size, I guess.

Nah, it has to be dynamic. Could you use a buffer of objects instead?

Why is it that array resizing is reckoned to be slow?

(Starter hint: it’s not, it’s blazingly, blazingly fast)

Cas :slight_smile:
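For what it’s worth, the growth itself is just one bulk System.arraycopy into a larger array, and because capacity grows geometrically the copy cost amortizes to O(1) per add. A hand-rolled sketch of the mechanism (ArrayList does essentially this internally):

```java
// Minimal sketch of ArrayList-style growth: the occasional resize is a
// single bulk copy, and geometric (~1.5x) growth means each element is
// copied only a constant number of times on average.
public class GrowableIntArray {
    private int[] data = new int[16];
    private int size = 0;

    public void add(int value) {
        if (size == data.length) {
            int[] bigger = new int[data.length + (data.length >> 1)];
            System.arraycopy(data, 0, bigger, 0, size);
            data = bigger;
        }
        data[size++] = value;
    }
}
```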

But I’d like to do a lot of remove calls at once, sometimes hitting about 70K per update.
Right now I use an array of particles which, once culled, are moved to a recycle array to be reset and reused when you spawn more. If usable ones are there you don’t create more and just recycle. I was thinking of maybe having just one array and two IntBuffers, each holding indexes into the particle array: one for active particles and one for flagged ones. So instead of cycling through the whole array I can go through the indexes, and culling a particle becomes just moving an int from one buffer to another. Roughly like the sketch below.
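Sketched with plain int arrays instead of IntBuffers (all the names here are made up for illustration):

```java
// One pool of particles plus two index lists: culling a particle is just
// moving its index from the "active" list to the "free" list; no object
// allocation, no array shifting.
public class ParticlePool {
    private final Particle[] particles;
    private final int[] active; // indexes of live particles
    private final int[] free;   // indexes waiting to be recycled
    private int activeCount, freeCount;

    public ParticlePool(int capacity) {
        particles = new Particle[capacity];
        active = new int[capacity];
        free = new int[capacity];
        for (int i = 0; i < capacity; i++) {
            particles[i] = new Particle();
            free[freeCount++] = i; // everything starts out recyclable
        }
    }

    public Particle spawn() {
        if (freeCount == 0) return null; // pool exhausted
        int idx = free[--freeCount];
        active[activeCount++] = idx;
        return particles[idx];
    }

    public void update() {
        for (int i = 0; i < activeCount; ) {
            Particle p = particles[active[i]];
            p.update();
            if (p.isDead()) {
                free[freeCount++] = active[i];     // recycle the index
                active[i] = active[--activeCount]; // swap-remove: O(1), but reorders
            } else {
                i++;
            }
        }
    }
}

class Particle {
    void update() { /* integrate position, velocity, age... */ }
    boolean isDead() { return false; } // placeholder lifetime check
}
```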

That wouldn’t preserve the ordering of the particles, which might look weird at times, since you would never know whether new particles spawn above or under the existing ones. It might also be really slow, since you’re completely screwing up the CPU’s cache with that random access pattern.
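An order-preserving alternative is a single compaction pass over one flat array: copy each live particle forward over the dead ones. It’s one linear sweep, so access stays sequential for the cache, and relative order is kept. A minimal sketch, reusing the hypothetical Particle from above:

```java
// Single linear pass: live particles are shifted left over dead ones,
// preserving their relative order and keeping memory access sequential.
static int compact(Particle[] particles, int count) {
    int live = 0;
    for (int i = 0; i < count; i++) {
        if (!particles[i].isDead()) {
            particles[live++] = particles[i]; // copy forward; order kept
        }
    }
    return live; // new active count; slots >= live can be recycled
}
```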

I was actually about to write a benchmark: create an ArrayList and measure how long it takes to go through elements 1 to 1 million sequentially, then compare that to randomly picking one million elements.
Let’s see how that goes.
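Something along these lines; a rough micro-benchmark, so the exact numbers come with the usual JIT warm-up caveats:

```java
import java.util.ArrayList;
import java.util.Random;

public class AccessPatternBench {
    public static void main(String[] args) {
        final int N = 1_000_000;
        ArrayList<Integer> list = new ArrayList<>(N);
        for (int i = 0; i < N; i++) list.add(i);

        // Pre-generate the random indexes so RNG cost isn't part of the timing.
        Random rnd = new Random(42);
        int[] randomIndex = new int[N];
        for (int i = 0; i < N; i++) randomIndex[i] = rnd.nextInt(N);

        long sum = 0;

        long t0 = System.nanoTime();
        for (int i = 0; i < N; i++) sum += list.get(i); // sequential walk
        long sequentialNs = System.nanoTime() - t0;

        t0 = System.nanoTime();
        for (int i = 0; i < N; i++) sum += list.get(randomIndex[i]); // random hits
        long randomNs = System.nanoTime() - t0;

        System.out.println("sequential: " + sequentialNs + " ns");
        System.out.println("random:     " + randomNs + " ns (checksum " + sum + ")");
    }
}
```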

Something rather interesting happened to the relative speed of the two as the number of test cycles increased by 10× each step (the percentage is how much faster sequential access is compared to random):
Cycles        Percent
1             198%
10            194%
100           190%
1000          188%
10000         186%
100000        155%
1000000       118%
10000000      104%
100000000     88%
1000000000    89%

They always teach you in science class to increase your sample size to ‘increase data reliability’, but this breaks that rule: sequential is about 2× faster with few cycles and actually slower with a lot. As you can imagine from sitting there while it ran, one billion cycles of a million lookups took a long time.

…after my data structures class I learned a few things. One: never listen to advice from that particular professor… ever. Two: stick to arrays/ArrayList for just about everything.

Understanding the impact of memory access, none vs. linear vs. (statistically) random, is very important on modern CPUs… and by “modern” here I don’t mean that any of this is new. You’ll still hear people talking about the classic “time vs. space” trade-off, which is more or less BS in common cases.