ArrayList vs LinkedList

@Riven - that’s what I meant, said better.

@Whomever else:
A classic example is a splay tree.

I know I don’t have to explain it to you :slight_smile:

I was indeed trying to clarify it for jmart.

Experience usually trumps big O; you can disagree with me but it won’t make you right :slight_smile:

If you’re writing a game - which is what I do all day long for a living - you are all but guaranteed to get the best performance from an ArrayList if you’re using it to store thousands of things that need twiddling every frame. You’ll use less memory, and it’ll be considerably faster than any other solution. By all means use a LinkedHashSet if you want to complicate things for yourself, make it slower and use more memory. Although realistically memory isn’t exactly an issue for most people, and neither is performance on the desktop.

To recap:

Use a pair of ArrayLists. Scan sequentially to update each item. Then copy any item that is still “alive” into the second ArrayList. Swap. This is absolutely one of the fastest things you can do on a modern CPU.
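
In code, that pattern looks roughly like this (a sketch only; Entity, update() and isAlive() are placeholder names for illustration, not from any real engine):

```java
import java.util.ArrayList;

// Sketch of the two-ArrayList update/swap pattern described above.
class World {
    private ArrayList<Entity> entities = new ArrayList<>();
    private ArrayList<Entity> survivors = new ArrayList<>();

    void tick() {
        for (int i = 0; i < entities.size(); i++) {
            Entity e = entities.get(i);
            e.update();
            if (e.isAlive()) {
                survivors.add(e);       // keep it for next frame
            }
        }
        // Swap the lists; the old one becomes next frame's empty scratch buffer.
        ArrayList<Entity> tmp = entities;
        entities = survivors;
        survivors = tmp;
        survivors.clear();
    }
}

interface Entity {
    void update();
    boolean isAlive();
}
```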

Cas :slight_smile:

@Riven - yeah I figured…my replies are generalized to everyone
@Cas - indeed.

The problem is that you are not really backing it up. You are just repeating yourself as if by repeating it you are increasing its truthiness. I am backing my statements up with facts: a search on a HashSet is O(1) and a search on an ArrayList is O(n). You really can’t get past that. Here is another truth: if your ArrayList is full and you add one more spawned object to it, the whole backing array gets copied. These are facts. Here is another fact: if you remove an object from an ArrayList that happens to not be the last object, then the remaining objects have to be shifted into their new positions. These facts do not go away. Callbacks, a.k.a. notification over polling, were shown to be a good approach in computer science many, many years ago… these are facts. If you are doing callbacks, already shown to be better than polling, then you need a search algorithm that is fast, hence hashing makes a lot of sense here because it is O(1). Saying “listen, I know on paper these things you are saying are right, but you still have to take my word for it” does not make it true.

just saying
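
For the record, here’s a minimal illustration of the operations those facts refer to (just a sketch, not a benchmark):

```java
import java.util.ArrayList;
import java.util.HashSet;

public class RemovalCost {
    public static void main(String[] args) {
        ArrayList<Integer> list = new ArrayList<>();
        HashSet<Integer> set = new HashSet<>();
        for (int i = 0; i < 100_000; i++) {
            list.add(i);
            set.add(i);
        }

        // Removing from the middle of an ArrayList shifts every later element left:
        list.remove(list.size() / 2);   // O(n) element copies

        // Removing from a HashSet touches only the bucket the element hashes to:
        set.remove(50_000);             // O(1) on average

        // Adding to a full ArrayList triggers a grow-and-copy of the backing array;
        // calling ensureCapacity() up front avoids paying that repeatedly.
        list.add(123);
    }
}
```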

As a random aside so far in my computer games career I’ve only found two places where the actual data structure and algorithm has had a measurable effect on performance:

  1. Deep in the heart of the SpriteEngine there’s a radix sort algorithm that sorts the sprites on about 5 or 6 variables each. Using a Comparator and the built-in Collections sort (tweaked quicksort?) was an order of magnitude slower.

  2. The A* pathfinder in Droid Assault and Revenge of the Titans used a very fast integer BinaryHeap implementation (by @pjt33 of these parts I think). I tried using a few more general structures which were way too slow or generated far too much garbage. Also I ended up using a boolean grid for the closed list rather than the more usual Set approach because, again, a grid with random access is vastly faster than any implementation of Set.
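
For the curious, the closed-list trick looks roughly like this; it’s a sketch of the idea, not the actual game code:

```java
// Sketch of a grid-based closed list for A* on a tile map,
// versus the more usual Set<Node> approach.
class ClosedList {
    private final boolean[] closed;
    private final int width;

    ClosedList(int width, int height) {
        this.width = width;
        this.closed = new boolean[width * height];
    }

    void close(int x, int y)       { closed[y * width + x] = true; }
    boolean isClosed(int x, int y) { return closed[y * width + x]; }

    // Alternative: a java.util.HashSet<Integer> keyed on y * width + x,
    // which works but boxes every lookup and chases pointers on every probe.
}
```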

Every other optimisation based on the choice of using raw arrays, ArrayLists, HashSets, LinkedHashSets, etc. has really just been lost in the general noise. Of course if you meticulously optimise everything you gain a few FPS, or in my case, allow the game to run full speed on a computer that’s on average 2 months older than Joe Average’s computer.

Cas :slight_smile:

Haha, these may be facts, but they are not figures. The figures are:

A pair of ArrayLists used this way is approximately 5x faster than a LinkedList, and uses something like a quarter of the RAM. There you go.

Cas :slight_smile:

As I noted earlier: splay tree. Going by Big-O notation you’d never use a splay tree. In the real world a splay tree can smoke a hashtable (regardless of n); the specific case depends on temporal access patterns. Splay tree search is O(log n) on average and O(n) in the worst case, versus hashing’s O(1).

Also, in real-world programming it’s important to remember what X (I don’t remember who) said: “All those fancy algorithms are slow when ‘n’ is small… and it’s usually small”.

We can’t get past that indeed.

We are repeating it because you have so far failed to grasp that Big O notation tells you absolutely nothing about performance.

I’m not going to repeat myself, but I’ll simply refer to the reply I made earlier.

Well, LinkedList didn’t come up, but here is a scenario in which LinkedList is better.

Let’s say you have to insert at the back of a queue, e.g. at index 0 of the list. In an ArrayList this takes O(n), but in a LinkedList it’s O(1). Another pesky fact. The bottom line is that you cannot blindly use ArrayList for everything, as is being suggested. You have to take a look: if your collections are being used for specific operations, then you should be able to hit O(1).
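
A quick sketch of that case (the ArrayDeque note at the end is an aside of mine, not part of the original point):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.LinkedList;

public class HeadInsert {
    public static void main(String[] args) {
        ArrayList<Integer> arrayList = new ArrayList<>();
        LinkedList<Integer> linkedList = new LinkedList<>();

        // Inserting at index 0 of an ArrayList shifts every existing element right: O(n).
        arrayList.add(0, 42);

        // LinkedList just links in a new head node: O(1).
        linkedList.addFirst(42);

        // Aside: if the real requirement is a queue or deque, ArrayDeque usually
        // beats both - O(1) at either end, backed by an array.
        ArrayDeque<Integer> deque = new ArrayDeque<>();
        deque.addFirst(42);
    }
}
```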

[quote]Big O notation tells you absolutely nothing about performance
[/quote]
Dude, that is awesome. Wait till I tell my old Data Structures and Algorithms professor about this one. Let the tests speak for themselves… oh wait, you already decided tests should not count either. So throw away established Big O notation that is backed by math, and let’s throw away tests. We are then left with religion. Is this a religion board or a computer science board?

Big O notation tells you absolutely nothing about performance.
Big O notation tells you absolutely everything about performance degradation under growth.

If your professor disagrees, he’s misinformed.

Let me repeat myself: SPLAY TREE. Do some web searching and compare Big-O notations vs. competing data-structures. (Of course this is merely ‘an’ example)

Example: say I want to bring a bunch of physical boxes to Iceland, and as I live in The Netherlands, that might take me a few days by my nifty speedboat. The boat is tiny, so I can only bring 1 box at a time.

What is this in Big O notation?
Shipping 1 box: 5 days
Shipping 2 boxes: 10 days
Shipping 3 boxes: 15 days
Shipping n boxes: 5*n days
Answer: O(n)

Now let’s assume I have this huge ship, that will have room for all the boxes I’d ever want to ship to Iceland. It will take 80 days, but I’d bring it all in 1 shipment.

What is this in Big O notation?
Shipping 1 box: 80 days
Shipping 2 boxes: 80 days
Shipping 3 boxes: 80 days
Shipping n boxes: 80 days
Answer: O(1)

Guess what I’ll pick for my shipment of a few boxes? O(n) beats O(1) for that data size!
Guess what I’ll pick for my shipment of many boxes? O(1) beats O(n) for that data size!

Hence: Big O notation tells you nothing about performance.

Y’all do realize that algorithmic analysis goes a little deeper than one single notation, and covers upper and lower bounds too?

But I am getting a laugh out of all these categorical statements like “It tells you nothing! Nothing!”

Sure, but we were discussing Big-O notation, not algorithmic analysis in general :slight_smile:

@sproingie: Sure. But you have to know something about the data, access patterns and size. It’s basically an indicator of sanity, not a razor. The average case may have little meaning if your data isn’t random. The worst case will have no meaning if it’s impossible to fall into the worst case. But you know all of this. (Again, the person I’m responding to may not be whom I’m speaking to.) When I first started to really grok this stuff was after I wrote a custom sort for a (smallish) ‘n’. It beat the crap out of all the standard methods I tried, and when I analysed it I was shocked to discover that it was worst case O(n³). Of course the trick is that the data patterns ensured it would never hit the worst case.

Mm, true. Big O notation doesn’t say anything about the likelihood of best/worst case scenarios.

[quote]Dude, that is awesome. Wait till I tell my old Data Structures and Algorithms professor about this one. Let the tests speak for themselves… oh wait, you already decided tests should not count either. So throw away established Big O notation that is backed by math, and let’s throw away tests. We are then left with religion. Is this a religion board or a computer science board?
[/quote]
It is neither… it’s a board for exchanging empirical results. You write your sprite engine using LinkedLists and I’ll stick to ArrayList; I’ll get twice as many sprites on screen as you.

What your Algorithms and Data Structures professor seems to have forgotten to tell you is that you can’t compare two algorithms using Big O notation. You cannot say O(n) is faster than O(1), or that O(n log n) is faster than O(n²). As Riven is trying to tell you, Big O can only tell you how a particular algorithm will degrade as n increases – not how fast it is relative to a different algorithm. Spoonfeed: an algorithm that executes in O(n²) but is 10,000 times faster per step than an algorithm that executes in O(n) can comfortably outperform it until n becomes rather large. If it turns out n is not going to be large - and it won’t be - well, you get my drift.
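
To put made-up numbers on that (the constants below are invented purely for illustration, but they show where the crossover lands):

```java
// Invented cost model to illustrate the point about constant factors:
// a "slow" O(n) algorithm vs a "fast" O(n^2) one.
public class Crossover {
    public static void main(String[] args) {
        double linearCostPerItem = 10_000;   // hypothetical cost units per item
        double quadraticCostPerPair = 1;     // hypothetical cost units per pair

        for (int n : new int[] {10, 100, 1_000, 10_000, 100_000}) {
            double linear = linearCostPerItem * n;
            double quadratic = quadraticCostPerPair * (double) n * n;
            System.out.printf("n=%7d  O(n)=%14.0f  O(n^2)=%14.0f  winner=%s%n",
                    n, linear, quadratic, quadratic < linear ? "O(n^2)" : "O(n)");
        }
        // The O(n^2) algorithm wins until n reaches about 10,000 - which it may never do.
    }
}
```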

So: ArrayList is 5 times faster, in actual use, than LinkedList. Write a benchmark and prove it to yourself!
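
A quick-and-dirty way to do exactly that; this is a naive timing loop rather than a proper JMH benchmark, so treat the numbers as rough:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedList;

public class ListBench {
    static final int N = 200_000;

    public static void main(String[] args) {
        for (int run = 0; run < 5; run++) {   // repeat so the JIT warms up
            timeArrayListPair();
            timeLinkedList();
        }
    }

    // The pair-of-ArrayLists approach: scan one list, copy survivors into the other, swap.
    static void timeArrayListPair() {
        ArrayList<Integer> live = new ArrayList<>();
        ArrayList<Integer> next = new ArrayList<>();
        for (int i = 0; i < N; i++) live.add(i);
        long start = System.nanoTime();
        for (int i = 0; i < live.size(); i++) {
            Integer e = live.get(i);
            if ((e & 1) == 0) next.add(e);    // keep roughly half of them
        }
        ArrayList<Integer> tmp = live; live = next; next = tmp;
        next.clear();
        report("ArrayList pair", start);
    }

    // The alternative: remove dead entries from a LinkedList in place.
    static void timeLinkedList() {
        LinkedList<Integer> live = new LinkedList<>();
        for (int i = 0; i < N; i++) live.add(i);
        long start = System.nanoTime();
        Iterator<Integer> it = live.iterator();
        while (it.hasNext()) {
            if ((it.next() & 1) != 0) it.remove();
        }
        report("LinkedList    ", start);
    }

    static void report(String label, long start) {
        System.out.println(label + ": " + (System.nanoTime() - start) / 1_000_000.0 + " ms");
    }
}
```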

Cas :slight_smile:

I absolutely agree with you on hashCode and equals. I’m not really a fan of compareTo either; I would rather have a standard where compare(a, b), equals(a, b) and hashCode(a) are implemented in a separate object. But I don’t agree with you on your method of comparing clowns. It’s so much more useful to keep the default behavior in most cases. As a rule I only override the equals or hashCode functions if and only if I’m working with an immutable type. It’s fine to leave hashCode and equals at their defaults for elements of Java Collections. It’s only when you implement Collection and use == or an object’s identity hash that the interface’s contract is considered broken. (But don’t ask me why it’s considered “correct” to ignore compareTo if you supply a Comparator. Where is the HashCoderator class?)
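
Something along the lines of that hypothetical HashCoderator might look like this (a sketch only; HashingStrategy is an invented name and is not part of the JDK):

```java
// Hypothetical strategy interface: equality and hashing supplied by a separate
// object instead of being baked into the element type, analogous to how
// Comparator externalises compareTo. Not part of the standard library.
interface HashingStrategy<T> {
    boolean equals(T a, T b);
    int hashCode(T a);
}

// Example strategy: identity semantics, regardless of how the element type
// happens to override equals/hashCode.
final class IdentityStrategy<T> implements HashingStrategy<T> {
    public boolean equals(T a, T b) { return a == b; }
    public int hashCode(T a)        { return System.identityHashCode(a); }
}

// A hypothetical hash-based collection could then accept a HashingStrategy in its
// constructor, leaving the element class itself untouched.
```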

[quote=“princec,post:43,topic:39437”]
My method can be done using minimal reads and writes, uses half the space, and probably has similar memory cache properties. It’s not an implementation of a List but no one explained why you would restrict yourself to using a List given the first poster’s requirements.
[/quote]