Best Practices for Creating New Objects

Hi Guys,

I use a typical “Vector2f”-style class for storing my entities’ positions, velocities, etc., and had a question about best practices when it comes to reassigning the objects. In a game loop running at 60 fps, I am updating my camera’s position using my player’s position…


The player.getPosition() call returns a Vector2f object, and my camera class then reassigns its position as a new object…

public void setPosition(Vector2f pos) {
    this.position = new Vector2f(pos);
}

If this is happening 60 times per second, am I generating 60 obsolete objects that will require garbage collection at some stage? If so, should I be getting into the habit of assigning the new position as shown below?

public void setPosition(Vector2f pos) {
    this.position.x = pos.x;
    this.position.y = pos.y;
}

I doubt this will affect performance in any noticeable way, but I would like to make sure that I am developing good habits as I go with programming.

Cheers guys.

Generally speaking… it will improve performance (measurably) if you consistently apply it just about everywhere.

The important thing though is not to go refactoring all your code to do this until it actually proves to be some sort of performance issue. 60 objects a second is absolutely nothing in the grand scheme of things. Even 600 is nothing. Or 6,000. Maybe at 60,000 you might start to see some issues.

Cas :slight_smile:


This definitely is the better solution:

Yes, in this case you won’t notice any performance effect, but in larger games where this happens about 200 times per frame you will probably notice performance problems :slight_smile:

You can use object pools for this kind of reusable object. :point:

Another generalisation here but I’d avoid object pools unless your objects are expensive to construct.

Cas :slight_smile:

I always use object-pools… left-over from my J2ME days :wink:

For simple things like vectors and matrices, I keep a reusable stack (similar to an object pool). Whenever I need a temporary object I pop one from it (if the stack is empty, I create a new instance). Once I’m done with that object, I push it back onto the stack.
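The reusable-stack idea above could look roughly like this (a sketch only; the names `Vector2fStack`, `obtain` and `free` are illustrative, not from any particular library, and the stand-in `Vector2f` would be replaced by your own class):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A stand-in Vector2f for illustration; substitute your own class.
class Vector2f {
    float x, y;

    Vector2f set(float x, float y) {
        this.x = x;
        this.y = y;
        return this;
    }
}

// A minimal reusable stack of temporary vectors.
class Vector2fStack {
    private final Deque<Vector2f> stack = new ArrayDeque<>();

    // Pop a temporary vector, creating one only if the stack is empty.
    Vector2f obtain() {
        Vector2f v = stack.poll();
        return (v != null) ? v : new Vector2f();
    }

    // Push a finished vector back for later reuse.
    void free(Vector2f v) {
        stack.push(v);
    }
}
```

Usage would be something like `Vector2f tmp = stack.obtain().set(x, y); … stack.free(tmp);`. Note that nothing stops you from freeing the same vector twice, so this only works if you’re disciplined about ownership.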

Thank you for the responses there guys, that puts it into perspective for me!

The short answer: Don’t worry about premature optimization. If you aren’t noticing any performance hits, don’t go looking to fix stuff that ain’t broke. (What princec said.)

The longer answer:


And although you shouldn’t worry about premature optimization, this is indeed one obvious area for improvement. Garbage collection doesn’t really matter on a modern PC; you’d be shocked to see how many objects are being created behind the scenes by whatever library you’re using and by Java itself. But it can matter in places like Android development.

That’s certainly one approach. Another would be to make your Vector2f class immutable so you don’t have to worry about copying it (though that might end up creating even more objects, if you have a bunch of mutator functions).
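A minimal sketch of the immutable variant (the class is renamed here only to avoid clashing with a mutable Vector2f; the names are illustrative):

```java
// Every "mutator" returns a fresh instance instead of modifying this one,
// so the vector can be shared freely without defensive copies.
final class ImmutableVector2f {
    final float x, y;

    ImmutableVector2f(float x, float y) {
        this.x = x;
        this.y = y;
    }

    // Returns a new vector; neither operand is changed.
    ImmutableVector2f add(ImmutableVector2f other) {
        return new ImmutableVector2f(x + other.x, y + other.y);
    }
}
```

With this, setPosition could simply store the reference (`this.position = pos;`) with no copy at all, because nobody can mutate it out from under you.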

Do some profiling. Find out how many Objects are being created behind the scenes, and what percentage of those are yours. Find out how much time your code is spending on creating and garbage collecting those objects. That’s the only way to actually answer this question.

As a minor case study in “behind the scenes”:

Working on Battledroid last year, I was having a few frame-rate issues (30fps - yuk).

Now I’d written all my code to that point in the nicest possible style, and the particular case that interested me was this construct:

for (Entity<?> e : entities) { .... }

where entities is a List of some sort (and in fact, almost always an ArrayList).

I was doing this kind of thing so often, and in so many places, every single frame, that I was creating tens of thousands of iterators a second. This was firstly slightly slower than your traditional for loop, and secondly it created tens of thousands of garbage objects, which caused a noticeable judder every few seconds when they all got collected.

So I replaced them all with the usual:

int n = entities.size();
for (int i = 0; i < n; i++) { Entity<?> e = entities.get(i); ... }

… and this alone was enough to stop the juddering and mostly get my frame rates back to 60fps (with vsync on, taking just a tiny amount over 17ms per frame suddenly halves your frame rate, see). So a small change with a surprisingly large impact.

Now, why this is relevant is because for a number of years I’d been labouring under the impression that HotSpot was smart enough to allocate those iterators on the stack via escape analysis, so they’d never create any garbage; and furthermore that they’d be pretty efficient too. Well, after a bit of profiling it turns out: nope.

Cas :slight_smile:

Yep! I thought exactly the same, and also saw tons of iterator objects when profiling with VisualVM… It’s a pity though! You really shouldn’t have to worry about this kind of optimization. :cranky:

No no no! (no…!) :point: :-*

The profiler forces the creation of the Iterator instance by injecting bytecode that prevents the stack allocation. When you disable the profiler you’re (almost) guaranteed that the Iterator is never created. Maybe you made other optimizations at the same time that yielded your performance bump?

A better way to ‘profile’ memory usage when stack allocation is relevant is to make heap dumps. But this is normally a Royal PITA.
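For what it’s worth, on a HotSpot JVM you can at least trigger a heap dump programmatically via `com.sun.management.HotSpotDiagnosticMXBean` rather than attaching a tool; a sketch (the class name `HeapDumper` and the file name are just placeholders):

```java
import java.io.File;
import java.io.IOException;
import java.lang.management.ManagementFactory;

import com.sun.management.HotSpotDiagnosticMXBean;

class HeapDumper {
    // Writes a heap dump in HPROF format to the given path.
    // Note: dumpHeap refuses to overwrite an existing file.
    // live = true restricts the dump to reachable objects only.
    static void dump(String path, boolean live) {
        try {
            HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
            bean.dumpHeap(path, live);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        File f = new File("before-gc.hprof");
        f.delete(); // make sure the path is free
        dump(f.getPath(), true);
        System.out.println("heap dump written: " + f.length() + " bytes");
        f.delete();
    }
}
```

You can then open the .hprof file in VisualVM or Eclipse MAT and count your own instances; still a PITA, but a slightly smaller one.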

Hehe, yeah, a PITA indeed ;D

And yeah, I think you already mentioned that somewhere on the forum, right? I vaguely remember that now! :wink: ok ok… will keep that in mind! Thanks for your knowledge! 8)

Well, the one thing I did measure correctly was the jump from 30fps back to 60fps and the removal of the periodic jitter: but I’ll never be entirely sure now…

Cas :slight_smile:

Maybe the jitter was due to the profiler always being enabled? Anyhoo, maybe you’re right and Iterable instances are not reliably optimized away yet. I might do some quirky benchmarks to verify whether it works in non-trivial cases.

Probably no difference between this:

    for (int i = 0, n = entities.size(); i < n; i++)

and this:

    int n = entities.size();
    for (int i = 0; i < n; i++)


I can’t remember what prompted the change any more, but I started using the former version several years ago. It seems to me there would be no particular advantage now, if there ever was any.


There’s no difference in it no, except that n is nicely tucked away in the scope of the for loop in the one-line version, if you’ve no other use for it later.

Cas :slight_smile:

if you go bonkers it makes a difference. in that loop [icode]entities.size()[/icode] is called every iteration - which costs the method call (!).

it’s also better to iterate backwards: [icode]for( int i = entities.size(); i-- != 0; )[/icode] is 0.000000001% faster cos testing [icode]!= zero[/icode] is cheaper than testing [icode]< some value[/icode] - but won’t be “unrolled” if that’s available anyway.

what i learned about [icode]new[/icode] is what princec said. using an object pool is only good if the ctor is more expensive than fetching an object from an array - which is pretty expensive actually.

in the end, if all the vm-magic kicks in and things get replaced on-stack it doesn’t matter - on the other hand, this is not really “stable” and can still create “spikes”, not to mention the “warm start”. if that’s an issue the best way to treat objects (at least for me) is not to have any. just use memory.

I think you misread his loop there - n is only initialised at loop start.

Cas :slight_smile:

Re-read what philfrei posted and you’ll see the only difference is the scope of the ‘n’ variable :point: In C this matters, in Java it does not.

If you inspect the native code HotSpot generates, you’ll see backwards looping is actually not optimized as aggressively - let alone unrolled. Only iterate backwards if that makes sense in the logic.

Object pooling by definition thwarts escape analysis. Stack allocation allows ‘objects’ to be treated as a series of local variables, which potentially means no memory I/O at all - let alone the levels of indirection (and likely cache misses) of an array access plus a field access on the retrieved object.

That prevents the extreme performance wins of stack allocation (‘flattening objects into local variables’). I benchmarked this thoroughly when working on LibStruct, and doing ‘raw memory access’ (either through Unsafe pointers or primitive-array access) was not even in the ballpark of stack allocation: raw memory access was 4-10x slower than letting HotSpot do its magic, even when ALL data was in L1.