List of JVM options

Is this not a question of scale?

Particles etc. are newed & ‘freed’ on the order of tens per frame. I would be surprised if this were a significant cost, especially on newer VMs.

In my software renderer, I do use pools for my edge lists, as I have to reuse somewhere in the region of thousands per frame, and the pool allocator also passes back the correct object based on my rendering flags - so it saves GC as well as centralising the flag parsing. Doing both gave me a noticeable speed increase (I am still on 1.1 as well).
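
Roughly, the shape of it is something like this - a simplified, hypothetical sketch, not my actual code (the real thing handles more flags and fills in the edge data):


abstract class Edge {
    Edge next; // intrusive free-list link
}

class FlatEdge extends Edge { }
class GouraudEdge extends Edge { }

class EdgePool {
    static final int GOURAUD = 0x01; // hypothetical render flag

    private Edge flatFree;
    private Edge gouraudFree;

    // Hands back the right edge type for the current render flags,
    // reusing a freed instance when one is available
    Edge get(int renderFlags) {
        if ((renderFlags & GOURAUD) != 0) {
            Edge e = gouraudFree;
            if (e == null)
                return new GouraudEdge();
            gouraudFree = e.next;
            return e;
        }
        Edge e = flatFree;
        if (e == null)
            return new FlatEdge();
        flatFree = e.next;
        return e;
    }

    // Pushes the edge back onto the matching free list
    void release(Edge e) {
        if (e instanceof GouraudEdge) {
            e.next = gouraudFree;
            gouraudFree = e;
        } else {
            e.next = flatFree;
            flatFree = e;
        }
    }
}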

As has been mentioned many times before though, premature optimisation is usually wrong - if it ain’t broke, don’t fix it.

On the other hand, pooling in Java is trickier than in C++: because there is no explicit ‘delete’, finding the points where you should be releasing objects back into the pool can be non-trivial. Failing to track them all could render your whole pooling scheme useless without you even knowing…

  • Dom

SE generates a lot more particles than that :slight_smile: But really it’s more a question of “expense of construction”. You can easily tune the GC now to cope with your normal object lifecycles - if your particles live for 25 frames and you have on average 1000 particles alive you can tune the new generation size to fit. Using that graphicy tool jvmstat to profile the VM is an excellent way to do it.
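
For example, something like this on the command line (sizes purely illustrative - tune them against what jvmstat shows you):


java -verbose:gc -Xmn32m -XX:SurvivorRatio=8 MyGame


-Xmn pins the young generation size so short-lived particles live and die in the nursery, -XX:SurvivorRatio sets the eden-to-survivor-space ratio, and -verbose:gc prints each collection so you can check that nothing is being promoted to the old generation.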

Cas :slight_smile:

[quote]There are no pools at all in Super Elvis and it hardly stutters at all
[/quote]
That there is the key statement. ‘Hardly’ is completely unacceptable in a commercial application. It must be no stutter. A frame cycle-time difference of 20 ms is detectable by the human eye - less if you’re rendering rapid motion like a roll. In all of these cases, our clients would look at that and call it a failure. If you released a commercial game like that, it would be a failure.

[quote]You can easily tune the GC now to cope with your normal object lifecycles
[/quote]
In a Web Start app? What if your application has to run across multiple JVM versions, each acting differently? The JVM internals are so unstable that what works well on one dot release doesn’t work well on another, nor on the same dot release on two different hardware platforms. Professional application development has to be robust, and a highly tuned environment setup is going to come back to haunt you in the end: you’ll have tuned it so well to your development machine that an end user’s machine will break it completely.

That’s true about the deployment, but life’s too short to strive for 100% perfection :wink: In my video work stuff, I can’t drop a single frame and that’s a different kettle of fish. But in Super Elvis we’re talking about the odd frame every 30-130 seconds. No-one notices.

Cas :slight_smile:

[quote]I’ve seen so much awful code out there precisely because of people making similar assumptions like what you guys are here and having people write off Java as a development language because of it.
[/quote]
I’m surprised by that statement. What assumptions are you talking about? If you meant my advice that GC can work wonders, that is certainly not an assumption. It is a real-life test case that you can try and verify.
If you read (don’t even read carefully, just read), my statement is not ‘man, don’t bother, GC handles it for you in all cases’. Your answers look like you understood the opposite of what I and others said. There are no perfect solutions, but you need to understand the pros and cons of each and use them wisely. Pooling is not the answer to everything.
My test shows clearly that we can rely on GC for some operations. Cas’s does too. Would you share yours, so maybe we can refine the conditions under which short-lived objects can’t be handled efficiently by GC?

I’ve seen so much awful code due to memory pooling. It’s unmaintainable, heavy, introduces bugs and throws away big advantages of using Java.

Sure. Go to Sourceforge and look for the DIS-Java project. Don’t have the name/URL of it right now. Mostly run by the people at the Naval Postgraduate School.

That’s the first project. The second project is Xj3D itself, though that’s far harder to quantify. Where we’ve had to deal with silly amounts of object creation and GC is in Sun’s own libraries in java.util. HashSet and HashMap are particularly troublesome as every time you make a query to the classes you create a new Entry instance. As we’re making thousands of calls to the HashSet/Map classes per frame (for various reasons), this was causing large amounts of stuttering in the frame rate due to GC.

[quote] I’ve seen so much awful code due to memory pooling. It’s unmaintainable, heavy, introduces bugs and throws away big advantages of using Java.
[/quote]
You guys keep saying that, but there’s no reason why it should be. Object pooling is an extremely simple setup. Most of the time it’s no more than 20 lines of code for all of the management functions. If it is such a problem, the problem is far more endemic than the object pooling itself. It’s an application design issue. Quite simply, the person(s) don’t know how to design and implement a clean and modular system. Point the finger at the coders, not the implementation.

My pooling code:


private static myClass free;
private myClass next;

// Function to allocate a new object
static myClass GetNew()
{
    myClass ret = free;
    if(ret == null)
        ret = new myClass();
    else
        free = ret.next;
    return ret;
}

// Function to return object to pool
void Release()
{
    next = free;
    free = this;
}

// Function to flush all 'free' objects in the pool
static void CleanUp()
{
    while(free != null)
    {
         myClass temp = free;
         free = free.next;
         temp.next = null;
    }
}

That’s 30 lines of code (including the blank lines for formatting nicely :stuck_out_tongue:)

The ‘GetNew()’ just replaces new calls, but the ‘Release()’ call is:
a) non-intuitive to people who haven’t used C before.
b) Adds extra code.

  • Dom

PS: Pooling still falls under the category of optimisation, so using it blindly without checking whether it is necessary still comes under the heading of premature optimisation and so is EVIL :wink:

<edit: Code indents appear to have gone a bit screwy, sorry>

[quote]Where we’ve had to deal with silly amounts of object creation and GC is in Sun’s own libraries in java.util. HashSet and HashMap are particularly troublesome as every time you make a query to the classes you create a new Entry instance. As we’re making thousands of calls to the HashSet/Map classes per frame (for various reasons), this was causing large amounts of stuttering in the frame rate due to GC.
[/quote]
What a weird decision. If I understood correctly, you got bothered by Sun’s HashMap implementation, and instead of correcting that by doing your own implementation, you took the risk of settling on pooling? I guess that you also had to remove the HashMap calls, or do your own implementation… Well… Any decision can be justified, and if this is what was done, I’d like to know why…

Pooling itself is easy, or it can be easy depending on how nicely you want to play with the system. Implementing a nice pooling scheme that takes care of memory problems can be more than 20 simple lines. If pooling is to you what CrystalSquid just pasted, well, sure, it is easy. It is also very simplistic. Nevertheless, you forget the burden it adds to a project or its subparts. When you add pooling, you tend to change the architecture to bend to the implementation, which is definitely bad. Pooling never comes free.
But, as you said, people might not implement pooling correctly (even if in the current case it can hardly go wrong), which is one of the reasons it is ‘dangerous’ and why it should not be a commonly proposed solution. Recommending pooling to anyone who does not have complete and up-to-date knowledge of GC looks as risky to me as recommending the opposite. It can also do more harm than good.
Nevertheless, I’m not saying your decision to pool was bad. Personally, I would not have done GoSub this way one year ago, as generational GC was not as refined then as it is now. I’d be curious to see how the old sources of your projects would run on current VMs, and whether pooling would still be necessary.
Anyway, the concept of pooling is now what generational GC does internally, and the interest of doing double caching looks nonexistent to me.

[quote] HashSet and HashMap are particularly troublesome as every time you make a query to the classes you create a new Entry instance.
[/quote]
You mean having to create a key wrapper object for each query? I don’t think that calling get(Object) on a map creates any object itself.

Have you looked at the Trove collection classes? They have specialized collections for primitive types to avoid wrapper creation.
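
For instance, a hypothetical sketch (the package name is from the old gnu.trove releases, so check the version you pick up):


import gnu.trove.TIntIntHashMap;

public class ScoreTable {
    // Maps entity id -> score using primitive ints on both sides,
    // so neither put() nor get() creates an Integer wrapper
    private final TIntIntHashMap scores = new TIntIntHashMap();

    public void setScore(int entityId, int score) {
        scores.put(entityId, score);
    }

    public int getScore(int entityId) {
        // Old Trove returns the 'no entry' value (0 by default) when absent
        return scores.get(entityId);
    }
}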

As far as pooling is concerned, http://jade.dautelle.com/ is probably a reasonable solution - the author is a real fan of object pools and manages to avoid object creation entirely in most cases, in quite a transparent way.

Artur

[quote] What a weird decision. If I understood correctly, you got bothered by Sun’s HashMap implementation, and instead of correcting that by doing your own implementation, you took the risk of settling on pooling?
[/quote]
Not quite. We reimplemented them using pooling internally, so the Entry/MapEntry objects are pooled. As things were added and removed from the Set/Map, the entry objects were pulled from, and released back into, an internal pool. There is no external access to the pool.

CrystalSquid - my pooling style is a lot like yours, except I never make it accessible to the outside world. If I need something pooled, I don’t do it as part of the interface to the object; I have the class that’s using the object do its own pooling. That way pooling is an individual choice of the user, and it keeps the code clean by not exposing pooling to the outside. Also, your code would fail in a multithreaded environment, as there’s no protection against one thread reaching in to grab an object while another is releasing one at the same time.

Here’s an example of the (non-synchronised) pooling from the bottom of our hashset class:


private Entry getNewEntry()  {
  Entry ret_val;

  // Reuse a cached entry if one is available, otherwise allocate a new one
  int size = entryCache.size();
  if(size == 0)
    ret_val = new Entry();
  else
    ret_val = (Entry)entryCache.remove(size - 1);

  return ret_val;
}

// Return an entry to the internal cache for later reuse
private void releaseEntry(Entry e)  {
  entryCache.add(e);
}
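
If you did need the shared static pool to survive multithreaded access, the minimal fix is to lock both operations on the same monitor - a sketch only, not code from either project:


// Sketch: both methods are static synchronized, so they lock the class
// monitor and a get can never interleave with a release
static synchronized myClass GetNew()
{
    myClass ret = free;
    if(ret == null)
        ret = new myClass();
    else
        free = ret.next;
    return ret;
}

static synchronized void Release(myClass obj)
{
    obj.next = free;
    free = obj;
}


Note that simply marking the original instance-level Release() as synchronized wouldn’t help: it would lock the individual object, not the class, so it would never contend with GetNew().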

SciVis applications, at least the ones I’m involved with, do something similar to that when an opportunity exists. A descriptive name for this could be local-temporary-object-recycling: local because the pooling impact is restricted to a class, or a bunch of very related classes comprising a single visual component, and temporary because the pool is destroyed when the visual component is destroyed. The latter is typically done because anticipating what goofy thing the user might request next is quite hard in a scivis application. With this approach, because the parameters defining the visual component are clearly known, it is easier to tune the local pool size if you want to. This kind of local object recycling, IMO, can be seen as a more restricted version of the generic “Object Pooling”.
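
In code the pattern is tiny - a hypothetical sketch (class names invented for illustration):


import java.util.ArrayList;

class IsoSurfaceComponent {
    // Recycled Vertex instances; the pool lives and dies with this component
    private final ArrayList vertexCache = new ArrayList();

    Vertex newVertex() {
        int size = vertexCache.size();
        if (size == 0)
            return new Vertex();
        return (Vertex) vertexCache.remove(size - 1);
    }

    void recycle(Vertex v) {
        vertexCache.add(v);
    }

    // Called when the visual component is destroyed: the pool goes with it
    void destroy() {
        vertexCache.clear();
    }
}

class Vertex {
    float x, y, z;
}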

And again from a scivis perspective, GC pauses before a visual component is created or after it is destroyed are permissible, but certainly not during the course of its animation. I can understand the difference here with respect to games, though - you can pre-allocate only so many objects, and deciding when to create/destroy objects on the fly is a much more difficult problem.

Overall, as I might have already expressed many moons ago: (1) I usually code with a mistrust of the GC; (2) GC fine-tuning is futile because of the very cross-platform nature of my application, and because I have absolutely no clue even about the Xmx memory size that a user might choose to specify. Having said that, with respect to games, if you know your (major) target platform, then using something like jvmstat may be a worthwhile exercise.

Yup. And guess where I mainly work - sci-vis :slight_smile: From a user’s perspective, stutter is not acceptable, but slowing the frame rate down to be consistent is OK. Any pause due to GC is not acceptable. I would be highly surprised if any gamer would accept stutter, even slight ones.

[quote]I would be highly surprised if any gamer would accept stutter, even slight ones…
[/quote]
I guess you haven’t played Doom3 on ‘recommended hardware’… I think you’ll be surprised to find how much gamers would accept if the game is good enough =)

Yes - the Unreal engine, for example, has garbage collection, and there is the occasional stutter there. I’m not 100% certain this is in fact due to Unreal’s GC (there’s no way to tell for sure), but still…
Also, 3D games usually have varying framerates anyway (except maybe when run on very fast hardware with vsync on)…

Stuttering’s caused by all sorts of things - texture uploads, geometry uploads, sound decodes, frame-time misses, some background process waking up for some reason - and GC is just another of those tiny, intermittent things that you barely notice now they’ve added the tuning options, if it happens at all. Mithrandir has a point about portability - but that’s his problem :wink: So long as I get my stuff running peachy on the 3 big 'uns I’m not so bothered!

Cas :slight_smile: