Windows process memory going crazy with Xith and JOGL

Hi there, I know this might sound more appropriate for the JOGL or Xith forum, but it might still turn out to be a completely different issue.

My Xith-based app uses tons of geometry and textures. We are trying to be smart about it by implementing texture and geometry managers, since the nature of the application requires reusing the same geometry all over the place (just under different transforms). Xith has a shared-copy feature which is supposed to do instancing, and after some experimentation Xith’s sharedNode feature appeared to work fine. Our tests showed that shared nodes are a huge gain when the heap is profiled. On the other hand, the process size goes crazy with memory usage and hardly ever goes down; even after a complete cleanup (all objects destroyed and GCed), the process memory usage doesn’t drop.
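Just to make “geometry manager” concrete, this is roughly the shape of the thing (a minimal sketch, not our actual code; the `Geometry` class here is only a placeholder for the real Xith type):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a geometry cache: the same instance is handed out for
// every request with the same key, so scene nodes share it and only the
// transforms differ. "Geometry" is a placeholder for the real Xith type.
public class GeometryManager {

    /** Placeholder for the real geometry class. */
    public static class Geometry { }

    private final Map<String, Geometry> cache = new HashMap<String, Geometry>();

    public synchronized Geometry get(String key) {
        Geometry geom = cache.get(key);
        if (geom == null) {
            geom = loadGeometry(key);   // build/load only once per key
            cache.put(key, geom);
        }
        return geom;                    // every caller gets the same instance
    }

    public synchronized void clear() {
        cache.clear();                  // drop all references so the GC can reclaim them
    }

    private Geometry loadGeometry(String key) {
        return new Geometry();          // stub: the real app builds Xith geometry here
    }
}
```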

My assumption is that a leak like this can only come from somewhere in Xith or JOGL that uses NIO buffers. Does anyone know whether NIO buffers can cause this behavior if they are not managed properly?

In some of our tests, heap usage is around 750 MB while Java has about 950 MB allocated, yet the process size goes up to 1.7 GB. After cleaning up all the 3D data and garbage collecting explicitly, Java heap usage drops to 125 MB while the 950 MB stays allocated (expected). If we start adding objects to the heap again, the process size keeps going up, and the app eventually crashes when the process reaches 2 GB and Windows runs out of virtual memory.
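For what it’s worth, my understanding (please correct me if it’s wrong) is that direct NIO buffers are allocated outside the Java heap, so a heap profiler never sees them, and the native memory is only released once the tiny Buffer object on the heap gets collected. A minimal standalone sketch, nothing Xith- or JOGL-specific, that shows the gap between heap usage and process size:

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Sketch only: direct buffers live outside the Java heap, so the Runtime heap
// numbers stay small while the process grows by roughly the allocated amount.
// The native memory is freed only when the small ByteBuffer objects themselves
// are collected. (Larger totals may need -XX:MaxDirectMemorySize on older VMs.)
public class DirectBufferDemo {
    public static void main(String[] args) throws InterruptedException {
        List<ByteBuffer> buffers = new ArrayList<ByteBuffer>();
        for (int i = 0; i < 32; i++) {
            buffers.add(ByteBuffer.allocateDirect(1024 * 1024)); // 1 MB each, off-heap
        }
        Runtime rt = Runtime.getRuntime();
        System.out.println("heap used: "
                + (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024) + " MB");

        buffers.clear(); // drop the references...
        System.gc();     // ...a (full) GC is what actually lets the native memory go
        Thread.sleep(10000); // keep the process alive to compare in Task Manager
    }
}
```

Running that and watching java.exe in Task Manager should show the process growing by roughly the allocated amount while the reported heap stays tiny.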

Does anyone know whether Java 3D handles instancing better, and whether it has any similar memory leaks? I am willing to switch to another API, preferably Java 3D, rather than fix Xith at a low level.

Once the VM has grabbed memory from the OS, it does not return it until it is shut down. It probably assumes that if memory usage got up to a certain level once, it can get there again.

That’s right, but the process size (with 950 MB of heap allocated) becomes 1.7 GB (almost twice the size), and it keeps growing even when you clean up the heap and have 800 MB free. So what happens is that the JVM has around 800 MB of free allocated heap and is happy, but the process eventually crashes as it hits 2 GB and depletes all the available virtual memory on the system.

Are you sure?

I thought there was a limit on how much free memory the JVM keeps ready beyond the minimum heap size, and that it would release memory back if certain thresholds were exceeded.

You’re both right.

The Sun VM will release memory back to the system. However, to avoid thrashing it won’t do it right away; it waits a bit to see if the memory requirements have really gone down.

…and it’s tuneable, too.

Cas :slight_smile:

… oh I see, it’s tunable.

On a completely different matter: Where does your nick come from? See, I am German and I nearly spat my coffee all over my desk the first time I realized it. No offence intended if this is your real name, of course :wink:

Doesn’t that mean “The Good Sausage”?

Edit: Wait, Der is “The” Zer is… very? “Very Good Sausage?”

Is that some sort of sexual joke? ;D

Actually it’s “Very Good Escalope”, but totally misspelled, so you only recognize it if you say it out loud :slight_smile:

It’s just that earlier that day I had one Zer Gut Escalope for lunch… :))

Back to the topic… how would one go about tuning the JVM’s release of allocated heap back to the system (Windows)? We have found out that the problem we are experiencing is related to geometry that we create on the fly, since we need to dynamically update lots of geometry. What would be the best approach for allowing lots of geometry updates: VBOs, display lists, etc.?
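From what I’ve gathered so far (corrections welcome): heap shrinking in the Sun VM is governed by -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio, the heap can never shrink below -Xms, and direct NIO buffer memory is a separate pool capped by -XX:MaxDirectMemorySize. Here is a small sketch for watching used vs. committed heap from inside the app (plain java.lang.management, nothing Xith-specific), just to see whether the VM ever hands heap back:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Sketch: print used vs. committed heap over time; if the committed size
// shrinks after a cleanup, the VM is giving heap back. Run e.g. with
//   java -Xms64m -Xmx512m -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 HeapWatcher
// (flag values are examples only, not a recommendation).
public class HeapWatcher {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        while (true) {
            MemoryUsage heap = memory.getHeapMemoryUsage();
            System.out.println("heap used = " + heap.getUsed() / (1024 * 1024)
                    + " MB, committed = " + heap.getCommitted() / (1024 * 1024)
                    + " MB, max = " + heap.getMax() / (1024 * 1024) + " MB");
            Thread.sleep(2000);
        }
    }
}
```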

I have another related problem I’m currently investigating:

The game starts with a min heap size of 80 MB and a max of 150 MB, but heap usage never really reaches 80 MB (70 MB max, 40 MB in normal use).

On the other hand, looking at the java.exe process memory usage, every level played adds about 5 MB to the total memory used (well, +7 MB, then -2 MB at the end of the level).
So from the Java side GC is fine and VM profiling shows no memory leak, but the NIO buffer usage keeps increasing (up to 300 MB in my test, and still growing).

So what are the GC ergonomics on the native buffer side? (I must say I currently call System.gc() after every level load.)
If it’s a memory leak on the Xith side, what strategy could anybody suggest for finding the bug?
(I’m already browsing the entire Xith rendering codebase looking for possible leaks.)

Is it good practice to use many small NIO buffers when calling OpenGL functions? (Can they cause memory fragmentation that prevents GC?)

Any help will be most appreciated !

Lilian :slight_smile:

Direct NIO buffers are backed by tiny Java-heap objects, which are unlikely (low priority) to get GCed once they have survived eden space.

A full GC should take care of them though.

Unless Xith keeps references to these Java objects somewhere, I’d assume it is the “general problem” of NIO buffers.

Better to do pooling (and, for small buffers, slice() a larger buffer into tiny pieces), as in the sketch below.
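Something like this is what I mean by slicing, a minimal sketch only (not Xith’s actual buffer handling): allocate one big direct buffer up front and hand out small views of it, so you never churn through piles of tiny direct buffers waiting for a full GC.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Minimal sketch of slicing one big direct buffer into small pieces instead
// of allocating many tiny direct buffers (the idea only, not Xith's code).
public class SlicingPool {
    private final ByteBuffer backing;

    public SlicingPool(int capacityBytes) {
        // one direct allocation up front, in native byte order for OpenGL use
        backing = ByteBuffer.allocateDirect(capacityBytes).order(ByteOrder.nativeOrder());
    }

    /** Hands out a small FloatBuffer view carved out of the big backing buffer. */
    public synchronized FloatBuffer nextFloats(int floatCount) {
        int bytes = floatCount * 4;
        if (backing.remaining() < bytes) {
            throw new IllegalStateException("pool exhausted, enlarge the backing buffer");
        }
        backing.limit(backing.position() + bytes);
        FloatBuffer slice = backing.slice().asFloatBuffer(); // view into the backing memory
        backing.position(backing.limit());                   // advance past the slice
        backing.limit(backing.capacity());                   // restore the full limit
        return slice;
    }

    /** Start reusing the pool from the beginning (e.g. once per frame or level). */
    public synchronized void reset() {
        backing.clear();
    }
}
```

Callers fill the returned FloatBuffer and pass it to the GL as usual, and at a natural point where the data is no longer needed (end of frame or level) you reset() the pool instead of allocating fresh buffers.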