Checking for memory problems with long runs ...

Probably the way I’m doing this is the wrong approach. I let my game run for a long time, maybe run it while I’m at work, to see if it’ll crash by the time I get back. Sometimes I find that it crashes within a few hours. Then I’ll know to adjust the heap size or put some =null in places. Does anyone else test like this, or am I the only one? Is there a better method that does the same job?

Thanks!

If you are getting a crash after a long run then it means you have an object leak: one or more references to objects that you should be cleaning up but aren’t.

The best way to find these is with a decent Java profiler. With a profiler you shouldn’t have to run all that long: run a few minutes, then break and inspect your heap to see which objects are still alive and why.

But really, if you are running into such errors it means you are doing something VERY wrong. As you get more experience with Java you should rarely, if ever, see such problems. That’s why we have a garbage collector, after all.
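For what it’s worth, here is a minimal, made-up sketch of the kind of object leak a profiler turns up in games all the time (ParticlePool and Particle are hypothetical names, not anyone’s actual code): a list that only ever grows because nothing removes dead entries.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: an effect list that is added to but never pruned.
// A profiler heap inspection would show Particle instances piling up,
// all reachable through this one static list.
public class ParticlePool {
    private static final List<Particle> active = new ArrayList<Particle>();

    public static void spawn(Particle p) {
        active.add(p);          // reference kept forever...
    }

    public static void update() {
        for (Particle p : active) {
            p.tick();
        }
        // Missing: remove particles once isDead() returns true. Without that,
        // every Particle ever spawned stays reachable and the heap eventually
        // fills up, no matter how large you make it.
    }
}

class Particle {
    private boolean dead;
    void tick() { /* move, fade, eventually set dead = true */ }
    boolean isDead() { return dead; }
}
```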

Well, I use tools like profilers to find memory leaks, since just letting it run for a very long time and waiting for a crash isn’t really reliable.

However, I have leaks only very, very seldom … I am (especially when writing server software) a lot more scared about them than I need to be … but better to waste some hours being scared than to break down some company’s business :wink:

lg Clemens

Actually it depends very much on the nature of the crash. If you get an OutOfMemory condition, the VM will report it as one. If the game locks up, then it is usually graphics drivers or Java’s use of DirectX/OpenGL.
If you get an application fault, it is a bug in the VM, or, if you are using JNI libs like JOGL or LWJGL, it could be them.

I have found that there are serious bugs in the JVM that will cause a hard crash on long-term tests. Possibly related to the GC algorithm that you choose. It is difficult to isolate, but it was introduced after 1.5.0_01.

Good points by SWP.

If you have any native code, you COULD also be leaking memory there. That’s not something a Java profiler will show you.

In general, the more used a native library is, the more likely it is that leaks have been found and fixed, but that’s no guarantee.

I suppose if you want to track down native leaks, you might want to:

  1. ensure that every native resource allocated is owned by some Java object
  2. give that object some kind of unloadResources() method
  3. in the finalize() method, simply check to see if the resources have been unloaded and emit a warning or exception

Relying on finalize() for native cleanup is pretty sloppy, so I would just use it as a last-check to find Java objects that are getting GC’d without having released their resources, roughly like the sketch below.
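Something like this rough sketch, assuming a hypothetical NativeTexture wrapper (nCreate/nDestroy are stubs standing in for whatever JNI calls your library actually exposes):

```java
// Sketch only: a Java object that owns one native resource and whose
// finalizer does nothing except complain if cleanup was forgotten.
public class NativeTexture {
    private long handle;        // opaque pointer/id returned by native code
    private boolean released;

    public NativeTexture(int width, int height) {
        handle = nCreate(width, height);
    }

    // Step 2: explicit cleanup method the game is expected to call itself.
    public void unloadResources() {
        if (!released) {
            nDestroy(handle);
            handle = 0;
            released = true;
        }
    }

    // Step 3: last-chance check only -- never rely on this for actual cleanup.
    protected void finalize() throws Throwable {
        try {
            if (!released) {
                System.err.println("NativeTexture leaked: " + this
                        + " was GC'd without unloadResources() being called");
            }
        } finally {
            super.finalize();
        }
    }

    // Stubs standing in for the real JNI entry points.
    private static long nCreate(int width, int height) { return 1L; }
    private static void nDestroy(long handle) { /* would free native memory */ }
}
```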

It’s more than sloppy, it’s not guaranteed to work. The finalizer gets called when the object is actually collected, and there is no guarantee when, if ever, an object will be collected.

Over-use of finalizers can also give you GC problems, as it forces objects to persist beyond the eden space regardless of their actual lifetime.

Finalizers are bad news and really a deprecated feature. If you really need post-mortem cleanup then PhantomReferences are nominally better, though even then there are no guarantees of execution, and they suffer similar GC issues.
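If you do go the PhantomReference route, a minimal sketch looks something like this (NativeCleaner, HandleRef and nDestroy are illustrative names, not a real library API):

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;
import java.util.HashSet;
import java.util.Set;

// Sketch: post-mortem cleanup via PhantomReference. The native handle is kept
// in the reference object itself, because by the time the reference is
// enqueued the referent is no longer reachable.
public class NativeCleaner {
    private static final ReferenceQueue<Object> queue = new ReferenceQueue<Object>();
    private static final Set<HandleRef> pending = new HashSet<HandleRef>();

    static class HandleRef extends PhantomReference<Object> {
        final long handle;
        HandleRef(Object owner, long handle) {
            super(owner, queue);
            this.handle = handle;
        }
    }

    // Call when the native resource is allocated.
    public static void register(Object owner, long handle) {
        pending.add(new HandleRef(owner, handle));   // keep the ref itself alive
    }

    // Call periodically (e.g. once per frame) to release handles whose owners
    // have been collected. Still no guarantee *when* collection happens.
    public static void drain() {
        HandleRef ref;
        while ((ref = (HandleRef) queue.poll()) != null) {
            pending.remove(ref);
            nDestroy(ref.handle);
        }
    }

    // Stub standing in for the real JNI call that frees the resource.
    private static void nDestroy(long handle) { /* would free native memory */ }
}
```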

In terms of finding native leaks, there are some native tools you could run the VM process under to find them, such as Purify, but they aren’t cheap.

I think you can get a free 30-day eval of Compuware’s BoundsChecker, which might help.