Interesting proposals: Java 9 and beyond

the cleaner is supposed to be called by the finalizer/GC, not by hand. a GC is issued every time you create a new direct buffer without enough “free” native memory available; that’s the last call for the cleaner if it didn’t run already. if you don’t want to use reflection, then don’t use it. everything should still work and hopefully not cause too many memory leaks. why not just use [icode]cleaner().clean()[/icode]?
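To illustrate, here is a minimal sketch of a best-effort eager free via the cleaner, done through reflection so it compiles on any JDK. The class name [icode]EagerFree[/icode] is illustrative; on JDK 9+ the [icode]setAccessible[/icode] call may fail without [icode]--add-opens[/icode], in which case we fall back to letting GC reclaim the memory.

```java
import java.lang.reflect.Method;
import java.nio.ByteBuffer;

// Sketch: best-effort eager release of a direct buffer's native memory by
// invoking its cleaner reflectively. JDK-internal behaviour; not guaranteed
// to work on all JDK versions, hence the graceful fallback.
public class EagerFree {
    public static boolean tryFree(ByteBuffer buf) {
        if (!buf.isDirect()) return false;
        try {
            Method cleanerMethod = buf.getClass().getMethod("cleaner");
            cleanerMethod.setAccessible(true);
            Object cleaner = cleanerMethod.invoke(buf);
            if (cleaner == null) return false; // e.g. a sliced/duplicated view
            Method clean = cleaner.getClass().getMethod("clean");
            clean.setAccessible(true);
            clean.invoke(cleaner);
            return true;
        } catch (ReflectiveOperationException | RuntimeException e) {
            return false; // fall back to letting GC/finalization reclaim it
        }
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocateDirect(1 << 20);
        System.out.println("freed eagerly: " + tryFree(buf));
    }
}
```

Note that only the original buffer carries a cleaner; views obtained via [icode]slice()[/icode] or [icode]asFloatBuffer()[/icode] return null from [icode]cleaner()[/icode], which is why the null check is there.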

i guess if deallocation is really an issue, then using direct buffers is the wrong idea in the first place. in that case any un-smart way to release them is fine :wink:

one workaround which works for me is using [icode]Unsafe.allocateMemory[/icode] + [icode]Unsafe.freeMemory[/icode] and the private [icode]DirectByteBuffer(long addr, int cap, Object ob)[/icode] constructor, which also leaves the cleaner null. that way it’s possible to bypass [icode]Bits.reserveMemory(long size, int cap)[/icode], which is absolutely useless unless you plan compatibility with heap buffers. you end up with reflection again, though.
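A sketch of that workaround, under stated assumptions: the [icode]theUnsafe[/icode] field trick is the usual way to obtain [icode]Unsafe[/icode], and the constructor signature [icode](long, int)[/icode] assumed here varies across JDK versions; on JDK 9+ the constructor lookup typically fails without [icode]--add-opens[/icode], so the code fails gracefully and still frees the raw memory it allocated.

```java
import java.lang.reflect.Constructor;
import java.lang.reflect.Field;
import java.nio.ByteBuffer;

import sun.misc.Unsafe;

// Sketch: allocate raw native memory with Unsafe, wrap the address in a
// DirectByteBuffer via a private constructor (which leaves the cleaner null),
// and free manually. Highly JDK-specific.
public class UnsafeBufferDemo {
    public static Unsafe unsafe() {
        try {
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            return (Unsafe) f.get(null);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        Unsafe u = unsafe();
        long addr = u.allocateMemory(1024);
        try {
            Constructor<?> ctor = Class.forName("java.nio.DirectByteBuffer")
                    .getDeclaredConstructor(long.class, int.class);
            ctor.setAccessible(true);
            ByteBuffer buf = (ByteBuffer) ctor.newInstance(addr, 1024);
            System.out.println("wrapped " + buf.capacity() + " bytes, no cleaner");
        } catch (ReflectiveOperationException | RuntimeException e) {
            System.out.println("constructor inaccessible on this JDK: "
                    + e.getClass().getSimpleName());
        } finally {
            u.freeMemory(addr); // we own this memory either way
        }
    }
}
```

Since the wrapper has no cleaner, nothing frees the memory for you: forget the [icode]freeMemory[/icode] call and you have a genuine native leak.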

I fully agree with you. However, the “way to free” a DirectByteBuffer was meant to be the Java way (using a finalizer). It sucks and it doesn’t really help you in any way; I know that problem well, too. I still have no idea how to make a free-memory command visible in the VarHandle API, since VarHandles don’t offer such a method in their interface. The API is not yet designed for what they would like to offer it as a replacement for. That is the reason I never (during the Unsafe war ;-)) claimed it to be a full replacement: it simply isn’t one. Just another reason why this can’t be designed internally at Oracle; those guys have no idea of off-heap memory, native memory interop, whatever.

I guess the real solution might be Project Panama which has to offer a nice solution to that for interop with native libraries that expect pointers to be passed around.

But here’s exactly the problem: Unsafe::allocateMemory and Unsafe::freeMemory will disappear eventually, which means there will be no way to do this anymore. As pointed out, I guess there will be a solution, but it is not available as of now :slight_smile:

The garbage collector is called when you risk running out of memory on the Java heap, not when you risk running out of memory on the native heap. Moreover, there is a reason why this cleaner will remain available in Java 1.9: many of us face the same problem, and many of us understand that releasing native resources is up to the programmer.

Look at what I’ve just explained above.

I already have to use -XX:MaxDirectMemorySize just to be able to launch my game.
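For reference, the flag mentioned above goes on the launch command line; a minimal sketch, where the jar name is a placeholder:

```shell
# Raise the cap on total direct-buffer memory (by default it is tied to the
# maximum heap size). "game.jar" is a placeholder for the actual application.
java -XX:MaxDirectMemorySize=2g -jar game.jar
```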

It’s a completely naive approach. Java 1.9 won’t allow the use of sun.nio.ch.DirectBuffer. Moreover, you can get a direct NIO buffer that doesn’t implement this interface, for example ByteBufferAsFloatBufferL, when you call ByteBuffer.asFloatBuffer(). I advise you to read the source code of those classes in OpenJDK and you’ll see what I mean.

It doesn’t make me laugh. It has been quite common since Java 1.4 to use direct NIO buffers to communicate with native code in several Java bindings, for performance reasons. In my humble opinion, trying to put a circle into a square is silly. I can accept that it’s up to me to manage the native memory, but I would appreciate a clean API to do it; that would be better than providing something completely far-fetched so that unmanaged memory behaves like managed memory. Java can’t know that the non-Java code still wants to access a part of this native memory.

They will be gone in Java 1.9.

well, thanks for the XasYbuffer heads-up, but I would advise you instead to look into the [icode]Bits.reserveMemory[/icode] method and when it is called. it should become clear to you that the API does not want the user to care about deallocation at all.

I know that it calls System.gc() when the first try fails, which explains why the creation of direct NIO buffers can take a long time (100 ms as a bonus…) when the JVM is about to run out of memory. If you don’t keep any references to useless direct NIO buffers (which is good practice even for managed objects), this mechanism might be enough, but keep in mind that System.gc() is a hint (“Calling the gc method suggests that the Java Virtual Machine expend effort toward recycling unused objects”): this call doesn’t force an immediate garbage collection, and if you risk running out of memory on the native heap but not on the Java heap, I don’t see why this piece of code would be enough to spare you from thinking about deallocation. This mechanism is better than nothing, but it’s very weak. When System.gc() doesn’t cause a garbage collection, no cleaner is called, no native memory is released, the second try 100 ms later fails and you get an OutOfMemoryError… which could have been avoided by calling the cleaners of some useless direct NIO buffers. We’re on Java-gaming.org, but this kind of problem can occur in CAD software too, particularly when you manipulate very large meshes and the Java heap is far less solicited than the native heap.

There is a cleverer mechanism, implemented elsewhere in Java (maybe in JavaFX?) and in some third-party libraries (Ardor3D and JogAmp’s Ardor3D Continuation), based on weak references and phantom references. When a direct NIO buffer is no longer referenced in your Java code, this case is detected and the cleaner is called. It’s not perfect, because you must be sure that your native code (OpenGL for example) isn’t still trying to access the buffer (you risk a crash of the JVM), and it can slow down the JVM a lot when releasing a lot of native memory. My last option consists in choosing the appropriate moment to call the cleaners, typically when going to another level. Imagine some lag in a first-person shooter when opening fire on numerous enemies; it’s not very fun :stuck_out_tongue:
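The reference-based scheme described above can be sketched as follows. This is an illustration, not the Ardor3D or JavaFX code; the class name [icode]BufferTracker[/icode] is made up, and a real implementation would free the associated native block where the comment indicates.

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.nio.ByteBuffer;
import java.util.HashSet;
import java.util.Set;

// Sketch: detect when a direct buffer becomes unreachable using a
// PhantomReference and a ReferenceQueue, then run our own cleanup.
public class BufferTracker {
    private final ReferenceQueue<ByteBuffer> queue = new ReferenceQueue<>();
    // Keep the reference objects themselves strongly reachable until reaped.
    private final Set<PhantomReference<ByteBuffer>> live = new HashSet<>();

    public PhantomReference<ByteBuffer> track(ByteBuffer buf) {
        PhantomReference<ByteBuffer> ref = new PhantomReference<>(buf, queue);
        live.add(ref);
        return ref;
    }

    /** Polls the queue; returns true if an unreachable buffer was detected. */
    public boolean reapOne(long timeoutMillis) {
        try {
            Reference<? extends ByteBuffer> ref = queue.remove(timeoutMillis);
            if (ref == null) return false;
            live.remove(ref);
            ref.clear(); // real impl: free the associated native memory here
            return true;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    public static void main(String[] args) {
        BufferTracker tracker = new BufferTracker();
        tracker.track(ByteBuffer.allocateDirect(1024)); // no strong ref kept
        boolean reaped = false;
        for (int i = 0; i < 20 && !reaped; i++) {
            System.gc(); // nudge the collector, as Bits.reserveMemory does
            reaped = tracker.reapOne(100);
        }
        System.out.println("unreachable buffer detected: " + reaped);
    }
}
```

This also shows the weakness discussed in the thread: detection happens only after a collection actually runs, so the timing is at the GC’s discretion.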

Not sure why you’ve got such a pressing need to dispose of native buffers so urgently…? Apart from mapped files (a known bugbear) the general usage pattern of direct buffers, particularly in regard to games, is to map one big one at start up and then … never touch it again until your game exits.

Cas :slight_smile:

Well, for our use case we register a huge memory space and manage it ourselves. It is a different use case; however, I kind of agree with gouessej that there should be a way to clean a native memory region at a user-defined point in time. It doesn’t even need to immediately GC the heap wrapper, but it has to deallocate the native memory portion it occupies.

Wait. Looking at the info from Aleksey Shipilev…HotSpot calls malloc? Ha ha ha!! You’re all fired!

[quote=“princec,post:27,topic:55436”]
This is just trading one problem for another. You need a (sub-)allocation strategy for the master buffer, and what you end up with is having to write a custom memory allocator. That’s a hard problem to solve. I don’t think there are many people who can write a custom malloc without horrible malloc/free efficiency, horrible fragmentation issues, or horrible concurrency/contention issues. I know it has worked for you in your games (would be interesting to hear details, btw), but other games may have very different allocation needs.

The solution for LWJGL 3 users: jemalloc

jemalloc is indeed a great solution, though as you say, I’ve no need of it particularly… I just allocate a giant VBO to render everything into and that’s all I ever need for the duration of the game. Well, actually that’s not quite what I do… I allocate VBOs in large chunks (4 MB or so at a time) and fill them up one at a time during a frame; if I run out of space, I allocate another one, and I never release them.
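The chunked approach above amounts to a bump allocator. Here is an illustrative sketch (the class name and the 4 MB chunk size are taken from the description, everything else is an assumption): carve slices out of large direct chunks, start a new chunk when the current one is full, and never free individual slices.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Sketch of a bump/arena allocator over large direct chunks. Individual
// slices are never freed; the whole arena lives until the game exits.
public class ChunkedArena {
    private static final int CHUNK_SIZE = 4 * 1024 * 1024; // 4 MB, as above
    private final List<ByteBuffer> chunks = new ArrayList<>();
    private ByteBuffer current;

    public ByteBuffer allocate(int bytes) {
        if (bytes > CHUNK_SIZE) throw new IllegalArgumentException("too large");
        if (current == null || current.remaining() < bytes) {
            current = ByteBuffer.allocateDirect(CHUNK_SIZE);
            chunks.add(current);
        }
        ByteBuffer slice = current.slice();  // view starting at current position
        slice.limit(bytes);                  // restrict it to the requested size
        current.position(current.position() + bytes); // bump the pointer
        return slice;
    }

    public int chunkCount() { return chunks.size(); }
}
```

As Spasi notes, this trades malloc for a simpler problem: it only works because the usage pattern never needs to free or reuse individual slices.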

Now I’m curious… other than the known irritation of MappedBuffers… how are others using buffers in their game code?

Cas :slight_smile:

How is that situation different from the default behaviour (if possibly more likely to be encountered)? Surely either way you need to keep the reference alive if it’s being accessed by native code?

You ever get straight to the point? :stuck_out_tongue:

I know how to slice buffers, but this is used differently in numerous scenegraph APIs, especially LibGDX, JMonkeyEngine and JogAmp’s Ardor3D Continuation. I agree with Spasi’s first paragraph: it just moves the problem, though it can work out better for you if you know how to write an efficient custom allocation/deallocation system.

I advise you to read the page about PhantomReference in the Java documentation. The garbage collector can detect that an object is unreachable but release its resources later; a PhantomReference can be used to schedule post-mortem cleanup actions in this case. The deallocator of the cleaner is quite robust: it detects whether it has already been called, so there is (almost?) no risk of a double free.

I’ll just write the dirty code supporting Java 1.6, 1.7, 1.8 and 1.9. In my humble opinion, it would be better to implement a free() method in java.nio.DirectByteBuffer (like in Apache Harmony) so that developers wouldn’t need to manipulate the cleaners; it would also make it possible to prevent the use of sun.misc.Cleaner. I would then have to rely on package-protected classes in java.nio, but I would no longer rely on Sun internal classes. I find this solution acceptable, especially if there is no clean replacement API in Java 1.9; it would leave some more time to write one, maybe for Java 1.10?
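To make the wish concrete, here is a hypothetical sketch of such an API from user land: a wrapper with an explicit, idempotent free(). The class name is invented; the "free" here only drops the Java reference, whereas a real free() inside java.nio (as in Apache Harmony) would release the native memory itself.

```java
import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of an explicit-free wrapper around a direct buffer.
// Double free is a no-op, mirroring the robustness of the cleaner's
// deallocator mentioned above.
public final class FreeableBuffer {
    private ByteBuffer buffer;
    private final AtomicBoolean freed = new AtomicBoolean(false);

    public FreeableBuffer(int capacity) {
        this.buffer = ByteBuffer.allocateDirect(capacity);
    }

    public ByteBuffer get() {
        if (freed.get()) throw new IllegalStateException("already freed");
        return buffer;
    }

    /** Idempotent: only the first call does anything. */
    public void free() {
        if (freed.compareAndSet(false, true)) {
            buffer = null; // real impl: release the native memory here
        }
    }

    public boolean isFreed() { return freed.get(); }
}
```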

Edit: princec, look at Bits.reserveMemory() and maybe you’ll understand my position a bit.

That’s not what I meant! You suggest that with that approach you need to keep a strong reference to the direct buffer to ensure it’s not freed during native access. I said that’s no different from the default case.

Incidentally, surely in this scenario it’s better just to remove GC and references from the equation? eg. using the ability for native code to allocate a direct buffer without a cleaner and handle disposal manually?

The code in Bits.reserveMemory is indeed completely terrible and cringeworthy, but we have to remember where the original Sun engineers were coming from when they designed native buffers. The problem, as they succinctly put it, is that if a thread deallocates a direct buffer manually, some other thread with a reference to that buffer can still read/write that memory even though it is no longer owned. The solution, as they saw it, would have required a deallocation check on every call in ByteBuffer… which would break optimisation and slow everything down to the point where using direct buffers no longer offered any advantage at all. Of course, we game developers couldn’t care less about such checks, but the JVM has uses outside games that involve security and guarantees.

So while I did once rant and spit and curse and scratch about why we can’t force buffers to deallocate… I eventually came to understand that if we’re allocating/deallocating them rapidly enough for it to be a problem, we were just using them in a manner that was neither intended nor even strictly necessary in 99% of use cases. The only one left that bugs me is mapped files.

Relatedly, does anyone have any idea about the latency/jitter/timeliness of reference queues?

Cas :slight_smile:

It’s already done in JOGL (and probably in some other third party libraries). It stops keeping such references when the buffers are deleted.

but I have to communicate with Java, I can’t do everything in native code.

That’s not what I said either! :wink: Check this code in JNA for example. It returns a direct byte buffer that doesn’t have a cleaner (see the constructor in DirectByteBuffer) and has to be freed manually.

Points are boring little zero dimension things. Meh.

Half joking, half serious. Joking because that choice shouldn’t matter since they shouldn’t be called that much. Half serious because the default malloc/realloc/free are multi-threaded general purpose heaps with runtime configuration options and can often be completely patched-out by the user. General purpose heaps generally suck at everything…they just suck less than special purpose allocators when the programmer is breaking the expected usage patterns.

So they depend on compiler (including version), OS (including version) and any insane things that a given user on the given system might have mucked around with.

Ok I can do something similar with JNI by calling NewDirectByteBuffer, can’t I? Now I see what you mean.

I advise some developers here to have a look at this document, I have found it very helpful:

Yes! I assume that’s how LWJGL 3 is making use of jemalloc, as per @spasi’s post earlier. It would be a good utility for JOGL if it’s not in there already! A library providing just that element of LWJGL, with the different allocators, looks like it could be useful too.

If you wanted, it should still be possible to manage this via (Phantom)References rather than forcing the end user to manually free memory, while still staying away from any internal JVM code.