On-disk NIO buffer

Hi

I have to use an API to export some meshes in a particular format, and it takes NIO buffers as input. The problem is that the memory footprint of JFPSM is already very high. I would like a NIO buffer whose content lives on disk and that loads parts of it into memory only when needed, so that my memory footprint stays low as long as I avoid a relative bulk get() call on the whole buffer. I don't think I can do that with MappedByteBuffer. Is there already a class that does this? EhCache seems like overkill for my needs, doesn't it?

Best regards.

MappedByteBuffer seems perfectly capable of doing that (until you go over ~2 GB; a single mapping is limited to Integer.MAX_VALUE bytes)

JFPSM cannot yet be used on cheap low-end 32-bit laptops. I don't want to set -Xmx or -XX:MaxDirectMemorySize to more than 256 MB, which is impossible with my main level. MappedByteBuffer uses a direct NIO buffer, and there is no way of controlling how much of the data is actually resident in physical memory. I would like at most 4 KB of data resident in physical memory at a time.

MappedByteBuffers let the OS manage the memory for you. If the OS is stressed for memory, it will simply release the pages backing the MappedByteBuffer. That's all done behind the scenes, mainly because you don't need to know, and the OS is smart enough to do the most efficient thing, most of the time.

TL;DR:
Use MappedByteBuffers and stop worrying :point:
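
For reference, here is a minimal sketch of that approach (the file name mesh.dat is just a placeholder). A single mapping is capped at Integer.MAX_VALUE bytes, which is where the ~2 GB ceiling comes from:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class MappedRead {
    public static void main(String[] args) throws IOException {
        // Map the whole file read-only; the OS pages data in and out on demand.
        try (FileChannel channel = FileChannel.open(Paths.get("mesh.dat"),
                StandardOpenOption.READ)) {
            MappedByteBuffer buffer = channel.map(
                    FileChannel.MapMode.READ_ONLY, 0, channel.size());
            // Relative gets only fault in the pages that are actually read.
            while (buffer.hasRemaining()) {
                byte b = buffer.get();
                // ... process b ...
            }
        }
    }
}
```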

Alternatively, why not just use a random access file? Why the strange need for NIO buffering?

Cas :slight_smile:
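
For what it's worth, a RandomAccessFile lets you seek and read just a slice, with no mapping involved (a rough sketch; mesh.dat is again a placeholder):

```java
import java.io.IOException;
import java.io.RandomAccessFile;

public class RandomAccessRead {
    public static void main(String[] args) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile("mesh.dat", "r")) {
            raf.seek(1024);             // jump to an arbitrary byte offset
            byte[] chunk = new byte[4096];
            int read = raf.read(chunk); // reads at most 4 KB from that offset
            System.out.println("read " + read + " bytes");
        }
    }
}
```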

The API takes NIO buffers; I can only pass NIO buffers…

I will try to reduce the memory footprint of my whole application before trying to use on disk buffers.

Nothing prevents you from pushing a byte[] into a ByteBuffer.
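
For example, a tiny sketch of wrapping an existing heap array:

```java
import java.nio.ByteBuffer;

public class WrapExample {
    public static void main(String[] args) {
        byte[] meshData = new byte[1024]; // heap array you already have
        // wrap() creates a heap ByteBuffer backed by the same array:
        // no copy and no direct memory; writes to one are visible in the other.
        ByteBuffer buffer = ByteBuffer.wrap(meshData);
        System.out.println(buffer.capacity()); // prints 1024
    }
}
```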

As said, the OS manages the memory for you, there is no need to worry about it.

The data sets will be too big to fit into a single buffer. The input file occupies about 250 MB, but the meshes in output might occupy tens of GB (the Paris metropolitan region in 3D). MappedByteBuffer uses a direct NIO buffer under the hood, which means I will have to destroy it myself if the Java heap is not stressed enough; it's a known problem, and I'm not even sure such a buffer could be allocated in some cases. I have found several things to optimize in Ardor3D:
http://www.ardor3d.com/forums/viewtopic.php?f=10&t=5905#p16504

I have to map only a part of the data into physical memory at a time, and I have to avoid fetching the backing array or the whole content of the buffer.
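
A rough sketch of that windowed approach, assuming the exporter can consume the data one buffer at a time (WINDOW and process() are names of my own invention; note the JVM offers no portable way to unmap a region eagerly, so each mapping lives until its buffer is garbage-collected):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class WindowedMapping {
    // Hypothetical window size; tune it to the residency budget you can afford.
    private static final long WINDOW = 64L * 1024 * 1024; // 64 MB

    public static void process(String path) throws IOException {
        try (FileChannel channel = FileChannel.open(Paths.get(path),
                StandardOpenOption.READ)) {
            long fileSize = channel.size();
            for (long offset = 0; offset < fileSize; offset += WINDOW) {
                long length = Math.min(WINDOW, fileSize - offset);
                // Map one window at a time instead of the whole multi-GB file.
                MappedByteBuffer window = channel.map(
                        FileChannel.MapMode.READ_ONLY, offset, length);
                // Hand `window` to the exporter; only the pages it actually
                // touches become resident, at OS page granularity (~4 KB).
            }
        }
    }
}
```

Note that residency is managed by the OS at page granularity, so "at most 4 KB resident" can't be guaranteed from Java; the window size only bounds the mapped address range, not the resident set.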

The input file is here. You can try to convert it into a more appropriate format for rendering by using OSMToWorld.