Massive internal JEmalloc/Nvidia driver memory leak?

I think I just found a massive bug in JEmalloc that is fairly complicated, but 100% consistently reproducible for me.

I’m allocating “pages” of vertex attribute data each frame and deallocating them again at the end of the frame. I have a little test program comparing my old and new rendering systems, and if I start it on one specific render test and then switch to another one, memory usage quickly rises until I’m out of my 16 GB of RAM (it takes around 15 seconds). The Java heap usage is constant, as nothing is being allocated after the rendering loop has started, and since the process memory goes far above my 2 GB Java heap size limit, the growth has to be native memory allocated through JEmalloc.
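For context, the per-frame pattern looks roughly like this. This is only a minimal sketch assuming LWJGL's JEmalloc bindings; the page size, page count and the vertex-filling step are made up for illustration and are not my actual code.

    import org.lwjgl.system.jemalloc.JEmalloc;

    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.List;

    public class FrameAllocationSketch {

        // Hypothetical sizes; the real pages hold vertex attribute data.
        private static final long PAGE_SIZE  = 64 * 1024;
        private static final int  PAGE_COUNT = 128;

        public static void main(String[] args) {
            for (int frame = 0; frame < 1_000; frame++) {
                List<ByteBuffer> pages = new ArrayList<>();

                // Allocate a fresh set of pages for this frame's vertex data.
                for (int i = 0; i < PAGE_COUNT; i++) {
                    pages.add(JEmalloc.je_malloc(PAGE_SIZE));
                }

                // ... fill the pages with vertex attributes and render ...

                // Deallocate every page again at the end of the frame.
                for (ByteBuffer page : pages) {
                    JEmalloc.je_free(page);
                }
            }
        }
    }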

I tried enabling the debug allocator, which prints memory leaks on program exit. This is the result of a run that I let go until it crashed. Note that the OutOfMemoryError is thrown by my own code when je_malloc() returns 0 (sketched below).

In other words, the only memory I’m leaking is internal memory allocated by the GLFW callbacks (I didn’t clean up my windows and callbacks properly, since the program died from an exception escaping main()).
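For reference, the OutOfMemoryError mentioned above comes from a check of this shape in my allocation path. Again just a sketch, assuming LWJGL's raw nje_malloc/nje_free bindings, which work with plain addresses and return 0 on failure; the helper name and size here are illustrative.

    import org.lwjgl.system.jemalloc.JEmalloc;

    public class PageAllocator {

        // Allocate a raw page and fail loudly when jemalloc returns a null
        // address; this is where the OutOfMemoryError in the log comes from.
        static long allocatePage(long size) {
            long address = JEmalloc.nje_malloc(size);
            if (address == 0L) {
                throw new OutOfMemoryError("je_malloc returned 0 for " + size + " bytes");
            }
            return address;
        }

        public static void main(String[] args) {
            long page = allocatePage(64 * 1024);
            JEmalloc.nje_free(page); // free it again so the sketch itself does not leak
        }
    }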

  • I can only reproduce it when I start the program on one specific render test and then switch to the other one. Both use JEmalloc in the exact same way, and I only get the leak when running them in this specific order.
  • If I start on a different renderer, I can’t reproduce it.
  • If I start on the first renderer, switch to the second one and then back to the first again, the memory growth permanently stops, even if I then switch back to the renderer that triggered the leak before.
  • No leaks are reported by the debug allocator.

I’ll investigate this more and post the program soon so you guys can reproduce it.