I made a working endless, non-repeating campfire SFX, and it runs on the OpenAL that comes with LWJGL 3. First get it to work, then improve it, right?
If I understand correctly, there are basically four “layers” that require handling for sound:
device (the audio output hardware)
context (the state container, roughly analogous to an OpenGL context)
source (a positioned emitter in 3D space)
buffers (chunks of audio data queued on a source)
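For reference, here is a minimal sketch of those four layers in LWJGL 3’s OpenAL binding, assuming the default playback device and omitting error checking:

```java
import java.nio.ByteBuffer;
import java.nio.IntBuffer;
import org.lwjgl.openal.AL;
import org.lwjgl.openal.ALC;
import org.lwjgl.openal.ALCCapabilities;
import static org.lwjgl.openal.AL10.*;
import static org.lwjgl.openal.ALC10.*;

public class ALSetupSketch {
    public static void main(String[] args) {
        // Layer 1: open the default playback device.
        long device = alcOpenDevice((ByteBuffer) null);
        // Layer 2: create a context on that device and make it current.
        long context = alcCreateContext(device, (IntBuffer) null);
        alcMakeContextCurrent(context);
        ALCCapabilities alcCaps = ALC.createCapabilities(device);
        AL.createCapabilities(alcCaps);

        // Layer 3: a source, i.e. a point in 3D space that emits sound.
        int source = alGenSources();
        alSource3f(source, AL_POSITION, 0f, 0f, 0f);

        // Layer 4: a buffer holding the actual PCM data
        // (streaming would queue several of these on the source).
        int buffer = alGenBuffers();

        // ...fill, queue, and play here; then tear down in reverse order.
        alDeleteBuffers(buffer);
        alDeleteSources(source);
        alcDestroyContext(context);
        alcCloseDevice(device);
    }
}
```

Since this needs the LWJGL natives and an audio device, treat it as a structural outline rather than something to paste and run as-is.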
I was puzzling out what should be handled automatically by the class and instance, and what should be provided by the programmer. I’m figuring, since the “source” is given a 3D location, maybe it should be accessible to the programmer (sometimes audio cues need to be moved around). Also, possibly the programmer may be organizing sources into various “contexts.”
I’m thinking the cue instance should require the programmer to provide these values as arguments to the constructor. The constructor would then handle setting up the streaming buffers so that the game developer would never have to deal with that level of detail; they would just start or stop the CampfireSFX.
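The division of labor described above might look something like the following sketch. The class name comes from the post; everything else (the buffer count, the method names, what the constructor accepts) is an assumption, and the actual PCM-filling and queueing logic is elided:

```java
import static org.lwjgl.openal.AL10.*;

// Hypothetical sketch: the programmer supplies the source (so they can
// still position it), and the constructor hides the streaming-buffer setup.
public class CampfireSFX {
    private static final int BUFFER_COUNT = 4;   // assumed stream depth
    private final int source;                    // provided by the caller
    private final int[] buffers = new int[BUFFER_COUNT];

    public CampfireSFX(int source) {
        this.source = source;
        alGenBuffers(buffers);                   // internal detail, hidden from the game developer
        // ...fill each buffer with PCM data and queue it with
        // alSourceQueueBuffers(source, ...) here.
    }

    // The source stays accessible so the cue can be moved around in 3D.
    public void setPosition(float x, float y, float z) {
        alSource3f(source, AL_POSITION, x, y, z);
    }

    public void start() { alSourcePlay(source); }
    public void stop()  { alSourceStop(source); }
}
```

With this shape, the game developer only ever calls the constructor, setPosition, start, and stop; whether that is still “a lot to ask” depends on who owns the device and context.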
But it seemed to me that this could be a lot to ask of the game developer, especially if they had signed up for a library to help shield them from managing these details.
So…I thought the thing to do would be to see how the various game engines handle “device” and “context” and “source”. First look was at a Slick audio example and…full stop: it is using LWJGL 2, not 3.
A quick look at our JGO “OpenGL” forum shows the first entry as LWJGL 2.9.2.
Does Libgdx also still use 2.9.2?
Does JMonkeyEngine also still use 2.9.2?
If so, are there plans to migrate to 3?
It is not clear to me from the documentation what versions of LWJGL these engines are using. I guess I just need to go ahead and download them and see what I get…