Converting my game to Libgdx is clearly a big task, and I'm disappointed that my custom-generated sounds would have to go through an audio device that plays only mono and has high latency (see "AudioDevice – Playing PCM Audio", http://code.google.com/p/libgdx/wiki/AudioDevice).
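For reference, the PCM path I mean is roughly this (a minimal sketch assuming the current Gdx.audio.newAudioDevice/writeSamples API, with a throwaway sine tone standing in for my generated sounds; it would live in create() or a dedicated audio thread):

    import com.badlogic.gdx.Gdx;
    import com.badlogic.gdx.audio.AudioDevice;

    // Mono PCM playback through Libgdx's AudioDevice.
    AudioDevice device = Gdx.audio.newAudioDevice(44100, true); // true = mono
    short[] samples = new short[1024];
    double phase = 0;
    for (int buf = 0; buf < 100; buf++) {
        for (int i = 0; i < samples.length; i++) {
            samples[i] = (short) (Math.sin(2 * Math.PI * 440 * phase) * 8000);
            phase += 1.0 / 44100;
        }
        device.writeSamples(samples, 0, samples.length); // blocks until consumed
    }
    device.dispose();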
Am I misreading this, or do Sound.play() and Music.play() suffer from the same latency issues cited above? I'm pretty sure that with Sound.play() and Music.play(), at least, one has control over volume and panning, so stereo must be possible, unlike with AudioDevice.
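Something like this, I mean (the asset name is hypothetical; the three-argument play() takes volume, pitch, and pan):

    import com.badlogic.gdx.Gdx;
    import com.badlogic.gdx.audio.Sound;

    Sound shot = Gdx.audio.newSound(Gdx.files.internal("shot.wav")); // hypothetical asset
    long id = shot.play(1.0f, 1.0f, -0.8f); // volume, pitch, pan: mostly in the left channel
    shot.setPan(id, 0.8f, 1.0f);            // later, pan that same instance over to the right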
The implementations of AudioDevice, Sound.play(), and Music.play() all seem to rely directly on LWJGL's binding of OpenAL. I'm downloading the source now to poke around and see if I can't rig up a way to play audio through the functional equivalent of a TargetDataLine (strictly, of its playback counterpart, SourceDataLine) and get stereo and decent latencies.
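What I have in mind is the standard OpenAL streaming pattern: a small ring of buffers queued on a single source and refilled as the source finishes with them. A minimal sketch against LWJGL's AL10 (the buffer count, buffer size, and two-tone stereo generator are placeholder choices; a real version would reuse one scratch ByteBuffer):

    import java.nio.ByteBuffer;
    import java.nio.IntBuffer;

    import org.lwjgl.BufferUtils;
    import org.lwjgl.LWJGLException;
    import org.lwjgl.openal.AL;
    import org.lwjgl.openal.AL10;

    /** Streams generated stereo PCM through OpenAL via a small ring of queued buffers. */
    public class OpenALStereoStream {
        static final int SAMPLE_RATE = 44100;
        static final int NUM_BUFFERS = 3;          // fewer/smaller buffers = lower latency
        static final int FRAMES_PER_BUFFER = 1024; // stereo frames per buffer

        public static void main(String[] args) throws LWJGLException, InterruptedException {
            AL.create();
            int source = AL10.alGenSources();
            IntBuffer buffers = BufferUtils.createIntBuffer(NUM_BUFFERS);
            AL10.alGenBuffers(buffers);

            // Prime the queue before starting playback.
            double phase = 0;
            for (int i = 0; i < NUM_BUFFERS; i++) {
                phase = fillAndQueue(source, buffers.get(i), phase);
            }
            AL10.alSourcePlay(source);

            long end = System.currentTimeMillis() + 3000; // play for three seconds
            while (System.currentTimeMillis() < end) {
                // Refill and requeue any buffers the source has finished playing.
                int processed = AL10.alGetSourcei(source, AL10.AL_BUFFERS_PROCESSED);
                while (processed-- > 0) {
                    int buffer = AL10.alSourceUnqueueBuffers(source);
                    phase = fillAndQueue(source, buffer, phase);
                }
                // Restart if the queue ever ran dry and the source stopped.
                if (AL10.alGetSourcei(source, AL10.AL_SOURCE_STATE) != AL10.AL_PLAYING) {
                    AL10.alSourcePlay(source);
                }
                Thread.sleep(1);
            }
            AL.destroy();
        }

        /** Fills one buffer with interleaved 16-bit stereo (440 Hz left, 660 Hz right). */
        static double fillAndQueue(int source, int buffer, double phase) {
            ByteBuffer pcm = BufferUtils.createByteBuffer(FRAMES_PER_BUFFER * 4); // 2 ch * 2 bytes
            for (int i = 0; i < FRAMES_PER_BUFFER; i++) {
                pcm.putShort((short) (Math.sin(2 * Math.PI * 440 * phase) * 8000)); // left
                pcm.putShort((short) (Math.sin(2 * Math.PI * 660 * phase) * 8000)); // right
                phase += 1.0 / SAMPLE_RATE;
            }
            pcm.flip();
            AL10.alBufferData(buffer, AL10.AL_FORMAT_STEREO16, pcm, SAMPLE_RATE);
            AL10.alSourceQueueBuffers(source, buffer);
            return phase;
        }
    }

With these numbers the queued latency is roughly NUM_BUFFERS * FRAMES_PER_BUFFER / SAMPLE_RATE, about 70 ms, and shrinking the buffers should push it down further.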
Whatever.
The main question is this: is it reasonable, as a first step, to rewrite my game's graphics against LWJGL's OpenGL binding, include whatever I can conjure up to my satisfaction for the audio, and then, once that is working, do whatever is needed to produce an Android-playable version of the game?
And if Libgdx becomes necessary at that second stage, will I at least be able to keep any working LWJGL/OpenGL graphics code already written, and mostly be left with the headache of rewriting the input interface and dealing with screens? Are there other paths to recoding my MouseMotionListeners and the like without using Libgdx? How much pain are we looking at?
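For what it's worth, the Libgdx half of that input rewrite looks small; here is a minimal sketch of a MouseMotionListener equivalent, assuming Libgdx's InputAdapter (this would go in create(), and the method names are from the current API):

    import com.badlogic.gdx.Gdx;
    import com.badlogic.gdx.InputAdapter;

    Gdx.input.setInputProcessor(new InputAdapter() {
        @Override
        public boolean mouseMoved(int screenX, int screenY) {
            // mouseMoved(MouseEvent) body goes here; note y counts down from the top
            return true;
        }

        @Override
        public boolean touchDragged(int screenX, int screenY, int pointer) {
            // mouseDragged(MouseEvent) body goes here; pointer distinguishes fingers on Android
            return true;
        }
    });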