Question on javax.sound and java.nio

Hi, I’m finally starting to dig into adding sound to my applications, since there have been some improvements in the 1.4 API. I made a simple program to read a streamed .ogg file (which I converted from an .mp3), and that worked very well. Basically, it converts the OGG stream into a PCM_SIGNED encoded stream, then reads bytes from the decoded stream and writes them to the line — roughly like the sketch below.
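For reference, this is the general shape of my playback loop (simplified; it assumes an Ogg Vorbis SPI such as JOrbis/Tritonus is on the classpath, since Java Sound can’t decode OGG on its own, and the filename is just a placeholder):

```java
import java.io.File;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.SourceDataLine;

public class OggPlayer {
    public static void main(String[] args) throws Exception {
        // Open the raw OGG stream; actual decoding is handled by
        // whatever Ogg Vorbis SPI is installed.
        AudioInputStream ogg = AudioSystem.getAudioInputStream(new File("track.ogg"));
        AudioFormat base = ogg.getFormat();

        // Ask Java Sound for a PCM_SIGNED version of the stream.
        AudioFormat pcm = new AudioFormat(
                AudioFormat.Encoding.PCM_SIGNED,
                base.getSampleRate(), 16,
                base.getChannels(), base.getChannels() * 2,
                base.getSampleRate(), false);
        AudioInputStream decoded = AudioSystem.getAudioInputStream(pcm, ogg);

        SourceDataLine line = (SourceDataLine) AudioSystem.getLine(
                new DataLine.Info(SourceDataLine.class, pcm));
        line.open(pcm);
        line.start();

        // Pull decoded PCM bytes and push them to the line.
        byte[] buf = new byte[4096];
        int n;
        while ((n = decoded.read(buf, 0, buf.length)) != -1) {
            line.write(buf, 0, n);
        }
        line.drain();
        line.close();
        decoded.close();
        ogg.close();
    }
}
```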

By itself, this program runs fine. However, I have a very old game that I helped someone with and decided to add a soundtrack to (Rage Against the Machine’s Renegades of Funk, if you’re curious), and I’m definitely seeing some jerks in the app from pauses. I’m not sure whether the jerks are the result of GC pauses because I’m reading from a stream, or whether the separate thread that plays the music (reading from the stream and writing to the line) is causing problems.

Which brings me to my main question: I’m assuming that directly allocated ByteBuffers will result in better performance/less lag. Does anyone know whether the javax.sound classes (such as AudioInputStream) use the java.nio classes (or direct ByteBuffers) internally for their work? And is it possible to wrap the AudioInputStream in a ReadableByteChannel, read from the stream through the channel (hopefully more efficiently), and then write the bytes to the line from the ByteBuffer?
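I haven’t actually tried this yet, but something like the following is what I had in mind. One thing that makes me doubt it’ll help: SourceDataLine.write() only accepts a byte[], and a direct ByteBuffer has no backing array, so you end up copying anyway:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.SourceDataLine;

public class ChannelPlayback {
    // Sketch: read decoded PCM through an NIO channel into a direct buffer.
    static void play(AudioInputStream decoded, SourceDataLine line)
            throws IOException {
        ReadableByteChannel ch = Channels.newChannel(decoded);
        ByteBuffer buf = ByteBuffer.allocateDirect(4096);
        // A direct buffer has no backing array, so we still need a byte[]
        // copy to feed SourceDataLine.write() -- which may cancel out
        // whatever the direct allocation saves.
        byte[] copy = new byte[buf.capacity()];
        while (ch.read(buf) != -1) {
            buf.flip();
            int n = buf.remaining();
            buf.get(copy, 0, n);
            line.write(copy, 0, n);
            buf.clear();
        }
    }
}
```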

Has anyone done anything like this? I’m thinking that part of the problem could be the conversion from the OGG stream into the PCM_SIGNED stream for playback. Or it could be that my timer for keeping the animation fluid is out of whack, and now that I have another thread eating cycles, I need to use a high-res timer or something. So many questions. If anyone has experience with streams and the java.nio package with respect to Java Sound, let me hear from you.

-Chris