What JVM ecosystems support an audio line out?

What JVM ecosystems support PCM audio out? (Or PCM converted directly to audio bytes in the format required by the line.)

I’m thinking of the equivalent of Java’s SourceDataLine, in other words, a blocking line that can be fed raw audio bytes programmatically.

A class like Clip doesn’t qualify: its audio data has to be loaded from a file, and there is no access to the data once it is held in memory.

Does JOGL? LWJGL? JMonkeyEngine?

Android does have this capability, with the AudioTrack class.
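
For example, an untested sketch of a blocking write there (using the old streamType constructor for brevity):

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

// Untested sketch: AudioTrack in streaming mode, where write() blocks
// much like SourceDataLine.write() does on the desktop.
int sampleRate = 44100;
int minBuf = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        minBuf, AudioTrack.MODE_STREAM);
track.play();

byte[] pcm = new byte[minBuf];   // raw little-endian 16-bit stereo frames go here
track.write(pcm, 0, pcm.length); // blocks until there is room in the track's buffer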

LWJGL just uses OpenAL Soft. It’s a decent enough API, dead easy to use.

I’m not finding detailed documentation, but the following line in the blurb at https://github.com/kcat/openal-soft seems hopeful: “It also facilitates streaming audio”.

Do you have a link to the API?

I’d like to make versions of AudioCue and AudioDicer that operate in these other ecosystems. These classes ship out a stream of PCM, or bytes formatted for audio playback.

OpenAL expects PCM data, so any decoding of compressed audio formats will need to be done by you.

Documentation for openal is here
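
For example, the minimal shape of feeding raw PCM to a source with LWJGL 3’s ALC10/AL10 bindings (an untested sketch; the buffer here is just a second of silence):

import java.nio.ByteBuffer;
import java.nio.IntBuffer;

import org.lwjgl.BufferUtils;
import org.lwjgl.openal.AL;
import org.lwjgl.openal.AL10;
import org.lwjgl.openal.ALC;
import org.lwjgl.openal.ALC10;

public class OpenAlPcmSketch
{
    public static void main(String[] args) throws Exception
    {
        // Open the default output device and make a context current.
        long device = ALC10.alcOpenDevice((ByteBuffer) null);
        long context = ALC10.alcCreateContext(device, (IntBuffer) null);
        ALC10.alcMakeContextCurrent(context);
        AL.createCapabilities(ALC.createCapabilities(device));

        // One source, one buffer: hand it a second of raw 16-bit stereo PCM.
        int source = AL10.alGenSources();
        int buffer = AL10.alGenBuffers();
        ByteBuffer pcm = BufferUtils.createByteBuffer(44100 * 4); // silence; put your frames here
        AL10.alBufferData(buffer, AL10.AL_FORMAT_STEREO16, pcm, 44100);
        AL10.alSourceQueueBuffers(source, buffer);
        AL10.alSourcePlay(source);

        Thread.sleep(1000); // let it play out before tearing down

        ALC10.alcDestroyContext(context);
        ALC10.alcCloseDevice(device);
    }
}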

This is very helpful. Thank you abcdef!

If I’m reading this correctly, section 4.3.5 is key.

The part that threw me before is figuring out how to block and resume at the Java end. If the buffers on the OpenAL side are currently full (audio stream generation usually far outpaces the rate at which the stream is consumed), what is the usual pattern for having the data-producing stream pause or block and then retry?

I’m guessing this also might occur with graphics data? Or not?

I believe you can “stream” data in chunks and then check how much has been processed. So just as you have a game loop for graphics, you can have a loop for audio too: check how many chunks have been processed and top the source up, when required, by decoding more data.

Here’s some code from a very long time ago that shows how you could check things in the audio loop. It checks how much data has been processed and replaces it with new data if there is any.

if (soundStream.hasStreamData())
{
    // Ask OpenAL how many queued buffers the source has finished playing.
    int currentBuffersProcessed = sound.alGetSourcei(soundStream.getSoundId(), sound.AL_BUFFERS_PROCESSED);
    System.out.println("Current Buffers Processed [" + currentBuffersProcessed + "]");

    for (int i = 0; i < currentBuffersProcessed; i++)
    {
        // Remove a buffer that has already been played and recycle its slot.
        int buffer = sound.alSourceUnqueueBuffers(soundStream.getSoundId());
        sound.alDeleteBuffers(buffer);
        buffer = sound.alGenBuffers();

        if (soundStream.hasMore())
        {
            System.out.println("Sound Stream Has More");
            // Decode the next chunk and queue it back onto the source.
            ByteBuffer sample = soundStream.readNextSample();
            sound.alBufferData(buffer,
                    (soundStream.getNumberChannels() > 1 ? sound.AL_FORMAT_STEREO16 : sound.AL_FORMAT_MONO16),
                    sample, soundStream.getSampleRate());
            sound.alSourceQueueBuffers(soundStream.getSoundId(), buffer);
        }
    }
}

The handler would be called once every game loop? The buffers would then be configured to about the equivalent of what gets played in 1/60th of a second (plus a margin for safety)? I was assuming something more like an independent thread that was optimally scaled/tuned to the audio buffer scheduling needs.

OK then: some sort of loosely coupled ScheduledExecutorService. That would work.

In other words, have the line for audio processing still end with a write() method that blocks and flips a boolean telling the ScheduledExecutorService that an array is ready. Schedule a recurring task at something like 1/2 or 1/3 the time it takes for a buffer to play, but only transfer data when the boolean says there’s something ready. Something like this could keep the latency down while not costing a lot of CPU.
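
Roughly this shape, as an untested sketch (LWJGL-style AL10 calls assumed; an ArrayBlockingQueue stands in for the bare boolean, since its put() gives the blocking write() for free):

import java.nio.ByteBuffer;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.lwjgl.openal.AL10;

// Hypothetical bridge: a blocking write() on the producer side,
// OpenAL buffer top-ups on a scheduled pump thread. Assumes the source
// was primed with a few queued buffers and alSourcePlay'd, and that an
// AL context is current for the process.
public class AlLineOut
{
    private final int source;
    private final BlockingQueue<ByteBuffer> pending = new ArrayBlockingQueue<>(4);
    private final ScheduledExecutorService pump = Executors.newSingleThreadScheduledExecutor();

    public AlLineOut(int source, long bufferPlayMillis)
    {
        this.source = source;
        // Poll at roughly a third of one buffer's play time, per the idea above.
        pump.scheduleAtFixedRate(this::topUp, 0, Math.max(1, bufferPlayMillis / 3), TimeUnit.MILLISECONDS);
    }

    // Blocks when the queue is full, like SourceDataLine.write().
    public void write(ByteBuffer pcmChunk) throws InterruptedException
    {
        pending.put(pcmChunk);
    }

    private void topUp()
    {
        int processed = AL10.alGetSourcei(source, AL10.AL_BUFFERS_PROCESSED);
        for (int i = 0; i < processed; i++)
        {
            ByteBuffer chunk = pending.poll();
            if (chunk == null)
            {
                return; // nothing ready yet; drained buffers get picked up next tick
            }
            int buffer = AL10.alSourceUnqueueBuffers(source); // recycle the played buffer
            AL10.alBufferData(buffer, AL10.AL_FORMAT_STEREO16, chunk, 44100);
            AL10.alSourceQueueBuffers(source, buffer);
        }
        // A real version would also restart the source on underrun
        // (alGetSourcei(source, AL_SOURCE_STATE) == AL_STOPPED -> alSourcePlay).
    }
}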

I will put this on my queue. (Might be a couple weeks before tackling it.)
Thanks for helping me get a grip on this. Was having trouble visualizing the design pattern.

The handler can be called once every game loop, yes; that’s an easy way to call the streaming code. Or you could maintain it in its own loop. Personal choice really, and it doesn’t matter much.

With regards to the buffer size, this is kind of up to you. For Ogg Vorbis you decode the data into packets, so unless you split a packet up or decode multiple packets per alBufferData call, it makes sense to just do a single packet at a time. For WAV you can take the sample rate, bits per sample, and channel count and choose a buffer size to be played. You could even feed in the entire sound data in one go and then let OpenAL play it using its own logic. As long as you can pump in data faster than it can be played, the sound output will be normal.

I haven’t looked into this in a while, but I would guess the minimum size is the size of a single sample frame. If the sample rate is 44100 hertz, that’s 44100 samples per second; if each sample is 8 bits and there are 2 channels, then that’s 2 bytes minimum, which is the equivalent of 1/44100 of a second’s worth of sound. There is a lot for you to choose and decide :)
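
To make that WAV arithmetic concrete, a made-up helper (not from any library):

// Bytes needed to hold a given duration of PCM audio.
static int bufferBytes(int sampleRate, int bitsPerSample, int channels, int millis)
{
    int bytesPerFrame = (bitsPerSample / 8) * channels; // one frame = one sample per channel
    return sampleRate * bytesPerFrame * millis / 1000;  // frames/second * bytes/frame * seconds
}

// e.g. 44100 Hz, 16-bit, stereo, 50 ms -> 44100 * 4 * 50 / 1000 = 8820 bytes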

Thanks for confirmation!

The main issue with processing multiple streams on the same thread is that a hiccup on any one of the streams ends up being shared by all of them. Anyway, that and a general bias for decoupling are why I like multiprocessing.

JOAL uses both OpenAL and OpenAL Soft, as far as I remember. If you need to use another C API from Java, you can use JNI, JNA, JNR, GlueGen, libffi, or jdk.incubator.foreign (optionally with jextract).

You mitigate this by sending more (buffered) data to your audio source. If you continuously keep your buffers filled with over, say, 500 ms of audio, a hiccup has to last over 500 ms to be noticeable… of course this adds 500 ms of latency, so it’s always a tradeoff. 50 ms may be a better value: for a CPU it’s an eternity, and for us it’s only a few frames of lag.
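
In code, the tradeoff boils down to picking a queue depth (a made-up helper, numbers illustrative):

// How many buffers to keep queued for a target latency.
static int queueDepth(int targetLatencyMillis, int millisPerBuffer)
{
    return Math.max(2, (targetLatencyMillis + millisPerBuffer - 1) / millisPerBuffer); // round up, never fewer than 2
}

// e.g. a 50 ms target with 10 ms buffers -> keep 5 buffers queued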

In the real world, lengthy hiccups can be avoided, except for GC pauses, and you should design your application with GC pauses in mind anyway.
