[EDIT: jar with code is here: http://hexara.com/pfaudio121212.jar
Notes on usage are on post #5 of this thread.]
I’ve restarted the project of putting together a simple stereo audio mixer. So far, it supports volume and panning settings for wavs and clips. Output is to a single stereo SourceDataLine.
My hope is that if this Java code sits on top of something like libgdx, it can route this line to whatever Android supports for playback, making it look to Android like a single outgoing wav or something. (Haven’t tested this yet. I’m in the process of learning my way around Linux, with Ubuntu now installed, and plan to set up an Android emulator there in the next week or two.)
Unlike the previous version, this one never processes a frame (sample) of sound from a track unless it is the current frame. I iterate across the tracks and read only one sound sample from each, rather than a buffer’s worth. There were doubts expressed that this would cause all sorts of performance problems, but tests seem to indicate all is fine. Last night, in one test I ran 16 wav files simultaneously, and in another, 32 simultaneous clips all at different pan positions. No problems to report.
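Roughly, the inner loop works along the lines of the sketch below. The names (Track, readNextFrame, mixOneBuffer) are my own shorthand for this post, not the actual classes in the jar: each pass pulls exactly one stereo frame from every running track, sums them, and the summed frames are packed into the byte buffer that eventually goes to the SourceDataLine.

    import java.util.List;
    import javax.sound.sampled.SourceDataLine;

    class MixerSketch {
        // Placeholder for a mixer track; the real wrappers expose something similar.
        interface Track {
            boolean isRunning();
            void readNextFrame(float[] frame);   // fills frame[0] = left, frame[1] = right
        }

        static void mixOneBuffer(List<Track> tracks, SourceDataLine line, int bufferFrames) {
            float[] frame = new float[2];
            byte[] out = new byte[bufferFrames * 4];            // 16-bit stereo PCM
            for (int i = 0; i < bufferFrames; i++) {
                float left = 0, right = 0;
                for (Track t : tracks) {
                    if (!t.isRunning()) continue;
                    t.readNextFrame(frame);                     // exactly one sample per channel
                    left += frame[0];
                    right += frame[1];
                }
                // clamp and pack as little-endian signed 16-bit
                int l = (int) (Math.max(-1f, Math.min(1f, left)) * 32767);
                int r = (int) (Math.max(-1f, Math.min(1f, right)) * 32767);
                out[i * 4]     = (byte) l;
                out[i * 4 + 1] = (byte) (l >> 8);
                out[i * 4 + 2] = (byte) r;
                out[i * 4 + 3] = (byte) (r >> 8);
            }
            line.write(out, 0, out.length);
        }
    }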
I basically have a wav wrapper and a clip-type wrapper supported so far. The clip is in two parts: a class that stores the clip data in RAM, and another that manages a set of cursors for multiple playback. There’s a nifty non-blocking queue used for storing cursors that are ready to play (when they finish, they are re-entered into the queue automatically).
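The cursor pool works along these lines; the class and field names below are illustrative, with java.util.concurrent.ConcurrentLinkedQueue standing in for the non-blocking queue (my assumption, not necessarily the exact class used):

    import java.util.concurrent.ConcurrentLinkedQueue;

    class ClipCursorPool {
        static class ClipCursor {
            double position;                 // fractional frame position (for speed != 1)
            float speed, volume, pan;
            volatile boolean running;        // the mixer only advances cursors flagged running
        }

        private final ConcurrentLinkedQueue<ClipCursor> idleCursors = new ConcurrentLinkedQueue<>();

        ClipCursorPool(int polyphony) {
            for (int i = 0; i < polyphony; i++) idleCursors.offer(new ClipCursor());
        }

        // Game thread: grab an idle cursor and start it; returns false if all cursors are busy.
        boolean play(float speed, float volume, float pan) {
            ClipCursor c = idleCursors.poll();           // non-blocking
            if (c == null) return false;
            c.position = 0;
            c.speed = speed;
            c.volume = volume;
            c.pan = pan;
            c.running = true;
            return true;
        }

        // Audio thread: when a cursor reaches the end of the clip data,
        // it is re-entered into the queue automatically.
        void recycle(ClipCursor c) {
            c.running = false;
            idleCursors.offer(c);
        }
    }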
clip.play(speed, volume, pan); // of course, there is some setup first
speed is a multiplier: for example, 2 will play the sound twice as fast, and 0.75 will slow it down somewhat (no, I didn’t support negatives, but it should be quite easy to add; it’s just a matter of adjusting the start point to the end of the clip!)
volume goes from 0 to 1, a multiplier. (I plan to add a VolumeMapping function so that 0.5 actually sounds like it is at half volume.)
pan goes from -1 to 1, with 0 as center.
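For concreteness, here is one way volume and pan could map onto per-channel gains. The equal-power curve is an assumption on my part (the actual mixer may use a simple linear pan), and the linear volume here is exactly why I want a VolumeMapping function later:

    class GainSketch {
        // Returns { leftGain, rightGain } for volume in 0..1 and pan in -1..1.
        static float[] channelGains(float volume, float pan) {
            double angle = (pan + 1) * Math.PI / 4;      // -1..1 maps to 0..PI/2
            float left  = volume * (float) Math.cos(angle);
            float right = volume * (float) Math.sin(angle);
            return new float[] { left, right };          // pan 0 gives 0.707 on each side
        }
    }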
All the tests have been using Thread.sleep() increments to space things out, and the response is pretty good, probably fine for most game applications. There is a bit of variability that is probably directly related to the size of the CPU time slices. I wouldn’t want to use it for reading a musical score that requires playing a series of clips in perfect time. I just started working on an event reader that is accurate to the frame (e.g., 1/44100th of a second); will report progress on that. It makes use of nsigma’s advice to handle these events in a single audio thread shared with the mixer, to avoid blocking problems.
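I don’t have that code to show yet, but the idea is along these lines (everything below is a hypothetical sketch, not the actual event reader): game code posts an event tagged with a target frame, and the mixer’s own audio thread fires it once its frame counter reaches that frame, so nothing else ever touches the mixing state.

    import java.util.concurrent.PriorityBlockingQueue;

    class AudioEvent implements Comparable<AudioEvent> {
        final long frame;            // when to fire, in frames since mixer start
        final Runnable action;       // e.g., () -> clip.play(1f, 0.8f, -0.5f)

        AudioEvent(long frame, Runnable action) {
            this.frame = frame;
            this.action = action;
        }

        public int compareTo(AudioEvent other) {
            return Long.compare(frame, other.frame);
        }
    }

    class EventReader {
        private final PriorityBlockingQueue<AudioEvent> pending = new PriorityBlockingQueue<>();

        // Called from the game thread.
        void schedule(AudioEvent e) { pending.offer(e); }

        // Called once per frame from the mixer's audio thread, so the actions
        // never race with the mixing loop.
        void fireDueEvents(long currentFrame) {
            AudioEvent e;
            while ((e = pending.peek()) != null && e.frame <= currentFrame) {
                pending.poll();
                e.action.run();
            }
        }
    }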
No support for ogg or mp3. If you want to load an ogg or mp3 into RAM for clip playback, though, that should be easy to add.
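For what it’s worth, one route to loading a compressed file into RAM as PCM is the standard javax.sound.sampled conversion, assuming a decoder SPI such as vorbisspi or mp3spi is on the classpath. This is a sketch of that route, not something in the jar:

    import java.io.ByteArrayOutputStream;
    import java.io.File;
    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioInputStream;
    import javax.sound.sampled.AudioSystem;

    class CompressedLoader {
        // Decode an ogg/mp3 file into 16-bit PCM held in RAM.
        static byte[] loadAsPcm(File file) throws Exception {
            AudioInputStream encoded = AudioSystem.getAudioInputStream(file);
            AudioFormat base = encoded.getFormat();
            AudioFormat pcm = new AudioFormat(
                    AudioFormat.Encoding.PCM_SIGNED,
                    base.getSampleRate(), 16, base.getChannels(),
                    base.getChannels() * 2, base.getSampleRate(), false);
            AudioInputStream decoded = AudioSystem.getAudioInputStream(pcm, encoded);

            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buffer = new byte[8192];
            int read;
            while ((read = decoded.read(buffer)) != -1) {
                out.write(buffer, 0, read);              // accumulate decoded PCM in RAM
            }
            return out.toByteArray();                    // hand this to the clip wrapper
        }
    }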
The biggest drawback is perhaps that I haven’t yet written the ability to add or drop tracks while the mixer is running. Currently, the mixer iterates through all tracks for each frame, skipping those that are not “running”. But once the audio event reader works, it should be doable to add this as part of that process.
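I haven’t written it, but the pattern I have in mind would look something like this (pure speculation at this point, reusing the Track placeholder from the mixing sketch above): the game thread just queues a request, and the audio thread applies it between frames, so the track list is never touched from two threads at once.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ConcurrentLinkedQueue;

    class TrackRegistry {
        // Track is the placeholder interface from the mixing sketch above.
        private final List<MixerSketch.Track> tracks = new ArrayList<>();   // audio thread only
        private final ConcurrentLinkedQueue<Runnable> pendingOps = new ConcurrentLinkedQueue<>();

        // Game thread: queue the change instead of touching the live track list.
        void addTrack(MixerSketch.Track t)    { pendingOps.offer(() -> tracks.add(t)); }
        void removeTrack(MixerSketch.Track t) { pendingOps.offer(() -> tracks.remove(t)); }

        // Audio thread: drain pending requests between frames (or buffers), then mix as usual.
        List<MixerSketch.Track> currentTracks() {
            Runnable op;
            while ((op = pendingOps.poll()) != null) op.run();
            return tracks;
        }
    }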
Reinventing the wheel, again. There are a lot of great audio tools already in existence! But I want something for my games that can play my Java FM synth sounds, and I want to learn about audio programming.