Line pooling

Are there any downsides to converting the wavs to the same format upon loading? That’s the only problem I see with pooling SourceDataLines so far.

The goal is to be able to load a wav and play it x times concurrently, along with all the other sounds. The fastest thing I can think of is to create as many lines as the maximum number of sounds I want playing, and just reuse them for whatever is currently playing. So you could play the same axe hit sound 2-3 times at once.
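Something along these lines is what I’m picturing (just a sketch, the class and method names are made up):

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.SourceDataLine;

// Open a fixed number of lines up front, all with the same format,
// and hand them out to whatever sound needs to play next.
public class LinePool {
    private final SourceDataLine[] lines;
    private final boolean[] busy;

    public LinePool(AudioFormat format, int maxLines) throws LineUnavailableException {
        lines = new SourceDataLine[maxLines];
        busy = new boolean[maxLines];
        DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
        for (int i = 0; i < maxLines; i++) {
            lines[i] = (SourceDataLine) AudioSystem.getLine(info);
            lines[i].open(format);   // open once, reuse for every sound
            lines[i].start();
        }
    }

    // Grab a free line, or null if everything is already playing.
    public synchronized SourceDataLine acquire() {
        for (int i = 0; i < lines.length; i++) {
            if (!busy[i]) {
                busy[i] = true;
                return lines[i];
            }
        }
        return null;
    }

    public synchronized void release(SourceDataLine line) {
        for (int i = 0; i < lines.length; i++) {
            if (lines[i] == line) {
                busy[i] = false;
            }
        }
    }
}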

You could potentially use more memory by upconverting streams. Other than that, it’s actually more of an advantage than a disadvantage.

Apparently no actual conversion stuff is implemented at the moment.

Neat?

[quote]Apparently no actual conversion stuff is implemented at the moment.
[/quote]
Errmm… Not sure what you mean here. You can use various formats for SourceDataLines that will all be mixed correctly before being output to the sound card. Is that what you mean?

No, what I mean is: if I load a 10000hz 8-bit mono wav file, I can’t use the APIs to convert it to, say, 12025hz 8-bit stereo before opening a line for it.

The methods are in there to DO it, but it’s not actually there at the implementation level, i.e. the methods don’t seem to actually do anything at the moment. I read this in two places as well, just to confirm it.
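For reference, this is the conversion call I’m talking about (the file name and target format here are just examples):

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import java.io.File;

public class ConvertExample {
    public static void main(String[] args) throws Exception {
        // Load the original 10000hz 8-bit mono wav (path is just an example).
        AudioInputStream in = AudioSystem.getAudioInputStream(new File("axehit.wav"));

        // The common format I want every sound in before opening lines.
        AudioFormat target = new AudioFormat(12025f, 8, 2, false, false);

        // The documented conversion call. If the implementation doesn't
        // support the conversion, it throws IllegalArgumentException
        // instead of doing it.
        AudioInputStream converted = AudioSystem.getAudioInputStream(target, in);
    }
}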

The problem I’m having is that I don’t want to be creating/opening/etc. lines every time I play a sound. I want 8-16 lines for the max number of sounds that can play at once, and just use those for everything. But you have to specify a format at creation or open() (and the APIs say you can’t close/re-open on some machines), so I was going to convert all the wavs to a common format and make the lines use that.

But now I either have to come up with another plan of attack, or force someone to convert them to the same format outside the APIs, which would be kinda annoying.

Edited: to clarify one thing

Well, I’m no expert on sound data management, but I do believe it should be pretty easy to upsample the data. In its simplest form, all you need to do is “stretch” the data in much the same way you’d stretch an image. It’s easy with standard multiples (i.e. 11khz, 22khz, 44khz) since the ratio divides evenly after conversion (i.e. 11khz to 44khz would just require that you quadruple each byte). With non-standard sample rates, it’s better to use a floating-point counter to spread out the stretching. For example:


double multiple = 2.5;
double count = 0;

byte[] olddata = getSoundData();
// The stretched buffer is "multiple" times as long as the original.
byte[] newdata = new byte[(int) (olddata.length * multiple)];

for (int i = 0, j = 0; i < newdata.length; i++)
{
    // Repeat each source byte roughly "multiple" times.
    newdata[i] = olddata[j];

    count++;

    if (count > multiple)
    {
        j++;
        count -= multiple;
    }
}

Downsampling is just as easy. You just grab every Nth byte and throw away the rest. Of course, for more professional sound (Whatcho’ talkin’ about? This is a game! I pity the foo that gets fancy! ;)) you’d probably want to “smooth” the sounds by performing averages on the resized data. For upsampling, this would mean taking the difference between the last byte and the current one and adding “in between” values. With downsampling, you’d average X bytes of the original to create the one byte for each output sample (i.e. 44khz to 11khz would require averaging 4 bytes of the original to produce one byte of the result).
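A quick sketch of the averaging version for downsampling (getSoundData() is the same made-up helper as before, and it assumes signed 8-bit mono data; unsigned samples would need masking with & 0xff first):

// Crude downsample by an integer factor, averaging each group of bytes
// (e.g. 44khz -> 11khz with factor = 4).
int factor = 4;
byte[] olddata = getSoundData();
byte[] newdata = new byte[olddata.length / factor];

for (int i = 0; i < newdata.length; i++)
{
    int sum = 0;
    for (int k = 0; k < factor; k++)
    {
        sum += olddata[i * factor + k];
    }
    newdata[i] = (byte) (sum / factor);
}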

Does anyone else have any comments on this?

I’m still amazed it’s not in there; the Java Sound stuff just seems very… unfinished?

C’mon Sun, get to it, bitches. :stuck_out_tongue:

[quote]you’d probably want to “smooth” the sounds by performing averages on the resized data.
[/quote]
A better sounding way to do that is not to average the data, but to use a steep low-pass filter with the cut-off at half the original wav’s sample rate (its Nyquist frequency).

Erik

Oops, forget that.
This only applies to upsampling.

For downsampling you should also filter, but before you down-sample. This is to avoid aliasing, which causes distortion.
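A very rough illustration of where that filtering step goes (this single-pole filter is nowhere near steep enough for real anti-aliasing; it’s just to show the idea of filtering before you decimate, and getSoundData() is the same made-up helper as above):

// Very crude single-pole low-pass, run over the samples before decimating.
// alpha controls the cut-off; a real anti-aliasing filter would be much steeper.
double alpha = 0.25;
byte[] olddata = getSoundData();
byte[] filtered = new byte[olddata.length];

double y = 0;
for (int i = 0; i < olddata.length; i++)
{
    y += alpha * (olddata[i] - y);
    filtered[i] = (byte) y;
}

// ...then downsample "filtered" instead of "olddata".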