After 6+ years working with Java I once again find myself a total newbie: I’ve started implementing sound effects for our latest project, which is introducing me to a whole new world of interesting abstraction.
All very nice from a functional perspective, but I’m a little lost on the performance implications of the various options provided by the JavaSound API, for example:
- Do I load all my sound effects (a couple of megabytes, probably 200-300 little samples) into Clips and play them when I need to, or do I reserve a constant pool of N Clips and open() my preloaded samples on demand?
- Some (most) effects may be active in several instances simultaneously (possibly, likely even, with different attenuation, panning and offsets). Do I create multiple Clips for the same buffer, or is this something I need to avoid?
- Alternatively, am I better off using a single SourceDataLine and doing the mixing and panning myself?
- Preferred audio format? Both run-time size and decoding speed matter. (Ogg Vorbis is small, but I don’t see 50 Ogg decompression threads running simultaneously as a viable path for sound effects.) WAV is simple and high quality, but huge.
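For concreteness, this is roughly the preloading approach I have in mind. It's just a sketch: the one-second silent buffer stands in for a real decoded WAV sample, and the try/catch is there because I don't know yet how Clips behave when no line is available.

```java
import java.io.ByteArrayInputStream;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Clip;
import javax.sound.sampled.LineUnavailableException;

public class ClipPreload {
    public static void main(String[] args) throws Exception {
        // 16-bit mono PCM at 44.1 kHz; a real loader would decode WAV files here.
        AudioFormat fmt = new AudioFormat(44100f, 16, 1, true, false);
        byte[] pcm = new byte[44100 * 2]; // one second of silence as a stand-in sample
        System.out.println("sample bytes: " + pcm.length);

        try {
            // One Clip per preloaded sample -- this is the part whose cost I'm unsure about.
            Clip clip = AudioSystem.getClip();
            clip.open(new AudioInputStream(
                    new ByteArrayInputStream(pcm), fmt, pcm.length / fmt.getFrameSize()));
            clip.start(); // fire-and-forget playback
            clip.drain();
            clip.close();
            System.out.println("clip played");
        } catch (LineUnavailableException | IllegalArgumentException e) {
            // No audio device (e.g. a headless box); preloading itself still works.
            System.out.println("no line available: " + e);
        }
    }
}
```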
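And for the do-it-myself alternative, the mixing itself doesn't scare me. Something like the following (all names are my own sketch, not JavaSound API), where overlapping instances of the same sample just accumulate into an int buffer that gets clamped and interleaved before SourceDataLine.write():

```java
public class MixSketch {
    // Mix one mono voice into left/right accumulators with per-voice gain and pan.
    // gain in [0,1]; pan in [-1,1] (-1 = hard left, +1 = hard right).
    static void mixInto(int[] accumL, int[] accumR, short[] mono, int offset,
                        float gain, float pan) {
        float l = gain * (pan <= 0 ? 1f : 1f - pan);
        float r = gain * (pan >= 0 ? 1f : 1f + pan);
        for (int i = 0; i < mono.length && offset + i < accumL.length; i++) {
            accumL[offset + i] += (int) (mono[i] * l);
            accumR[offset + i] += (int) (mono[i] * r);
        }
    }

    // Clamp the accumulated sum back into 16-bit range before writing it out.
    static short clamp(int s) {
        return (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, s));
    }

    public static void main(String[] args) {
        int[] accumL = new int[4], accumR = new int[4];
        short[] voice = { 10000, 20000, 30000, -30000 };
        mixInto(accumL, accumR, voice, 0, 1.0f, 0f); // centered
        mixInto(accumL, accumR, voice, 0, 1.0f, 0f); // same sample again, overlapping
        System.out.println(clamp(accumL[0])); // 20000
        System.out.println(clamp(accumL[2])); // 60000 clamped to 32767
    }
}
```

What I can't judge is whether hand-rolling this beats letting the platform mix N Clips for me.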
I guess what I’m looking for is info on the “weight” of e.g. Clip and SourceDataLine - can I throw these around in large quantities, or does each Line instance have significant hardware resources allocated and attached to it?
If anyone out there has views on this, sample code, links to documentation, tests and benchmarks, or even experience :o, please do tell!