I just posted AudioDicer to GitHub.
This is a single class (lots packed into it) that can take a short audio cue (e.g., five seconds of a recorded brook) and produce a continuously varying audio stream by selecting random fragments from within the cue and stringing them together.
I haven’t made a demo video or recording yet, so you’ll have to download the project to try it. But it does include a bare-bones Swing GUI with controls for all the parameters, and a couple of sample sounds.
The trickiest aspect of coding this: when a cross-fade occurs between the random fragments, comb filtering can occur if the fragments come from points too close together in the source cue. So there has to be some code to ensure that contiguous fragments are spaced far enough apart to prevent this (25 milliseconds is the threshold most commonly cited for comb filtering).
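Just to illustrate the idea (a hypothetical sketch, not the actual AudioDicer code), the spacing check can be as simple as re-rolling the random start position until it lands far enough from the previous one:

```java
import java.util.Random;

// Hypothetical sketch: pick random fragment start positions that stay at
// least ~25 ms away from the previous start, so the cross-fade doesn't mix
// nearly identical material and comb-filter. Names are illustrative only.
public class FragmentPicker {
    private static final int SAMPLE_RATE = 44_100;
    private static final int MIN_GAP_FRAMES = SAMPLE_RATE / 40; // roughly 25 ms

    private final Random random = new Random();
    private int previousStart = -2 * MIN_GAP_FRAMES; // sentinel: no previous fragment yet

    // Returns a start frame for the next fragment, assuming the cue is much
    // longer than the minimum gap (otherwise this loop could spin).
    public int nextFragmentStart(int cueLengthFrames, int fragmentFrames) {
        int maxStart = cueLengthFrames - fragmentFrames;
        int start;
        do {
            start = random.nextInt(maxStart);
        } while (Math.abs(start - previousStart) < MIN_GAP_FRAMES);
        previousStart = start;
        return start;
    }
}
```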
The tool now has real-time volume and pitch implemented, which opens up interesting possibilities for dynamic changes based on game state. I write a bit more about this in the project README.MD.
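As a general illustration (not necessarily how AudioDicer implements it), real-time volume is typically a per-sample gain multiplier, and real-time pitch a variable read increment with linear interpolation between neighboring samples:

```java
// Illustrative sketch of per-sample volume and pitch handling; the class and
// method names are hypothetical, not taken from the AudioDicer source.
public class PitchVolumeReader {
    private double readPos = 0;

    // Reads one output sample from a mono cue at the given pitch factor and volume.
    public float nextSample(float[] cue, double pitch, float volume) {
        int idx = (int) readPos;
        double frac = readPos - idx;
        // Linear interpolation between neighboring samples (wraps at the end).
        float a = cue[idx];
        float b = cue[(idx + 1) % cue.length];
        float sample = (float) (a + frac * (b - a));
        readPos += pitch;              // > 1 raises pitch, < 1 lowers it
        if (readPos >= cue.length) {
            readPos -= cue.length;     // wrap around for continuous reading
        }
        return sample * volume;        // per-sample gain allows smooth volume changes
    }
}
```

Because both parameters are applied per sample, a game loop can nudge them every frame (for example, raising a brook’s volume as the player approaches the water) without clicks.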
From the README.MD for the project:
AudioDicer is a Java Class built to efficiently produce a continuously varying sound stream from a small sound asset. In creating soundscapes, there is often a need for a continuous sound, for example, a flowing brook, or a crackling campfire. Continuous playback is typically achieved by looping a sound asset. If the loop is too short, the repetition can become annoyingly apparent to the User. To prevent this, longer sound files are used, but the memory costs of doing so can quickly add up.
With AudioDicer, small portions of audio are randomly selected from a cue and strung together to provide continuously varying audio. A scant handful of seconds of data can serve as the basis for a continuously playing cue with no discernible looping.
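A minimal sketch of the basic idea, assuming a mono float[] cue and using a simple overlap-add with a linear cross-fade (the names here are hypothetical, not AudioDicer’s actual code):

```java
import java.util.Random;

// Illustrative only: assemble an output buffer from random fragments of a
// short source cue, overlapping consecutive fragments with a linear
// cross-fade so the seams are inaudible.
public class DiceSketch {
    public static float[] dice(float[] cue, int fragmentFrames, int fadeFrames, int outputFrames) {
        float[] out = new float[outputFrames];
        Random rnd = new Random();
        int writePos = 0;
        while (writePos < outputFrames) {
            int start = rnd.nextInt(cue.length - fragmentFrames);
            for (int i = 0; i < fragmentFrames && writePos + i < outputFrames; i++) {
                // Fade in over the first fadeFrames and out over the last fadeFrames,
                // so overlapping fragments sum to unity gain in the cross-fade region.
                float gain = 1f;
                if (i < fadeFrames) {
                    gain = i / (float) fadeFrames;
                } else if (i >= fragmentFrames - fadeFrames) {
                    gain = (fragmentFrames - i) / (float) fadeFrames;
                }
                out[writePos + i] += cue[start + i] * gain;
            }
            // Advance by the fragment length minus the overlap, so the next
            // fragment's fade-in lines up with this fragment's fade-out.
            writePos += fragmentFrames - fadeFrames;
        }
        return out;
    }
}
```

The actual class produces a continuous stream rather than filling a fixed buffer, and includes the fragment-spacing check described above; this sketch just shows the cross-fade principle.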