I was dabbling with using a
TargetDataLine to get sound from a microphone and turn it around for real-time use. It was a little tricky, but what I came up with has the
TargetDataLine on one thread, and the
SourceDataLine (for speaker output) on another thread.
On the mic input thread, a buffer is read from the TDL, converted to PCM, and stored as an array in a concurrent queue.
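For reference, the capture side looks something like this. This is just a sketch under my assumptions (16-bit little-endian signed samples; the class and method names like `MicCapture` and `audioToPcm` are placeholders, not anything from the library):

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class MicCapture {
    // Queue shared with the playback thread; each entry is one buffer of PCM floats.
    static final ConcurrentLinkedQueue<float[]> queue = new ConcurrentLinkedQueue<>();

    // Convert 16-bit little-endian audio bytes to normalized PCM floats in [-1, 1).
    static float[] audioToPcm(byte[] audio, int len) {
        float[] pcm = new float[len / 2];
        for (int i = 0; i < pcm.length; i++) {
            int lo = audio[2 * i] & 0xFF;
            int hi = audio[2 * i + 1];           // high byte keeps its sign
            pcm[i] = ((hi << 8) | lo) / 32768f;
        }
        return pcm;
    }

    // Capture loop: read from the TDL, convert, enqueue.
    static void captureLoop(javax.sound.sampled.TargetDataLine tdl, int bufferBytes) {
        byte[] buffer = new byte[bufferBytes];
        while (!Thread.currentThread().isInterrupted()) {
            int n = tdl.read(buffer, 0, buffer.length); // blocks until bytes are available
            if (n > 0) queue.offer(audioToPcm(buffer, n));
        }
    }
}
```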
On the playback side, the buffers are polled from the queue on an as-needed basis; the real-time effect can be applied to the PCM there, then the data is converted back to audio bytes and shipped out via the SDL.
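The playback side is roughly the mirror image. Again a sketch under the same assumptions (16-bit little-endian signed samples; `pcmToAudio` and `playbackLoop` are names I made up, and the effect is just passed in as a function):

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.UnaryOperator;

public class SpeakerPlayback {
    // Convert normalized PCM floats back to 16-bit little-endian bytes, clamping first.
    static byte[] pcmToAudio(float[] pcm) {
        byte[] audio = new byte[pcm.length * 2];
        for (int i = 0; i < pcm.length; i++) {
            int val = (int) (Math.max(-1f, Math.min(1f, pcm[i])) * 32767);
            audio[2 * i] = (byte) (val & 0xFF);
            audio[2 * i + 1] = (byte) (val >> 8);
        }
        return audio;
    }

    // Playback loop: poll a buffer, apply the effect, ship the bytes to the SDL.
    static void playbackLoop(javax.sound.sampled.SourceDataLine sdl,
                             ConcurrentLinkedQueue<float[]> queue,
                             UnaryOperator<float[]> effect) {
        while (!Thread.currentThread().isInterrupted()) {
            float[] pcm = queue.poll();
            if (pcm == null) continue;          // queue empty: nothing to play yet
            byte[] audio = pcmToAudio(effect.apply(pcm));
            sdl.write(audio, 0, audio.length);  // blocks as the SDL's buffer fills
        }
    }
}
```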
I’m playing around with various buffer sizes: especially the internal buffers for the TDL and SDL, but also the size of the reads and the accompanying arrays stored in the queue.
There are some strong audio coders who occasionally peruse this site. I’m wondering what you might think is a reasonable latency to aim for on a laptop. I’m using a 2.4GHz i3 with 8GB RAM. The first experiments have left me with about a third of a second of latency; below that, dropouts become a problem.
It would be nice to cut it closer, but so far the pacing of the two threads seems a little too variable to make any of the buffers smaller. My first attempt, putting both mic input and speaker output on the same thread, had worse dropouts. Both the TDL and SDL block on their reads and writes, and these blocks work efficiently for the respective lines, but when ganged together, each one’s blocking also stalls the other. Working concurrently (putting each line on its own thread) seemed to be a big step in the right direction.
Pondering possible tests. Maybe monitoring how much the concurrent queue varies in length with the different buffer sizes would be a good objective measure to use for testing…hmm.
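That monitoring idea might be as simple as sampling the queue length on a timer and eyeballing the spread. A quick sketch (the class name `QueueMonitor` and the sampling scheme are just my own framing, not anything settled):

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class QueueMonitor {
    // Periodically sample the queue length; large swings between samples
    // suggest the two threads are drifting and the buffers can't safely shrink.
    static int[] sample(ConcurrentLinkedQueue<float[]> queue, int samples, long periodMs)
            throws InterruptedException {
        int[] lengths = new int[samples];
        for (int i = 0; i < samples; i++) {
            lengths[i] = queue.size(); // note: size() is O(n) for CLQ, fine at these counts
            Thread.sleep(periodMs);
        }
        return lengths;
    }
}
```

One wrinkle: `ConcurrentLinkedQueue.size()` walks the whole queue, so with very small buffers and a hot loop it could itself perturb the timing; an `AtomicInteger` counter bumped on offer/poll would be cheaper if that turns out to matter.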