Audio DSP (now on SourceForge)

The problem with doing that is that you’d need multiple waveforms (possibly one per octave); otherwise, when you pitch the table up you’ll introduce aliasing, and when you pitch down you’ll be missing more and more of the harmonics that give the sound its richness.
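For illustration, a minimal sketch of the one-table-per-octave idea (the class name, table layout and the 20 Hz base octave are all made up, not from any particular library):

[code]
// Hypothetical sketch: one band-limited table per octave, chosen by
// playback frequency. Each table contains only the harmonics that stay
// below Nyquist for its octave range.
public class OctaveWavetable {
    private final float[][] tables;   // tables[octave][sampleIndex]

    public OctaveWavetable(float[][] tables) {
        this.tables = tables;
    }

    /** Pick the table whose harmonic content is safe for this frequency. */
    public float[] tableFor(float frequency) {
        int octave = (int) Math.max(0,
                Math.floor(Math.log(frequency / 20.0) / Math.log(2)));
        return tables[Math.min(octave, tables.length - 1)];
    }

    /** Read one sample, with linear interpolation between table entries. */
    public float sample(float[] table, double phase) { // phase in [0,1)
        double pos = phase * table.length;
        int i = (int) pos;
        float frac = (float) (pos - i);
        return table[i] * (1 - frac) + table[(i + 1) % table.length] * frac;
    }
}
[/code]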

Oversampling and filtering is definitely one approach among many. As well as the BLEP approach @BurntPizza mentioned, I’ve been wondering about BLIT (band-limited impulse train). I know those approaches are related in some way, but I’m not sure of the pros and cons of each. What I do know is that The Synthesis Toolkit has implementations of BLIT saw and square wave algorithms in C++, which shouldn’t be too hard to port. There is also some related code in the Music DSP archive. This article (using Reaktor) seems one of the easiest ways to understand the approach - it’s not exactly in my comfort zone! ;D
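To make that slightly more concrete, here’s a minimal PolyBLEP sawtooth - the cheap polynomial member of the BLEP family, not the STK’s BLIT code (a sketch only, names are mine):

[code]
// Minimal PolyBLEP sawtooth: take the naive waveform and smooth each
// discontinuity with a 2-sample polynomial band-limited step correction.
public class PolyBlepSaw {
    private double phase;      // in [0,1)
    private double phaseInc;   // frequency / sampleRate

    public void setFrequency(double freq, double sampleRate) {
        phaseInc = freq / sampleRate;
    }

    public float nextSample() {
        double value = 2.0 * phase - 1.0;     // naive saw
        value -= polyBlep(phase, phaseInc);   // correct the jump
        phase += phaseInc;
        if (phase >= 1.0) phase -= 1.0;
        return (float) value;
    }

    // Polynomial approximation of a band-limited step residual.
    private static double polyBlep(double t, double dt) {
        if (t < dt) {               // just after the discontinuity
            t /= dt;
            return t + t - t * t - 1.0;
        } else if (t > 1.0 - dt) {  // just before the discontinuity
            t = (t - 1.0) / dt;
            return t * t + t + t + 1.0;
        }
        return 0.0;
    }
}
[/code]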

I’ve also seen a few posts suggesting that suitably optimized real-time generation of waveforms might beat wavetables, again on the basis of cache misses - not sure how that pans out in practice.

I’m intrigued as to why you find a callback API trickier than a blocking one? Also slightly concerned about what you mean by “synchronize” things - I’m assuming not in the sense of locks! Either way, one article I’d highly recommend reading on low-latency audio programming is http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing - he’s written a few other interesting articles around communicating with real-time audio that might be worth a read too.

[quote]The problem with doing that is that you’d need multiple waveforms (possibly one per octave); otherwise, when you pitch the table up you’ll introduce aliasing, and when you pitch down you’ll be missing more and more of the harmonics that give the sound its richness.
[/quote]
Yes, in hindsight it was a bit silly to even go that way :slight_smile:

[quote]I’m intrigued as to why you find a callback API trickier than a blocking one? Also slightly concerned about what you mean by “synchronize” things - I’m assuming not in the sense of locks!
[/quote]
It’s not really trickier in itself; in fact it’s easier. And yes, I meant synchronization in the sense of locks.
But the way it works now is that I have an object called ‘Context’ with which DSP units that need updating are registered (for example, to generate the next sample of an oscillator).
A separate thread then updates the Context. That was fine for blocking I/O such as javax.sound, but with ASIO those updates need to be synchronized with ASIO’s thread.

It might be cleaner to refactor this a bit so that updating the Context is driven by the audio I/O units themselves instead of by this separate ‘I-know-nothing’ thread that blindly updates the Context. That way, no synchronization/blocking would be necessary anymore.
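Something like this shape is what I have in mind (illustrative names, not my actual API):

[code]
// Sketch of the refactor: the audio output unit drives the Context from
// inside the audio callback, so the separate update thread and its
// locking disappear. Names are illustrative, not the real ModSyn API.
interface Context {
    void update();            // advance every registered DSP unit one sample
    float currentOutput();    // the sample produced by the final unit
}

class CallbackDrivenOutput {
    private final Context context;

    CallbackDrivenOutput(Context context) { this.context = context; }

    /** Called by the audio host (ASIO, a JavaSound wrapper, ...) per buffer. */
    void fillBuffer(float[] out) {
        for (int i = 0; i < out.length; i++) {
            context.update();
            out[i] = context.currentOutput();
        }
    }
}
[/code]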
On the other hand, it works fine as it is now so I’m not exactly in a hurry there :slight_smile:

Yes, don’t do this! Run everything off the callback thread. It’s possible to wrap a blocking API to provide a callback API; it’s not possible to do the opposite without adding overhead and potential threading issues. You’d be better off building a callback system on top of JavaSound. Feel free to have a look at this code, which does just that and has a few other tricks to improve performance (timing loop, etc.).
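In stripped-down form (and minus the timing tricks the linked code adds), wrapping JavaSound’s blocking write() behind a callback looks roughly like this:

[code]
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.SourceDataLine;

// Sketch: JavaSound's blocking write() paces the loop, and the renderer
// is called back for each buffer of samples.
public class CallbackPlayer {

    public interface Renderer {
        void render(float[] buffer); // fill buffer with the next samples
    }

    public static void run(Renderer renderer, float sampleRate, int bufferFrames)
            throws Exception {
        AudioFormat fmt = new AudioFormat(sampleRate, 16, 1, true, false);
        SourceDataLine line = AudioSystem.getSourceDataLine(fmt);
        line.open(fmt, bufferFrames * 4);        // small-ish line buffer
        line.start();
        float[] samples = new float[bufferFrames];
        byte[] bytes = new byte[bufferFrames * 2];
        while (!Thread.interrupted()) {
            renderer.render(samples);            // the "callback"
            for (int i = 0; i < samples.length; i++) {
                int s = (int) (Math.max(-1, Math.min(1, samples[i])) * 32767);
                bytes[2 * i] = (byte) s;         // little-endian 16-bit
                bytes[2 * i + 1] = (byte) (s >> 8);
            }
            line.write(bytes, 0, bytes.length);  // blocks, paces the loop
        }
        line.drain();
        line.close();
    }
}
[/code]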

At low CPU usage you might get away with it, but it will eventually bite you! :wink:

Are you suggesting that eventually using all those delicious CPU cores for fancy real-time audio processing will never really work and we’ll be stuck in single-threaded land forever? :persecutioncomplex:

I’ve read the “Time Waits for Nothing” article and wondered the same thing. It is difficult for me to understand how having one thread do the work of two is more performant than having two threads that work in parallel but just occasionally have to synchronize.

One thought is that a modern CPU/compiler can more efficiently extract parallelism from within a single thread than from the same work split across two threads that have to interact at certain points. But I don’t know if that is a sufficient explanation.

What I’ve done in response to reading this article is the following:
(1) made a study of functional programming and attempted to use things like immutables where possible (I’m thinking of the EventSystem I wrote, where the constituent “AudioCommands” and frame times of AudioEvents are final);
(2) in some instances, programmed out some flexibility that would have required synchronizing or using a synchronized collection (e.g., my mixer can only have tracks added or removed when it is not running);
(3) made use of concurrent collections when interacting with the main audio thread: e.g., the collection that holds the event schedule is a ConcurrentSkipListSet, which allows me to add to it without danger of a ConcurrentModificationException;
(4) made use of volatile variables for all “instruction” or “settings” changes to the various effects and synths (a sketch of (3) and (4) follows this list);
(5) optimized all code in the main audio loop for speed of execution.
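Roughly, (3) and (4) look like this (simplified, with made-up names):

[code]
import java.util.concurrent.ConcurrentSkipListSet;
import java.util.concurrent.atomic.AtomicLong;

// Rough shape of (3) and (4) above - simplified, illustrative names.
public class SynthVoice {

    // (4) a setting written from the GUI thread, read in the audio loop
    private volatile float cutoff = 1000f;

    // (3) event schedule shared with the GUI/sequencer thread; adding
    // while the audio thread iterates won't throw
    // ConcurrentModificationException
    private final ConcurrentSkipListSet<AudioEvent> schedule =
            new ConcurrentSkipListSet<>();

    public void setCutoff(float hz) { cutoff = hz; }          // GUI thread
    public void schedule(AudioEvent e) { schedule.add(e); }   // GUI thread

    // audio thread: run all events due at this frame, then render
    void processFrame(long frameTime) {
        for (AudioEvent e; (e = schedule.pollFirst()) != null; ) {
            if (e.frameTime > frameTime) {
                schedule.add(e);   // not due yet: put it back and stop
                break;
            }
            e.command.run();
        }
        float c = cutoff;          // read the volatile once per frame
        // ... render using c ...
    }

    public static class AudioEvent implements Comparable<AudioEvent> {
        private static final AtomicLong SEQ = new AtomicLong();
        final long frameTime;
        final long order = SEQ.getAndIncrement(); // tie-break equal frames
        final Runnable command;

        public AudioEvent(long frameTime, Runnable command) {
            this.frameTime = frameTime;
            this.command = command;
        }

        @Override
        public int compareTo(AudioEvent o) {
            int c = Long.compare(frameTime, o.frameTime);
            return c != 0 ? c : Long.compare(order, o.order);
        }
    }
}
[/code]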

Now, a volatile variable or a ConcurrentSkipListSet still has overhead of its own. But is that overhead always going to be less than using synchronized? I don’t know if that is necessarily true.

It is very easy in this business (as with many things in life) to glom onto a principle and overuse it. I wish I had a better understanding of synchronizing and parallel computing, but despite reading “Java Concurrency in Practice” I feel like there is a lot that I am taking on faith.

One thing I’m interested in trying with the audio mixer: fork/join for the separate tracks. But in truth, so little is done in a given frame that the overhead is probably not justified. This might be a solid argument, though, for having the audio mixer increment by a buffer’s worth of frames rather than by single frames.
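If I did try it, I imagine a per-buffer version would look something like this (made-up names, untested):

[code]
import java.util.concurrent.RecursiveAction;

// Sketch of per-track fork/join over a whole buffer of frames - the
// "increment by a buffer" variant, since forking per single frame would
// drown in overhead. Invoke with a ForkJoinPool:
//   pool.invoke(new MixTask(tracks, trackOut, 0, tracks.length));
public class MixTask extends RecursiveAction {
    private final Track[] tracks;
    private final int from, to;       // track range handled by this task
    private final float[][] trackOut; // one buffer per track

    public MixTask(Track[] tracks, float[][] trackOut, int from, int to) {
        this.tracks = tracks;
        this.trackOut = trackOut;
        this.from = from;
        this.to = to;
    }

    @Override
    protected void compute() {
        if (to - from <= 1) {
            if (to > from) {
                tracks[from].render(trackOut[from]); // fill this track's buffer
            }
            return;
        }
        int mid = (from + to) / 2;
        invokeAll(new MixTask(tracks, trackOut, from, mid),
                  new MixTask(tracks, trackOut, mid, to));
    }

    public interface Track {
        void render(float[] buffer);
    }
}
// Afterwards, the audio thread sums trackOut[*] into the output buffer.
[/code]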

I’m not saying that multi-core audio processing isn’t doable - there is obviously software already that does it. I would say it shouldn’t be done naively: lots of core library stuff in the JVM is probably unsuitable, and it requires a deep understanding of what’s going on and whether it’s worth it. In particular, don’t assume that having multiple threads will instantly make things more performant than being single-threaded, considering the overheads of managing them. Also don’t assume that performance (throughput) is what matters most - the point of that article is that guaranteed execution time is essential. e.g. Praxis LIVE always runs with the incremental garbage collector, and a few people on here recommend it for stable video framerates as well as for audio - this GC has less throughput. We are basically trying to get close to real-time semantics (and AFAIK there is not much in the way of hard real-time stuff that supports multiple cores!).

Don’t assume that sharing data between threads requires synchronization in the synchronized / blocking fashion either - there are various lock-free / wait-free ways of doing that too.

This collection is not synchronized in the typical Java sense - it is non-blocking. I would question whether you need to order events within the same collection that handles cross-thread communication, though. I’m generally in favour of a single access point to the audio thread, using something like a ConcurrentLinkedQueue.
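A bare-bones sketch of that single access point (illustrative, not from any particular codebase):

[code]
import java.util.concurrent.ConcurrentLinkedQueue;

// Single access point into the audio thread: other threads post
// Runnables; the audio thread drains them at the top of each buffer.
public class AudioMessageQueue {
    private final ConcurrentLinkedQueue<Runnable> queue =
            new ConcurrentLinkedQueue<>();

    /** Any thread: request a change to audio state. */
    public void invokeLater(Runnable task) {
        queue.offer(task);
    }

    /** Audio thread only: apply all pending changes, then render. */
    public void drainPending() {
        Runnable task;
        while ((task = queue.poll()) != null) {
            task.run();   // runs on the audio thread, so no locking needed
        }
    }
}
[/code]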

Volatile is non-blocking, but there are problems with using it like this (as opposed to passing in Runnables as above). While non-blocking, volatiles act as memory barriers, which means caches may be flushed when hitting one, and certain optimizations involving reordered instructions may not happen. They also suffer from a lack of atomicity (you can’t guarantee two instructions happen together), and they reduce some possibilities for optimization (e.g. making it harder to switch off elements of a processing graph that aren’t required). I wrote some more about this with regard to the Praxis LIVE architecture here if you’re interested.
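The atomicity point in particular is easy to trip over - a contrived illustration, with hypothetical fields, reusing the queue sketch above:

[code]
// Illustration of the atomicity problem: the audio thread can observe
// the new frequency paired with the old waveform, because the two
// volatile writes are independent.
public class PatchSettings {
    volatile float frequency = 440f;
    volatile int waveform = 0;

    // GUI thread: nothing stops the audio thread rendering a buffer
    // between these two writes.
    void setPatch(float freq, int wave) {
        frequency = freq;
        waveform = wave;
    }

    // Posting a single Runnable (to the queue sketched above) keeps the
    // pair together from the audio thread's point of view.
    void setPatchViaQueue(AudioMessageQueue queue, float freq, int wave) {
        queue.invokeLater(() -> { frequency = freq; waveform = wave; });
    }
}
[/code]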

The fork/join idea seems similar to the way some pro-audio software approaches this. The important thing in parallelizing would be ensuring that the different cores do not rely on data from each other, so separate mixer tracks would be a logical way to split the work. You’d probably want to write a specialized fork/join mechanism that tracks the processing time required for each mixer track, to try to spread them across available cores, and not have more threads than cores running. You’d also want to look at an efficient non-blocking communication model for the worker threads, and probably have the processing threads aware of time in the stream - thinking that if processing completes close to the time the next audio buffer is available, you’d want to spin-wait rather than let the thread be descheduled.
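That last point in sketch form (the threshold and names are made up):

[code]
import java.util.concurrent.locks.LockSupport;

// Sketch of the "spin near the deadline" idea: park while there's plenty
// of time, busy-spin for the last stretch so the thread isn't
// descheduled just before the next buffer arrives.
public class DeadlineWait {
    static final long SPIN_WINDOW_NANOS = 100_000; // last ~0.1 ms: spin

    public static void awaitUntil(long deadlineNanos) {
        long remaining;
        while ((remaining = deadlineNanos - System.nanoTime()) > 0) {
            if (remaining > SPIN_WINDOW_NANOS) {
                LockSupport.parkNanos(remaining - SPIN_WINDOW_NANOS);
            }
            // else fall through and spin: being rescheduled here would
            // likely cost more than the remaining wait
        }
    }
}
[/code]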

@erikd - apologies if this is diverting your (forum) thread somewhat. :persecutioncomplex: With specific regard to this project, I’d recommend sticking with what I said earlier about running everything off the primary callback thread. If you get to the point of running multiple DSP graphs at once (i.e. with no dependencies except on the final mix) then splitting onto worker threads might be worth it. Be aware of one JVM-specific issue though, which is probably a consideration with ASIO (it is definitely the case with JACK): the callback thread into the VM has priority settings that are not possible to achieve from within Java without resorting to JNI. It would be important that any worker threads also gain those priority settings - I haven’t tried creating a Thread from the callback yet to see if the settings get inherited.

[quote]@erikd - apologies if this is diverting your (forum) thread somewhat.
[/quote]
Not at all! :smiley:
I think it’s all very interesting and I learn a lot from these discussions.

To be honest, multi-threading isn’t really a concern for the time being, but I can imagine that at a later stage it might become useful to fork certain heavy tasks that don’t require inter-thread communication - for example, spreading complete voices of a polyphonic synth across multiple cores.
It might not be worth it now, but given the trend of CPUs getting more and more cores, it’s an interesting subject.

For now, I’ll follow your advice and simply run everything off ASIO’s thread.
I do understand the implications of real-time audio, and my project has largely been developed with these idioms in mind, but to be honest I’m not really experienced with doing real-time audio stuff multi-threaded.

Anyway, I have made some progress in other areas :slight_smile:
It’s now possible to save a sub-selection of a patch as a ‘meta’-unit. This might make things a bit more manageable when your networks get more complex.

I’m also busy creating a higher-quality version of my main ‘oscillator’ unit to reduce aliasing.
The extra work it does now is oversampling and filtering. Though rather expensive, it already sounds a lot better, but at higher frequencies there’s still a little bit of aliasing, so there’s room for improvement.
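In outline, the idea is something like this (much simplified - not my actual unit, and a real implementation wants a better decimation filter than this crude averager):

[code]
// Simplified shape of an oversampled oscillator: render N sub-samples
// per output sample, low-pass them, keep every Nth. The boxcar average
// used here is a deliberately crude low-pass filter.
public class OversampledOsc {
    private static final int FACTOR = 4;   // 4x oversampling
    private double phase, phaseInc;        // phaseInc at the oversampled rate

    public void setFrequency(double freq, double sampleRate) {
        phaseInc = freq / (sampleRate * FACTOR);
    }

    public float nextSample() {
        float acc = 0;
        for (int i = 0; i < FACTOR; i++) {
            acc += naiveSaw();             // generate at 4x rate
        }
        return acc / FACTOR;               // crude low-pass + decimate
    }

    private float naiveSaw() {
        float v = (float) (2.0 * phase - 1.0);
        phase += phaseInc;
        if (phase >= 1.0) phase -= 1.0;
        return v;
    }
}
[/code]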

Ok, I’ve followed your advice and made the audio single-threaded. It seems to behave better when doing ‘expensive’ stuff in the GUI, so that’s good.
Anyway, I think it’s better this way because it’s simpler and that’s usually a Good Thing :slight_smile:

I’m currently playing around with creating a vocoder with this thing.
It’s a good test of using MIDI and audio I/O together.
I don’t have a proper spectrum-analysis unit yet, so I’m working around that for now with band-pass filters and envelope followers, but it’s starting to sound quite cool 8)

Anyway, I think I’ll move this to SourceForge or something soon.
I’m a bit nervous about it because I know there are issues in the code that many software architects would sniff at, and it’s all very much a work-in-progress hobby project, so I hope it won’t attract too much attention at this point ;D
I’m quite happy with the ‘core’ part of it (the pure DSP part that doesn’t depend on J2SE), though: it’s simple and relatively clean.

small disaster :emo:

So I decided to upload my project to SF and created a project there.
I chose ‘Share Project’ in Eclipse, which then decided to completely destroy my workspace and remove 75% of everything.
So thank you, Eclipse Git team provider >:(

My last backup is a few days old, so it’ll take me a day or so to get back to where I was.

Ok, FWIW it’s here: https://sourceforge.net/projects/jmodsyn/

Disclaimer: I know it’s all rough around the edges and can obviously be improved in many ways. At this point it’s probably not all that useful for most people, and many will say the code is incredibly dirty.
It’s basically my toy project, but see it as a bunch of code that might be useful one way or another. If you want to know how you can implement a filter, pitch-shifter, chorus, reverb, etc.: there’s lots of code here.

If you want to try the editor, try running the class org.modsyn.editor.PatchEditor. It will create a folder called ‘ModSyn’ in your user.home directory. You can copy the files in ModSyn-j2se/example-patches there for a few examples.
But please be aware: these examples depend on ASIO, so they’re Windows-only and depend on you having an ASIO driver installed (if you don’t, install ASIO4ALL; it works surprisingly well), and some patches use MIDI.
It’s on my todo list to remove the distinction between ASIO-IN/OUT and JavaSound-IN/OUT and to resolve the best audio implementation based on platform, configuration, and whether or not you have ASIO installed (or another audio host that doesn’t suck as hard as JavaSound). Currently, JavaSound (Audio-IN/OUT in the editor) doesn’t work at all.

With all that said, I’m actually using this for my own band so for me it’s already useful :). It’s quite nice to have this box of tools available if your own synth or effects pedals or whatever don’t quite support the sound that you’re after. And it’s quite easy to add more DSP objects to this box of tools.

[quote]It’s on my todo list to remove the distinction between ASIO-IN/OUT and JavaSound-IN/OUT and to resolve the best audio implementation
[/quote]
This is done now; if ASIO is unavailable for whatever reason (not running on Windows, no ASIO drivers installed), it should fall back to JavaSound.
But again, if you happen to run Windows, install ASIO4ALL or the ASIO driver that came with your audio hardware; it’ll be better.
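In outline, the selection amounts to something like this (illustrative names, not the exact ModSyn code):

[code]
// Outline of the fallback: try ASIO first on Windows, and fall back to
// JavaSound if that fails for any reason. Class names are made up.
public class AudioIOFactory {

    interface AudioIO { /* start(), stop(), ... */ }

    static class AsioIO implements AudioIO {
        AsioIO() { /* throws if no ASIO driver / native lib available */ }
    }

    static class JavaSoundIO implements AudioIO { }

    public static AudioIO create() {
        boolean windows = System.getProperty("os.name", "")
                                .toLowerCase().contains("win");
        if (windows) {
            try {
                return new AsioIO();      // preferred: lower latency
            } catch (Throwable t) {
                // missing drivers or native library: fall through
            }
        }
        return new JavaSoundIO();         // portable fallback
    }
}
[/code]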