The problem with doing that is that you’d need multiple waveforms (possibly one per octave). Otherwise, when you pitch the table up you’ll bring in aliasing, and when you pitch it down you’re missing more and more of the harmonics that give the sound its richness.
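For what it’s worth, here’s a rough Java sketch of how those per-octave tables could be built for a sawtooth - sum only the harmonics that stay under Nyquist for the highest pitch each table will serve. All the names and constants are mine, purely for illustration:

```java
// Sketch: one band-limited sawtooth table per octave. Each table only
// contains the harmonics that stay below Nyquist at that octave's top pitch.
public class SawTables {
    static final int TABLE_SIZE = 2048;
    static final double SAMPLE_RATE = 44100.0;

    // tables[o] is safe for fundamentals up to baseFreq * 2^(o+1)
    static double[][] build(double baseFreq, int octaves) {
        double[][] tables = new double[octaves][TABLE_SIZE];
        for (int o = 0; o < octaves; o++) {
            double topFreq = baseFreq * Math.pow(2, o + 1);
            int harmonics = (int) (SAMPLE_RATE / 2.0 / topFreq); // stay below Nyquist
            for (int h = 1; h <= harmonics; h++) {
                double amp = 1.0 / h; // sawtooth spectrum falls off as 1/n
                for (int i = 0; i < TABLE_SIZE; i++) {
                    tables[o][i] += amp * Math.sin(2.0 * Math.PI * h * i / TABLE_SIZE);
                }
            }
        }
        return tables;
    }
}
```

At playback time you’d pick the table whose ceiling covers the note being played, ideally crossfading between neighbouring tables to hide the switch.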
Oversampling and filtering is definitely one approach among many. As well as the BLEP approach @BurntPizza mentioned, I’ve been wondering about BLIT (band-limited impulse train). I know those approaches are related in some way, but I’m not sure of the pros and cons of each. What I do know is that The Synthesis Toolkit has implementations of BLIT saw and square wave algorithms in C++, which shouldn’t be too hard to port. There is also some related code in the Music DSP archive. This article (using Reaktor) seems one of the easier explanations of the approach - it’s not exactly in my comfort zone! ;D
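To make the BLEP idea concrete, here’s a minimal polyBLEP sawtooth in Java - the cheap polynomial variant of BLEP, not a port of the STK BLIT code, and untested:

```java
// Sketch: polyBLEP sawtooth. A naive saw aliases because of the hard step
// at phase wrap; polyBLEP smooths just the two samples around that step
// with a quadratic, which suppresses most of the audible aliasing.
public class PolyBlepSaw {
    private double phase = 0.0;  // in [0, 1)
    private final double dt;     // phase increment per sample = freq / sampleRate

    public PolyBlepSaw(double freq, double sampleRate) {
        dt = freq / sampleRate;
    }

    // 2-sample polynomial correction around the discontinuity at phase 0
    private double polyBlep(double t) {
        if (t < dt) {                 // sample just after the wrap
            t /= dt;
            return t + t - t * t - 1.0;
        } else if (t > 1.0 - dt) {    // sample just before the wrap
            t = (t - 1.0) / dt;
            return t * t + t + t + 1.0;
        }
        return 0.0;
    }

    public double next() {
        double value = 2.0 * phase - 1.0; // naive (aliasing) saw in [-1, 1)
        value -= polyBlep(phase);         // subtract the step residual
        phase += dt;
        if (phase >= 1.0) phase -= 1.0;
        return value;
    }
}
```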
I’ve also seen a few posts suggesting that suitably optimized real-time generation of waveforms might beat wavetables, again on the grounds that table lookups can cause cache misses - I’m not sure how that pans out in practice.
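As one example of table-free generation, the old “magic circle” recurrence produces a sine with two multiply-adds per sample and no memory traffic beyond a couple of doubles - a sketch, with my own naming:

```java
// Sketch: coupled-form ("magic circle") sine oscillator. No table lookups;
// the two state variables rotate around the unit circle each sample.
public class RecursiveSine {
    private final double eps;
    private double a = 1.0, b = 0.0; // quadrature pair, roughly (cos, sin)

    public RecursiveSine(double freq, double sampleRate) {
        eps = 2.0 * Math.sin(Math.PI * freq / sampleRate);
    }

    public double next() {
        b += eps * a;  // staggered update keeps the recurrence stable
        a -= eps * b;
        return b;      // approximately sin(2*pi*freq*n/sampleRate)
    }
}
```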
I’m intrigued as to why you find a callback API trickier than a blocking one. I’m also slightly concerned by what you mean by “synchronize” things - I’m assuming not in the sense of locks! Either way, one article I’d highly recommend on low-latency audio programming is http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing He’s written a few other interesting articles on communicating with real-time audio code that might be worth a read too.
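In the spirit of those articles, the simplest lock-free pattern I know is publishing control values atomically so the audio callback never blocks - a hypothetical sketch, not Bencina’s code:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: passing a parameter to the audio callback without locks.
// The GUI thread publishes the new frequency atomically; the audio
// callback reads it each buffer. Wait-free and allocation-free on
// the audio thread.
public class FreqControl {
    private final AtomicLong freqBits =
            new AtomicLong(Double.doubleToLongBits(440.0));

    // called from the GUI / control thread
    public void setFrequency(double freq) {
        freqBits.set(Double.doubleToLongBits(freq));
    }

    // called from the audio callback
    public double currentFrequency() {
        return Double.longBitsToDouble(freqBits.get());
    }
}
```

For anything richer than single values (note events, say), his articles cover single-reader/single-writer FIFOs, which keep the same no-locks, no-allocation discipline on the audio thread.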