Hurrah!
Cas 
Update: added LFO 
[EDIT]: and FM Synthesis
Hmmmm… a little webstart demo which plays some random sounds would be nice 
And sequencer… eventually I could support you with some graphics there… after I’m through my scary schedule 
I’ll create a little webstart demo when I have some time 
[quote]And sequencer… eventually I could support you with some graphics there… after I’m through my scary schedule
[/quote]
Thanks :), but I’m thinking about loading an existing format like MIDI files and/or ‘tracker’ files. Would save lots of time, and access to existing tools (and music).
Another update:
A rather useless(?) feature, but cool nevertheless; I added support for audio input. Running your voice through a ring modulator is freaky, hehe
* erikd talking through microphone, hearing an alien chat back through the speaker * ;D
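For anyone curious what the ring modulator is doing to the mic signal: it just multiplies the input by a carrier oscillator, which produces the sum and difference frequencies and gives that robotic/alien sound. A minimal sketch of the idea (hypothetical code, not the project's actual classes):

```java
// Hypothetical sketch of ring modulation (not the project's actual code):
// multiply each input sample by a sine carrier. The output contains the
// sum and difference frequencies of input and carrier -- the "alien voice".
public class RingModSketch {
    static float[] ringModulate(float[] input, float carrierHz, float sampleRate) {
        float[] out = new float[input.length];
        for (int i = 0; i < input.length; i++) {
            float carrier = (float) Math.sin(2.0 * Math.PI * carrierHz * i / sampleRate);
            out[i] = input[i] * carrier;
        }
        return out;
    }
}
```

The same multiply would run per-sample on the live microphone stream.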
[quote]I’ll create a little webstart demo when I have some time 
[/quote]
Sweet 
[quote]Thanks :), but I’m thinking about loading an existing format like MIDI files and/or ‘tracker’ files. Would save lots of time, and access to existing tools (and music).
[/quote]
Hmyea… I guess that makes sense. Btw, is there a good free MIDI editor?
[quote]Another update:
A rather useless(?) feature, but cool nevertheless; I added support for audio input. Running your voice through a ring modulator is freaky, hehe
* erikd talking through microphone, hearing an alien chat back through the speaker * ;D
[/quote]
Webstart webstart 
I haven’t used it myself (I use Cubase SX & Reason), but I heard this one is pretty good:
[quote]I haven’t used it myself (I use Cubase SX & Reason), but I heard this one is pretty good:
http://www.hitsquad.com/smm/programs/MusicStuProd/
[/quote]
Didn't have much fun with it. Open/import/whatever didn't work… however, I could drag MIDIs into it… and 10 minutes later, a total crash. 20 minutes of rendering down the drain… oh well.
I guess I should just give up making midis 
But really, there's no reason to give up on MIDI. A good MIDI sequencer is so much better and more versatile than 'trackers' (and MIDI files are usually a lot smaller, since there's usually no sound data in them).
EDIT: a MIDI sequencer in java:
http://www.sam-con.com/dj_gopz/toc.html
Little update:
I’m doing a major change in the API 
I used to process little buffers of 32 samples. This was because I eventually need to send a buffer of a given size to the sound card to play, so I thought I'd just create those buffers and mix, filter, etc. along the way, finally sending the same buffer (after converting it to bytes) to the sound card.
Every 32 samples (or whatever buffer size you set), the controllers (volume settings of tone generators, envelopes, etc.) were updated, adding a tiny bit of 'zipper noise' in the process.
Bad Idea :-/
In the process I created unneeded overhead, because of all the arrays everywhere. Java's bounds checks do incur a performance hit (in the client VM, anyway). The need to copy arrays in certain places (in panning, for example, which has to split a mono signal to stereo) doesn't help there either.
Also, the API became a little more complex than needed, because I had to have AudioInput/AudioOutput interfaces (writing arrays) and ControlInput/ControlOutput interfaces (which are of course not compatible; controls write just a single float).
So, I got rid of all those little arrays and now have only arrays where absolutely needed (sending samples to the sound card, getting sound from the sound card etc).
And, I can get rid of the control interfaces, because there’s now just one type of signal that can be used for both audio and controllers, which means more flexibility in the connections you can make.
So in short, the score is: less overhead, a simpler API, and more flexible connections.
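The single-signal-type design described above could look something like this. This is a hypothetical sketch with invented names, not the project's actual API; it just illustrates how one per-sample interface can serve both audio and control duty:

```java
// Hypothetical sketch of a unified per-sample signal interface, where audio
// and control signals share one type (all names invented for illustration).
interface Signal {
    float next(); // produce the next sample, one float at a time -- no buffers
}

// A constant can stand in anywhere a Signal is expected.
class ConstSignal implements Signal {
    private final float value;
    ConstSignal(float value) { this.value = value; }
    public float next() { return value; }
}

// An amplifier's gain input is just another Signal -- it could be an LFO,
// an envelope, or a constant, with no separate "control" interface needed.
class AmplifierSketch implements Signal {
    private final Signal input, gain;
    AmplifierSketch(Signal input, Signal gain) { this.input = input; this.gain = gain; }
    public float next() { return input.next() * gain.next(); }
}
```

Because everything speaks the same interface, any output can be patched into any input, which is exactly the flexibility gain described above.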
From the above, it sounds like you are running control signals at the audio rate.
You might want to have a look at SuperCollider for ideas. SuperCollider has a separate audio rate and control rate. Control rate changes are interpolated at the audio rate to remove zipper artifacts…
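The SuperCollider approach can be sketched roughly like this (hypothetical code, not SuperCollider's or this project's actual implementation): a control value is updated once per block, and a per-sample ramp interpolates toward it so the output glides instead of stepping.

```java
// Hypothetical sketch of control-rate smoothing: a value updated once per
// control block is linearly interpolated up to audio rate, so the signal
// ramps instead of stepping (the stepping is what causes zipper noise).
class ControlRamp {
    private float current, step;
    private final int blockSize; // audio samples per control update

    ControlRamp(float initial, int blockSize) {
        this.current = initial;
        this.blockSize = blockSize;
    }

    // Called once per control block with the new target value.
    void setTarget(float target) {
        step = (target - current) / blockSize;
    }

    // Called once per audio sample; glides toward the target.
    // (A real implementation would clamp at the target instead of overshooting.)
    float next() {
        current += step;
        return current;
    }
}
```

With blockSize = 4 and a jump from 0 to 1, the output becomes 0.25, 0.5, 0.75, 1.0 instead of a single step.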
I’ve not had the chance to dig into your project yet, but that might be something to look into if possible…
Yes, I am running control signals at the audio rate now. I did think about interpolation (I used to have some interpolation here and there; there’s still an interpolator class which I used previously), but I think I’m going to keep it this way for a while.
Control signals are most of the time cheap operations anyway (not much more expensive than interpolation I guess), and getting rid of the separation between controls and audio gains so much more flexibility and simplicity… I get to delete some classes, and get a more powerful api at the same time 
But let’s see how it turns out.
Maybe I’ll have to make some changes in controls again later, if control signals turn out to be a performance bottleneck. But then I think I would change something in the implementations of some controllers to speed them up, not the interfaces / API design.
http://www.mycgiserver.com/~movegaga/SynDemo1.jnlp
A little Web Start demo with some sliders to play with.
When you get distorted sound, turn down the ‘gain’ a little.
I noticed that in this demo I sometimes get an ArrayIndexOutOfBoundsException in an oscillator. Which should never happen, according to the code. Maybe some threading issue or something :-/
Ooooooh neatness ;D
weeeoooob weeeeooooob weeeeoooob haha great 
Here’s another one: 
http://www.mycgiserver.com/~movegaga/SynDemo2.jnlp
Make sure you select your microphone for recording in your Volume Control -> Options -> Properties -> recording (windows).
Then talk in your mic and hear HAL gone crazy ;D
Yeah… the demo is cool… Keep it going…
Make sure that you can construct patches programmatically!
All of my GUI work with Scream should work great with your project. Plus eventually you might consider using OSC, Open Sound Control, to get your synth running via the network; Scream already has a client based OSC framework; I’ll be extending it with server capability after J1; should be as easy as defining a protocol for your synth and interfacing with Scream…
I tipped off someone who might be interested in contributing to your project… So who knows… Have you made a web page for it yet?
for some reason when I was messing with the sounds, almost every one of them reminded me of Excite Bike for the Nintendo 
Good job, keep up the work
[quote]Make sure that you can construct patches programmatically!
[/quote]
This is some code of the first demo:
// Create the modules:
Oscillator osc1 = new Oscillator(WaveTables.SAWTOOTH);
Oscillator osc2 = new Oscillator(WaveTables.SQUARE);
LowPassFilter lpf1 = new LowPassFilter();
LowPassFilter lpf2 = new LowPassFilter();
LFO lfo1 = new LFO(WaveTables.SINUS);
LFO lfo2 = new LFO(WaveTables.SINUS);
Amplifier amp1 = new Amplifier();
Amplifier amp2 = new Amplifier();
PanPot pan1 = new PanPot();
PanPot pan2 = new PanPot();
Mixer mixL = new Mixer();
Mixer mixR = new Mixer();
ToJavaSound out = new ToJavaSound(ToJavaSound.STEREO, 32, 8196);
// Wire them up, from the output back to the oscillators:
mixL.connectTo(out.inputL);
mixR.connectTo(out.inputR);
mixL.addChannel(pan1.outputL);
mixL.addChannel(pan2.outputL);
mixR.addChannel(pan1.outputR);
mixR.addChannel(pan2.outputR);
amp1.connectTo(pan1);
amp2.connectTo(pan2);
// The LFOs modulate the filter cutoffs:
lfo1.connectTo(lpf1.cutOffControl);
lfo2.connectTo(lpf2.cutOffControl);
lpf1.connectTo(amp1);
lpf2.connectTo(amp2);
osc1.connectTo(lpf1);
osc2.connectTo(lpf2);
Is this what you mean?
[quote]All of my GUI work with Scream should work great with your project. Plus eventually you might consider using OSC, Open Sound Control, to get your synth running via the network; Scream already has a client based OSC framework; I’ll be extending it with server capability after J1; should be as easy as defining a protocol for your synth and interfacing with Scream…
[/quote]
Scream looks fantastic. 
Is OSC a java framework?
[quote]I tipped off someone who might be interested in contributing to your project… So who knows… Have you made a web page for it yet?
[/quote]
No web page yet. I’m thinking about either a sourceforge.net project or a java.net project.
[quote]Here’s another one: 
http://www.mycgiserver.com/~movegaga/SynDemo2.jnlp
Make sure you select your microphone for recording in your Volume Control -> Options -> Properties -> recording (windows).
Then talk in your mic and hear HAL gone crazy ;D
[/quote]
Hehe, absolutely odd ;D
Open Sound Control is an application protocol (network based and otherwise).
A good overview can be found here:
http://www.cnmat.berkeley.edu/Research/NIME2003/NIME03_Wright.pdf
The main page is here:
http://www.cnmat.berkeley.edu/OpenSoundControl/
As for my Java implementation, I started with Chandrasekhar Ramakrishnan's code, but extended it for NIO and efficiency (packet conservation).
http://www.mat.ucsb.edu/~c.ramakr/illposed/javaosc.html
Checking into OSC will give you an idea about being able to configure your synth programmatically.
The audio engine I am using, SuperCollider3, which is covered in the OSC PDF can only be addressed via OSC.
The goal of a programmatic interface is language independence. I am able to fully configure and control SuperCollider3 from Java over the network.
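To give a feel for what the wire format looks like, here is a minimal sketch of encoding an OSC 1.0 message per the spec (this is illustrative code, not JavaOSC or Scream's actual implementation): an address string, a type-tag string, then big-endian arguments, with each string zero-padded to a 4-byte boundary.

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Minimal sketch of OSC 1.0 message encoding, straight from the spec:
// address pattern, type-tag string (starting with ','), then arguments,
// all strings null-terminated and padded to a multiple of 4 bytes.
class OscSketch {
    // Encode a message with a single int32 argument, e.g. "/s_new" 42.
    static byte[] encode(String address, int arg) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writePaddedString(out, address);
        writePaddedString(out, ",i"); // type tags: one int32 argument
        byte[] intBytes = ByteBuffer.allocate(4).putInt(arg).array(); // big-endian
        out.write(intBytes, 0, 4);
        return out.toByteArray();
    }

    static void writePaddedString(ByteArrayOutputStream out, String s) {
        byte[] bytes = s.getBytes(StandardCharsets.US_ASCII);
        out.write(bytes, 0, bytes.length);
        // Null-terminate and pad to a 4-byte boundary.
        int pad = 4 - (bytes.length % 4);
        for (int i = 0; i < pad; i++) out.write(0);
    }
}
```

Since the whole message is just a self-describing byte packet sent over UDP or TCP, any language can drive the synth, which is exactly the language-independence point above.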
Essentially, it would be awesome to have an interface that exposes the functionality of your synthesis system without having to know the implementation details.
The example you listed of connecting up your synth is programmatic, but still tied more to OO programming than to a generic interface. For instance, you will probably want to break down your Oscillator class into particular implementations: SinOscillator, SawOscillator, etc.
Spending time to review SuperCollider3 could be very beneficial in gaining some ideas on your project. SC3 is under GPL, but you should be able to convert the code for basic unit generators (oscillators, filters, etc.) to Java and not break GPL… I’d like to see your synth be released under BSD if possible.
I’d be glad to continue discussing all of this… Time permitting I would also like to help out with implementation.