An Audio Control helper tool using a FIFO buffer

Hot off of debugging! :) The following code is used to control the variable-speed playback of audio files in this applet:
http://www.hexara.com/VSL/VSL2.htm
If you don’t have a .wav to use as a test, you can use this pad from Hexara: http://www.hexara.com/Audio/BFPad.wav
But you will need to download it first. Also, an overlap of 20000 works best. To use the applet, load a wav file (drop-down menu, top left), then hold down the mouse and drag: horizontal = playback speed, vertical = volume.

In the code below, the “SpeedEvent” object is just a package for a long (System.nanoTime()) and a double (the control value). I am packaging mouse events prior to storing and popping them in a LinkedBlockingQueue. The “RateController” is placed within the core loop of the SourceDataLine read/write, so that it is consulted once per frame. If there is an item in the queue, it pops it and calculates the per-frame “smoothing” increment from the current value, the event’s value, and the number of frames until the event is due, as well as setting the time of the next pop. I’m assuming a frame rate of 44100 fps when I “reconstitute” the mouse event sequence.

[EDIT: There is an improved version in post #7.]
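
For reference, the “SpeedEvent” mentioned above is nothing more than an immutable timestamp/value pair. A minimal version (matching the constructor and getters the RateController calls) would be:

public class SpeedEvent {
	private final long nanoTime;       // System.nanoTime() when the GUI event occurred
	private final double desiredSpeed; // the control value, e.g., playback speed

	public SpeedEvent(long nanoTime, double desiredSpeed) {
		this.nanoTime = nanoTime;
		this.desiredSpeed = desiredSpeed;
	}

	public long getNanoTime() { return nanoTime; }
	public double getDesiredSpeed() { return desiredSpeed; }

	@Override
	public String toString() {
		return "SpeedEvent[nanoTime=" + nanoTime
				+ ", desiredSpeed=" + desiredSpeed + "]";
	}
}

And the controller itself: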

import java.util.concurrent.LinkedBlockingQueue;

public class RateController {
	private static final double FRAMES_PER_NANO = 0.0000441;
	private final LinkedBlockingQueue<SpeedEvent> speedEvents;
	private SpeedEvent curSpEv;
	private long elapsedFrames;
	private long frameSpEv;
	private double speedIncrement;
	private double speed;
	private long nanoStartTime;
	
		
	// C O N S T R U C T O R
	RateController()
	{
		speedEvents = new LinkedBlockingQueue<SpeedEvent>();
	}
	
	// Methods
	public void init()
	{
		elapsedFrames = 0;
		frameSpEv = 0;
		nanoStartTime = System.nanoTime();
//		System.out.println("rc init, nanoStart:" + nanoStartTime);
	}
	
	private long convertNanoToAbsFrame(long nanoSpEv)
	{
		return (long)((nanoSpEv - nanoStartTime) * FRAMES_PER_NANO);
	}
	
	public void addSpeedEvent(SpeedEvent spe)
	{
		speedEvents.add(spe);
//		System.out.println("se added:" + spe.toString());
	}
	
	public double tick()
	{
		/*
		 * 	Key service.
		 */
		elapsedFrames++;
		
		if (elapsedFrames >= frameSpEv)
		{
			curSpEv = speedEvents.poll();
			if (curSpEv == null)
			{
//				System.out.println("pollnull");
				frameSpEv += 256; // arbitrary
				speedIncrement = 0; // no changes
			}
			else
			{
				// calculate new increment
				frameSpEv = convertNanoToAbsFrame(curSpEv.getNanoTime());
//				System.out.println("popped: " + curSpEv.toString());
//				System.out.println("current framecount:" + elapsedFrames);
//				System.out.println("frame of SpEv:" + frameSpEv);
				
				if (frameSpEv < elapsedFrames)
				{
					// late event: take its value directly
					speed = curSpEv.getDesiredSpeed();
					speedIncrement = 0; // clear any stale increment
					elapsedFrames = frameSpEv; // readjust starting point
				}
				else
				{
					speedIncrement = (curSpEv.getDesiredSpeed() - speed)
						/(frameSpEv - elapsedFrames);
//					System.out.println("current Speed: " + speed);
//					System.out.println("speed Incr:" + speedIncrement);
				}
			}
		} 
		speed += speedIncrement;
		
//		System.out.println("elapsed fr:" + elapsedFrames);
//		System.out.println("sp:" + speed);
		return speed;
	}

	public static void main(String[] args)
	{
		RateController testRC = new RateController();
		testRC.init();
		testRC.addSpeedEvent(new SpeedEvent(System.nanoTime(), 1.1));
		for (int i = 0; i < 100; i++)
		{
			System.out.println(testRC.tick());
		}
	}
}
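
To show where tick() fits, here is a rough sketch of the core read/write loop consulting the controller once per frame. The line setup, the stand-in source data, and the linear interpolation are placeholders for illustration, not the applet’s actual player code:

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.SourceDataLine;

public class PlayerLoopSketch {
	public static void main(String[] args) throws Exception {
		AudioFormat fmt = new AudioFormat(44100, 16, 1, true, false);
		SourceDataLine sdl = AudioSystem.getSourceDataLine(fmt);
		sdl.open(fmt);
		sdl.start();

		RateController rc = new RateController();
		rc.init();
		// seed an initial speed so the cursor advances
		rc.addSpeedEvent(new SpeedEvent(System.nanoTime(), 1.0));

		float[] wav = new float[44100];     // stand-in source data (one second of silence)
		double cursor = 0;                  // fractional read position in the source
		byte[] outBuf = new byte[1024 * 2]; // 1024 mono 16-bit frames per write

		while (cursor < wav.length - 1) {
			int frames = 0;
			while (frames < 1024 && cursor < wav.length - 1) {
				double speed = rc.tick();   // consult the controller once per frame
				int i = (int) cursor;
				double frac = cursor - i;
				// linear interpolation between neighboring source samples
				float sample = (float) (wav[i] * (1 - frac) + wav[i + 1] * frac);
				int val = (int) (sample * 32767);
				outBuf[frames * 2] = (byte) val;            // low byte (little-endian)
				outBuf[frames * 2 + 1] = (byte) (val >> 8); // high byte
				cursor += speed;            // variable-rate advance through the source
				frames++;
			}
			sdl.write(outBuf, 0, frames * 2); // blocks once the SDL's buffer is full
		}
		sdl.drain();
		sdl.close();
	}
}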

So, I’ve been figuring out a few things about sound controls. I’ve been posting questions here and there, asking about issues related to the GC and to how the JVM switches threads, and I think I’ve touched bottom, so to speak.

Below is a diagram to show how events occur. The first line illustrates a series of MouseControl events, but could very well be readings from a JSlider. I’m just numbering them as if they occur steadily in time (a gradual slide of the slider).

The second line shows how the JVM switches the processing of the SourceDataLine on and off. Here I want to focus on the core loop, where one reads from a data source (I have a TargetDataLine in my example program) and writes to the SDL. Note that this loop runs ahead of playback time.

The third line attempts to show how, in the playback, the Mouse Control values or Slider values will lag.

http://www.hexara.com/VSL/AudioControlTiming.jpg

In this example, in the second SDL processing block of time, only the value of “4” is present and is used while SDL Processing handles a chunk of future sound. When the SDL processing block resumes, the current Mouse Control value is 9 (skipping 5-8).

What strategies are there to make the transitions from one control value to another smoother?

  1. One can impose a maximum amount that a control value is allowed to change per update. In this scenario, the Mouse Control provides a “desired” or target value, and the actual control value is incrementally moved in that direction (see the sketch just after this list). The best case for smoothness is if you can do this update once per audio frame. This takes a fair bit of tuning, and is subject to “staircase” effects or slowness of response, depending upon whether you make your maximum delta too large or too small.

  2. Spread the desired change out over the course of the current buffer being processed. You know the current value, the desired value, and the number of frames in the buffer being processed. But this only works if the JVM is going to process exactly one buffer’s worth of data at a time. I tried this approach but kept having problems. When I tested my app, the core SDL read/write loop (one buffer’s worth of data) was usually run twice by the JVM. So the control value would transition in one read/write loop and just sit there in the second, creating the aforementioned “staircase” effect. When I made the buffer size 10 times smaller, instead of consulting the other threads more often, the JVM ran the core read/write loop approximately 20 times, making the staircasing even worse! (My JVM seemed to want to run the loop for about 2 to 7 msec, then go away and come back 125 msec later. OS = WinXP. Other OS/JVM combos will probably behave differently.)

  3. Spread the desired change out via storing and popping Mouse Control values in a FIFO structure of some sort. The programming is not trivial, but if you get it right, the control moves smoothly with a known minimum amount of lag.
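
To make approach (1) concrete, here is a minimal per-frame slew limiter sketch (the maxDelta value is something you have to tune by ear):

public class SlewLimiter {
	private final double maxDelta;  // tuning: too large clicks/staircases, too small lags
	private volatile double target; // written by the GUI thread
	private double value;           // owned by the audio thread

	public SlewLimiter(double maxDelta, double startValue) {
		this.maxDelta = maxDelta;
		this.value = startValue;
		this.target = startValue;
	}

	public void setTarget(double t) { target = t; } // e.g., from mouseDragged()

	// Called once per audio frame from the read/write loop: the value
	// chases the target, never moving more than maxDelta per frame.
	public double tick() {
		double diff = target - value;
		if (diff > maxDelta) diff = maxDelta;
		else if (diff < -maxDelta) diff = -maxDelta;
		value += diff;
		return value;
	}
}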

For controlling volume, I recommend using approach (1), and am doing this in the example program. The biggest concern with volume changes is that they shouldn’t be so large that they cause clicks. However, “staircase” effects are hard to hear in volume changes, so one can use a relatively simple solution.

For controlling the variable playback speed, the staircasing is very audible, so I am using approach (3).
[Edit: Now using approach 3 for both rate and volume. See post #7.]

Wow! You really come at things from a different angle in each of your threads. It’s always really interesting to read more about music in Java, and the example looks very nice.

I’ll maybe bug you with a message when I want to do more with music in my game :)

Mike

Hi Mickelukas - sure, drop me a line anytime! I’ll do my best to answer.

I think some of the code I presented might be kind of a mystery. I’m so caught up in the problem that it is hard to sit back and explain it to folks who aren’t trying to do things directly with the javax.sound.sampled libraries. Unless you are really into it, it’s probably most efficient to use a package like the one made by “paulscode”, which sounds good and which a lot of people vouch for!

I’m also a composer, and I really want to get this nailed down and write some of my own tools, then use them for game compositions and for alternatives to linear mp3 compositions. Really learning Java’s sound library still seems like a good way to go for this.

The “RateController” is a lousy name for the above class. Am open to suggestions. The tool does this:

Takes “real time” control events and stores them for consumption by a process that does NOT work in real time, but produces output that is viewable/audible in real time. It also provides a linear smoothing of the data. (It could be redone to forgo the smoothing, which would reduce the lag considerably. I’m going to have to try this out.)

I suppose it could be adapted for “game loop” frames rather than audio frames, but afaik, game loops purport to operate in real time. They don’t “work ahead” the way that Java Sound SourceDataLines consume data in chunks somewhat in advance of when the data is to be heard. Or do they?

Ugh–also, in my excitement I forgot that I still have to fix the way the sound is stopped in my Vari-Speed app. It usually is smooth but sometimes clicks. Have to get to the bottom of that intermittent problem…

+1 for the concurrent FIFO buffer approach. There’s a great blog post here - http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing - that I’d recommend for anyone doing audio in Java. In fact, I’d say there are some pretty good arguments for high-framerate graphics following the same advice.

In terms of the transition glitches you’re still having with this, I’d suggest the most important thing is to improve the latency between control movements and effect, and to lower the buffer size you’re working with. I’ve now got the audio server from Praxis up as a separate library - more details in this thread http://www.java-gaming.org/index.php/topic,24574.0.html. You could try hooking up the JavaSound server to your code - you should be able to achieve far lower latency and buffer sizes than you’ve got.

Other approaches that might help include smoothing the changes over a series of audio buffers, though this is usually useful only if your audio thread is updating more frequently than your GUI, which I don’t think it is according to your diagram(?). Also, something I’ve got in the Praxis architecture, although haven’t needed to make use of yet, is to run the GUI ahead of time slightly - I’m not talking metaphysical(!), just use a time slightly ahead of System.nanoTime() in the event, so that you guarantee a constant lag between GUI and audio.
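In code it’s nothing exotic - just a constant added to the timestamp when the event is posted. Against your RateController above, it would be something like this (the 20 msec value is purely illustrative; you’d tune it):

public class LookaheadControl {
	// Lookahead added to each event's timestamp so the audio thread
	// sees a constant lag rather than a jittery one. Tune to taste.
	static final long LOOKAHEAD_NANOS = 20000000L; // 20 msec

	// Called from the GUI thread.
	static void postSpeedEvent(RateController rc, double desiredSpeed) {
		rc.addSpeedEvent(new SpeedEvent(
				System.nanoTime() + LOOKAHEAD_NANOS, desiredSpeed));
	}
}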

If you’re interested at all, I can make you a quick patch in Praxis that mimics your applet and see if it suffers from the same transition glitches on your system.

Best wishes, Neil

Thank you Neil! I always look forward to your comments, given your obvious audio expertise. Yes, I’ve been reading “Java Concurrency in Practice” and am trying to correctly incorporate what I am learning from that. It’s not the easiest of reads, though.

The (intermittent) glitches I get when ending a sound, I’m pretty sure, are not buffer-related, but due to an overly large volume delta. I still have to give my volume-handling code a going over, as I haven’t looked at it since starting the concurrency book. I think that even though I “tried” to set the volume to rapidly descend to zero before stopping the playback, I did this incorrectly according to good concurrency practices, and sometimes the playback stops before the volume has finished decrementing. I should have a chance to deal with this within the next week.

I think the buffer you and the article you cite refer to is, in effect, the capacity of the SourceDataLine to accept write() data. Yes? The loop that I write can pump data into the SDL in various size increments, but to refer to THAT size as the buffer size is undoubtedly incorrect, despite the Java Tutorials using “buffer” in naming this array. Because the loop processes until the SourceDataLine blocks, I am thinking the blocking point is a true indication of Java’s audio buffer. As I mentioned, I ran tests with 1024 and with 10240 bytes per inner write loop, and in both cases approximately 20480 bytes (5120 frames) were accepted, running about 125 msec ahead of the playback. I don’t know of any way to change the number of bytes accepted by the SourceDataLine before blocking, so I’m currently putting my attention into ways to work with that rather large latency rather than fight it. I’m not trying to write tools for live musical performances! I leave that to the native coders.

It will be interesting to look at your audio server for Praxis. I hope it is not over my head.

Is that you in comment 21 on the Ross Bencina blog you cited?

I hear that the Java sound libraries were rewritten for Java 7 and are vastly improved. I hope it is true.

Er, I don’t know about that - I’m just standing on the shoulders of giants. :)

But your applet is a live audio tool! And you don’t need to be a native coder; Java is quite capable of being used for live musical performance - that’s one thing I wrote Praxis for.

The buffer size I’m referring to is pretty much the one you think it’s incorrect to call a buffer! Most audio software processes buffers of audio at a time, usually with fixed-size arrays, some with variable. There is also the external buffer size in the audio card itself. Sometimes this is the same as the software’s internal buffer size, and sometimes it is larger (usually a multiple). For best fidelity between control signal and audio you want the internal buffer size to be low; for best latency you want both to be low.

You can select the size of the output buffer using SourceDataLine.open(format, buffersize); Using this you should be able to get much better performance than you’ve got. To get very low latency (probably more than you need) a key trick is not to actually block on the write to the SDL - this is one of the options used here http://code.google.com/p/praxis/source/browse/audio.servers.javasound/src/org/jaudiolibs/audioservers/javasound/JavasoundAudioServer.java
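
For example (the 1024-frame request is only illustrative, and the mixer may round it - check getBufferSize() after opening):

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.SourceDataLine;

public class LowLatencyLine {
	public static SourceDataLine openLine() throws LineUnavailableException {
		AudioFormat format = new AudioFormat(44100, 16, 2, true, false);
		SourceDataLine line = AudioSystem.getSourceDataLine(format);
		// Request a small buffer: frames * frame size in bytes (4 here).
		line.open(format, 1024 * format.getFrameSize());
		line.start();
		return line;
	}
}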

If you think it’ll be over your head, you’re expecting too much! ;D This is one of four libraries used in the Praxis audio pipeline. All it does is provide your client (which implements a simple interface) with FloatBuffers of samples to read from and write to. It abstracts out all the setup and timing, but that’s all. It should work fine on Windows, though I haven’t tested it recently; Linux and Mac seem to be OK. On my Linux setup I can get the buffer size down to about 64 samples using Estimated timing mode.

Yes, that’s me, doing a bit of shameless self-promotion! Actually, I had been having a conversation with Ross on a mailing list about some of these issues recently. I’d say that Ross is the author of the best piece of audio software ever written (AudioMulch), and the only thing that kept me dual-booting Windows for about 5 years.

The only thing I’ve seen is the addition of Gervill, which is the alternative MIDI synth implementation - haven’t noticed anything about improvements to other things - got a link? JavaSound isn’t perfect by any means, but I think it gets a lot of unjustified bad press too. The direct mixers are pretty good if you use them correctly.

Well, it ‘works’ live but not with a good enough latency.

Very good point about the buffer! I totally forgot that the buffer size of the SDL can be set; I have just been using the default buffer size and varying my read() buffer independently of the SDL buffer size. Rookie mistake!

So, maybe there ARE hopes to tighten up the latency to something decent!

Meanwhile, I’m working on applying the above code to my volume controller, and while the “in flight” volume changes are smoother than ever, I’m still having trouble with the starts & stops & timing of events to create proper tapers. I think some of this is due in part to the kludgey way I handle events that lag, rather than precede, the corresponding frame number…

Oh, my source on the improvements to the sound library was a speaker replying to a question of mine at the Java 7 launch. She said that the implementing code for the libraries (javax.sound.sampled) had been wholly rewritten. She didn’t offer specifics, but said there were lots of improvements and bug fixes over the previous implementation. The API remains the same, though (my main concern). I have pretty much been ignoring the sequencer side of things and am not paying much attention to Gervill. The average client sound-card MIDI implementation still sounds pretty awful to my ears, often worse than silence. Prejudiced!

EDIT: Totally frustrating day trying to get to the bottom of timing bugs when trying to use this FIFO for controlling the volume, to add some tapering to prevent clicks. It’s so wiggy I can’t even figure out how to isolate the problem or make test cases. >:(

OK! The demo on the web site is now VERY responsive. (Links in post #1 above.) It was really hard to figure out what the optimum “lag” amount should be between a GUI event’s timestamp and the corresponding “frame” of the sound player. But I had the idea of letting the code figure out its own optimum, and it seems to be doing a very good job. The lag readings I’m getting on my WinXP machine are about 5000 frames, which comes to just under 120 milliseconds.

The self-adjusting code burns “late” GUI events to manage the lag amount, but in practical terms (a) it doesn’t seem to matter if we drop a very small percentage, and (b) I don’t know what exactly to do with a late-arriving event anyway. As it is set up, only something like 1 in 400 or 500 on average is sacrificed. And the all-important starts and ends are guaranteed.

Also, I put in some code to allow two types of smoothing: either one can use a GUI event’s value directly, as with the X-axis controlling the rate, or one can add a ramp at the beginning and end of the control stream, as I do with the Y-axis controlling volume. This is important with volume, so that we don’t get clicks when we start or stop.

Since the usage is more generalized now, I changed the name from RateController to RTESmoother (Real time event smoother).

It occurs to me that if you load a recording of a steady tone, it can be used, via this applet, as the source sound for a Theremin. Though my original conception was to have a way to take a car-engine effect or some machinery effect and move it up or down based upon game activity.

import java.util.concurrent.LinkedBlockingQueue;

/*
 * Concurrency design: there should be only one consumer per Smoother. 
 * Multiple producers are plausible, but I don't have such usage in 
 * mind and haven't tested that.
 */
public class RTESmoother {
	private static final double FRAMES_PER_NANO = 0.0000441;
	private final double maxValue;
	private final double minValue;
	private final double maxDrop;
	private final double maxRise;
	private final boolean ramping; 
	
	// plausible concurrent access
	private final LinkedBlockingQueue<RealTimeEvent> realTimeEvents;
	private volatile long nanoStartTime; 

	private volatile boolean starting, ending; 
	private volatile RealTimeEvent newRTE;  
	private volatile long tickFrame; 
	private volatile double smoothedValue; 
	private volatile double originValue;
	private volatile double startTargetValue;
	
	// restricted "Worker" thread variables, reused.
	private long tickTarget, lagAdded;
	private double targetValue, incr;
	
	public double get() {return smoothedValue;}
		
	// C O N S T R U C T O R
	public RTESmoother(double minValue, double maxValue, 
			double maxDelta, double originValue, boolean ramping)
	{
		this.minValue = minValue;
		this.maxValue = maxValue;
		this.maxRise = maxDelta;
		this.maxDrop = maxDelta * -1;
		this.originValue = originValue;
		this.ramping = ramping;
		realTimeEvents = new LinkedBlockingQueue<RealTimeEvent>();
		tickTarget = 0;
		
	}
	
	public RTESmoother(double minValue, double maxValue)
	{
		this.minValue = minValue;
		this.maxValue = maxValue;
		this.maxRise = maxValue - minValue;
		this.maxDrop = maxRise * -1;
		ramping = false;
		originValue = 0;
		realTimeEvents = new LinkedBlockingQueue<RealTimeEvent>();
		tickTarget = 0; // documenting starting value.
	}
	
	// public Methods
	public void init(double startTargetValue){
		ending = false;
		starting = true;
		this.startTargetValue = startTargetValue;
		if (!ramping) smoothedValue = startTargetValue;
		tickFrame = 0;
		tickTarget = 0;
		lagAdded = 0;
		nanoStartTime = System.nanoTime();
	}

	public void add(double eventVal)
	{
		realTimeEvents.add(
				new RealTimeEvent(System.nanoTime(), eventVal));
	}
	
	public void close()
	{
		ending = true;
		if (ramping)
		{
			while(smoothedValue > (0.0001 + originValue))
			{
				try {
					Thread.sleep(1);
				} catch (InterruptedException e) {
					e.printStackTrace();
				}
			}
		}
		realTimeEvents.clear();
		tickFrame = 0;
	}

	/*
	 * 	Key service.
	 */
	public double tick()
	{	
		if (tickFrame >= tickTarget)
		{
			
			newRTE = realTimeEvents.poll();
//			System.out.println("newRTE:" + newRTE);
			
			if (newRTE == null)
			{
				// self-adjust for less lag
				tickFrame++;
				lagAdded--;
				
				// how soon poll again?
				tickTarget += 2; 
 
				incr = 0;
			}
			else
			{
				tickTarget = getAbsFrame(newRTE.getNanoTime());
				targetValue = newRTE.getDesiredValue();
				
				if(tickTarget <= tickFrame)
				{
					// "late" RTE. Ignore RTE's value
					// and add more lag.
					lagAdded += tickFrame - tickTarget;
					tickFrame = tickTarget;
					incr = 0;
//					System.out.println("new lag added:" + lagAdded + " this:"+ this);
					
				}
				else
				{
					// smooth to next RTE target value
					incr = (targetValue - smoothedValue)
						/(tickTarget - tickFrame);	
				}
			}
		}
		if (ending && ramping)
		{
			incr = maxDrop;
		}
		
		if (starting && ramping)
		{
			incr = maxRise;
			if (startTargetValue - incr < smoothedValue)
			{
				starting = false;
				incr = startTargetValue - smoothedValue;
			}
		}
		
		smoothedValue += incr;	
		smoothedValue = minMaxScreen(smoothedValue);
		
		tickFrame++;
		return smoothedValue;
	}

	// private methods
	private long getAbsFrame(long nanoSpEv)
	{
		return (long)((nanoSpEv - nanoStartTime) * FRAMES_PER_NANO);
	}
	
	private double minMaxScreen(double val)
	{
		return Math.min(maxValue, Math.max(minValue, val));
	}
	
}
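
The RealTimeEvent used here is the generalized version of SpeedEvent - again just an immutable timestamp/value pair, with getters matching what tick() calls:

public class RealTimeEvent {
	private final long nanoTime;
	private final double desiredValue;

	public RealTimeEvent(long nanoTime, double desiredValue) {
		this.nanoTime = nanoTime;
		this.desiredValue = desiredValue;
	}

	public long getNanoTime() { return nanoTime; }
	public double getDesiredValue() { return desiredValue; }

	@Override
	public String toString() {
		return "RealTimeEvent[t=" + nanoTime + ", value=" + desiredValue + "]";
	}
}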

Another update. Should I just delete the old code via editing?

Biggest change: I’ve dropped the LinkedBlockingQueue and put a double[] and a long[] in its place.

Biggest flaw corrected this way: we are no longer creating new objects to store in the LinkedBlockingQueue, nor discarding objects to the GC every time we poll from the queue. It suffices to manage a pair of indexes into the two primitive arrays, overwriting values as we go.

Some concurrency management was lost, but it is restored by making a ‘lock’ Object and using it to synchronize the add() and the read sections. These sections are pretty small and execute relatively infrequently, so I don’t think they present much of a danger. Yes, no?

package jTheremin;

/*
 * Concurrency design: there should be only one consumer per Smoother. 
 * Multiple producers are plausible, but I don't have any such usage  
 * in mind and haven't tested that.
 */
public class RTESmoother {
	private static final int SIZE = 256; // must be a power of two: index masks below use SIZE - 1 (0xff)
	private static final double FRAMES_PER_NANO = 0.0000441;
	private final double maxValue;
	private final double minValue;
	private final double maxDrop;
	private final double maxRise;
	private final boolean ramping; 
	
	// plausible concurrent access
	private final double[] rteValues;
	private final long[] rteTimes;
	private volatile int readIDX, writeIDX;
	
	private volatile long nanoStartTime; 
	private volatile boolean starting, ending; 
	private volatile long tickCurrent; 
	private volatile double smoothedValue; 
	private volatile double originValue;
	private volatile double startTargetValue;
	
	private final Object lock;
	
	// restricted "Worker" thread variables, reused.
	private long tickTarget;
	private double targetValue, incr;
	
	// C O N S T R U C T O R
	public RTESmoother(double minValue, double maxValue, 
			double maxDelta, double originValue, boolean ramping)
	{
		this.minValue = minValue;
		this.maxValue = maxValue;
		this.maxRise = maxDelta;
		this.maxDrop = maxDelta * -1;
		this.originValue = originValue;
		this.ramping = ramping;
		lock = new Object();
		rteTimes = new long[SIZE];
		rteValues = new double[SIZE];
		readIDX = 0;
		writeIDX = 0;
		tickTarget = 0;		
	}
	
	public RTESmoother(double minValue, double maxValue)
	{
		this.minValue = minValue;
		this.maxValue = maxValue;
		this.maxRise = maxValue - minValue;
		this.maxDrop = maxRise * -1;
		lock = new Object();
		ramping = false;
		originValue = 0;
		rteTimes = new long[SIZE];
		rteValues = new double[SIZE];
	
		readIDX = 0;
		writeIDX = 0;
		tickTarget = 0;
	}
	
	// public Methods
	public void init(double startTargetValue){
		ending = false;
		starting = true;
		this.startTargetValue = startTargetValue;
		if (!ramping) smoothedValue = startTargetValue;
		else smoothedValue = originValue; // start the ramp from the resting value
		tickCurrent = 0;
		tickTarget = 0;
		readIDX = 0;
		writeIDX = 0;
		nanoStartTime = System.nanoTime();
	}

	public void add(double eventVal)
	{
		synchronized(lock)
		{
			rteValues[writeIDX] = eventVal;
			rteTimes[writeIDX++] = System.nanoTime();
			writeIDX &= 0xff;
	
			if (readIDX == writeIDX)
			{
				writeIDX--;
				writeIDX &= 0xff;
				System.out.println("RTESmoother overflow");
			}
		}
	}
	
	/*
	 * Purpose of this method is to pause until the output  
	 * has reached a desired ending value before 
	 * allowing the calling method to proceed.
	 */
	public void close()
	{
		ending = true;
		if (ramping)
		{
			while(smoothedValue > (0.0001 + originValue))
			{
				try {
					Thread.sleep(1);
				} catch (InterruptedException e) {
					e.printStackTrace();
				}
			}
		}
	}

	/*
	 * 	Key service.
	 */
	public double tick()
	{	
		if (tickCurrent >= tickTarget)
		{	
			/*
			 *  No need to synchronize. If writeIDX increments
			 *  we'll catch it next time around.
			 */
			if (readIDX == writeIDX) // test if RTEvent in queue
			{
				/*
				 *  Move counter a bit into the future,
				 *  as part of the strategy for creating
				 *  a self-correcting amount of lag time.
				 */
				tickCurrent++;
				tickTarget += 2; 
 
				incr = 0;
			}
			else
			{
				// get RTEvent
				synchronized(lock)
				{
					tickTarget = getAbsFrame(rteTimes[readIDX]);
					targetValue = rteValues[readIDX++];
					readIDX &= 0xff;
				}
				
				if(tickTarget <= tickCurrent)
				{
					/*
					 *  Late-arriving RTEvent! 
					 *  Ignore RTE's value and move tickCurrent
					 *  into the "past" (adds lag).
					 */
					tickCurrent = tickTarget;
					incr = 0;	
				}
				else
				{
					/*
					 * Set value to an increment that will
					 * reach the next RTE target value at the 
					 * target time.
					 */
					incr = (targetValue - smoothedValue)
						/(tickTarget - tickCurrent);	
//						incr = Math.min(
//							maxRise, Math.max(maxDrop, incr));
				}
			}
		
		}
		if (ending && ramping)
		{
			incr = maxDrop;
		}
		
		if (starting && ramping)
		{
			incr = maxRise;
			if (startTargetValue - incr < smoothedValue)
			{
				starting = false;
				incr = startTargetValue - smoothedValue;
			}
		}
		
		// ALWAYS DONE:
		smoothedValue += incr;	
		smoothedValue = minMaxScreen(smoothedValue);
		tickCurrent++;
		return smoothedValue;
	}

	// private methods
	private long getAbsFrame(long nanoSpEv)
	{
		return (long)((nanoSpEv - nanoStartTime) * FRAMES_PER_NANO);
	}
	
	private double minMaxScreen(double val)
	{
		return Math.min(maxValue, Math.max(minValue, val));
	}
}
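
To round this off, here is how a volume smoother along the lines above might be wired up. The specific values are illustrative, not tuned:

// ramping on; a per-frame delta of 0.0005 gives roughly a 45 msec
// full-range fade at 44100 fps.
RTESmoother volumeSmoother = new RTESmoother(0.0, 1.0, 0.0005, 0.0, true);

// Audio thread, before playback starts:
volumeSmoother.init(0.8);     // ramps up toward an initial volume of 0.8

// GUI thread, e.g., from mouseDragged():
volumeSmoother.add(0.5);      // timestamped internally with System.nanoTime()

// Audio thread, inside the per-frame loop:
double volume = volumeSmoother.tick();
// ...scale the current audio frame by 'volume'...

// Control thread, when stopping. Must NOT be the audio thread itself,
// since close() waits for tick() to finish ramping the value down.
volumeSmoother.close();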