Noise (bandpassed white)


[h3]Overview[/h3]
The purpose of this page is to give an overview of how the noise functions in this family work and the various tradeoffs that can be made in implementation choices. How to use noise to create and modify content is a huge topic in its own right; links to be added.

WebGL demo: http://glsl.heroku.com/e#7967.0
Top to bottom: value noise, gradient noise and simplex noise. Right to left: single sample, 3-sample fBm, 3-sample turbulence. Cells are purposefully aligned.

[h3]Introduction[/h3]
This family of noise functions is an incredibly useful tool for creating and modifying content. According to CG industry lore it was informally observed in the 90s that “90% of 3D rendering time is spent in shading, and 90% of that time is spent computing Perlin (gradient) noise”. Regardless of the truth of this observation, this family of noise functions is certainly one of the most important techniques not only in procedurally generated content but in CG as a whole. Increases in CPU speed and the relatively new addition of GPU computation allow for runtime evaluation of the cheaper of these methods in realtime graphics.

Attempting to give any detailed descriptions of how to “use” noise functions to create or modify content is well beyond the scope of any short description. The goal here is to outline some basics of core generation techniques and to provide links to more detailed information in specific areas of interest.

For the local discussion, we’ll assume that noise accepts floating-point input for a sample coordinate and returns a floating-point value (usually on either [0,1] or [-1,1]). This page provides some sketches of 2D implementations to (hopefully) aid in understanding.

Noise functions are evaluated in some number of dimensions (typically 1,2,3 or 4). This is simply to say that you provide some input coordinate and noise returns the corresponding fixed value at that position, just like any other multi-dimensional function. From a signal processing perspective this family can be described as an attempt to approximate band-pass filtering of white noise. Perhaps a simpler description would be that they are attempts at coherent pseudo-random number generators (PRNG).

Regular PRNGs attempt to create a fixed sequence of values (from some initial state data, frequently termed the ‘seed’) that appear to be statistically independent. White noise can be created from a PRNG as in the following sketch (in 2D):


float eval(float x, float y)
{
  long seed = mix(x,y);         // map the input coordinate to a seed value
  prng.setSeed(seed);           // set the PRNG's seed to that value
  return prng.nextFloat();      // return the result
}
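
As a concrete (if slow) illustration, here is a hypothetical fleshed-out version of this sketch, assuming java.util.Random as the PRNG and a simple 64-bit bit-mixing step as 'mix'; any decent integer hash would do:

import java.util.Random;

// hypothetical concrete version of the sketch above; slow, but illustrative
static final Random prng = new Random();

static long mix(float x, float y)
{
  // pack the raw float bits into one long, then scramble them
  long h = ((long)Float.floatToIntBits(x) << 32)
         | (Float.floatToIntBits(y) & 0xFFFFFFFFL);
  h ^= (h >>> 33);
  h *= 0xFF51AFD7ED558CCDL;   // 64-bit mixing constant (from MurmurHash3)
  h ^= (h >>> 33);
  return h;
}

static float eval(float x, float y)
{
  prng.setSeed(mix(x, y));    // same coordinate always yields the same value
  return prng.nextFloat();    // white noise on [0,1)
}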

Unfortunately, raw white noise is of very little use. If you were to create a 2D texture from white noise, regardless of how you walk through the ‘noise’ function, the result would look virtually identical: like an old broadcast TV tuned to a channel without a signal. What’s really needed are random values that are coherent, which roughly says that sample points far apart behave like PRNG values (appear to be independent), while sample points close to one another vary continuously (or smoothly, in less formal speak).

[h3]Value noise[/h3]
Value noise is one of the original attempts at this style of noise generation. It is very often miscalled Perlin noise. Evaluation is very cheap, but it is burdened with serious defects and is a very poor approximation of band-pass filtering. Quality can be improved, but even the most basic improvements make it more expensive than gradient noise. So a general guideline for this technique: only use a very cheap version, and only when some existing content can be slightly modified by one or two evaluations.

Value noise is computed by forming a regular grid, computing random values at each vertex and blending the values to produce a result. Sketch in 2D:


float eval(float x, float y)
{
  // lower left hand corner of cell containing (x,y)
  int ix   = (int)Math.floor(x);
  int iy   = (int)Math.floor(y);

  // offset into 'cell' of (x,y). dx & dy are on [0,1)
  float dx = x - ix;
  float dy = y - iy;

  // generate a random value for each vertex of the cell
  // based on its integer coordinate.
  float r00 = mix(ix,   iy);
  float r10 = mix(ix+1, iy);
  float r01 = mix(ix,   iy+1);
  float r11 = mix(ix+1, iy+1);

  // use some interpolation technique to get the sample value.
  return blend(r00,r10,r01,r11,dx,dy);
}
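
The 'mix' and 'blend' functions are deliberately left abstract. As one hypothetical example, blend() could be a smoothstep-weighted bilinear interpolation (plain bilinear works too, but shows obvious creases at cell boundaries):

// a hypothetical blend(): bilinear interpolation with smoothstep weighting
static float blend(float r00, float r10, float r01, float r11,
                   float dx, float dy)
{
  // remap the cell offsets with the classic ease curve 3t^2 - 2t^3
  float wx = dx * dx * (3 - 2 * dx);
  float wy = dy * dy * (3 - 2 * dy);

  float bottom = r00 + wx * (r10 - r00);   // interpolate along the bottom edge
  float top    = r01 + wx * (r11 - r01);   // interpolate along the top edge
  return bottom + wy * (top - bottom);     // interpolate vertically
}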

So to compute value noise in ‘n’ dimensions, the work required is related to 2^n (1D = line segment or 2 vertices, 2D = square or 4 verts, 3D = cube and 8, etc). The problems with value noise stem from the fact that at each evaluation point, the result depends only on data blended from the interior of the cell that contains it. Sample points close to one another, but in different cells, do not vary smoothly, which produces very obvious defects along cell boundaries. Early attempts to fix this major problem included visiting cells farther away and using more complex blending functions, which drastically increase complexity. The introduction of gradient noise made these solutions obsolete.

However, value noise is far from useless. Its very cheap computational cost can make it a good choice when many noise samples are required, and it is what you will most often find used in “demoscene” style shaders.


[h3]Perlin gradient noise[/h3]
Created in 1983 by Ken Perlin, this Oscar-winning technique is a clever way to slightly modify value noise to drastically improve the output quality. Usually when one is (correctly) calling a noise function “Perlin” noise, this is the technique being discussed. The clever addition is to choose a vector associated with each vertex (the gradient vector) and to calculate the vector from the vertex to the sample point. The dot product between these two vectors was originally a weight used to modify a random value at each vertex. It was quickly noted that this last step is not really needed and that the dot product itself is a sufficiently random value (dropping one multiply). The dot product results at the vertices are then interpolated to generate the final result. Although the dot product drastically reduces defects along the cell boundaries, it introduces a new defect: the result will always approach zero as the sample point approaches one of the cell vertices.

Notice that, like value noise, the output depends entirely on the evaluation of a single cell, and the complexity in the number of dimensions is the same. The difference is that the random vector helps smooth out values across neighboring cells, much in the same way that Gouraud shading improves over flat shading. Sketch in 2D:


float eval(float x, float y) 
{
  // lower left hand corner of cell containing (x,y)
  int ix = (int)Math.floor(x);
  int iy = (int)Math.floor(y);

  // offset into 'cell' of (x,y). dx & dy are on [0,1)
  x -= ix;
  y -= iy;

  // generate a random hash for each vertex of the cell
  // based on its integer coordinate.
  int   h00 = mix(ix,   iy);
  int   h10 = mix(ix+1, iy);
  int   h01 = mix(ix,   iy+1);
  int   h11 = mix(ix+1, iy+1);

  // some function that uses the hash to select a random vector
  // and 'dot' it against the vertex-to-sample vector. Note the
  // '-1' offsets compared to the '+1's above.
  float r00 = dotRandVect(h00, x,   y);
  float r10 = dotRandVect(h10, x-1, y);
  float r01 = dotRandVect(h01, x,   y-1);
  float r11 = dotRandVect(h11, x-1, y-1);

  // convert the offset into the cell into a weighting factor,
  // the so-called ease or s-curve
  x = weight(x);
  y = weight(y);

  // blend to get the final result
  float xb = lerp(x, r00, r10);
  float xt = lerp(x, r01, r11);

  return lerp(y, xb, xt);
}
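
dotRandVect is again left abstract. A minimal (hypothetical) version might use the low bits of the hash to select one of the four diagonal gradients and dot it with the vertex-to-sample vector; real implementations select from a larger set (see the variants below):

// hypothetical dotRandVect(): pick a gradient from {(+/-1, +/-1)} using the
// low two bits of the hash, then dot it with the vertex-to-sample vector.
static float dotRandVect(int hash, float x, float y)
{
  float gx = ((hash & 1) == 0) ? 1.0f : -1.0f;
  float gy = ((hash & 2) == 0) ? 1.0f : -1.0f;
  return gx * x + gy * y;
}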

Note that numerous improvements have been made to gradient noise over the years, so some references may be describing older versions. And, of course, authors may make minor tweaks (for better or worse) in their specific implementations.

Variants of note:

  • Originally the vectors were randomly generated unit vectors. Creating these on the fly is rather expensive. In days of yore a precomputed table of random vectors was an option (less so today, given memory access overhead). The reduced number of random vectors introduces some very minor directional defects.

Perlin later noted that using a small set of vectors (all the permutations of vector components of zero and +/-one, but not all zero) drastically reduces computational cost. Specifically this drops one multiply per dimension per vertex (8 in 2D, 24 in 3D). It also significantly increases directional defects (SEE: Defects below). Some GPU implementations use a more mathematically complex selection to address this issue.

  • Two ease functions: Perlin uses a weight function which he terms either ease or s-curve. The original function was 3t^2 - 2t^3. This function is C1 continuous, but its second derivative is discontinuous at cell boundaries. It was later replaced by the more expensive 6t^5 - 15t^4 + 10t^3, which is C2 continuous. Both are sketched below.
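
In code, both ease curves are direct transcriptions of the polynomials above:

// original ease/s-curve: 3t^2 - 2t^3 (C1; second derivative jumps at cell edges)
static float ease(float t)   { return t * t * (3 - 2 * t); }

// improved ease: 6t^5 - 15t^4 + 10t^3 (C2 continuous)
static float easeC2(float t) { return t * t * t * (t * (t * 6 - 15) + 10); }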


[h3]Perlin simplex noise[/h3]
In 2002 Ken Perlin introduced a new noise function that is a drastic change in direction. The purpose was to create a function which could be cheaply implemented in hardware while addressing some of the defects in gradient noise. Although designed for hardware, it is also a better fit for modern CPU and GPU architectures.

The first major change is how cells are formed. Instead of breaking space up into a regular grid, the input is skewed onto a simplex grid (SEE: Stefan Gustavson’s paper for details). This drops the number of vertices needed from 2^n to n+1, where ‘n’ is the number of dimensions.
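
A sketch of the 2D skew step, using the standard constants from Gustavson’s paper (the variable names here are just illustrative):

// skew (x,y) onto the simplex grid to find the containing cell
final double F2 = 0.5 * (Math.sqrt(3.0) - 1.0);   // skew factor: (sqrt(n+1)-1)/n, n=2
final double G2 = (3.0 - Math.sqrt(3.0)) / 6.0;   // matching unskew factor

double s  = (x + y) * F2;            // skew the sample point
int    i  = (int)Math.floor(x + s);  // cell origin in the skewed (square) grid
int    j  = (int)Math.floor(y + s);

double t  = (i + j) * G2;
double x0 = x - (i - t);             // offset from the cell origin,
double y0 = y - (j - t);             // back in unskewed space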

The second major change is that instead of calculating values at each vertex and blending them to compute the final result, the result is a summation of contributions from each vertex. This shortens the dependency chain and can increase throughput. For example, in 2D value and gradient noise, one might first blend in “X” along the top edge, then along the bottom (these two are independent), then take those results and blend in “Y” to get a final result. In 2D simplex noise, the contributions from the three vertices are independently computed and summed to produce the result.
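
In rough (hypothetical) pseudocode, the difference in the dependency chain looks like this:

// value/gradient noise: a serial blend; the final lerp waits on the first two
float xb     = lerp(wx, r00, r10);
float xt     = lerp(wx, r01, r11);
float result = lerp(wy, xb, xt);

// simplex noise (2D): three independent contributions, then a single sum
float result2 = contrib(v0) + contrib(v1) + contrib(v2);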

As a rule of thumb, if you need noise (of this variety) in three or four dimensions, then simplex noise is the way to go.

References

  • [url=http://mrl.nyu.edu/~perlin/]Ken Perlin’s homepage[/url]
  • Stefan Gustavson: good description of simplex noise and code in Java and GLSL.
  • Wikipedia: pretty basic ATM.
  • philfrei: tool for creating textures
  • roquen: Java source code

[h3]Defects[/h3]
Noise is one of those areas where science and art collide. As such, the various defects listed below only really have meaning if they have a negative impact on the desired result.

  • hash function:
  • gradient vector selection:
  • aligned cell structure:

The cheapest way to attempt to hide these defects is to ensure that the grid structures of multiple noise evaluations are not aligned with one another, as in the sketch below.
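
A minimal sketch of the idea: give each octave of an fBm-style sum its own precomputed rotation and offset so the cell grids never line up (the angles and offsets here are arbitrary picks, and noise() is any of the functions above):

// de-aligned 3-octave fBm; rotation/offset values are arbitrary
static final float[] COS  = { 1.0f,   0.866f,  0.5f   };
static final float[] SIN  = { 0.0f,   0.5f,    0.866f };
static final float[] OFFX = { 0.0f,  17.3f,   41.7f   };
static final float[] OFFY = { 0.0f,  29.1f,   11.9f   };

static float fbm3(float x, float y)
{
  float sum = 0, amp = 0.5f, freq = 1;
  for (int i = 0; i < 3; i++) {
    // rotate and offset the domain so this octave's grid is unaligned
    float rx = COS[i] * x - SIN[i] * y + OFFX[i];
    float ry = SIN[i] * x + COS[i] * y + OFFY[i];
    sum  += amp * noise(rx * freq, ry * freq);
    amp  *= 0.5f;
    freq *= 2;
  }
  return sum;
}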

[h3]Isotropic and anisotropic[/h3]
Isotropic is math-speak for uniform in all directions, and anisotropic is, well, not…the thing in question isn’t uniform in all directions. The goal of all the above noise functions is to be isotropic. All, however, have directional defects which make this not quite true. Getting anisotropic results from isotropic noise simply involves applying a non-uniform scale factor when sampling.
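
For example (assuming the noise() function sketched above):

float iso   = noise(x, y);                 // isotropic sampling
float aniso = noise(0.25f * x, 4.0f * y);  // non-uniform scale: features stretch along x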

[h3]Periodic noise[/h3]
The sketches above are for noise functions without a period. It is commonly desirable for noise to be periodic, in other words to wrap at specific boundaries. Well, there’s good news and bad news. The bad news is that most “methods” to make noise periodic are very expensive and don’t really work (SEE: Matt Zucker’s FAQ for an example). The good news is that it’s simple to do cheaply, assuming that wrapping at integer boundaries, and in particular power-of-two boundaries, is an acceptable limitation. A minor modification to the vertex computation allows this to happen: masking in the power-of-two case and “faking” an integer modulo in other cases. This requires modifying the base noise function (special cases, dynamic code generation, etc.).

Another option is to use a noise function in (potentially) a dimension higher than desired and to “walk” that space in such a way that you reach the same coordinate at boundary points. This latter approach happens somewhat naturally if computation is performed at runtime on the GPU. For example, to apply noise to a sphere (or any other 3D object), one simply samples a 3D noise function at a scaled and/or translated coordinate of the object’s surface (or a 4D function if the noise is to be animated in time).
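
As a sketch of the power-of-two case mentioned above, only the vertex hashing in the value-noise sketch needs to change (‘period’ is assumed to be a power of two, so the wrap is a single mask):

// wrap the integer vertex coordinates before hashing
int mask = period - 1;
float r00 = mix( ix      & mask,  iy      & mask);
float r10 = mix((ix + 1) & mask,  iy      & mask);
float r01 = mix( ix      & mask, (iy + 1) & mask);
float r11 = mix((ix + 1) & mask, (iy + 1) & mask);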

[h3]Optimizations[/h3]
Noise functions tend to be expensive, and many calls are usually required to create a specific effect, so speed is pretty important when computed at runtime. Given the nature of noise, it is a very good candidate for running on the GPU.

[h3]Pre-computation vs. runtime[/h3]
Blah, Blah.

[h3]Other noise functions[/h3]
There are many other noise functions, many of which are too complex to be evaluated at runtime but may have game usage for pre-generated content:

  • Anisotropic noise
  • Gabor noise: not the same family, but can generate similar results.
  • Sparse convolution noise: realtime variants potentially reasonable on the GPU
  • Wavelet noise


OK. Does my first pass at the theory and why I’ve put it there make sense to someone that knows zero about signal processing?

I’m getting the general gist of it, but I have to admit I know a thing or 2 about signal processing.
But my general feeling about the article is that it kind of covers too many things at once at a purely theoretical level without being very practical.
Perhaps you could try targeting it at a developer with a specific need, for example procedural texture generation, height-map generation, or some other procedural content generation, and then explain why a certain noise algorithm would make sense in that particular case.

I’m not trying to deride your article (in fact I’m very interested in the subject), but I feel it covers too many areas to be useful as just one wiki article.

I think people that don’t already know the subject will be lost. What’s the purpose of bandpass filtering? Isn’t the goal of certain noise algorithms to achieve a subjective aesthetic effect?

To someone who knows a bit about signal processing, it raises a number of questions. E.g. did you intend your description of decomposition in terms of basis functions to be broad enough to include Taylor expansion? Would it be worth defining “signal”? Does it make sense to talk about Fourier analysis of non-periodic functions in an introduction to noise?

Someone who doesn’t know anything about signal processing is guaranteed to not know what you mean by the frequency domain. They may also pick up on the Gibbs phenomenon in the pictures about creating a square wave, and wonder whether it contradicts what you’re saying. And they won’t have a clue what “band-pass filtered” means.

Well, talking in a hand-waving kind of way about signal processing is tricky. I guess the more important question is whether it’s even worth talking about at all?

I’ve seen some sensational shaders making use of Perlin noise to make some really convincing wood textures, and for scene generation it helps to have a little taste for the different ways of producing controlled modulation.

Though it would help if we could arrange the structure to be a little more helpful in some way, because this is a very useful topic to cover.

I’m trying to get better acquainted with using noise in textures right now, grappling with it conceptually, having just had my first working experience with calling Simplex noise to assemble a cloudy texture.

I’m not at all clear that getting into Fourier analysis is helpful. I can see where using ‘harmonics’ can make the coding neater, and it seems to work well with the mathematics of fractals, but it doesn’t seem to be entirely necessary. One can add noise that has energy at frequencies unrelated to the base frequency with no problem. Visual textures are not like sound waves, where one deals with the prevalence of nodes and anti-nodes and standing waves in the “real world,” and where the ear and the hearing portions of the brain have evolved to make use of data in this form.

So, is it simpler, instead, to describe noise as having components that are at various periods, and not worry about the Fourier analysis, at least, at the “beginner” level? Or are techniques to analyze textures to determine their strongest component frequencies in use and an important part of creating textures?

I’m thinking, for a dimension, given a length L and a value “n” along that length, a “basic” unit of noise might be of length n/L. Since n can go from 0 to L, the result of this fraction is 0 to 1. One can multiply this value by different factors to get different degrees of scaling. Obtaining noise with (n/L * K) will be K times more detailed than noise at n/L.

But we are free to make K whatever we want. K can be a float or double. It doesn’t have to be an integer or a progression of integers.

(We might also talk about how to “relate” the periodicity of one type of noise to another via a scaling factor that is applied to the n/L (0 to 1) results of the different noise generation techniques? Maybe this is already done?)

Then, there is total latitude with what we do with the output noise values (which range from -1 to 1), whether to sum them or lerp them, or use them in trig functions. It seems wide open, as long as the function results in a legal Color value for a pixel.

(It occurs to me, one could also talk about a more concrete value for the periodic nature of noise by finding the number of pixels that corresponds to an average of one swing in the random number. But I think I am getting into fuzzy thinking, as I don’t know how to describe a “period” of randomness, and the way in which we relate the numbers to pixels on the screen is so fluid.)

As I said, I’m a beginner with using noise, and am happy to be corrected on any point.

Doing octaves is exactly the kind of construction that’s obvious in the frequency domain, and one reason why I thought this might be useful. I’m thinking about a completely different track that describes these as coherent pseudo-random numbers and walks through the historic progression of how they work. Detailed usage is a ton of work, and I was thinking that providing a bunch of links would be reasonable for a first pass. (Plus I’m too lazy to make pictures.)

Hmm, when generating textures like clouds and terrain it helps to know a little about the effects of noise colouring, distortion and how it can be fun to mess around with to produce different results.

And thinking back to clouds, isn’t the very reason you see clouds down to the low-frequency bias, with the overtones creating the fuzziness?

I think it would help to show the differences visually though, or even better yet, produce an app to demonstrate it. I learned a lot just by tweaking my 2d noise generator. This could be the difference between generating really mundane levels or elaborate worlds that feel like they’ve been really well designed IMO.

I agree with what philfrei said about having components at various periods, and experimentation is key. When I was at uni I remember toying around rendering series upon series of sine waves, some directional, some radial, until (I swear) it looked like plasma. Of course that wasn’t the memorable part; the memorable part was when I made a few small modifications and … err … it stopped looking awesome and I had to investigate and figure out what on Earth was going on :smiley: Still don’t know how I did it :frowning:

In terms of games, I think it’s key to remember that these random values can be feeding into game behaviour. This could be what gives flavour to your map generation algorithm. The more applicable and relevant to games the examples are, the more people will digest and be able to use it.

No doubt experience is all-important for creating effects, and the theory is of marginal use. My thinking is geared more toward choosing what set of base generators to use. Precomputed effects are easy: improved gradient noise if you want to quickly bang stuff out based on other people’s work (as pretty much everything is written against gradient noise) and/or simplex noise. The ‘better’ noise methods are too expensive for fast turnaround to be useful, IMHO. The trickier part is runtime-generated stuff.

OK. I made a first pass at a second pass. Any better or still wankery?

Many improvements! :slight_smile:

Fixed some typos. Completed “brief” and added a sketch for gradient noise.

Would love to get some more feedback as to what can be done to make the visualizer I started [http://www.java-gaming.org/topics/simplex-noise-experiments-towards-procedural-generation/27163/view.html] something you’d consider adding as a link on the main page of this wiki.

P.S., I’m seriously looking at “open sourcing” the project on GitHub, making the emphasis more on helping devs write and test a wider range of textures. (Just figured out “perspective” but haven’t integrated it yet.)

This is almost always a very good idea! Do this (I’d be interested too :slight_smile: but mind licensing!) ;D

Hey, it’s a wiki…do it yourself! Seriously, I was thinking this is getting about as long as is reasonable, and that talking about the basics of using noise should be on another page with code snippets like your tutorial. There’s no reason why your tool shouldn’t be linked from both.

Doh! Looky there. A “modify” button on the first post.

Tossed together a quick WebGL demo (link in overview).

Shouldn’t it be 2^n? I’m not sure; that’s why I’m asking…


1D = 2^1 = 2 vertices
2D = 2^2 = 4 vertices
3D = 2^3 = 8 vertices
4D = 2^4 = 16 vertices
...

Yeah. Pretty sure now.