Math.random()

Hey.
I have a question I asked myself today.
It's nothing to really worry about, but let me ask this:
If I need some random doubles for a game, e.g. for the looks of a map etc.,
would it improve performance to store, let's say, 10,000 random values
from the Math.random() method at the initialization of the game and, when needed, take them from an array
instead of using the random() method ingame?

(I know I had too much free time today ;D )

Depends on how much random values are worth to you.

If you store 10,000 random values at initialization and then use some simple formula to ‘pick’ values from that set, you could have faster code, but if it isn't the bottleneck, you won't even see the difference, even if it's a million times faster.

You could look into the source code of the Math or Random class and see what it actually does. They should already be pretty fast. You'd probably have to call random() a lot to see an advantage from using a lookup table.

But generally the idea is sound. Games only need a really limited amount of randomness, so storing some values and reusing them would be enough and really is faster. It's enough to store maybe 1,000 values and, once you have used them all, begin with the first again. You won't notice it.
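For illustration, a minimal sketch of that lookup-table idea (class and method names are made up):

  // Hypothetical lookup table: fill it once at init, then cycle through it.
  public final class RandomTable
  {
    private static final int SIZE = 1000;
    private static final double[] TABLE = new double[SIZE];
    private static int index;

    static
    {
      for (int i = 0; i < SIZE; i++)
        TABLE[i] = Math.random();
    }

    /** Returns the next precomputed value, wrapping around at the end. */
    public static double next()
    {
      double value = TABLE[index];
      index = (index + 1) % SIZE;
      return value;
    }
  }

The wrap-around means the table simply repeats after 1,000 values, which, as said above, you won't notice in practice.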

-JAW

Math.random() is (java.util.) Random.nextFloat() btw.

https://uncommons-maths.dev.java.net/

The Mersenne Twister is generally a good choice. If you need even more speed you could try the Cellular Automaton one.

However, if you don't need a few megabytes of random numbers per second (or more), chances are very slim that you'll see any performance improvement. I'd say that most games spend less than 0.001% of their time in random number methods/functions.

High throughput is only necessary for things like procedural content creation or specific steering behaviors (well, only if there are lots of agents).

Math.random() is (java.util.) Random.nextDouble() AFAIK

Whoops… ye, it’s of course nextDouble().

I’m a little late to the party but I think I should make the following notes for any future readers.

java.util.Random is a linear congruential generator (using longs). LCGs are considered a ‘bad’ generation method, but are more than good enough for virtually all gaming purposes (other than some hard-core simulations). If you want fast, then inline an LCG based on 32-bit integers instead. This will be much faster (and better quality) than looking values up in an array. Using ‘good’ generators like the twister is overkill about 99.9% of the time for our purposes here. I'd also suggest avoiding doubles. Here is an implementation:



  // seed this if one wishes
  private static int randSeed;

  /**
   *  Returns a pseudo-random number.
   */
  public static final int random()
  {
    // this makes a 'nod' to being potentially called from multiple threads
    int seed = randSeed;
    
    seed    *= 1103515245;
    seed    += 12345;
    randSeed = seed;

    // NOTE: hi bits have better properties
    return seed;
  }
  
  /**
   * Returns a random number on [0, range)
   */
  public static final int random(int range)
  {
    return ((random()>>>15) * range) >>> 17;
  }

  public static final boolean randomBoolean()
  {
    // hi-bit is the most random, so test the sign bit
    return random() < 0;
  }

  public static final float randomFloat()
  {
    return (random()>>>8) * (1.f/(1<<24));
  }


NOTE: never call random() and then divide or mod to reduce the range.

At least use prime numbers!

Now the last digit of random() will always be a multiple of 5

My previous post was too brief…I was being lazy. I was trying to be helpful, but without your response I would have failed.

I should have stated that only the high order bits are reasonable and you want to limit yourself to no more than (roughly) 15-17 bits. This is why the boolean version checks the top (sign) bit. I also should have noted that the ranged version is only good for inputs on [0,0x8000]. Notice that the float version uses 24 bits, but this should virtually never be a problem.

Basically you want to do (random() >>> (32-n)) to get n binary digits…not mask off the lower n.
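For example, a small helper in the same style as the code above (my addition, not part of the original post):

  /**
   * Returns an n-bit random value taken from the high-order bits
   * of the random() method above (keep n small, say n <= 17).
   */
  public static final int randomBits(int n)
  {
    return random() >>> (32 - n);
  }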

Your suggestion to change to a prime number is reasonable on the surface, but it is exactly the wrong thing to do. That would give the generator very bad properties. An LCG's properties depend on the multiplier and the additive constant being relatively prime to one another.

Comparing the code I posted with what you'd get from Sun's java.util.Random: slightly lower quality (since it's 32-bit instead of 64) with a shorter cycle (the number of values generated before it starts to repeat itself), and on the up side: 32-bit operations, fewer memory accesses and no atomic locks.

That’s a pretty darn fast prng. It’s also a pretty darn bad one, but like Roquen said, that probably won’t matter for games.

Using the fancy algorithm “r=nextInt(256), g=nextInt(256), b=nextInt(256)”, I generated a 256x256 image with four different random algorithms:

http://www.mojang.com/notch/misc/out.png

Fast random is the code Roquen posted in this thread. I’d use it in a heartbeat.

The times are after warm-up, on the client JVM, measured per pixel.
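For anyone who wants to reproduce that kind of test, here is a rough sketch (my own reconstruction, not the code actually used for the image above), with java.util.Random standing in for the generator under test:

import java.awt.image.BufferedImage;
import java.io.File;
import java.util.Random;
import javax.imageio.ImageIO;

// Fills a 256x256 image with r=nextInt(256), g=nextInt(256), b=nextInt(256)
// so that patterns in a weak generator show up visually.
public class RandomTexture {
    public static void main(String[] args) throws Exception {
        Random rand = new Random(); // swap in the generator under test
        BufferedImage img = new BufferedImage(256, 256, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < 256; y++) {
            for (int x = 0; x < 256; x++) {
                int r = rand.nextInt(256);
                int g = rand.nextInt(256);
                int b = rand.nextInt(256);
                img.setRGB(x, y, (r << 16) | (g << 8) | b);
            }
        }
        ImageIO.write(img, "png", new File("out.png"));
    }
}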

I like the Mersenne Twister. Just saying its name makes me happy. I pretty much use it in all my projects that require any kind of randomness, just because I found a really nice source code for it that lets me pull booleans and bytes, etc, easily, and get the randomness I desire.

Don't remember where I picked this one up, but here it is, another one:

/**
 * Brute noise generator using a pseudo-random hash
 */
public double noise(int x, int y)
{
	x = x + y * 57;
	x = (x << 13) ^ x;
	// keep the low 31 bits, then map the result to (-1, 1]
	double t = (x * (x * x * 15731 + 789221) + 1376312589) & 0x7fffffff;
	return 1 - t * 0.000000000931322574615478515625; // i.e. 1 - t / 2^30
}

EDIT: to get a sequence of values, just keep increasing x.
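As a small usage example (hypothetical, just to show the idea), filling a heightmap with that noise method:

// The same (x, y) inputs always give the same values, so the map is reproducible.
double[][] heightMap = new double[128][128];
for (int y = 0; y < 128; y++) {
    for (int x = 0; x < 128; x++) {
        heightMap[x][y] = noise(x, y);
    }
}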

I like pseudo-random generators as they have a couple of nice advantages:

  • fast
  • always the same series for a given seed
  • enable identical procedural generation (e.g. two networked clients will generate the same terrain)

The fact that they give the same random numbers is also good because, when playing a game twice, players will have exactly the same chance to win: performing exactly the same inputs will produce exactly the same result, even if the game uses some “randoms” in its logic.
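A tiny sketch of that point (hypothetical names, with java.util.Random for brevity): two clients that agree on a seed generate identical terrain without ever exchanging it:

import java.util.Arrays;
import java.util.Random;

public class SharedSeedDemo {
    // Both clients run this with the same seed and get the same terrain.
    static int[] generateTerrain(long seed, int length) {
        Random rand = new Random(seed);
        int[] heights = new int[length];
        for (int i = 0; i < length; i++) {
            heights[i] = rand.nextInt(100);
        }
        return heights;
    }

    public static void main(String[] args) {
        int[] clientA = generateTerrain(42L, 16);
        int[] clientB = generateTerrain(42L, 16);
        System.out.println(Arrays.equals(clientA, clientB)); // prints true
    }
}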

Here’s one I use.


static long seed;

public void initSeed(long firstSeed)
{ 
	seed = firstSeed;
}

public float nextFloat()
{
	return (((seed= 1664525*seed + 1013904223)>>16) / (float)0x10000);
}

Typically you start it like this:

Random.initSeed(System.nanoTime());

Then it’s seeded using the time that the game was first executed. Also that gives you random numbers where (0,1] i.e. it can be 0 but it can’t be 1. If you want it to be one as well, just change 0x10000 to 0xffff.

As this is just magic bit shifting it really isn’t very random but it will work just fine for pretty much everything. Plus it’s super fast and very simple.

Eli

I guess you meant [0,1) :wink:
and with 0xffff it’s (0,1]

Thanks for the helpful methods, they are good alternatives
to what Math.random() generates.

@Epitaph64: What's in a name? It's true that very few PRNGs have sexy names. There's “Mother” (short for “Mother-of-All”) and “KISS”, but pretty much everything else has names like MRG31k3p…not hip. Seriously, if your generator works and you don't have performance issues, none of this matters…Twist away!

@Demonpants: This is another LCG (like the one I posted above and the JDKs). Longer period since 64 instead of 32 bits. There are some issues with your code: the seed is 64-bits and your float conversion is assuming 32, also the shift should be unsigned.

I took this over from C and hastily converted it to Java, so yeah, I removed a couple of unsigneds and added some publics, but I forgot to use a double instead of a float.

MT is a bit heavy for games, I think. What about m-sequences?

An m-sequence has very cool math properties. It does not have the hyperplanes problem and is blindingly fast in a shift-xor implementation. Since an m-sequence has a period of 2^64-1, you can combine it with an “unrandom” sequence of period 2^64, giving a total period of 2^128-1. I need that for simulations, but you could drop that bit.

Note that nextDouble() is the main method I use; the default implementation calls next() twice while this one does not. The main() method is for use with dieharder.

It passes all the dieharder tests. The default Java one doesn't. MT does too (well, close enough).

I use this in production simulation code. Oh, should I put code here? I've put it in the public domain etc…


import java.util.Random;

/**
 * A linear random method based on xor shifts--which is a fast way to do an LFSR, since one clock per bit
 * is slow. This is faster per step than java.util.Random.
 * 
 * This does better than LCG generators (i.e. it passes the monkey tests and DNA tests where LCGs don't).
 * In other words it does not have the hyperplane problem.
 * 
 * This has a period of 2**128-1. This is quite easy to prove as follows.
 * 
 * The counter can be shown to be an LFSR with period 2**64-1. However we have a simple counter in
 * the stepped variable. That is, after 2**64 counts stepped mod 2**64 == 0. Hence the phase is
 * shifted by one and the periods of stepped and counter are relatively prime. We combine them with
 * addition, which is slightly nonlinear due to carry. Of course we could just use a simple ++ counter,
 * but that's boring.
 * 
 * We could use * for this as well and have a stronger condition for non-linearity.
 * 
 * We speed up the nextDouble function as well.
 * 
 * The main method dumps random data to standard out. This can be used with the dieharder tests. However
 * there seems to be a problem with the dieharder tests when using pipes.
 * 
 * @author bob
 * 
 */
public final class Random64 extends Random {
	private static final double LONG_2_DOUBLE =1.0 / (double)(1L<<53);
	private static final long MASK_53=(1l<<53)-1;
	private static final long serialVersionUID =-6678124822567014769L;

	private static final long PRIME =0xd4d6712ee634312dl;
	private long counter ;
	private long stepped ;

	public Random64() {
		super();
		setSeed(System.nanoTime());
	}

	public Random64(long seed) {
		super(seed);
		setSeed(seed);
	}
	
	private void step(){
		counter ^=(counter << 21);
		counter ^=(counter >>> 35);
		counter ^=(counter << 4);
		stepped +=PRIME;
	}
	/**
	 * could use all 64 bits over 2 calls?
	 */
	@Override
	protected int next(int bits) {
		step();
		// Hate the dumb mask
		return (int) (((counter + stepped) >>> 31) & ((1l << bits) - 1));
	}

	@Override
	public void setSeed(long seed) {
		counter = seed;
		if (counter == 0)
			counter = 1;
		stepped = 0;
		step();
		step();
		step();
		stepped = 0;
	}

	/**
	 * uses only 32 bits of precision.
	 */
	@Override
	public double nextDouble() {
		step();
		return ((counter+stepped) & MASK_53)*LONG_2_DOUBLE;
	}
	

	@Override
	public long nextLong() {
		step();
		return counter+stepped;
	}

	
	
	public static void main(String[] args) {
		Random rand =new Random64();		
		try {
			byte[] buffer =new byte[1024 * 1024*8];
			for (;;) {
				rand.nextBytes(buffer);
				System.out.write(buffer);
			}
		} catch (Exception e) {
			e.printStackTrace();
		}
	}
}

Edit: fixed a seed bug and mixed up the seeding just a little.

Notes on delt0r’s post (some suggestions below):
LFSR = Linear Feedback Shift Register (MT is LFSR + tempering step)

delt0r's generator is based on “Xorshift RNGs” by George Marsaglia. (As a side note: some of Doug Lea's code was using this generator, so it might be hiding in the current JDK in the concurrency stuff.)

In real life when I need a good generator I use a variant of this presented by Richard Brent in “Some long-period random number generators”.


More RNG comments:

When performance is an issue I'm attempting to promote using the worst (edit: fastest) generator possible that's suitable for the task, and this is reasonable because all pseudo-random number generators are flawed. You simply choose one whose “faults” you either don't care about or are not pronounced enough to cause obvious defects. In my limited experience most RNG problems are usage-based rather than caused by a bad generator.

Why would you want a long period? From the random texture example above, it's obvious that a period shorter than the “calculation” (the table random) is bad. The general rule of thumb for good properties is something like: “a single calculation should not need more than sqrt(P) values”, where P is the period (cube root if birthday spacing matters). So if you have an RNG with period 2^31, you should be good for ~46K samples; for period 2^63, about 3 billion. But in most cases you don't even care about that. Creating a 128x128x3 random texture needs more than 46K values, but if you use two generators from the same family with 2^31 and 2^63 (or higher) periods, you won't be able to tell which is the better of the two.

Another big issue with PRNGs is Marsaglia's “fall mainly in the planes”, where they have lattice structures in higher dimensions. The Wikipedia page has an animation of an LCG generating random points in 3D here, which demos the problem. But look again at the textures by Markus_Persson above, which are another 3D sampling. Visually, few people will see the “defects” of the two LCGs vs MT. And most games are constantly in motion, so the player doesn't have time to ponder minor defects anyway.

Use a fast generator. If you don't see defects, the quality is fine. If you do, make sure you're not doing something funky.

As mentioned above, I use a variant of the generator posted above (in the rare instances I need something better than an LCG). It can have good properties and be very fast. To make it fast, I use a longer-period generator (which requires a seed array) and precalculate a “round” of values (MT does the same thing for its LFSR part). The main issue here is that each step has a dependency chain that slows calculation if values are produced one at a time. Since it is a good quality generator, I also “cache” a single value and keep track of how many bits are left in the cache. When I need an n-bit number, the method grabs it from this cached value.

This last sentence makes an obvious but important point: call (whatever) random less often. If your PRNG creates n good bits, use them as a collection if you easily can.
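A rough sketch of that bit-caching idea (my own illustration, with java.util.Random as the underlying source):

// Caches one 64-bit random value and hands out n bits at a time,
// so the underlying generator is called less often.
public final class BitCache {
    private final java.util.Random rng = new java.util.Random();
    private long cache;
    private int bitsLeft;

    /** Returns an n-bit random value (1 <= n <= 32). */
    public int nextBits(int n) {
        if (bitsLeft < n) {
            cache = rng.nextLong();
            bitsLeft = 64;
        }
        int result = (int) (cache & ((1L << n) - 1));
        cache >>>= n;
        bitsLeft -= n;
        return result;
    }
}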

@delt0r: You might want to think about adding a decorrelation step to the seeding process.

Nice clarifications.

@Roquen

Yea. Nice summary of the xor generators. I used the original papers… so my references were not really suitable for here. Also, MT is a generalized LFSR. This is important since the theory is not as well developed (and is much harder to do) as for plain LFSRs (i.e. GF(2^n)); i.e. lagged Fibonacci generators.

I have changed the code to “seed” better and, most importantly, to fix a bug. YMMV.