Which is faster? :P

So I was thinking about this code and I’m wondering which is faster.

Random random = new Random();
int rand = random.nextInt(2);
if (rand == 0) {
	// DO SOMETHING
} else if (rand == 1) {
	// DO SOMETHING ELSE
}

Or this.

// Let's say this array is filled only when the program starts and is never changed.
Action[] actions = {
	new Action() {
		public void doAction() {
			// DO SOMETHING
		}
	},
	new Action() {
		public void doAction() {
			// DO SOMETHING ELSE
		}
	}
};


// Now let's say we do this in a loop every time we update the game or something
Random random = new Random();
int rand = random.nextInt(2);

actions[rand].doAction();



interface Action {
	public void doAction();
}
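For reference, here is option B assembled into one compilable class (a sketch: the printed strings are placeholders, the lambdas need Java 8+ — anonymous classes like the ones above work the same way on older JREs):

```java
import java.util.Random;

public class ActionDispatch {
    interface Action {
        void doAction();
    }

    // filled only when the program starts, never changed
    static final Action[] actions = {
        () -> System.out.println("something"),
        () -> System.out.println("something else")
    };

    static final Random random = new Random();   // reused, not re-created each frame

    public static void main(String[] args) {
        // one dispatch per update
        actions[random.nextInt(actions.length)].doAction();
    }
}
```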

The “new” dominates the computation. That’s followed by nextInt(2). If you only need to do one of two fixed things with equal probability then:


// rng already exists; if you can require JRE 7+, make it a ThreadLocalRandom instead of a Random... or a custom PRNG

// calling nextInt() because nextBoolean() is kinda borked, but you could use that instead.
if (rng.nextInt() < 0) {
   // do one
}
else {
  // do other
}
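A minimal runnable sketch of that suggestion, assuming JRE 7+ so ThreadLocalRandom is available (the printed strings are placeholders):

```java
import java.util.concurrent.ThreadLocalRandom;

public class CoinFlip {
    public static void main(String[] args) {
        // nextInt() returns a full 32-bit value; the sign bit is set half the time,
        // so "negative vs. non-negative" is a fair coin with no bound-rejection logic
        if (ThreadLocalRandom.current().nextInt() < 0) {
            System.out.println("one");
        } else {
            System.out.println("other");
        }
    }
}
```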

I’m asking which is faster. Not which is faster to write. Let’s say we had 1000 actions to check. I guess array would be faster.

Take the first version,
for one because it’s much more intuitive, and also I doubt that the bytecode for the second runs any faster.

Also reuse the Random object.

If you want to speed it up, rather fill an array with random booleans at startup, and iterate over it.
This saves you from running .nextInt every frame.

like:

if(bools[counter++]) doMyThing();
else doTheOther();

or, if the methods return a value you can assign the result:

result = bools[counter++] ? doMyThing() : doTheOther();

Make sure not to run over the array’s limit
(reset the counter and/or reinitialize the array with new booleans).
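A self-contained sketch of that idea (the array size, loop count, and printed strings are made up for illustration):

```java
import java.util.Random;

public class PrebuiltBooleans {
    public static void main(String[] args) {
        Random rng = new Random();

        // filled once at startup
        boolean[] bools = new boolean[1024];
        for (int i = 0; i < bools.length; i++) {
            bools[i] = rng.nextBoolean();
        }

        int counter = 0;
        for (int frame = 0; frame < 5; frame++) {
            if (bools[counter]) System.out.println("my thing");
            else System.out.println("the other");
            counter = (counter + 1) % bools.length;   // wrap so we never run over the limit
        }
    }
}
```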


a general rule for optimization:

If you want more speed, you need to use more memory.
If you want to use less memory, you have to sacrifice some speed.

For form’s sake: the likelihood of this being statistically important is zero.

That’s the question I answered.

That’s slower and makes random very not random.

“If you want to get from the UK to the US as fast as possible, then book a ticket on the Titanic”.

This statement is the same as the “memory vs. speed” rule of thumb. Both of these ships sank a very long time ago. It might still be true if you’re programming a 10-year-old Lego Mindstorms… otherwise forget it. In general the opposite is true.

A is much faster; it requires nothing more than a conditional branch.

B’s implementation on the other hand ( array[index].method() ) requires:

if (array == null) throw NullPointerException
if (index >= array.length) throw ArrayIndexOutOfBoundsException
if (array[index] == null) throw NullPointerException
a method call using invokeinterface (which requires a search of the method table, creation of a stack frame, etc. etc.)
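A minimal pair you could feed to javap -c to see that difference yourself (class and field names are made up; the bodies just bump a counter):

```java
public class DispatchShapes {
    interface Action { void doAction(); }

    static int value;

    static final Action[] actions = {
        () -> value++,
        () -> value--
    };

    // option A: nothing but a conditional branch
    static void branch(int rand) {
        if (rand == 0) value++;
        else value--;
    }

    // option B: array load (implicit null + bounds checks) followed by invokeinterface
    static void dispatch(int rand) {
        actions[rand].doAction();
    }

    public static void main(String[] args) {
        branch(0);      // value -> 1
        dispatch(1);    // value -> 0
        System.out.println(value);
    }
}
```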

Though as previously said; you shouldn’t be considering performance at this level, simply write your code in the clearest way possible.

[quote]That’s slower and makes random very not random.
[/quote]
Well, try to prove that it’s slower…

Besides, it’s as random as you want it to be;
picking a number from a list of random numbers (never from the same list position) qualifies as a random number…

If it were not random, then it would be deterministic.
What’s the formula then supposed to be?

I don’t need to. It walks more memory.

Pseudo random is deterministic. That’s a good thing.

Could you guys stop giving stupid advice like “reuse the Random object”.
I will now try to restate my question.
Which way would it be faster to do it, if we have 10000 actions to check?

Unless this is really a bottleneck in your application, stop worrying about the speed.

The second is better OOP; if you like OOP, take it.
The first is procedural programming; if you like that, take it.

To me, the second solution looks better, but only because I like OOP. I still might implement the first one at any time if I feel lazy.

The answer is the same; you shouldn’t care which is faster, you should be writing your code so that it’s readable & maintainable.

Case in point; for 10000 actions, a switch statement with sequential case values would be fastest… and simultaneously the least maintainable.
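A sketch of that shape, assuming the actions can be numbered densely from zero (the case bodies are placeholders); javac compiles a dense switch like this to a tableswitch, i.e. a constant-time jump table:

```java
public class SwitchDispatch {
    static String act(int action) {
        // sequential case values 0..2 -> tableswitch in the bytecode
        switch (action) {
            case 0: return "attack";
            case 1: return "defend";
            case 2: return "flee";
            default: return "idle";
        }
    }

    public static void main(String[] args) {
        System.out.println(act(1));
    }
}
```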

For 10k actions I’d consider determining the commonality of the actions, and making it data-driven.

Here’s some code to play around with;
using a list is about 4 times faster on my system.

Having 100,000 discrete random numbers should suffice for normal game logic (like AI decisions).


import java.util.Random;

public class TestSpeed
{

	static int value=0;
	
	static final int RANDOMLISTSIZE=100001; //use more if more random needed
	static boolean[] rans=new boolean[RANDOMLISTSIZE];
	static int counter=0;
	static Random ran=new Random();
	
	public static void main(String[] args)
	{
		//preparation at initialization
		//extra time does not impact the gameloop
				
		for(int i=0;i<RANDOMLISTSIZE;i++)
		{
			rans[i]=ran.nextBoolean();
		}
	
		//start timekeeping
		long start = System.currentTimeMillis();
		
		for (int i=0;i<100000000;i++)
		{
			//randomFromList();  //FAST
			randomFromRandom();	 //SLOW		
		}
		
		//stop timekeeping, evaluate
		System.out.println(System.currentTimeMillis()-start+" ms :: "+ value);
		
	}
	
	static void randomFromList()
	{
		if(rans[counter++]) inc();
		else dec();			
		if(counter>=RANDOMLISTSIZE) counter=0;
	}

	static void randomFromRandom()
	{
		if(ran.nextBoolean()) inc();
		else dec();
	}
	
	static void inc()
	{
		value ++;
	}
	static void dec()
	{
		value --;
	}

}


In the end of the day, to know how fast something is compared to something else,
you need to MEASURE it…

If you want reasonable advice, then you should pose the question you’re asking and not one which you aren’t.

@Damocles: your benchmark is flawed.

Of course the output is not as random for HUGE amounts of random numbers,
but for up to RANDOMLISTSIZE it is completely random.

I used that many iterations so the effect has a measurable impact on the timing.

The point was to prove that picking something from a list is simply faster (even in 2013)
than calling a method on Random.

The approach to use depends, of course, on what the aim is.
If you only need a random number every game frame, it makes no sense.

If you need a LARGE amount of random numbers (millions), you
also should not take this approach.
But then, that’s something you won’t need in real time anyhow (e.g. terrain generation or such).

The problem (and also the reason why you don’t get the answers you want) is that there are many external dependencies which influence the speed. How clogged is your memory bus already? Is the code small enough to fit into the CPU cache? Will the HotSpot compiler kick in and optimize the code? If yes, how optimizable is the code? How big are the actions actually (the amount of code in each action)?

That’s also why it is hard to benchmark, since a benchmark can’t simulate e.g. a clogged memory bus, and most likely is small enough to run in the cache and also be optimized. Which is not necessarily the case in your real scenario.

@Damocles: It’s slower to look up in a table. Your benchmark is flawed.

care to elaborate? Not what I measure…

Roquen is right. An L3 cache miss can cost hundreds of cycles, and even an L2 hit is many cycles; you can calculate a lot in that time. LUTs are something that only paid off a long time ago. Memory access is the dominant factor in performance on modern CPUs. Cache coherency is more or less everything.

Microbenchmarks with LUTs are even worse at misleading people. Cache performance when doing the same thing a million times back to back is not the same as doing something else in between each lookup.

Finally, who cares. The thing you do inside your branch is going to cost more CPU than the branch itself.

@op: If you just want an honest response: the fewer times you use ‘new’, the better. Everything else has such a minimal effect on speed that it doesn’t matter.
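A rough sketch of how you could see that for yourself (a naive timing loop, not a proper benchmark; the absolute numbers are machine-dependent):

```java
import java.util.Random;

public class ReuseRandom {
    public static void main(String[] args) {
        final int N = 1_000_000;

        long t0 = System.nanoTime();
        int a = 0;
        for (int i = 0; i < N; i++) {
            a += new Random().nextInt(2);   // allocates (and seeds) a Random every iteration
        }
        long freshNs = System.nanoTime() - t0;

        Random rng = new Random();          // one reused instance
        t0 = System.nanoTime();
        int b = 0;
        for (int i = 0; i < N; i++) {
            b += rng.nextInt(2);
        }
        long reusedNs = System.nanoTime() - t0;

        System.out.println("new each time: " + freshNs + " ns (sum " + a + ")");
        System.out.println("reused:        " + reusedNs + " ns (sum " + b + ")");
    }
}
```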

Why don’t you do 10,000 iterations and time it? I don’t get what you are doing here. Are you trying to get someone else to do it for you? If you have the code, then it’s not hard to figure out.