Floating point error

This is something that’s annoyed me for ages:

public class FPError
{
    public static void main( String[] args )
    {
        float f = 0.0f;
        for( int i = 0; i < 10; i++ )
        {
            f += 0.1f;
            System.out.println( f );
        }
    }
}

gives

0.1
0.2
0.3
0.4
0.5
0.6
0.70000005
0.8000001
0.9000001
1.0000001

Where does this error come from? Is it really too much to ask that simple addition works?

It comes from the fact that numbers are stored in binary in a finite sequence of bytes. If you could only remember numbers in decimal notation with a predetermined number of digits, you would see the same problem when you try to do things with amounts like 1/3:


// using 4 digits:
1/3 = 0.333 (not bad)
1/3 + 1/3 = 0.666 (still OK)
1/3 + 1/3 + 1/3 = 0.999 (looks fine for now)
1/3 + 1/3 + 1/3 + 1/3 = 1.332 (hey, wait a minute! shouldn't it be 1.333?)

shmoove

That sounds reasonable for fractions like 1/3, as you can’t express that exactly no matter how many bytes are used.

But why for 0.6 and 0.1? AFAICS, there’s no problem expressing them exactly in a 32-bit float.

Here’s an even more concise test case:

public class FPError
{
    public static void main( String[] args )
    {
        float f = 0.1f + 0.6f;
        if( f != 0.7f )
        {
            System.out.println( "wtf? " + f );
        }
    }
}

gives:

wtf? 0.70000005

It has to do with what base you use for your numbers.
In base 3, 1/3 is expressed as exactly 0.1. No fractions.

In base 2 (binary), it becomes very tricky to exactly express 1/10. It’s something like 0.0001100110…
I haven’t worked out if it’s infinitely long like 1/3 is in decimal (base 10), but I’ve got a feeling it just might be.
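You can actually inspect the exact value that gets stored: the BigDecimal(double) constructor preserves the binary value exactly instead of rounding it back to a pretty decimal. A quick sketch (the class name ExactValue is just for illustration):

```java
import java.math.BigDecimal;

public class ExactValue
{
    public static void main( String[] args )
    {
        // The BigDecimal(double) constructor keeps the exact binary value,
        // so this prints what 0.1f really is once it's stored in a float.
        System.out.println( new BigDecimal( 0.1f ) );
        // prints 0.100000001490116119384765625 -- the repeating 0011
        // pattern is cut off after 24 significant bits and rounded up,
        // which is why it comes out slightly above 0.1
    }
}
```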

You’ve got a point there.

[quote]AFAICS, there’s no problem expressing them exactly in a 32-bit float.
[/quote]
Apparently, AFAICS isn’t terribly far :slight_smile:

It appears there’s nothing to be done except weep for the fact that a number commonly occurring in our decimal-based world is a pathological case for floating-point representation :-[

If you do have a big problem with it, you can use classes like BigDecimal and such, right?

Or make your own RationalNumber class (unless you need irrational numbers, then you’re screwed :)).
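For what it’s worth, the standard class for this is java.math.BigDecimal, and it makes the original loop exact. A rough sketch (class name mine):

```java
import java.math.BigDecimal;

public class ExactSum
{
    public static void main( String[] args )
    {
        // The String constructor stores 0.1 exactly as a decimal,
        // so repeated addition accumulates no rounding error.
        BigDecimal f = BigDecimal.ZERO;
        BigDecimal step = new BigDecimal( "0.1" );
        for( int i = 0; i < 10; i++ )
        {
            f = f.add( step );
            System.out.println( f ); // 0.1, 0.2, ... up to exactly 1.0
        }
    }
}
```

The price is speed and clunkier syntax, but every digit you print is exact.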

shmoove

You want to have even more floating point fun?

How many stars does this method print?


public static void printStars( float start )
{
    float end = start + 100;

    while( start++ < end )
    {
        System.out.print( "*" );
    }
}

The answer is that it’s indeterminate and depends on the value of start. Because floats are an approximation whose accuracy changes with the value, if start is high enough start++ becomes a no-op lost in the rounding and it’s an infinite loop!
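You can see the cutoff directly: a float carries 24 significant bits, so above 2^24 = 16777216 the gap between adjacent floats is bigger than 1, and adding 1.0f gets rounded away. A quick sketch (values chosen just for illustration):

```java
public class LostIncrement
{
    public static void main( String[] args )
    {
        float small = 100.0f;
        float large = 1.0e8f; // well past 2^24 = 16777216

        System.out.println( small + 1.0f > small ); // true
        System.out.println( large + 1.0f > large ); // false: the +1 is
        // smaller than the gap between adjacent floats at this magnitude,
        // so it's rounded away -- printStars(1.0e8f) would never terminate
    }
}
```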

For this reason I personally believe that allowing ++ on floats was a design mistake on the part of the original Java designers.

If so, +=1 shouldn’t be allowed on floats either, since it’ll fail for very large floats.

I guess it boils down to if you read i++ as “next” or as “+=1”

This is probably just a problem of internal representation vs. output / formatting. Just keep in mind that the internal representation is much more precise than you need your output to be (in most cases, that is; if not, then using float/double primitives is a bad idea anyway).

For example, this works nicely:


import java.text.DecimalFormat;
import java.text.NumberFormat;

public class FPFormat
{
    public static void main( String[] args )
    {
        NumberFormat nf = new DecimalFormat( "0.00" );

        float f = 0.0f;
        for( int i = 0; i < 10; i++ )
        {
            f += 0.1f;
            System.out.println( nf.format( f ) );
        }
    }
}

[quote]If so, +=1 shouldn’t be allowed on floats either, since it’ll fail for very large floats.

I guess it boils down to if you read i++ as “next” or as “+=1”
[/quote]
Yup good point.

To me though, +1 is an arithmetic operation while ++ is an ordinal increment.

Maybe I’m just weird :slight_smile:

[quote] Where does this error come from? Is it really too much to ask that simple addition works?
[/quote]
You’re laboring under some misapprehensions here:

  1. It’s not an error
  2. It does work

This is how floating point numbers are defined to work.

The alternatives are fixed point and binary-coded decimal. Floating point is used more than these today because of faster implementations in hardware and better general-purpose flexibility.
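For comparison, a fixed-point version of the original loop just counts tenths in an integer and only converts back for display. A minimal sketch:

```java
public class FixedPoint
{
    public static void main( String[] args )
    {
        // Fixed point: represent everything in tenths, use exact
        // integer arithmetic, and only convert when printing.
        long tenths = 0;
        for( int i = 0; i < 10; i++ )
        {
            tenths += 1; // add 0.1, i.e. one tenth
            System.out.println( tenths / 10 + "." + tenths % 10 );
        }
    }
}
```

This prints 0.1 through 1.0 with no drift, at the cost of having to pick the scale (tenths here) up front.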