It doesn’t suck. Floating point numbers are not supposed to be used for exact calculations! This is the first thing anyone should know about them.
The reason is that computers work internally in binary. A fraction expressible with finitely many digits in decimal may need infinitely many digits in binary: 0.1, for instance, is an infinitely repeating fraction in base 2. The decimal system has the same problem, by the way: 1/3 never terminates as 0.333…
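You can see this directly in any language with IEEE 754 doubles; here's a quick Python sketch:

```python
# 0.1 has no finite binary representation, so the stored value is
# only an approximation -- and the approximation errors show up
# in arithmetic.
a = 0.1 + 0.2
print(a == 0.3)       # False
print(a)              # 0.30000000000000004
print(f"{0.1:.20f}")  # the approximation actually stored for 0.1
```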
If you want exact calculations, you will have to write (or borrow) a library that deals with rational numbers. Just converting everything to integers generally won’t help, since you will still run into infinitely repeating fractions now and then.
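In Python you don't even have to write it yourself; the standard library ships one. A small sketch with `fractions.Fraction`:

```python
from fractions import Fraction

# Rational arithmetic is exact: no rounding ever happens.
x = Fraction(1, 10) + Fraction(2, 10)
print(x == Fraction(3, 10))     # True, unlike 0.1 + 0.2 == 0.3

# Repeating fractions are no problem either: 1/3 has no finite
# expansion in any fixed base, but as a ratio it stays exact.
print(Fraction(1, 3) * 3 == 1)  # True
```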
By the way, floating point numbers are extremely powerful. A 32-bit integer can express every mathematical integer from roughly minus 2.1 billion to plus 2.1 billion, which is fine as far as it goes. A floating point number splits its bits into two parts:
- the mantissa (say, M), which uses most of the bits and holds the significant digits of the number.
- the exponent (let’s call it q), which determines how large the value actually is.
A number is expressed by
M * 2^q,
and thanks to this notation, floating point numbers can be exceedingly large. With 32 bits distributed sensibly between mantissa and exponent, you can express numbers as large as about 10^38, though many adjacent mathematical integers in that range share the same floating point representation. With 64 bits you can reach about 10^308.
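That trade-off (huge range, limited precision) is easy to demonstrate. A 64-bit double has a 53-bit mantissa (including the implicit leading bit), so above 2^53 it can no longer tell adjacent integers apart:

```python
import sys

# Beyond 2**53 the mantissa runs out of bits, so consecutive
# mathematical integers collapse onto the same float.
print(float(2**53) == float(2**53 + 1))  # True: +1 is rounded away
print(float(2**53) == float(2**53 + 2))  # False: the gap is now 2

# The exponent, meanwhile, buys an enormous range:
print(sys.float_info.max)  # about 1.8 * 10**308
```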
(edit: minor corrections)