storing limited range floating point numbers efficiently?

impressive! your solution easily outperforms both of ours :slight_smile:

oops, you are correct; it was a relic of a previous version of the algorithm, which compared an int version of the input value instead of a real value.

I think you need to back up and explain why you think you can, or should, invent a more efficient
way to store floats than the native float type. Some very smart people designed the IEEE floating
point representation.

One should always be thinking of better solutions to problems! Be that as it may, I am not attempting to make a “better” float; rather, I have specific requirements for which storing as a native float is not a good fit.

  1. the number has a fixed maximum number of digits, e.g. 6.
  2. the number needs to be represented with as few bits as possible.
  3. the encoded number should represent the actual number as closely as possible.
  4. the number has a maximum value derived from the maximum number of digits, e.g. for 6 digits the maximum is 999999.
  5. likewise, the number has a minimum value derived from the maximum number of digits, e.g. for 6 digits the minimum is -999999.
  6. the precision of the number is determined by the number of digits used in the “integer” part, with the remaining digits used to represent the “fractional” part. e.g. representing the floating point number 812.633445: it uses 3 digits for the integer part, leaving 3 digits for the fractional part, giving 812.633.
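To make the requirements concrete, here is a minimal Python sketch of one possible encoding (my own illustration, not anyone's actual implementation): 1 sign bit, 3 bits for the fractional-digit count, and 20 bits for the 6-digit magnitude (2**20 = 1048576 > 999999), for 24 bits total.

```python
# Hypothetical sketch: pack a number with at most 6 significant decimal
# digits into 24 bits: 1 sign bit, 3 bits for the fractional-digit count,
# and 20 bits for the 6-digit magnitude.

DIGITS = 6

def encode(value):
    sign = 1 if value < 0 else 0
    mag = abs(value)
    # the integer part determines how many digits remain for the fraction
    int_digits = len(str(int(mag))) if mag >= 1 else 1
    frac_digits = DIGITS - int_digits
    scaled = round(mag * 10**frac_digits)   # e.g. 812.633445 -> 812633
    return (sign << 23) | (frac_digits << 20) | scaled

def decode(bits):
    sign = -1 if (bits >> 23) & 1 else 1
    frac_digits = (bits >> 20) & 0b111
    scaled = bits & 0xFFFFF
    return sign * scaled / 10**frac_digits

print(decode(encode(812.633445)))   # 812.633
```

This keeps the full input range (-999999 to 999999) while using only 24 of the 32 bits a native float would take.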

ah ok, now I better understand your needs

Ok, but where do these arcane specifications arise?

There can be reasons - for example in financial programs, you can never use normal floating point to
represent money, because rounding errors would cause your numbers to not add up properly. In this
case, it sounds like your specifications arise from a communications protocol - i.e. a 6 digit field, where
the constraints are on the representation, not on the underlying numbers.
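The money point is easy to demonstrate, if a quick aside helps: binary floating point cannot represent most decimal fractions exactly, so sums drift, while Python's decimal module (used here just as an illustration of exact decimal arithmetic) does not.

```python
from decimal import Decimal

# Ten 10-cent items should total exactly one dollar, but binary
# floating point accumulates a tiny error:
total = sum([0.10] * 10)
print(total == 1.0)     # False on IEEE-754 doubles

# Exact decimal arithmetic avoids the drift:
print(sum([Decimal("0.10")] * 10) == Decimal("1.00"))   # True
```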

I think you would do best by using regular floating point to represent your numbers, and meet your
constraints by controlling the conversion to and from floating point.

I have an idea for a video codec. I can control image quality / file size by reducing the accuracy of certain input “real” numbers.

This is what all the methods so far do…

save to file

float --> integer representation --> bit stream --> file

read from file

file --> bit stream --> integer representation --> float
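The middle steps of that pipeline can be sketched as follows (my own illustration, assuming the integer representation is a 24-bit code produced by some encode/decode pair): each code is packed into 3 bytes of the stream, and unpacked on the way back.

```python
# Hypothetical sketch of "integer representation <--> bit stream":
# each number has already been turned into a 24-bit integer code;
# the codes are packed 3 bytes each into the byte stream for the file.

def codes_to_bytes(codes):
    # big-endian, 3 bytes per 24-bit code
    return b"".join(c.to_bytes(3, "big") for c in codes)

def bytes_to_codes(data):
    return [int.from_bytes(data[i:i+3], "big") for i in range(0, len(data), 3)]

codes = [0x0C665D, 0x000001]          # example 24-bit values
stream = codes_to_bytes(codes)
print(len(stream))                    # 6 bytes for two numbers
```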

If this is a programming exercise “see how good a codec you can design” then of course go for it. But be aware
that real codecs are designed by very smart people with a lot of science and math at their disposal… You are unlikely
to do better. There could be exceptions, if you have a particular video application in mind and you can apply domain
specific knowledge in ways that a general codec couldn’t.

If you haven’t already done so, read the specs for existing codecs such as JPEG and MP3.
It’s really interesting reading.

On a similar note, if you read the IEEE floating point spec, it’s fairly obvious how to reuse the design
but reduce the size of the exponent and mantissa so they can only represent numbers in the ranges
you’ve specified. You’d end up with a few fewer bits - maybe 24 instead of 32 bits.
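As a rough sketch of that idea (my own illustration, not a real format): shrink single precision (1 sign / 8 exponent / 23 mantissa bits) down to a hypothetical 24-bit layout (1 / 7 / 16) by truncating the mantissa and re-biasing the exponent. NaN, infinity, and subnormals are ignored for brevity.

```python
import struct

# Hedged sketch: reduce IEEE-754 single precision (1/8/23) to a
# hypothetical 24-bit format (1/7/16).  The mantissa is truncated and
# the exponent re-biased from 127 to 63.  Zero is handled; NaN/Inf and
# subnormals are deliberately left out.

def f32_bits(x):
    return struct.unpack(">I", struct.pack(">f", x))[0]

def to_f24(x):
    b = f32_bits(x)
    sign = (b >> 31) & 1
    exp  = (b >> 23) & 0xFF
    man  = b & 0x7FFFFF
    if exp == 0:                      # zero / subnormal -> zero
        return sign << 23
    exp24 = exp - 127 + 63            # re-bias for a 7-bit exponent
    assert 0 < exp24 < 127, "out of range for the 24-bit format"
    return (sign << 23) | (exp24 << 16) | (man >> 7)

def from_f24(b):
    sign = (b >> 23) & 1
    exp  = (b >> 16) & 0x7F
    man  = b & 0xFFFF
    if exp == 0:
        return -0.0 if sign else 0.0
    bits32 = (sign << 31) | ((exp - 63 + 127) << 23) | (man << 7)
    return struct.unpack(">f", struct.pack(">I", bits32))[0]

print(from_f24(to_f24(812.633)))   # close to 812.633 (mantissa truncated)
```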

I am not sure whether you mean to come across as a little insulting, but one could read your replies that way.

Yes, I do not think for a second that I will create an MPEG-4 or H.264 killer codec :slight_smile: I simply had an idea and want to implement it.

For my final-year thesis I did develop a lossless codec, specifically designed for animated video or similar, which was able to outperform (in terms of compression) the readily available codecs.

I wanted to try my hand at a lossless codec, but trying a totally new approach. I do not think it will even get close to the compression of the current state-of-the-art codecs, but I have to start somewhere, no?

I agree, the IEEE floating point spec can readily be converted to use fewer bits. I just thought that implementing my own IEEE floating point class would be a performance overhead, when another solution better suited to my problem could be developed which would be faster.

[quote]Yes i do not think for a second that i will create a mpeg-4 or h264 killer codec i simply had an idea and want to implement it
[/quote]
you should! it is always good to believe in yourself :slight_smile: at least I do :wink:

@ddyer : you can’t imagine all the stuff that hasn’t been done yet, so doing computer research is always good, and is especially fun/interesting even when you end up with something useless. I love reinventing the wheel, as I think this is the only way to find really new stuff. My personal feeling is that reading papers and applying them doesn’t lead to new stuff, as papers “format” you. A good example is the DivX format, which was created by someone who was playing around making a new codec :slight_smile: lucky him :wink:

My attitude is completely different if you are trying to solve a practical problem, or if you are just experimenting
to learn and see what develops. I’m all in favor of experimentation.

Yes. However, my solution uses 1 extra bit to store the number, making the total 32 bits.

As it has become clear in the discussion that followed my post that the point of this exercise is compression, a 1-bit inefficiency may make this method unusable.

Cheers, Tim.

There is little gain in compressing individual numbers.

Compress groups of numbers (either in 1D or 2D), and suddenly you’ll be able to save much more than 1 bit per number

granted; however, storing floats in 3 bytes instead of 4 bytes will help compression of lists of numbers.
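A quick way to check that, using zlib purely as a stand-in general-purpose compressor (my own illustration): store the same 1000 numbers once as raw 32-bit floats and once as 3-byte integer codes, and compare the stream sizes before and after compression.

```python
import struct, zlib

# The same 1000 numbers stored as raw 32-bit floats vs. as 24-bit
# (3-byte) integer codes, then run through zlib as a stand-in
# general-purpose compressor.

values = [(i % 1000) + 0.5 for i in range(1000)]

as_f32 = b"".join(struct.pack(">f", v) for v in values)
as_u24 = b"".join(int(v * 10).to_bytes(3, "big") for v in values)

print(len(as_f32), len(as_u24))       # 4000 vs 3000 bytes raw
print(len(zlib.compress(as_f32)), len(zlib.compress(as_u24)))
```

The 3-byte stream starts 25% smaller before the compressor even sees it; how the compressed sizes compare will depend on the data and the compressor.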