How does that work? Is it just because double uses more bits? I’d imagine for the same number of bits, you can store more ints than doubles (assuming you want the ints to be exact values).
No, it has an exponent component: https://en.m.wikipedia.org/wiki/Double-precision_floating-point_format
No, I get that. I’m sure the programming language design people know what they are doing. I just can’t grasp how a double (which has to use at least 1 bit to represent whether or not there is a fractional component) can possibly store more exact integer values than an integer type of the same length (same number of bits).
It just seems to violate some law of information theory to my novice mind.
It doesn’t. A double is a 64-bit value while an int is 32 bits. A long is a 64-bit signed integer, and it stores more exact integer values than a double does.
It doesn’t store more values bit for bit, but it can store larger values.
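A quick sketch of that gap (in Java, which I’m assuming from the `Math.ceil` mention later in the thread): a `long` holds every 64-bit signed integer exactly, while a `double` only has a 52-bit significand, so exact integers run out at 2^53.

```java
public class ExactIntegers {
    public static void main(String[] args) {
        // A double's 52-bit significand means every integer up to 2^53
        // is exactly representable, but beyond that gaps appear.
        long exact = 1L << 53;        // 9007199254740992 -- representable
        long notExact = exact + 1;    // 9007199254740993 -- falls in a gap

        System.out.println((long) (double) exact == exact);       // true
        System.out.println((long) (double) notExact == notExact); // false: rounds back to 2^53
    }
}
```

So a `long` really does store more exact integers than a `double`, even though both are 64 bits wide.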
I would need to look into the exact difference between double and int to know, but a partially educated guess is that they are referring to Int32 vs double, not Int64 (aka long). I did a small search: a double actually uses 1 sign bit, 11 exponent bits, and 52 significand bits — it isn’t a fixed split between whole-number bits and decimal bits.
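To make the IEEE 754 layout concrete — 1 sign bit, 11 exponent bits, 52 significand bits — here’s a small Java sketch that pulls those fields apart with `Double.doubleToLongBits` (the value `-6.25` is just an arbitrary example):

```java
public class DoubleBits {
    public static void main(String[] args) {
        // IEEE 754 double: [1 sign bit][11 exponent bits][52 significand bits]
        long bits = Double.doubleToLongBits(-6.25);

        long sign     = bits >>> 63;              // top bit
        long exponent = (bits >>> 52) & 0x7FF;    // 11 bits, biased by 1023
        long mantissa = bits & 0xFFFFFFFFFFFFFL;  // low 52 bits

        // -6.25 = -1.5625 * 2^2, so sign=1 and the unbiased exponent is 2
        System.out.printf("sign=%d exponent=%d mantissa=0x%X%n",
                sign, exponent - 1023, mantissa);
    }
}
```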
I’m going to guess here (cause I feel this community is for learning)…
Integers have exactness. Doubles have range.
So if `MAX_INT + 1` is possible, then ~(`MAX_INT + 1`) is probably preferable to an overflow or a silent `MIN_INT`.
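A small Java illustration of that trade-off (assuming Java `int`/`double` semantics): `int` arithmetic wraps around silently at the boundary, while the same value widened to `double` just keeps going.

```java
public class Overflow {
    public static void main(String[] args) {
        int maxInt = Integer.MAX_VALUE;        // 2147483647

        int wrapped = maxInt + 1;              // silent wrap-around
        System.out.println(wrapped);           // -2147483648 (Integer.MIN_VALUE)

        double widened = (double) maxInt + 1;  // no overflow as a double
        System.out.println(widened);           // 2.147483648E9
    }
}
```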
But `Math.ceil` probably expects a float, because it is dealing with fractional values (or similar). If the input were an int, rounding wouldn’t be required.
So if `Math.ceil` returned an integer, it could be handed a float larger than `INT_MAX`, which would overflow an int (so: error, or overflow). Or it can just return a float.
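In Java specifically, `Math.ceil` does take and return `double`. A short sketch of why an int return type would be lossy:

```java
public class CeilDemo {
    public static void main(String[] args) {
        // Math.ceil(double) returns a double, never an int.
        System.out.println(Math.ceil(2.1));      // 3.0

        // A result bigger than any int survives as a double...
        double big = Math.ceil(4.0e9 + 0.5);
        System.out.println(big);                 // 4.000000001E9

        // ...but narrowing it to int clamps at Integer.MAX_VALUE.
        System.out.println((int) big);           // 2147483647
    }
}
```

Returning a double sidesteps the overflow question entirely; the caller decides if and how to narrow.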