105 points

Makes sense, 'cause a double can represent way bigger numbers than an integer can.

33 points

Also, double can and does in fact represent integers exactly.

19 points

Only up to 2^53. A long can represent more integers than that. But a double can and does represent every int value exactly.

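To make the 2^53 cutoff concrete, here's a minimal C sketch (assuming an IEEE 754 double, which has a 53-bit significand):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        double limit = 9007199254740992.0;  /* 2^53: last point where every integer is exact */
        printf("%.0f\n", limit);            /* 9007199254740992 */
        printf("%.0f\n", limit + 1.0);      /* still 9007199254740992; 2^53 + 1 rounds away */
        printf("%.0f\n", limit + 2.0);      /* 9007199254740994; the gap is now 2 */

        /* every 32-bit int survives a round trip through double */
        int32_t x = INT32_MAX;
        printf("%d\n", (int32_t)(double)x == x);  /* 1 */
        return 0;
    }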
-1 points

*long long, if we’re gonna be talking about C types. A long is commonly limited to 32 bits.

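A quick way to check on any given platform (the C standard only guarantees long ≥ 32 bits and long long ≥ 64 bits; 64-bit Windows keeps long at 32 bits, while 64-bit Linux and macOS make it 64):

    #include <stdio.h>

    int main(void) {
        printf("long:      %zu bits\n", sizeof(long) * 8);
        printf("long long: %zu bits\n", sizeof(long long) * 8);
        return 0;
    }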
27 points

Also, if you’re dealing with a double, you’re probably dealing with more of them, or doing math that may produce a double. So returning a double just saves some effort.

9 points

Yeah, it makes sense to me. You can always cast it if you want an int that badly. Hell, just wrap the whole function with your own if it means that much to you.

(Not you, but like a hypothetical person)

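For instance, a hypothetical wrapper in C (floor_l is made up here, and it's only safe while the result actually fits in a long):

    #include <math.h>
    #include <stdio.h>

    /* hypothetical wrapper: floor(), but handing back a long */
    long floor_l(double x) {
        return (long)floor(x);
    }

    int main(void) {
        printf("%ld\n", floor_l(3.7));   /* 3 */
        printf("%ld\n", floor_l(-3.7));  /* -4 */
        return 0;
    }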
5 points
Deleted by creator
2 points

A double can represent numbers up to ±1.79769313486231570×10^308, or roughly an 18 with 307 zeroes behind it. You can’t fit that into a long, or even into 128 bits. And even though rounding huge doubles is pointless, since only the first 15–17 significant digits are kept, returning any kind of integer type would lead to inconsistencies, and thus potentially bugs.

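For scale, a short C sketch comparing the two limits (DBL_MAX and ULLONG_MAX come straight from float.h and limits.h):

    #include <float.h>
    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        printf("DBL_MAX    = %e\n", DBL_MAX);             /* ~1.797693e+308 */
        printf("ULLONG_MAX = %e\n", (double)ULLONG_MAX);  /* ~1.844674e+19 */
        /* even a 128-bit unsigned type would only reach ~3.4e38 */
        return 0;
    }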
2 points

doubles can hold numbers way larger than even 64-bit ints

1 point

How does that work? Is it just because double uses more bits? I’d imagine for the same number of bits, you can store more ints than doubles (assuming you want the ints to be exact values).

5 points

No, I get that. I’m sure the programming language design people know what they are doing. I just can’t grasp how a double (which has to use at least 1 bit to represent whether or not there is a fractional component) can possibly store more exact integer values than an integer type of the same length (same number of bits).

It just seems to violate some law of information theory to my novice mind.

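That intuition is actually right: a 64-bit double holds no more distinct values than a 64-bit integer, it just spreads them over a far wider range, so the spacing between neighbouring doubles grows with magnitude. A minimal C sketch that measures the gap (nextafter is standard math.h):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double points[] = { 1.0, 1e8, 1e16, 1e19, 1e300 };
        for (int i = 0; i < 5; i++) {
            double x = points[i];
            /* distance from x to the next representable double */
            printf("near %.0e the gap is %g\n", x, nextafter(x, INFINITY) - x);
        }
        return 0;
    }

Past 2^53 the gap exceeds 1, which is exactly why not every 64-bit integer fits.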
