[ACCEPTED] Weird outcome when subtracting doubles
This is because double is a floating-point datatype. If you want greater accuracy you could switch to using decimal. The literal suffix for decimal is m, so to use decimal arithmetic (and produce a decimal result) you could write your code as
var num = (3600.2m - 3600.0m);
Note that there are disadvantages to using a decimal. It is a 128-bit datatype, as opposed to the 64 bits of a double. This makes it more expensive both in terms of memory and processing. It also has a much smaller range than double.
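A minimal sketch of the difference (variable names are my own; the exact digits a double prints depend on runtime formatting, but the inexactness is guaranteed):

```csharp
using System;

class Program
{
    static void Main()
    {
        // double: 3600.2 cannot be stored exactly in binary,
        // so the subtraction does not yield exactly 0.2
        double d = 3600.2 - 3600.0;
        Console.WriteLine(d == 0.2);   // False

        // decimal: base-10 arithmetic stores 3600.2 exactly,
        // so the result is exactly 0.2
        decimal m = 3600.2m - 3600.0m;
        Console.WriteLine(m == 0.2m);  // True
    }
}
```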
There is a reason. The way the number is stored in memory, in the case of the double data type, doesn't allow for an exact representation of the number 3600.2. It also doesn't allow for an exact representation of the number 0.2.
0.2 has an infinite representation in binary. If you want to store it in memory or processor registers to perform some calculations, some number close to 0.2 with a finite representation is stored instead. It may not be apparent if you run code like this:
double num = (0.2 - 0.0);
This is because in this case, all binary digits available for representing numbers in the double data type are used to represent the fractional part of the number (there is only a fractional part), so the precision is higher. If you store the number 3600.2 in an object of type double, some digits are used to represent the integer part, 3600, and there are fewer digits left for the fractional part. The precision is lower, and the fractional part that is actually stored in memory differs from 0.2 enough that it becomes apparent after conversion from double to string.
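You can measure the precision difference described above by looking at the gap between adjacent representable doubles near each value. Reinterpreting the bits as a long and adding 1 is one way to step to the next representable double (a sketch, not library code):

```csharp
using System;

class Program
{
    // Next representable double above a positive, finite x:
    // reinterpret the bits as a long and add 1.
    static double NextUp(double x) =>
        BitConverter.Int64BitsToDouble(BitConverter.DoubleToInt64Bits(x) + 1);

    static void Main()
    {
        // Near 0.2 all mantissa bits go to the fraction,
        // so adjacent doubles are only ~2.8e-17 apart.
        double gapSmall = NextUp(0.2) - 0.2;

        // Near 3600.2 the integer part consumes bits,
        // so adjacent doubles are ~4.5e-13 apart.
        double gapLarge = NextUp(3600.2) - 3600.2;

        Console.WriteLine(gapSmall); // about 2.8E-17
        Console.WriteLine(gapLarge); // about 4.5E-13
    }
}
```

The stored value of 3600.2 can therefore be off by up to half of that larger gap, which is why the error surfaces only in the 3600.2 case.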
I can't explain it better. I can also suggest reading What Every Computer Scientist Should Know About Floating-Point Arithmetic, or see related questions on StackOverflow.
Change your type to decimal:
decimal num = (3600.2m - 3600.0m);
You should also read this.