# Use float or decimal for accounting application dollar amount?

Should the Float or Decimal data type be used for dollar amounts?

The answer is easy. Never floats. *NEVER*!

According to IEEE 754, floats were always binary; only the newer IEEE 754R standard defined decimal formats. Many fractional binary parts can never equal the exact decimal representation.

Any binary fraction can be written as `m/2^n` (`m`, `n` positive integers), and any decimal fraction as `m/(2^n*5^n)`. As binaries lack the prime factor `5`, all binary numbers can be exactly represented in decimal, but not vice versa.

```
0.3 = 3/(2^1 * 5^1) = 0.3                 (exact in decimal)
0.3 ≈ [0.25, 0.5] [0.25, 0.375] [0.25, 0.3125] [0.28125, 0.3125] ...
      width 1/4       1/8           1/16            1/32
```

So you end up with a number either higher or lower than the given decimal number. Always.

Why does that matter? Rounding.

Normal rounding means 0..4 down, 5..9 up. So it *does* matter whether the result is either `0.049999999999...` or `0.0500000000...`. You may know that both mean 5 cents, but the computer does not know that, and it rounds `0.0499...` down (wrong) and `0.0500...` up (right).

Given that the result of floating-point computations always contains small error terms, the decision is pure luck. It gets hopeless if you want decimal round-to-even handling with binary numbers.

Unconvinced? You insist that in your accounting system everything is perfectly OK? Assets and liabilities equal? OK, then take each of the formatted numbers of each entry, parse them, and sum them with an independent decimal system!

Compare that with the formatted sum. Oops, there is something wrong, isn't there?


It doesn't help against this error, because everyone automatically assumes that the computer sums right, and practically no one checks independently.

Another real-world illustration: a *man from Northampton got a letter stating his home would be seized if he didn't pay up zero dollars and zero cents!*

First you should read *What Every Computer Scientist Should Know About Floating-Point Arithmetic*. Then you should really consider using some type of fixed-point / arbitrary-precision number package (e.g., Java's BigDecimal or Python's decimal module). Otherwise, you'll be in for a world of hurt. Then figure out whether using the native SQL decimal type is enough.

Floats and doubles exist(ed) to expose the fast x87 floating-point coprocessor, which is now pretty much obsolete. Don't use them if you care about the accuracy of the computations and/or don't fully compensate for their limitations.

Just as an additional warning, SQL Server and the .NET Framework use different default algorithms for rounding. Make sure you check out the MidpointRounding parameter in Math.Round(): the .NET Framework uses banker's rounding by default, while SQL Server uses symmetric arithmetic rounding. Check out the Wikipedia article on rounding.

Ask your accountants! They will frown upon you for using float. As David Singer said, use float *only* if you don't care about accuracy. I would always advise against it when it comes to money.

In accounting software, a float is *not* acceptable. Use decimal with four decimal places.

Floating-point types produce unexpected non-terminating expansions. For instance, you can't store 1/3 as a decimal; it would be 0.3333333333... (and so on).

Floats are actually stored as a binary value and a power-of-2 exponent. So 1.5 is stored as 3 x 2^-1 (or 3/2).

Using these base-2 exponents creates some odd non-terminating values. For instance: convert 1.1 to a float and then convert it back again, and your result will be something like 1.0999999999989.

This is because the binary representation of 1.1 is actually 154811237190861 x 2^-47, more than a double can handle.

More about this issue on my blog, but basically, for storage you're better off with decimals.

On Microsoft SQL Server you have the `money` data type: this is usually best for financial storage. It is accurate to 4 decimal places.

For calculations you have more of a problem: the inaccuracy is a tiny fraction, but put it into a power function and it quickly becomes significant.

However, decimals aren't very good for any sort of maths: there's no native support for decimal powers, for instance.

A bit of background here....

No number system can handle all real numbers accurately. All have their limitations, including both standard IEEE floating point and signed decimal. IEEE floating point is more accurate per bit used, but that doesn't matter here.

Financial numbers are based on centuries of paper-and-pen practice, with associated conventions. They are reasonably accurate but, more importantly, reproducible. Two accountants working with various numbers and rates should come up with the same number. Any room for discrepancy is room for fraud.

Therefore, for financial calculations, the right answer is whatever gives the same answer as a CPA who's good at arithmetic. That is decimal arithmetic, not IEEE floating point.

Use SQL Server's **decimal** type.

Do not use *money* or *float*.

*money* uses four decimal places and is faster than decimal, *but* suffers from some obvious and some not-so-obvious problems with rounding (see this Connect issue).

I'd recommend using 64-bit integers that store the whole thing in cents.

Floats are not exact representations, and precision issues are possible, for example when adding very large and very small values. That's why decimal types are recommended for currency, even though the precision issue may be sufficiently rare.

To clarify, the decimal(12,2) type will store those digits exactly, whereas the float will not, as it uses a binary representation internally. For example, 0.01 cannot be represented exactly by a floating-point number; the closest representation is actually 0.0099999998.

The only reason to use float for money is if you don't care about accurate answers.

For a banking system I helped develop, I was responsible for the "interest accrual" part of the system. Each day, my code calculated how much interest had been accrued (earned) on the balance that day.

For that calculation, extreme accuracy and fidelity was required (we used Oracle's FLOAT) so we could record the "billionths of a penny" being accrued.

When it came to "capitalising" the interest (i.e., paying the interest back into your account), the amount was rounded to the penny. The data type for the account balances was two decimal places. (In fact it was more complicated, as it was a multi-currency system that could work in many decimal places, but we always rounded to the "penny" of that currency.) Yes, there were "fractions" of loss and gain, but when the computer's figures were actualised (money paid out or paid in) it was always REAL money values.

This satisfied the accountants, auditors and testers.

So, check with your customers. They will tell you their banking/accounting rules and practices.

Even better than using decimals is using just plain old integers (or maybe some kind of bigint). This way you always have the highest accuracy possible, but the precision can be specified. For example, the number `100` could mean `1.00`, which is formatted like this:

```
int num = 100;                        // stored amount, in cents
int cents = num % 100;
int dollars = (num - cents) / 100;
printf("%d.%02d\n", dollars, cents);  // 1.00
```

If you'd like more precision, you can change the 100 to a bigger value, like 10^n, where n is the number of decimals.

Another thing you should be aware of in accounting systems is that no one should have direct access to the tables. This means all access to the accounting system must be through stored procedures.

This is to prevent fraud, not just SQL injection attacks. An internal user who wants to commit fraud should not have the ability to directly change data in the database tables, ever. This is a critical internal control on your system.

Do you really want some disgruntled employee to go to the backend of your database and have it start writing them checks? Or to hide that they approved an expense to an unauthorized vendor when they don't have approval authority? Only two people in your whole organization should be able to directly access data in your financial database: your database administrator (DBA) and their backup. If you have many DBAs, only two of them should have this access.

I mention this because if your programmers used float in an accounting system, they are likely completely unfamiliar with the idea of internal controls and did not consider them in their programming effort.

You can always write something like a Money type for .NET.

Take a look at this article: A Money type for the CLR. The author did excellent work, in my opinion.

I had been using SQL's money type for storing monetary values. Recently, I've had to work with a number of online payment systems and have noticed that some of them use integers for storing monetary values. In my current and new projects I've started using integers, and I'm pretty content with this solution.

Out of the 100 fractions n/100, where n is a natural number such that 0 <= n < 100, only four can be represented exactly as floating-point numbers. Take a look at the output of this C program:

```
#include <stdio.h>

int main(void)
{
    printf("Mapping 100 numbers between 0 and 1 ");
    printf("to their hexadecimal exponential form (HEF).\n");
    printf("Most of them do not equal their HEFs. That means ");
    printf("that their representations as floats ");
    printf("differ from their actual values.\n");
    double f = 0.01;
    int i;
    for (i = 0; i < 100; i++) {
        printf("%1.2f -> %a\n", f * i, f * i);
    }
    printf("Printing 128 'float-compatible' numbers ");
    printf("together with their HEFs for comparison.\n");
    f = 0x1p-7;  /* == 0.0078125 */
    for (i = 0; i < 0x80; i++) {
        printf("%1.7f -> %a\n", f * i, f * i);
    }
    return 0;
}
```

Have you considered using the money data type to store dollar amounts?

Regarding the con that decimal takes up one more byte, I would say don't care about it. In 1 million rows you will only use 1 more MB, and storage is very cheap these days.

Whatever you do, you need to be careful of rounding errors. Calculate using a greater degree of precision than you display in.

You will probably want to use some form of fixed-point representation for currency values. You will also want to investigate banker's rounding (also known as "round half to even"). It avoids the bias that exists in the usual "round half up" method.

Your accountants will want to control how you round. Using float means that you'll be constantly rounding, usually with a `FORMAT()`-type statement, which isn't the way you want to do it (use `floor`/`ceiling` instead).

You have currency datatypes (`money`, `smallmoney`), which should be used instead of float or real. Storing decimal(12,2) will eliminate your roundings, but will also eliminate them during intermediate steps, which really isn't what you'll want at all in a financial application.

Always use decimal. Float will give you inaccurate values due to rounding issues.

Floating-point numbers can *only* represent numbers that are a finite sum of powers of the base; for binary floating point, of course, that's two.

There are only four two-digit decimal fractions representable precisely in binary floating point: 0, 0.25, 0.5 and 0.75. Everything else is an approximation, in the same way that 0.3333... is an approximation for 1/3 in decimal arithmetic.

Floating point is a good choice for computations where the scale of the result is what matters. It's a bad choice where you're trying to be accurate to some number of decimal places.

This is an excellent article describing when to use float and decimal. Float stores an approximate value; decimal stores an exact value.

In summary, exact values like money should use decimal, and approximate values like scientific measurements should use float.

Here is an interesting example that shows that both float and decimal are capable of losing precision. When adding a number that is not an integer and then subtracting that same number, float loses precision while decimal does not:

```
DECLARE @Float1 float, @Float2 float, @Float3 float, @Float4 float;
SET @Float1 = 54;
SET @Float2 = 3.1;
SET @Float3 = 0 + @Float1 + @Float2;
SELECT @Float3 - @Float1 - @Float2 AS "Should be 0";

-- Should be 0
-- ----------------------
-- 1.13797860024079E-15
```

When multiplying by a non-integer and dividing by that same number, decimals lose precision while floats do not.

```
DECLARE @Fixed1 decimal(8,4), @Fixed2 decimal(8,4), @Fixed3 decimal(8,4);
SET @Fixed1 = 54;
SET @Fixed2 = 0.03;
SET @Fixed3 = 1 * @Fixed1 / @Fixed2;
SELECT @Fixed3 / @Fixed1 * @Fixed2 AS "Should be 1";

-- Should be 1
-- ---------------------------------------
-- 0.99999999999999900
```

