int vs decimal for small money amount handling - c#

For representing money I know it's best to use C#'s decimal data type over double or float; however, if you are working with amounts of money less than millions, wouldn't int be a better option?
Does decimal have perfect calculation precision, i.e. does it not suffer from "Why computers are bad at numbers"? If there is still some possibility of a calculation error, wouldn't it be better to use int and just display the value with a decimal separator?

The amounts you are working with, "less than millions" in your example, aren't the issue. It's what you want to do with the values and how much precision you need for that. And treating those numbers as integers doesn't really help - that just puts the onus on you to keep track of the precision. If precision is critical then there are libraries to help; BigInteger and BigDecimal packages are available in a variety of languages, and there are libraries that place no limit on the precision (Wikipedia has a list). The important takeaway is to know your problem space and how much precision you need. For many things the built-in precision is fine; when it's not, you have other options.

Like li223 said, an integer won't allow you to save values with decimal places - and the majority of currencies use decimal values.
I would advise picking a number of decimal places to work with, which avoids the problem you referred to ("Why computers are bad at numbers"). I work with invoicing and we use 8 decimal places, and it works fine with all currencies so far.

The main reason to use decimal over integer in this case is that decimal, well, allows decimal places, i.e. £2.50 can be properly represented as 2.5 with a decimal, whereas with an integer you can't represent the fractional part. This is fine if, like John mentions in their comment, you're representing a currency such as Japanese Yen that doesn't have decimals.
As for decimal's accuracy, it still suffers from "Why are computers bad at numbers"; see the answer to this question for more info.
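As a minimal illustration (my own sketch, not part of the original answer), compare storing £2.50 as a decimal versus as an integer count of pence:
decimal price = 2.50m;   // the fractional part is stored directly
int pence = 250;         // an int can only hold whole pence; the x100 scale has to be remembered by every caller
With the integer approach, every piece of code touching the value has to know the implied scale.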

Related

Parsing a string into a float

Now I know to use the float.Parse method, but I have bumped into a problem.
I'm parsing the string "36.360"; however, the parsed float becomes 36.3600006103516.
Am I safe to round it off to 3 decimal places, or is there a better tactic for parsing floats from strings?
Obviously I'm looking for the parsed float to be 36.360.
This has nothing to do with the parsing, but is an inherent "feature" of floating-point numbers. Many numbers which have an exact decimal representation cannot be exactly stored as floating-point number, which causes such inequalities to appear.
Wikipedia (and many articles on the web) explains the issues.
Floating point numbers are inherently prone to rounding errors; even different CPU architectures would give a different number out in the millionths decimal place and beyond. This is also why you cannot use == when comparing floating point numbers - they'll rarely evaluate as equal because of floating point precision errors.
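A quick illustration of that point (my own sketch, not part of the original answer):
double a = 0.1 + 0.2;
double b = 0.3;
Console.WriteLine(a == b);                  // False - the two doubles differ in their last bits
Console.WriteLine(Math.Abs(a - b) < 1e-9);  // True - compare within a tolerance instead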
This is due to the fact that float and double are both stored in a binary format, so reconstructing the decimal value is a mathematical process rather than reading it back exactly. If you want to store the value as the actual value, a better choice would be decimal.
Per the MSDN Page on System.Decimal:
The Decimal value type is appropriate for financial calculations requiring large numbers of significant integral and fractional digits and no round-off errors. The Decimal type does not eliminate the need for rounding. Rather, it minimizes errors due to rounding.
There are limits in the precision of floating point numbers. Check out this link for additional details.
If you need more precise tracking, consider using something like a double or decimal type.
That's not an odd issue at all; it's just one of the charming features of floats you're always going to run into: floats can't express that kind of decimal value accurately!
So if you need the result to be exactly 36.36, use a decimal rather than a float.
Otherwise, you're free to round off. Note that rounding won't help though, because it won't be exactly 36.36 after rounding either.
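For instance, a minimal sketch along the lines of that advice:
using System.Globalization;

decimal d = decimal.Parse("36.360", CultureInfo.InvariantCulture);
Console.WriteLine(d);   // 36.360 - the value (even the trailing zero) survives exactly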

How to decide what to use - double or decimal? [duplicate]

Possible Duplicate:
decimal vs double! - Which one should I use and when?
I'm using double type for price in my trading software.
I've noticed that sometimes there are odd errors.
They occur if the price contains 4 digits after the dot, like 2.1234.
When I send "2.1234" from my program, the order appears on the market at a price of "2.1235".
I don't use decimal because I don't need "extreme" precision. I don't need to distinguish, for example, "2.00000000003" from "2.00000000002". I need a maximum of 6 digits after the dot.
The question is - where is the line? When to use decimal?
Should I use decimal for any financial operations, even if I need just one digit after the dot? (1.1, 1.2, etc.)
I know decimal is pretty slow so I would prefer to use double unless decimal is absolutely required.
Use decimal whenever you're dealing with quantities that you want to (and can) be represented exactly in base-10. That includes monetary values, because you want 2.1234 to be represented exactly as 2.1234.
Use double when you don't need an exact representation in base-10. This is usually good for handling measurements, because those are already approximations, not exact quantities.
Of course, if an exact representation in base-10 is not important to you, other factors come into consideration, which may or may not matter depending on the specific situation:
double has a larger range (it can handle very large and very small magnitudes);
decimal has more precision (has more significant digits);
you may need to use double to interact with some older APIs that are not aware of decimal;
double is faster than decimal;
decimal has a larger memory footprint;
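To make the base-10 point concrete with the asker's price (my own sketch, not part of the original answer): summing 2.1234 many times drifts with double but stays exact with decimal.
double dSum = 0;
decimal mSum = 0m;
for (int i = 0; i < 10000; i++) { dSum += 2.1234; mSum += 2.1234m; }
Console.WriteLine(dSum);   // generally drifts slightly away from 21234
Console.WriteLine(mSum);   // 21234.0000 exactly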
When accuracy is needed and important, use decimal.
When accuracy is not that important, then you can use double.
In your case, you should be using decimal, as it's a financial matter.
For financial operations I always use the decimal type.
Use decimal; it's built to represent base-10 fractions well (i.e. prices).
Decimal is the way to go when dealing with prices.
If it's financial software you should probably use decimal. This wiki article summarises quite nicely.
A simple response is in this example:
decimal d = 0.3M+0.3M+0.3M;
bool ret = d == 0.9M; // true
double db = 0.3 + 0.3 + 0.3;
bool dret = db == 0.9; // false
The test with the double fails since 0.3 in its binary representation (base 2) is periodic, so you lose precision. The decimal is stored in base 10, so you do not unexpectedly lose significant digits. Decimal is unfortunately dramatically slower than double. Usually we use decimal for financial calculations, where every digit has to be considered to avoid tolerance issues, and double/float for engineering.
Double is meant as a generic floating-point data type; decimal is specifically meant for money and financial domains. Even though double usually works just fine, decimal might prevent problems in some cases (e.g. rounding errors when you get to values in the billions).
There is an explanation of it on MSDN.
As soon as you start to do calculations on doubles you may get unexpected rounding problems, because a double uses a binary representation of the number while the decimal uses a decimal representation preserving the decimal digits. That is probably what you are experiencing. If you only serialize and deserialize doubles to text or database without doing any rounding, you will actually not lose any precision.
However, decimals are much more suited for representing monetary values where you are concerned about the decimal digits (and not the binary digits that a double uses internally). But if you need to do complex calculations (e.g. integrals as used by actuary computations) you will have to convert the decimal to double before doing the calculation negating the advantages of using decimals.
A decimal also "remembers" how many digits it has, e.g. even though decimal 1.230 is equal to 1.23 the first is still aware of the trailing zero and can display it if formatted as text.
If you always know the maximum number of decimals you are going to have (digits after the point), then the best practice is to use fixed-point notation. That will give you an exact result while still being very fast.
The simplest way to use fixed point is to store the number as an int counted in its smallest unit. For example, if the price always has 2 decimals you would be saving the amount in cents ($12.45 is stored in an int with value 1245, which thus represents 1245 cents). With four decimals you would be storing ten-thousandths (12.3456 would be stored in an int with value 123456, representing 123456 ten-thousandths), etc.
The disadvantage of this is that you sometimes need a conversion, for example when multiplying two values together (0.1 * 0.1 = 0.01 while 1 * 1 = 1; the unit has changed from tenths to hundredths). And if you are going to use other mathematical functions you also have to take things like this into consideration.
On the other hand if the amount of decimals vary a lot using fixed point is a bad idea. And if high-precision floating point calculations are needed the decimal datatype was constructed for exactly that purpose.
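A rough sketch of the idea with two implied decimal places, i.e. cents (my own example, not the answerer's code):
long priceCents = 1245;                                    // $12.45 stored as 1245 cents
long quantityHundredths = 300;                             // 3.00 stored as 300 hundredths
long totalCents = priceCents * quantityHundredths / 100;   // rescale after multiplying: 3735 cents = $37.35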

How to deal with the fact that most decimal fractions cannot be accurately represented in binary?

So, we know that fractions such as 0.1 cannot be accurately represented in binary, which causes precision problems (such as mentioned here: Formatting doubles for output in C#).
And we know we have the decimal type for a decimal representation of numbers... but the problem is, a lot of Math methods do not support the decimal type, so we have to convert to double, which ruins the number again.
So what should we do?
Oh, what should we do about the fact that most decimal fractions cannot be represented in binary? Or, for that matter, that binary fractions cannot be represented in decimal?
Or, even, that an infinity (in fact, an uncountable infinity) of real numbers in any base cannot be accurately represented in any computerized system?
Nothing! To recall an old cliché, you can get close enough for government work... In fact, you can get close enough for any work. There is no limit to the degree of accuracy the computer can generate; it just cannot be infinite (which is what would be required for a number representation scheme to be able to represent every possible real number).
You see, any number representation scheme you can design, in any computer, can only represent a finite number of distinct real numbers with 100% accuracy. And between each adjacent pair of those numbers (those that can be represented with 100% accuracy), there will always be an infinity of other numbers that it cannot represent with 100% accuracy.
So what should we do?
We just keep on breathing. It really isn't a structural problem. We have a limited precision but usually more than enough. You just have to remember to format/round when presenting the numbers.
The problem in the following snippet is with the WriteLine(), not in the calculation(s):
double x = 6.9 - 10 * 0.69;
Console.WriteLine("x = {0}", x);
If you have a specific problem, then post it. There usually are ways to prevent loss of precision. If you really need >= 30 decimal digits, you need a special library.
Keep in mind that the precision you need, and the rounding rules required, will depend on your problem domain.
If you are writing software to control a nuclear reactor, or to model the first billionth of a second of the universe after the big bang (my friend actually did that), you will need much higher precision than if you are calculating sales tax (something I do for a living).
In the finance world, for example, there will be specific requirements on precision either implicitly or explicitly. Some US taxing jurisdictions specify tax rates to 5 digits after the decimal place. Your rounding scheme needs to allow for that much precision. When much of Western Europe converted to the Euro, there was a very specific approach to rounding that was written into law. During that transition period, it was essential to round exactly as required.
Know the rules of your domain, and test that your rounding scheme satisfies those rules.
I think everyone is implying:
Inverting a sparse matrix? "There's an app for that", etc, etc
Numerical computation is one well-flogged horse. If you have a problem, it was probably put to pasture before 1970 or even much earlier, carried forward library by library or snippet by snippet into the future.
You could shift the decimal point so that the numbers are whole, then do 64-bit integer arithmetic, then shift it back. Then you would only have to worry about overflow problems.
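If you go that route, C#'s checked arithmetic can at least turn silent overflow into an exception (my own sketch):
long subunits = 123456789012345;                   // some large amount in shifted, whole-number form
try
{
    long scaled = checked(subunits * 1000000);     // throws OverflowException instead of silently wrapping
}
catch (OverflowException)
{
    // fall back to BigInteger / decimal, or reject the input
}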
And we know we have the decimal type for a decimal representation of numbers... but the problem is, a lot of Math methods do not support the decimal type, so we have to convert to double, which ruins the number again.
Several of the Math methods do support decimal: Abs, Ceiling, Floor, Max, Min, Round, Sign, and Truncate. What these functions have in common is that they return exact results. This is consistent with the purpose of decimal: To do exact arithmetic with base-10 numbers.
The trig and Exp/Log/Pow functions return approximate answers, so what would be the point of having overloads for an "exact" arithmetic type?
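For example, Math.Round has decimal overloads and stays exact (my own sketch):
decimal tax = Math.Round(0.151m, 2, MidpointRounding.AwayFromZero);   // 0.16
decimal banker = Math.Round(0.145m, 2);                               // 0.14 - the default is banker's rounding (to even)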

How to fix precision of variable

In C#:
double tmp = 3.0 * 0.05;
// tmp is now 0.15000000000000002
This has to do with money. The value is really $0.15, but the system wants to round it up to $0.16. 0.151 should probably be rounded up to 0.16, but not 0.15000000000000002.
What are some ways I can get the correct numbers (i.e. 0.15, or 0.16 if the decimal is high enough)?
Use a fixed-point variable type, or a base ten floating point type like Decimal. Floating point numbers are always somewhat inaccurate, and binary floating point representations add another layer of inaccuracy when they convert to/from base two.
Money should be stored as decimal, which is a floating decimal point type. The same goes for other data which really is discrete rather than continuous, and which is logically decimal in nature.
Humans have a bias to decimal for obvious reasons, so "artificial" quantities such as money tend to be more appropriate in decimal form. "Natural" quantities (mass, height) are on a more continuous scale, which means that float/double (which are floating binary point types) are often (but not always) more appropriate.
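For example (a sketch; the decimal suffix m is the key change):
decimal tmp = 3.0m * 0.05m;
Console.WriteLine(tmp);          // 0.150 - exactly the expected value
Console.WriteLine(tmp == 0.15m); // True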
In Patterns of Enterprise Application Architecture, Martin Fowler recommends using a Money abstraction
http://martinfowler.com/eaaCatalog/money.html
Mostly he does it for dealing with Currency, but also precision.
You can see a little of it in this Google Book search result:
http://books.google.com/books?id=FyWZt5DdvFkC&pg=PT520&lpg=PT520&dq=money+martin+fowler&source=web&ots=eEys-C_vdA&sig=jckdxgMLSRJtGDYZtcbYST1ak8M&hl=en&sa=X&oi=book_result&resnum=6&ct=result
The 'decimal' type was designed especially for this.
A decimal data type would work well and is probably your choice.
However, in the past I've been able to do this in an optimized way using fixed point integers. It's ideal for high performance computations where decimal bogs down and you can't have the small precision errors of float.
Start with, say, an Int32 and split it in half. The first half is the whole-number portion, the second half is the fractional portion. You get 16 bits of signed integer plus 16 bits of fractional precision. E.g. 1.5 as a 16:16 fixed-point value would be represented as 0x00018000. Or, alter the distribution of bits to suit your needs.
Fixed point numbers can generally be added/sub/mul/div like any other integer, with a little bit of work around mul/div to avoid overflows.
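A rough sketch of 16:16 fixed point (my own example, not the answerer's code):
int a = 0x00018000;                   // 1.5  (1.5 * 65536)
int b = 0x00028000;                   // 2.5  (2.5 * 65536)
int sum = a + b;                      // 4.0  - addition and subtraction work directly
long product = ((long)a * b) >> 16;   // 3.75 - multiplication needs a 64-bit intermediate and a shift back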
What you are facing is a rounding problem, which I mentioned earlier in another post:
Can I use “System.Currency” in .NET?
And refer to this as well: Rounding.

What is the best way to store a money value in the database?

I need to store a couple of money related fields in the database but I'm not sure which data type to use between money and decimal.
Decimal and money ought to be pretty reliable. What I can assure you of (from painful personal experience with inherited applications) is: DO NOT use float!
I always use Decimal; never used MONEY before.
Recently, I found an article regarding the decimal versus money data type in SQL Server that you might find interesting:
Money vs Decimal
It also seems that the money datatype does not always produce accurate results when you perform calculations with it: click
What I've also done in the past is use an INT field and store the amount in cents (eurocents / dollar cents).
I guess it comes down to precision and scale. IIRC, money is 4dp. If that is fine, money expresses your intent. If you want more control, use decimal with a specific precision and scale.
It depends on your application!!!
I work in financial services, where we normally consider price to be significant to 5 decimal places after the point, which of course when you buy a couple of million at 3.12345 pence/cents is a significant amount.
Some applications will supply their own sql type to handle this.
On the other hand, this might not be necessary.
<Humour>
Contractor rates always seemed to be rounded to the nearest £100, but currently seem to be rounded to the nearest £25 in the current credit crunch.
</Humour>
Don't align your thoughts based on available datatypes. Rather, analyze your requirement and then see which datatype fits best.
Float is always the worst choice, given the limitations of storing binary representations of floating-point numbers.
Money is a standard type and will surely have more support for handling money-related operations.
In the case of decimal, you'll have to handle everything yourself, but you know it's only your code handling the value, so there are none of the surprises you may get with the other two data types.
Use decimal, and use more decimal places than you think you will need so that calculations will be correct. Money does not always return correct results in calculations. Under no circumstances use float or real, as these are inexact datatypes and can cause calculations to be wrong (especially as they get more complex).
For some data (like money) where you want no approximation or changes due to floating-point representation, you must be sure that the data is never 'floating'; it must be rigid on both sides of the decimal point.
One easy way to be safe is to store the value by converting it into an INTEGER data type, and to be sure that when you retrieve the value, the decimal point is placed at the proper location.
e.g.
1. To save $240.10 into the database.
2. Convert it to a pure integral form: 24010 (you know it's just a shift of the decimal point).
3. Revert it back to its proper decimal state when reading: place the decimal point 2 positions from the right, giving $240.10.
So, while in the database, it will be in a rigid integer form.
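A minimal sketch of that round trip in C# (my own example):
decimal amount = 240.10m;
int cents = (int)Math.Round(amount * 100);   // 24010 - what gets stored in the INT column
decimal restored = cents / 100m;             // 240.1 - shift the decimal point back on the way out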
