Is there a reason that a C# System.Decimal remembers the number of trailing zeros it was entered with?
See the following example:
public void DoSomething()
{
    decimal dec1 = 0.5M;
    decimal dec2 = 0.50M;
    Console.WriteLine(dec1);         // Output: 0.5
    Console.WriteLine(dec2);         // Output: 0.50
    Console.WriteLine(dec1 == dec2); // Output: True
}
The decimals are classed as equal, yet dec2 remembers that it was entered with an additional zero. What is the reason/purpose for this?
It can be useful to represent a number including its accuracy - so 0.5m could be used to mean "anything between 0.45m and 0.55m" (with appropriate limits) and 0.50m could be used to mean "anything between 0.495m and 0.505m".
I suspect that most developers don't actually use this functionality, but I can see how it could be useful sometimes.
I believe this ability first arrived in .NET 1.1, btw - I think decimals in 1.0 were always effectively normalized.
I think it was done to provide a better internal representation for numeric values retrieved from databases. Database engines have a long history of storing numbers in a decimal format (avoiding rounding errors) with an explicit specification for the number of digits in the value.
Compare the SQL Server decimal and numeric column types for example.
Decimals represent fixed-precision decimal values. The literal value 0.50M has the 2-decimal-place precision embedded, and so the decimal variable created remembers that it is a 2-decimal-place value. This behaviour is entirely by design.
The comparison is an exact numerical equality check on the values, so here, trailing zeroes do not affect the outcome.
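The scale is actually part of the struct's internal representation, which you can inspect with decimal.GetBits (it returns four ints, with the scale stored in bits 16-23 of the last element). A minimal sketch:
// 0.5M is stored as the integer 5 with scale 1; 0.50M as 50 with scale 2.
decimal dec1 = 0.5M;
decimal dec2 = 0.50M;

int scale1 = (decimal.GetBits(dec1)[3] >> 16) & 0xFF;
int scale2 = (decimal.GetBits(dec2)[3] >> 16) & 0xFF;

Console.WriteLine(scale1); // 1
Console.WriteLine(scale2); // 2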
Related
I have a string like this: "000123".
I want to know how to convert this string to decimal but keep the leading zeros. I have used Convert.ToDecimal(), Decimal.TryParse and Decimal.Parse, but all of those methods remove the leading zeros. They give me the output 123, whereas I want the decimal to come back as 000123. Is that possible?
No, it's not. System.Decimal maintains trailing zeroes (since .NET 1.1) but not leading zeroes. So this works:
decimal d1 = 1.00m;
Console.WriteLine(d1); // 1.00
decimal d2 = 1.000m;
Console.WriteLine(d2); // 1.000
... but your leading zeroes version won't.
If you're actually just trying to format with "at least 6 digits before the decimal point" though, that's easier:
string text = d.ToString("000000.#");
(That will lose information about the number of trailing zeroes, mind you - I'm not sure how to do both easily.)
So you need to store 000123 in a decimal variable. First of all, that is not possible, because the leading zeros are not part of the numeric value: a decimal can only store a number within the range -79,228,162,514,264,337,593,543,950,335 to +79,228,162,514,264,337,593,543,950,335, and 000123 and 123 are the same number. No worries, you can still achieve the goal: use decimal.Parse() to get the value (123) from the input (as you already did), do your calculations with that value, and use .ToString("000000") whenever you want to display it as 000123.
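A short sketch of that round trip (the input string "000123" is the asker's example):
// Parse drops the leading zeros; a custom format string puts them back for display.
string input = "000123";
decimal value = decimal.Parse(input);        // value is 123

Console.WriteLine(value);                    // 123
Console.WriteLine(value.ToString("000000")); // 000123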
In the example below the number 12345678.9 loses accuracy when it's converted to a string as it becomes 1.234568E+07. I just need a way to preserve the accuracy for large floating point numbers. Thanks.
Single sin1 = 12345678.9F;
String str1 = sin1.ToString();
Console.WriteLine(str1); // displays 1.234568E+07
If you want to preserve decimal numbers, you should use System.Decimal. It's as simple as that. System.Single is worse than System.Double in that, as per the documentation:
By default, a Single value contains only 7 decimal digits of precision, although a maximum of 9 digits is maintained internally.
You haven't just lost information when you've converted it to a string - you've lost information in the very first line. That's not just because you're using float instead of double - it's because you're using a floating binary point number.
The decimal number 0.1 can't be represented accurately in a binary floating point system no matter how big you make the type...
See my articles on floating binary point and floating decimal point for more information. Of course, it's possible that you should be using double or even float and just not caring about the loss of precision - it depends on what you're trying to represent. But if you really do care about preserving decimal digits, then use a decimal-based type.
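A small illustration of that point, contrasting a binary floating-point double with a decimal (the "G17" format reveals the digits that default formatting hides):
// 0.1 has no exact binary representation, so repeated addition drifts;
// the same digits stored as a decimal stay exact.
double dSum = 0.0;
decimal mSum = 0.0m;
for (int i = 0; i < 10; i++)
{
    dSum += 0.1;
    mSum += 0.1m;
}

Console.WriteLine(dSum == 1.0);          // False
Console.WriteLine(dSum.ToString("G17")); // 0.99999999999999989
Console.WriteLine(mSum == 1.0m);         // True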
You can't. Simple as that. In memory your number is 12345679. Try the code below.
Single sin1 = 12345678.9F;
String str1 = sin1.ToString("r"); // Shows "all" the number
Console.WriteLine(sin1 == 12345679); // true
Console.WriteLine(str1); // displays 12345679
Technically "r" means round-trip (quoting from MSDN: "Result: A string that can round-trip to an identical number"), so in reality it isn't showing all the decimals. It's only showing all the decimals needed to distinguish it from other possible values of Single. If you want to show all the decimals, use "F20".
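For example, a sketch of what "F20" would show for the value above (since the stored Single is exactly 12345679, the extra places are all zeros):
Single sin1 = 12345678.9F;
Console.WriteLine(sin1.ToString("F20")); // 12345679.00000000000000000000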
If you want more precision, use double, or better, use decimal; float has the precision that it has. As we say in Italy, "Non puoi spremere sangue da una rapa" (you can't squeeze blood from a turnip).
You could also write an IFormatProvider for your purpose - but the precision doesn't get any better unless you use a different type.
This article may help: http://www.csharp-examples.net/string-format-double/
Is the equality comparison for C# decimal types any more likely to work as we would intuitively expect than other floating point types?
I guess that depends on your intuition. I would assume that some people would think of the result of dividing 1 by 3 as the fraction 1/3, and others would think more along the lines of "Oh, 1 divided by 3 can't be represented as a decimal number, we'll have to decide how many digits to keep, let's go with 0.333".
If you think in the former way, Decimal won't help you much, but if you think in the latter way, and are explicit about rounding when needed, it is more likely that operations that are "intuitively" not subject to rounding errors in decimal, e.g. dividing by 10, will behave as you expect. This is more intuitive to most people than the behavior of a binary floating-point type, where powers of 2 behave nicely, but powers of 10 do not.
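A brief sketch of that difference - operations involving powers of 10 come out exactly in decimal, while the binary type needs a tolerance or explicit rounding:
Console.WriteLine(0.1m + 0.2m == 0.3m);         // True
Console.WriteLine(0.1 + 0.2 == 0.3);            // False
Console.WriteLine((0.1 + 0.2).ToString("G17")); // 0.30000000000000004

// Explicit rounding, as described above, when the result cannot be exact:
Console.WriteLine(decimal.Round(1m / 3m, 4));   // 0.3333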
Basically, no. The Decimal type simply represents a specialised sort of floating-point number that is designed to reduce rounding error specifically in the base 10 system. That is, the internal representation of a Decimal is in fact in base 10 (denary) and not the usual binary. Hence, it is a rather more appropriate type for monetary calculations -- though not of course limited to such applications.
From the MSDN page for the structure:
The Decimal value type represents decimal numbers ranging from positive 79,228,162,514,264,337,593,543,950,335 to negative 79,228,162,514,264,337,593,543,950,335. The Decimal value type is appropriate for financial calculations requiring large numbers of significant integral and fractional digits and no round-off errors. The Decimal type does not eliminate the need for rounding. Rather, it minimizes errors due to rounding. For example, the following code produces a result of 0.9999999999999999999999999999 rather than 1.
A decimal number is a floating-point value that consists of a sign, a numeric value where each digit in the value ranges from 0 to 9, and a scaling factor that indicates the position of a floating decimal point that separates the integral and fractional parts of the numeric value.
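The code the quoted documentation refers to is along these lines (a reconstruction: dividing by 3 and multiplying back cannot restore exactly 1, because 1/3 has no finite decimal representation):
decimal dividend = decimal.One;
decimal divisor = 3;
Console.WriteLine(dividend / divisor * divisor); // 0.9999999999999999999999999999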
Possible Duplicate:
decimal vs double! - Which one should I use and when?
I'm using double type for price in my trading software.
I've noticed that sometimes there are odd errors.
They occur if the price contains 4 digits after the dot, like 2.1234.
When I send "2.1234" from my program, the order appears on the market at a price of "2.1235".
I don't use decimal because I don't need "extreme" precision. I don't need to distinguish, for example, "2.00000000003" from "2.00000000002". I need a maximum of 6 digits after the dot.
The question is - where is the line? When to use decimal?
Should I use decimal for all financial operations? Even if I need just one digit after the dot? (1.1, 1.2, etc.)
I know decimal is pretty slow so I would prefer to use double unless decimal is absolutely required.
Use decimal whenever you're dealing with quantities that you want to (and can) be represented exactly in base-10. That includes monetary values, because you want 2.1234 to be represented exactly as 2.1234.
Use double when you don't need an exact representation in base-10. This is usually good for handling measurements, because those are already approximations, not exact quantities.
Of course, if an exact base-10 representation is not important to you, other factors come into consideration, which may or may not matter depending on the specific situation:
double has a larger range (it can handle very large and very small magnitudes);
decimal has more precision (has more significant digits);
you may need to use double to interact with some older APIs that are not aware of decimal;
double is faster than decimal;
decimal has a larger memory footprint.
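A minimal sketch relating this back to the asker's price 2.1234 (the exact printed digits of the double's error will vary, so they're only indicated in the comments):
decimal priceM = 2.1234m; // stored exactly as the integer 21234 scaled by 10^-4
double priceD = 2.1234;   // stored as the nearest binary fraction, not exactly 2.1234

decimal sumM = 0m;
double sumD = 0.0;
for (int i = 0; i < 1000; i++)
{
    sumM += priceM;
    sumD += priceD;
}

Console.WriteLine(sumM);                 // 2123.4000 (exact)
Console.WriteLine(sumD == 2123.4);       // very likely False: the binary error accumulates
Console.WriteLine(sumD.ToString("G17")); // shows the accumulated error in full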
When accuracy is needed and important, use decimal.
When accuracy is not that important, then you can use double.
In your case, you should be using decimal, as it's a financial matter.
For financial operation I always use the decimal type
Use decimal; it's built to represent powers of 10 well (i.e. prices).
Decimal is the way to go when dealing with prices.
If it's financial software you should probably use decimal. This wiki article summarises it quite nicely.
A simple response is in this example:
decimal d = 0.3M + 0.3M + 0.3M;
bool ret = d == 0.9M; // true
double db = 0.3 + 0.3 + 0.3;
bool dret = db == 0.9; // false
The test with the double fails because 0.3 in its binary representation (base 2) is periodic, so you lose precision. The decimal works in base 10 (an integer with a decimal scale factor), so you do not lose significant digits unexpectedly. Decimals are unfortunately dramatically slower than doubles. Usually we use decimal for financial calculations, where every digit has to be considered to avoid tolerance issues, and double/float for engineering.
Double is meant as a generic floating-point data type; decimal is specifically meant for money and financial domains. Even though double usually works just fine, decimal might prevent problems in some cases (e.g. rounding errors when you get to values in the billions).
There is an explanation of it on MSDN.
As soon as you start to do calculations on doubles you may get unexpected rounding problems, because a double uses a binary representation of the number while the decimal uses a decimal representation that preserves the decimal digits. That is probably what you are experiencing. If you only serialize and deserialize doubles to text or a database, without doing any rounding, you will not actually lose any precision.
However, decimals are much better suited for representing monetary values, where you are concerned about the decimal digits (and not the binary digits that a double uses internally). But if you need to do complex calculations (e.g. integrals as used by actuarial computations) you will have to convert the decimal to double before doing the calculation, negating the advantages of using decimals.
A decimal also "remembers" how many digits it has, e.g. even though decimal 1.230 is equal to 1.23 the first is still aware of the trailing zero and can display it if formatted as text.
If you always know the maximum number of decimals you are going to have (digits after the point), then the best practice is to use fixed-point notation. That will give you an exact result while still being very fast.
The simplest way to use fixed point is to store the number in an int, scaled by a fixed factor. For example, if the price always has 2 decimals, you would store the amount in cents ($12.45 is stored in an int with value 1245, which thus represents 1245 cents). With four decimals you would be storing ten-thousandths (12.3456 would be stored in an int with value 123456, representing 123456 ten-thousandths), and so on.
The disadvantage of this is that you sometimes need a conversion, for example when multiplying two values together (0.1 * 0.1 = 0.01, while 1 * 1 = 1; the unit has changed from tenths to hundredths). If you are going to use other mathematical functions you also have to take things like this into consideration.
On the other hand, if the number of decimals varies a lot, using fixed point is a bad idea. And if high-precision decimal calculations are needed, the decimal datatype was constructed for exactly that purpose.
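A rough sketch of the fixed-point idea described above, assuming prices with at most 4 decimal places stored as ten-thousandths in a long (the helper names are made up for illustration):
const long Scale = 10000; // 4 decimal places

long ToFixed(decimal value) => (long)(value * Scale);
decimal FromFixed(long fixedValue) => (decimal)fixedValue / Scale;

long a = ToFixed(2.1234m); // 21234
long b = ToFixed(0.0001m); // 1

long sum = a + b;             // addition needs no rescaling
long product = a * b / Scale; // multiplication must be rescaled back

Console.WriteLine(FromFixed(sum));     // 2.1235
Console.WriteLine(FromFixed(product)); // 0.0002 - note the truncation toward zero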
I came across the following issue while developing an engineering rule value engine using an eval(...) implementation.
Dim first As Double = 1.1
Dim second As Double = 2.2
Dim sum As Double = first + second
If (sum = 3.3) Then
    Console.WriteLine("Matched")
Else
    Console.WriteLine("Not Matched")
End If
'Above condition returns false because sum's value is 3.3000000000000003 instead of 3.3
It looks like the 15th digit is being rounded off. Can someone give a better explanation of this, please?
Is Math.Round(...) the only solution available, or is there something else I can attempt?
You are not adding decimals - you are adding up doubles.
Not all doubles can be represented accurately in a computer, hence the error. I suggest reading this article for background (What Every Computer Scientist Should Know About Floating-Point Arithmetic).
Use the Decimal type instead, it doesn't suffer from these issues.
Dim first As Decimal = 1.1D
Dim second As Decimal = 2.2D
Dim sum As Decimal = first + second

If (sum = 3.3D) Then
    Console.WriteLine("Matched")
Else
    Console.WriteLine("Not Matched")
End If
That's how double numbers work on a PC.
The best way to compare them is to use a construction like this:
if (Math.Abs(sum - 3.3) <= 1E-9)
    Console.WriteLine("Matched");
Instead of 1E-9 you can use another number that represents the acceptable error in the comparison.
Equality comparisons with floating point operations are always inaccurate because of how fractional values are represented within the machine. You should have some sort of epsilon value by which you're comparing against. Here is an article that describes it much more thoroughly:
http://www.cygnus-software.com/papers/comparingfloats/Comparing%20floating%20point%20numbers.htm
Edit: Math.Round will not be an ideal choice because of the error generated with it for certain comparisons. You are better off determining an epsilon value that can be used to limit the amount of error in the comparison (basically determining the level of accuracy).
A double uses floating-point arithmetic, which is approximate but more efficient. If you need to compare against exact values, use the decimal data type instead.
In C#, Java, Python, and many other languages, decimals/floats are not perfect. Because of the way they are represented (using multipliers and exponents), they often have inaccuracies. See http://www.yoda.arachsys.com/csharp/decimal.html for more info.
From the documentation:
http://msdn.microsoft.com/en-us/library/system.double.aspx
Floating-Point Values and Loss of Precision
Remember that a floating-point number can only approximate a decimal number, and that the precision of a floating-point number determines how accurately that number approximates a decimal number. By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally. The precision of a floating-point number has several consequences:
Two floating-point numbers that appear equal for a particular precision might not compare equal because their least significant digits are different.
A mathematical or comparison operation that uses a floating-point number might not yield the same result if a decimal number is used because the floating-point number might not exactly approximate the decimal number.
A value might not roundtrip if a floating-point number is involved. A value is said to roundtrip if an operation converts an original floating-point number to another form, an inverse operation transforms the converted form back to a floating-point number, and the final floating-point number is equal to the original floating-point number. The roundtrip might fail because one or more least significant digits are lost or changed in a conversion.
In addition, the result of arithmetic and assignment operations with Double values may differ slightly by platform because of the loss of precision of the Double type. For example, the result of assigning a literal Double value may differ in the 32-bit and 64-bit versions of the .NET Framework. The following example illustrates this difference when the literal value -4.42330604244772E-305 and a variable whose value is -4.42330604244772E-305 are assigned to a Double variable. Note that the result of the Parse(String) method in this case does not suffer from a loss of precision.
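A small illustration of the first point in that quote - two doubles that look identical with the default formatting but do not compare equal (the default output shown is the .NET Framework behaviour; newer runtimes print the round-trippable form by default):
double d1 = 0.3;
double d2 = 0.1 + 0.2;

Console.WriteLine(d1);                 // 0.3
Console.WriteLine(d2);                 // 0.3 with the framework's default "G15"-style formatting
Console.WriteLine(d1 == d2);           // False
Console.WriteLine(d1.ToString("G17")); // 0.29999999999999999
Console.WriteLine(d2.ToString("G17")); // 0.30000000000000004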
This is a well-known problem with floating-point arithmetic. Look into how binary floating-point numbers are encoded for further details.
Use the type "decimal" if that will fit your needs.
But in general, you should never compare floating point values to constant floating point values with the equality sign.
Failing that, compare to the number of places that you want to compare to (e.g. if it is 4, then you would check whether sum > 3.2999 and sum < 3.3001).
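A sketch of that "compare to a fixed number of places" idea, using a tolerance derived from the number of places you care about (4 here):
double sum = 1.1 + 2.2;    // 3.3000000000000003
double expected = 3.3;
double tolerance = 0.0001; // 4 decimal places

bool matched = Math.Abs(sum - expected) < tolerance;
Console.WriteLine(matched ? "Matched" : "Not Matched"); // Matched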