I'm attempting to use a double to represent a dual-value type in a database: a field that must sometimes hold two values and sometimes only one (an int).
So the field is a float in the database, and in my C# code it is a double (mapping it via EF makes it a double, for some reason...).
So basically, what I want to do: let's say 2.5 is the value. I want to separate that out into 2 and 5. Is there any built-in way to go about this?
Like this:
int intPart = (int)value;
double fractionalPart = value - intPart;
If you want fractionalPart to be an int, you can multiply it by 10^n, where n is the number of digits you want, and cast to int.
However, beware of precision loss.
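A minimal sketch of that idea, assuming one fractional digit is wanted (the Math.Round call guards against the multiplication producing something like 4.999... due to binary representation):
double value = 2.5;
int intPart = (int)value; // 2
double fractionalPart = value - intPart; // 0.5
int n = 1; // number of fractional digits to keep
int fracAsInt = (int)Math.Round(fractionalPart * Math.Pow(10, n)); // 5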
That said, this is extremely poor design; you should probably make two fields instead.
SQL Server's float type is an 8-byte floating-point value, equivalent to C#'s double.
You can cast the double to an int to get the integer part, then subtract that from the original number and multiply the remainder by ten to get the second part.
Let's say we have the following simple code:
string number = "93389.429999999993";
double numberAsDouble = Convert.ToDouble(number);
Console.WriteLine(numberAsDouble);
After that conversion, the numberAsDouble variable has the value 93389.43. What can I do to make this variable keep the full number as-is, without rounding it? I have found that Convert.ToDecimal does not behave the same way, but I need to have the value as a double.
Small update: putting a breakpoint on line 2 of the above code shows that the numberAsDouble variable already has the rounded value 93389.43 before it is displayed in the console.
93389.429999999993 cannot be represented exactly as a 64-bit floating-point number. A double can only hold 15 or 16 significant digits, while you have 17 digits. If you need that level of precision, use a decimal instead.
(I know you say you need it as a double, but if you could explain why, there may be alternate solutions)
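For illustration, parsing the same string as a decimal keeps every digit, since decimal stores base-10 digits directly:
string number = "93389.429999999993";
decimal numberAsDecimal = Convert.ToDecimal(number);
Console.WriteLine(numberAsDecimal); // 93389.429999999993 - nothing lost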
This is expected behavior.
A double can't represent every number exactly. This has nothing to do with the string conversion.
You can check it yourself:
Console.WriteLine(93389.429999999993);
This will print 93389.43.
The following also shows this:
Console.WriteLine(93389.429999999993 == 93389.43);
This prints True.
Keep in mind that there are two conversions going on here. First you're converting the string to a double, and then you're converting that double back into a string to display it.
You also need to consider that a double doesn't have infinite precision; depending on the string, some data may be lost simply because a double doesn't have the capacity to store it.
When converting to a double it's not going to "round" any more than it has to. It will create the double that is closest to the number provided, given the capabilities of a double. When converting that double to a string it's much more likely that some information isn't kept.
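You can inspect the result of the first conversion with the round-trip format specifier, which prints as many digits as are needed to reconstruct the double exactly (expected output shown in the comment, assuming standard IEEE 754 doubles):
double d = Convert.ToDouble("93389.429999999993");
Console.WriteLine(d.ToString("R")); // 93389.43 - the extra digits were already lost going string -> double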
See the following (in particular the first part of Michael Borgwardt's answer):
decimal vs double! - Which one should I use and when?
A double will not always keep full precision; it depends on the number you are trying to convert. If you need to be precise, you will need to use decimal.
This is a limit on the precision that a double can store. You can see this yourself by trying to convert 3389.429999999993 instead.
The double type is a 64-bit floating-point type with a 53-bit significand, which gives roughly 15 to 17 significant decimal digits, so a rounding error occurs when the real number is stored in the numberAsDouble variable.
A solution that would work for your example is to use the decimal type instead, which is a 128-bit type with 28-29 significant decimal digits. However, the same problem arises, just with a smaller difference.
For arbitrarily large numbers, the System.Numerics.BigInteger type from .NET Framework 4.0 supports arbitrary precision for integers. However, you will need a third-party library to work with arbitrarily large real numbers.
You could truncate the decimal places to the amount of digits you need, not exceeding double precision.
For instance, this will truncate to 5 decimal places, giving 93389.42999. Just replace 100000 with the power of ten matching the number of places you need:
string number = "93389.429999999993";
decimal numberAsDecimal = Convert.ToDecimal(number);
// Scale up, drop the remaining fraction with the long cast, then scale back down.
var numberAsDouble = ((double)((long)(numberAsDecimal * 100000.0m))) / 100000.0;
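A possible variant, not from the original answer, that expresses the same intent with Math.Truncate instead of the long cast:
decimal scaled = Convert.ToDecimal("93389.429999999993") * 100000m;
var truncated = (double)Math.Truncate(scaled) / 100000.0; // 93389.42999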
I have a double value like 12.256852651 and I want to display it as 12.257, as a floating-point number, without converting it into a string.
How can I do that in C#?
I'd first convert to Decimal and then use Math.Round on the result. This conversion is not strictly necessary, but I always feel a bit uneasy if I round to decimal places while using binary floating points.
Math.Round((Decimal)f, 3, MidpointRounding.AwayFromZero)
You should also look into the choice of MidpointRounding, since by default Math.Round uses banker's rounding (round half to even), which is not the rounding you learned in school.
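A quick illustration of the difference; midpoint values are exactly where the two modes diverge:
Console.WriteLine(Math.Round(2.5m)); // 2 - banker's rounding (round half to even)
Console.WriteLine(Math.Round(2.5m, MidpointRounding.AwayFromZero)); // 3 - the "school" rounding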
If you want to display it, it will be a string and that's what you need to use.
If you want to round in order to use it later in calculations, use Math.Round((decimal)myDouble, 3).
If you don't intend to use it in calculation but need to display it, use double.ToString("F3").
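Putting both options together (the values here are just the ones from the question):
double myDouble = 12.256852651;
Console.WriteLine(myDouble.ToString("F3")); // "12.257" - a string, for display only
decimal rounded = Math.Round((decimal)myDouble, 3); // 12.257 - a number, usable in further calculations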
I understand the principle behind this problem, but it gives me a headache to think that this is going on throughout my application, and I need to find a solution.
double Value = 141.1;
double Discount = 25.0;
double disc = Value * Discount / 100; // disc = 35.275
Value -= disc; // Value = 105.824999999999999
Value = Functions.Round(Value, 2); // Value = 105.82
I'm using doubles to represent quite small numbers. Somehow in the calculation 141.1 - 35.275 the binary representation of the result gives a number which is just 0.0000000000001 out. Unfortunately, since I am then rounding this number, this gives the wrong answer.
I've read about using Decimals instead of Doubles but I can't replace every instance of a Double with a Decimal. Is there some easier way to get around this?
If you're looking for exact representations of values which are naturally decimal, you will need to replace double with decimal everywhere. You're simply using the wrong datatype. If you'd been using short everywhere for integers and then found out that you needed to cope with larger values than that supports, what would you do? It's the same deal.
However, you should really try to understand what's going on to start with... why Value doesn't equal exactly 141.1, for example.
I have two articles on this:
Binary floating point in .NET
Decimal floating point in .NET
You should use decimal – that's what it's for.
The behaviour of floating-point arithmetic? That's just what it does. It has limited, finite precision, and not all numbers are exactly representable. In fact, there are infinitely many real numbers, and only finitely many of them can be represented. The key to decimal, for this application, is that it uses a base-10 representation, while double uses base 2.
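The classic two-line demonstration of that base-2 versus base-10 difference:
Console.WriteLine(0.1 + 0.2 == 0.3); // False - none of these have an exact base-2 representation
Console.WriteLine(0.1m + 0.2m == 0.3m); // True - decimal stores the base-10 digits exactly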
Instead of using Round to round the number, you could use some function you write yourself which uses a small epsilon when rounding to allow for the error. That's the answer you want.
The answer you don't want, but I'm going to give anyway, is that if you want precision, and since you're dealing with money judging by your example you probably do, you should not be using binary floating point maths. Binary floating point is inherently inaccurate and some numbers just can't be represented correctly. Using Decimal, which does base-10 floating point, would be a much better approach everywhere and will avoid you making costly mistakes with your doubles.
After spending most of the morning trying to replace every instance of 'double' with 'decimal' and realising I was fighting a losing battle, I had another look at my Round function. This may be useful to those who can't implement the proper solution:
public static double Round(double dbl, int decimals) {
    return (double)Math.Round((decimal)dbl, decimals, MidpointRounding.AwayFromZero);
}
By first casting the value to a decimal, and then calling Math.Round, this will return the 'correct' value.
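Applied to the numbers from the question (a sketch, assuming the Round method above is in scope; the cast to decimal rounds the double to 15 significant digits, which turns 105.82499999999999... back into 105.825 before Math.Round runs):
double value = 141.1;
value -= value * 25.0 / 100; // 105.82499999999999 due to binary representation
Console.WriteLine(Round(value, 2)); // 105.83, where rounding the raw double gave 105.82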
For example:
const decimal dollars = 25.50M;
why do we have to add that M?
why not just do:
const decimal dollars = 25.50;
since it already says decimal, doesn't it imply that 25.50 is a decimal?
No.
25.50 is a standalone expression of type double, not decimal.
The compiler will not see that you're trying to assign it to a decimal variable and interpret it as a decimal.
Except for lambda expressions, anonymous methods, and the conditional operator, all C# expressions have a fixed type that does not depend at all on context.
Imagine what would happen if the compiler did what you want it to, and you called Math.Max(1, 2).
Math.Max has overloads that take int, double, and decimal. Which one would it call?
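The literal suffixes are what resolve that ambiguity at the call site:
decimal d1 = 25.50m; // m/M marks a decimal literal
double d2 = 25.50d; // d/D marks a double (also the default for a literal with a decimal point)
float f1 = 25.50f; // f/F marks a float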
There are two important concepts to understand in this situation.
Literal Values
Implicit Conversion
Essentially what you are asking is whether a literal value can be implicitly converted between 2 types. The compiler will actually do this for you in some cases when there would be no loss in precision. Take this for example:
long n = 1000; // Assign an Int32 literal to an Int64.
This is possible because a long (Int64) covers a larger range of values than an int (Int32). For your specific example, though, it is possible to lose precision. Here are the drastically different ranges for decimal and double:
Decimal: ±1.0 × 10^−28 to ±7.9 × 10^28
Double: ±5.0 × 10^−324 to ±1.7 × 10^308
With this knowledge, it becomes clear why such an implicit conversion would be a bad idea. Here is a list of the implicit conversions that the C# compiler currently supports; I highly recommend you do a bit of light reading on the subject.
Implicit Numeric Conversions Table
Note also that due to the inner details of how doubles and decimals are defined, slight rounding errors can appear in your assignments or calculations. You need to know about how floats, doubles, and decimals work at the bit level to always make the best choices.
For example, a double cannot precisely store the value 25.10, but a decimal can.
A double can precisely store the value 25.50, however, for fun binary-encoding reasons.
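You can see both cases by forcing out the full stored value with the "G17" format (expected output in the comments, assuming standard IEEE 754 doubles):
Console.WriteLine(25.10.ToString("G17")); // 25.100000000000001 - the nearest double to 25.10
Console.WriteLine(25.50.ToString("G17")); // 25.5 - exact, since 0.5 is a power of two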
Decimal structure
I need to get the left-hand-side integer value from a decimal or double. For example, I need to get the value 4 from 4.6. I tried using the Math.Floor function, but it returns a double value; for example, it returns 4.0 from 4.6. The MSDN documentation says that it returns an integer value. Am I missing something here? Or is there a different way to achieve what I'm looking for?
The range of double is much wider than the range of int or long. Consider this code:
double d = 100000000000000000000d; // 1e20
long x = Math.Floor(d); // Invalid in reality - and 1e20 wouldn't fit in a long anyway
The integer is outside the range of long - so what would you expect to happen?
Typically you know that the value will actually be within the range of int or long, so you cast it:
double d = 1000.1234d;
int x = (int) Math.Floor(d);
but the onus for that cast is on the developer, not on Math.Floor itself. It would have been unnecessarily restrictive to make it just fail with an exception for all values outside the range of long.
According to MSDN, Math.Floor(double) returns a double: http://msdn.microsoft.com/en-us/library/e0b5f0xb.aspx
If you want it as an int:
int result = (int)Math.Floor(yourVariable);
I can see how the MSDN article can be misleading; it should have specified that while the result is an "integer" (in this case meaning a whole number), it is still of type Double.
If you just need the integer portion of a number, cast the number to an int. This will truncate the number at the decimal point.
double myDouble = 4.6;
int myInteger = (int)myDouble;
Floor leaves it as a double so you can do more double calculations with it.
If you want it as an int, cast the result of floor as an int.
Don't cast the original double as an int because the rules for floor are different (IIRC) for negative numbers.
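A small example of that difference for negative numbers:
double negative = -4.6;
Console.WriteLine((int)negative); // -4 - the cast truncates toward zero
Console.WriteLine((int)Math.Floor(negative)); // -5 - Floor rounds toward negative infinity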
Convert.ToInt32(Math.Floor(Convert.ToDouble(value)))
This will give you exactly the value you want: if you pass in 4.6, it returns 4 as output.