How do I divide two integers to get a double?
You want to cast the numbers:
double num3 = (double)num1/(double)num2;
Note: if either of the operands in C# is a double, double division is used, which results in a double. So the following would work too:
double num3 = (double)num1/num2;
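As a minimal, self-contained sketch (num1 and num2 are assumed to be int variables, as in the question):
int num1 = 7;
int num2 = 12;
// Casting either operand to double forces floating-point division.
double num3 = (double)num1 / num2;
Console.WriteLine(num3); // 0.58333...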
For more information see:
Dot Net Perls
Complementing NoahD's answer:
For greater precision you can cast to decimal:
(decimal)100/863
//0.1158748551564310544611819235
Or:
Decimal.Divide(100, 863)
//0.1158748551564310544611819235
A double is represented using 64 bits, while a decimal uses 128:
(double)100/863
//0.11587485515643106
In depth explanation of "precision"
For more details about the binary floating-point representation and its precision, take a look at this article from Jon Skeet where he talks about floats and doubles, and this one where he talks about decimals.
Cast the integers to doubles.
Convert one of them to a double first. This form works in many languages:
real_result = (int_numerator + 0.0) / int_denominator
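In C#, the same idea might look like this (the variable names are just illustrative):
int intNumerator = 7;
int intDenominator = 12;
// Adding 0.0 promotes the numerator to double, so the division is floating-point.
double realResult = (intNumerator + 0.0) / intDenominator; // 0.58333...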
int firstNumber = 5000,
    secondNumber = 37;
var decimalResult = decimal.Divide(firstNumber, secondNumber);
Console.WriteLine(decimalResult);
var result = decimal.ToDouble(decimal.Divide(5, 2));
I have gone through most of the answers and I'm pretty sure this is unachievable as such: no matter what you do, dividing two ints will not, by itself, produce a double or a float. But there are plenty of ways to make the calculation work; just cast the values to float or double before the calculation and it will be fine.
The easiest way to do that is to add decimal places to your integers.
Ex.:
var v1 = 1 / 30;       // the result is 0
var v2 = 1.00 / 30.00; // the result is 0.033333333333333333
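The cast approach mentioned above would look roughly like this (a sketch with illustrative values):
int a = 1;
int b = 30;
double viaCast = (double)a / b;   // 0.0333333..., because the cast forces floating-point division
double viaLiteral = 1.00 / 30.00; // same result, using double literals directly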
In the comment on the accepted answer, a distinction is made which seems worth highlighting in a separate answer.
The correct code:
double num3 = (double)num1/(double)num2;
is not the same as casting the result of integer division:
double num3 = (double)(num1/num2);
Given num1 = 7 and num2 = 12:
The correct code will result in num3 = 0.5833333
Casting the result of integer division will result in num3 = 0.00
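A self-contained sketch of that difference, using the values from the example:
int num1 = 7;
int num2 = 12;
double correct = (double)num1 / (double)num2; // floating-point division: 0.5833333...
double wrong = (double)(num1 / num2);         // integer division happens first (7 / 12 == 0), then the cast: 0.0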
Here is the code which made me post this question.
// int integer;
// int fraction;
// double arg = 110.1;
this.integer = (int)(arg);
this.fraction = (int)((arg - this.integer) * 100);
The variable integer is getting 110. That's OK.
The variable fraction is getting 9; however, I am expecting 10.
What is wrong?
Update
It seems I have discovered that the source of the problem is the subtraction:
arg - this.integer
Its result is 0.099999999999994316.
Now I am wondering how I should correctly subtract so that the result is 0.1.
You have this:
fraction = (int)((110.1 - 110) * 100);
The inner part, ((110.1 - 110) * 100), will be 9.999999...
When you cast it to int, it will be truncated to 9.
This is because of "floating point" (see here) limitations:
Computers always need some way of representing data, and ultimately
those representations will always boil down to binary (0s and 1s).
Integers are easy to represent, but non-integers are a bit more
tricky. Consider the following var:
double x = 0.1d;
The variable x will actually store the closest available double to
that value. When you understand this, it becomes obvious why some
calculations seem to be "wrong".
If you were asked to add a third to a third, but could only use 3
decimal places, you'd get the "wrong" answer: the closest you could
get to a third is 0.333, and adding two of those together gives 0.666,
rather than 0.667 (which is closer to the exact value of two thirds).
Update:
In financial applications, or wherever the numbers need to be exact, you can use the decimal data type:
(int)((110.1m - 110) * 100) // will be 10 (the m suffix makes the literal a decimal)
or:
decimal arg = 110.1m;
int integer = (int)(arg); //110
int fraction = (int)((arg - integer) * 100); // will be 10
It is because you are using double: the value 110.1 cannot be represented exactly, so precision is lost. If you want the result to be 10, use the decimal type.
Check the following:
int integer;
int fraction;
decimal arg = 110.1M;
integer = (int)(arg);
decimal diff = arg - integer;
decimal multiply = diff * 100;
fraction = (int)multiply; // output will be 10, as you expect
I downloaded the GnuMP library: https://gnumpnet.codeplex.com/, which behind the scenes uses gmp.dll, a wrapper for https://gmplib.org.
It has a type Real, which is used for high-precision calculations. In another project I have the double and decimal types, which I want to replace with Real.
I need to replace Math.Round() with a customized round for Real (the type in gnump.net). Has anybody tried to implement Round for GnuMP's Real?
Today I found a mathematically correct answer:
public static Real Round(Real r, int precision)
{
    // Scale so that the first digit to be dropped ends up in the ones place.
    Real scaled = Real.Pow(10, precision + 1);
    Real multiplied = r * scaled;
    Real truncated = Trunc(multiplied);
    // Isolate that last digit to decide whether to round up.
    Real lastNumber = truncated - Trunc(truncated / 10) * 10;
    if (lastNumber >= 5)
    {
        truncated += 10;
    }
    truncated = Trunc(truncated / 10);
    return truncated * 10 / scaled;
}
When I say mathematically correct, I mean that the following code:
Real r = 2.5;
r = Real.Round(r, 0);
will give 3. According to http://msdn.microsoft.com/en-us/library/wyk4d9cy.aspx, Math.Round does "round to even" (so-called banker's rounding), but for my task I need mathematical rounding.
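For the built-in numeric types (not GnuMP's Real), the same "mathematical" behaviour is available through the MidpointRounding.AwayFromZero overload; a quick sketch:
decimal r = 2.5m;
Console.WriteLine(Math.Round(r));                                   // 2 (banker's rounding: midpoint goes to the nearest even number)
Console.WriteLine(Math.Round(r, 0, MidpointRounding.AwayFromZero)); // 3 (mathematical rounding: midpoint goes away from zero)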
Forgive me if this is a naïve question, however I am at a loss today.
I have a simple division calculation such as follows:
double returnValue = (myObject.Value / 10);
Value is an int in the object.
I am getting a message that says Possible Loss of Fraction. However, when I change the double to an int, the message goes away.
Any thoughts on why this would happen?
When you divide two ints and store the result in a floating-point variable, the fractional portion has already been lost. If you make one of the operands a floating-point value, you won't get this warning.
So, for example, turn the 10 into a 10.0:
double returnValue = (myObject.Value / 10.0);
You're doing integer division if myObject.Value is an int, since both sides of the / are of integer type.
To do floating-point division, one of the numbers in the expression must be of floating-point type. That would be true if myObject.Value were a double, or any of the following:
double returnValue = myObject.Value / 10.0;
double returnValue = myObject.Value / 10d; //"d" is the double suffix
double returnValue = (double)myObject.Value / 10;
double returnValue = myObject.Value / (double)10;
An integer divided by an integer will return an integer. Either cast Value to a double or divide by 10.0.
Assuming that myObject.Value is an int, the equation myObject.Value / 10 will be an integer division which will then be cast to a double.
That means that myObject.Value being 12 will result in returnValue becoming 1, not 1.2.
You need to cast the value(s) first:
double returnValue = (double)(myObject.Value) / 10.0;
This would result in the correct value 1.2, or at least as correct as doubles will allow given their limitations, but that's discussed elsewhere on SO, almost endlessly :-).
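To make the distinction concrete, a minimal sketch using a plain int in place of myObject.Value:
int value = 12;
double integerDivision = value / 10;          // 1.0, because 12 / 10 is evaluated as integer division first
double floatingDivision = (double)value / 10; // 1.2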
I think, since myObject.Value is an int, you should do:
double returnValue=(myObject.Value/10.0);
I want to round a number to two decimal places using the Math.Round function.
Here are some examples:
decimal a = 1.994444M;
Math.Round(a, 2); //returns 1.99
decimal b = 1.995555M;
Math.Round(b, 2); //returns 2.00
You might also want to look at banker's rounding / round-to-even with the following overload:
Math.Round(a, 2, MidpointRounding.ToEven);
There's more information on it here.
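The two midpoint modes differ only on exact midpoints; a quick sketch (2.125M is just an illustrative midpoint value):
decimal midpoint = 2.125M;
Console.WriteLine(Math.Round(midpoint, 2, MidpointRounding.ToEven));       // 2.12 (rounds toward the even last digit)
Console.WriteLine(Math.Round(midpoint, 2, MidpointRounding.AwayFromZero)); // 2.13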
Try this:
var twoDec = Math.Round(val, 2);
If you'd like a string
> (1.7289).ToString("#.##")
"1.73"
Or a decimal
> Math.Round((Decimal)x, 2)
1.73m
But remember! Rounding is not distributive, i.e. round(x*y) != round(x) * round(y). So don't do any rounding until the very end of a calculation, or else you'll lose accuracy.
Personally, I never round anything. Keep the value as precise as possible, since rounding is a bit of a red herring in CS anyway. But you do want to format data for your users, and to that end I find that string.Format("{0:0.00}", number) is a good approach.
Wikipedia has a nice page on rounding in general.
All .NET (managed) languages can use any of the common language runtime's (CLR's) rounding mechanisms. For example, the Math.Round() method (as mentioned above) allows the developer to specify the type of rounding (round-to-even or away-from-zero). The Convert.ToInt32() method and its variations use round-to-even. The Ceiling() and Floor() methods are related.
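A small sketch of those CLR mechanisms side by side (the value 2.5 is just illustrative; it is exactly representable as a double):
double x = 2.5;
Console.WriteLine(Math.Round(x));                                // 2 (round-to-even)
Console.WriteLine(Math.Round(x, MidpointRounding.AwayFromZero)); // 3
Console.WriteLine(Convert.ToInt32(x));                           // 2 (Convert also uses round-to-even)
Console.WriteLine(Math.Ceiling(x));                              // 3
Console.WriteLine(Math.Floor(x));                                // 2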
You can round with custom numeric formatting as well.
Note that Decimal.Round() uses a different method than Math.Round().
Here is a useful post on the banker's rounding algorithm.
See one of Raymond's humorous posts here about rounding...
// convert up to two decimal places
String.Format("{0:0.00}", 140.6767554); // "140.68"
String.Format("{0:0.00}", 140.1);       // "140.10"
String.Format("{0:0.00}", 140);         // "140.00"

Double d = 140.6767554;
Double dc = Math.Round((Double)d, 2);   // 140.68

decimal d = 140.6767554M;
decimal dc = Math.Round(d, 2);          // 140.68
=========
// just two decimal places
String.Format("{0:0.##}", 123.4567); // "123.46"
String.Format("{0:0.##}", 123.4); // "123.4"
String.Format("{0:0.##}", 123.0); // "123"
You can also combine "0" with "#":
String.Format("{0:0.0#}", 123.4567); // "123.46"
String.Format("{0:0.0#}", 123.4);    // "123.4"
String.Format("{0:0.0#}", 123.0);    // "123.0"
If you want to round a number, you can obtain different results depending on how you use the Math.Round() function (whether you want to round up or down), whether you are working with doubles and/or floats, and which midpoint rounding you apply. This matters especially when the value to round is the result of an operation. Say you want to multiply these two numbers: 0.75 * 0.95 = 0.7125. Right? Not in C#.
Let's see what happens if you want to round to the 3rd decimal:
double result = 0.75d * 0.95d; // result = 0.71249999999999991
double result = 0.75f * 0.95f; // result = 0.71249997615814209
result = Math.Round(result, 3, MidpointRounding.ToEven); // result = 0.712. Ok
result = Math.Round(result, 3, MidpointRounding.AwayFromZero); // result = 0.712. Should be 0.713
As you can see, the first Round() is correct if you want to round the midpoint down. But the second Round() is wrong if you want to round it up.
The same applies to negative numbers:
double result = -0.75 * 0.95; //result = -0.71249999999999991
result = Math.Round(result, 3, MidpointRounding.ToEven); // result = -0.712. Ok
result = Math.Round(result, 3, MidpointRounding.AwayFromZero); // result = -0.712. Should be -0.713
So, IMHO, you should create your own wrapper function for Math.Round() that fits your requirements. I created a function in which the parameter 'roundUp=true' means rounding to the next greater number. That is: 0.7125 rounds to 0.713 and -0.7125 rounds to -0.712 (because -0.712 > -0.713). This is the function I created; it works for any number of decimals:
double Redondea(double value, int precision, bool roundUp = true)
{
    if ((decimal)value == 0.0m)
        return 0.0;

    double corrector = 1 / Math.Pow(10, precision + 2);

    if ((decimal)value < 0.0m)
    {
        if (roundUp)
            return Math.Round(value, precision, MidpointRounding.ToEven);
        else
            return Math.Round(value - corrector, precision, MidpointRounding.AwayFromZero);
    }
    else
    {
        if (roundUp)
            return Math.Round(value + corrector, precision, MidpointRounding.AwayFromZero);
        else
            return Math.Round(value, precision, MidpointRounding.ToEven);
    }
}
The variable 'corrector' compensates for the inaccuracy of operating on float or double numbers.
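Usage might look like this, reusing the example values from above:
double result = 0.75 * 0.95;                   // 0.71249999999999991
Console.WriteLine(Redondea(result, 3));        // 0.713 (roundUp defaults to true)
Console.WriteLine(Redondea(result, 3, false)); // 0.712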
This is for rounding to 2 decimal places in C#:
label8.Text = valor_cuota.ToString("N2");
In VB.NET:
Imports System.Math
round(label8.text,2)
I know it's an old question, but please note the following difference between Math.Round and string-format rounding:
decimal d1 = (decimal)1.125;
Math.Round(d1, 2).Dump(); // returns 1.12
d1.ToString("#.##").Dump(); // returns "1.13"
decimal d2 = (decimal)1.1251;
Math.Round(d2, 2).Dump(); // returns 1.13
d2.ToString("#.##").Dump(); // returns "1.13"
I had a weird situation with a decimal variable: when serializing 55.50, it always ended up as the mathematically equal 55.5. But our client system seriously expects 55.50 for some reason, and they definitely expect a decimal. That's when I wrote the helper below, which always pads a decimal value to two decimal places with zeros instead of sending a string.
public static class DecimalExtensions
{
    public static decimal WithTwoDecimalPoints(this decimal val)
    {
        return decimal.Parse(val.ToString("0.00"));
    }
}
Usage should be:
var sampleDecimalValueV1 = 2.5m;
Console.WriteLine(sampleDecimalValueV1.WithTwoDecimalPoints());
decimal sampleDecimalValueV1 = 2;
Console.WriteLine(sampleDecimalValueV1.WithTwoDecimalPoints());
Output:
2.50
2.00
One thing you may want to check is the Rounding Mechanism of Math.Round:
http://msdn.microsoft.com/en-us/library/system.midpointrounding.aspx
Other than that, I recommend the Math.Round(inputNumber, numberOfPlaces) approach over the *100/100 one because it's cleaner.
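For comparison, a rough sketch of the two approaches (the input value is arbitrary):
double input = 123.4567;
double viaRound = Math.Round(input, 2);            // 123.46
double viaScaling = Math.Round(input * 100) / 100; // 123.46 as well, but with manual scaling steps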
You should be able to specify the number of digits you want to round to using Math.Round(YourNumber, 2)
You can read more here.
Math.Floor(123456.646 * 100) / 100
Would return 123456.64
string a = "10.65678";
decimal d = Math.Round(Convert.ToDecimal(a), 2);
public double RoundDown(double number, int decimalPlaces)
{
    return Math.Floor(number * Math.Pow(10, decimalPlaces)) / Math.Pow(10, decimalPlaces);
}