C# double, decimal problems

Why does this calculation: double number = (13 /(13+12+13))
equal 0?
It should be around 0.34, I think!
Thanks!

Because you are dividing an int by an int. You should be doing
double number = (13.0 /(13.0+12.0+13.0));

Those are integers, so it does integer division, which truncates toward zero (for positive values, the next lower integer).
Add a .0 to a number, like 13.0, to make it a double.
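To make the contrast concrete, here is a minimal sketch using the numbers from the question (the exact digits printed may vary slightly by runtime):
// Integer operands: the division truncates before the assignment.
double truncated = 13 / (13 + 12 + 13);   // 0
// One double literal promotes the whole expression to double division.
double promoted = 13.0 / (13 + 12 + 13);  // 0.34210526315789475
Console.WriteLine(truncated); // 0
Console.WriteLine(promoted);  // approximately 0.342105263157895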

Because you're using all ints in your formula, the result will be treated as an int too.
Try this instead:
var result = 13.0 / (13.0 + 12.0 + 13.0);
and your result will be:
0.34210526315789475

Try adding a .0:
(13.0 /(13+12+13))
Otherwise you're dealing with integers.

Another option is to cast one of the operands explicitly to double, thus forcing the runtime to perform double division, e.g.:
double result = ((double)13 / (13 + 12 + 13));

Adding a ".0" will help:
double number = (13.0 /(13.0+12.0+13.0));

Related

Decimal and double values are resolving to zero [duplicate]

How do I divide two integers to get a double?
You want to cast the numbers:
double num3 = (double)num1/(double)num2;
Note: if either operand is a double, double division is used, which results in a double. So the following would work too:
double num3 = (double)num1/num2;
For more information see:
Dot Net Perls
Complementing NoahD's answer:
For greater precision you can cast to decimal:
(decimal)100/863
//0.1158748551564310544611819235
Or:
Decimal.Divide(100, 863)
//0.1158748551564310544611819235
Doubles are represented using 64 bits, while decimal uses 128:
(double)100/863
//0.11587485515643106
In-depth explanation of "precision"
For more details about binary floating-point representation and its precision, take a look at this article from Jon Skeet where he talks about floats and doubles, and this one where he talks about decimals.
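A minimal runnable comparison of the two, using the same values as above (output digits may vary slightly by runtime):
Console.WriteLine((double)100 / 863);        // 0.11587485515643106
Console.WriteLine(decimal.Divide(100, 863)); // 0.1158748551564310544611819235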
Cast the integers to doubles.
Convert one of them to a double first. This form works in many languages:
real_result = (int_numerator + 0.0) / int_denominator
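In C# that would look like the sketch below, keeping the same placeholder names from the pseudocode and using arbitrary example values:
int int_numerator = 7;
int int_denominator = 12;
// Adding 0.0 promotes the numerator to double, so the division is a double division.
double real_result = (int_numerator + 0.0) / int_denominator; // 0.5833333...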
int firstNumber = 5000, secondNumber = 37;
var decimalResult = decimal.Divide(firstNumber, secondNumber);
Console.WriteLine(decimalResult);
var result = decimal.ToDouble(decimal.Divide(5, 2));
I have gone through most of the answers, and I'm pretty sure of this: dividing two ints directly will never produce a double or float. But you have plenty of ways to make the calculation work; just cast the operands to float or double before the division and it will be fine.
The easiest way to do that is to add decimal places to your integer literals.
Ex.:
var v1 = 1 / 30;       // the result is 0
var v2 = 1.00 / 30.00; // the result is 0.033333333333333333
In the comments on the accepted answer, a distinction is made that seems worth highlighting in a separate answer.
The correct code:
double num3 = (double)num1/(double)num2;
is not the same as casting the result of integer division:
double num3 = (double)(num1/num2);
Given num1 = 7 and num2 = 12:
The correct code will result in num3 = 0.5833333
Casting the result of integer division will result in num3 = 0.00
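A quick sketch of that difference with the same values:
int num1 = 7, num2 = 12;
Console.WriteLine((double)num1 / (double)num2); // 0.5833333... (cast the operands, then divide)
Console.WriteLine((double)(num1 / num2));       // 0 (integer division happens first)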

How to limit decimal places to 1 without rounding

I need to convert my value 2.8634 to 2.8. I tried the following:
var no = Math.Round(2.8634, 2, MidpointRounding.AwayFromZero)
I'm getting 2.86.
Please suggest some ideas for how to convert it.
Thanks
This might do the trick for you
decimal dsd = 2.8634m;
var no = Math.Truncate(dsd * 10) / 10;
Math.Truncate calculates the integral part of a specified decimal number. The number is rounded to the nearest integer towards zero.
You can also have a look at the difference between Math.Floor, Math.Ceiling, Math.Truncate, and Math.Round; there is an excellent explanation of the differences.
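For a quick feel of the difference, here is a minimal sketch assuming a negative value (not from the question) so the rounding direction of each method is visible:
decimal d = -2.8634m;
Console.WriteLine(Math.Floor(d));    // -3 (toward negative infinity)
Console.WriteLine(Math.Ceiling(d));  // -2 (toward positive infinity)
Console.WriteLine(Math.Truncate(d)); // -2 (toward zero)
Console.WriteLine(Math.Round(d));    // -3 (to the nearest integer)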
Use this one; hope this will work for you:
var no = Math.Round(2.8634,1,MidpointRounding.AwayFromZero)
It's a tad more cryptic (but more efficient) than calling a Math method, but you can simply multiply the value by 10, cast to an integer (which effectively truncates the decimal portion), and then divide by 10.0 (or 10d/10f, all just to ensure we don't get integer division) to get back the value you are after. I.e.:
float val = 2.8634f;           // note the f suffix so the literal is a float
val = ((int)(val * 10)) / 10f; // divide by 10f so the result stays a float

How to return a double from double plus double?

I have 2 double variables, and when I sum them it returns an int.
But I want it to return a double too.
What should I do?
Thank you!
example:
double d = 4.0;
double Myd = 4.0;
Console.WriteLine(d + Myd); // it prints 8, but I want 8.0
The values 8 and 8.0 are the same. If you want decimal places shown in your output, use a format specifier like N2 for a fixed 2 digits after the decimal point.
Console.WriteLine((d+Myd).ToString("N2"));
Shannon is correct: it is returning a double, but because it is all zeros after the decimal point, they get cut off. If you were to do 8.5 + 8 it would print 16.5. If you require one decimal, just use N1.
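For example, a short sketch of both format specifiers (output uses the current culture's decimal separator):
double d = 4.0, Myd = 4.0;
Console.WriteLine((d + Myd).ToString("N1")); // 8.0
Console.WriteLine((8.5 + 8).ToString("N1")); // 16.5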

Division in C# to get exact value [duplicate]

This question already has answers here:
Why does integer division in C# return an integer and not a float?
If I divide 150 by 100, I should get 1.5. But I am getting 1.0 when I divide like I did below:
double result = 150 / 100;
Can anyone tell me how to get 1.5?
try:
double result = (double)150/100;
When you are performing the division as before:
double result = 150/100;
The division is first done as an int and then the result gets cast to a double, hence you get 1.0; you need to have a double in the expression for it to divide as a double.
Cast one of the ints to a floating point type. You should look into the difference between decimal and double and decide which you want, but to use double:
double result = (double)150 / 100;
Make the number a float:
var result = 150 / 100f;
Or you can make either number a double by adding .0:
double result = 150.0 / 100;
or
double result = 150 / 100.0;
double result = 150.0 / 100.0;
One or both numbers should be a float or double on the right-hand side of the =.
If you're just using literal values like 150 and 100, C# is going to treat them as integers, and integer division truncates toward zero ("rounds down" for positive values). You can add a suffix like "f" for float or "m" for decimal to avoid integer math. So, for example, result = 150m/100m will give you a different answer than result = 150/100.
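A small sketch of the literal suffixes side by side:
Console.WriteLine(150 / 100);   // 1   (both ints: integer division)
Console.WriteLine(150f / 100);  // 1.5 (float division)
Console.WriteLine(150m / 100m); // 1.5 (decimal division)
Console.WriteLine(150.0 / 100); // 1.5 (double division)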

Calculation of database cell values

I have a database with a table, containing two integer fields.
When I try to get a percentage (fieldA / (fieldA+fieldB)), the value is 0.
double percentage = fieldA + fieldB; // WORKS; 5 + 5 = 10
double percentage = fieldA / (fieldA + fieldB); // DOES NOT WORK; 5 / (5 + 5) = 0
So what's the problem here? Thanks.
When you do fieldA / (fieldA+fieldB) you get 5/10, which as an integer is truncated to 0; you have to do double division if you want 0.5 as a result.
i.e. something like this:
double percentage = (double)fieldA/(fieldA+fieldB)
I assume fieldA and fieldB are integers? If yes, you should be aware that dividing two integers results in an integer too, no matter what is on the left side of the assignment.
So dividing fieldA by (fieldA+fieldB) gives a value < 1, which truncates to zero. See Integer Division on Wikipedia.
To correct the issue, simply cast at least one operand to a floating-point type, e.g.:
double percentage = fieldA/(double)(fieldA+fieldB)
Since fieldA and fieldB are integers, the expression fieldA / (fieldA+fieldB) is of the form int / int, which means integer division is used; in this case 5/10 = 0, since integer division finds a and b such that x = a*m + b (with 0 <= b < m), and here 5 = a*10 + b, which means a = 0 and b = 5.
You can, however, do:
double percentage = (double)fieldA / (fieldA+fieldB);
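As an illustration of that quotient/remainder relationship, a minimal sketch using the same values assumed in the question (fieldA = 5, fieldB = 5):
int fieldA = 5, fieldB = 5;
Console.WriteLine(fieldA / (fieldA + fieldB));         // 0   (the quotient a)
Console.WriteLine(fieldA % (fieldA + fieldB));         // 5   (the remainder b)
Console.WriteLine((double)fieldA / (fieldA + fieldB)); // 0.5 (double division)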
You may need to cast the fields to double before applying the operation.
Try this: double percentage = (double)fieldA / (double)(fieldA+fieldB);
If both fields are integers, then the addition of the two integers also results in an integer.
Try this:
float result = (float)((A*100)/(B+A));
Answer: result = 50.0
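That works because multiplying by 100 before the integer division keeps a whole-number percentage; a sketch assuming A = 5 and B = 5, as in the question:
int A = 5, B = 5;
float result = (float)((A * 100) / (B + A)); // 500 / 10 = 50 (integer division), then cast to float
Console.WriteLine(result);                   // 50
Note that the fractional part of the percentage is still lost (e.g. A = 1, B = 2 gives 33, not 33.33...), since the division itself is still done on ints.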
