Possible Loss of Fraction - C#

Forgive me if this is a naïve question, however I am at a loss today.
I have a simple division calculation such as follows:
double returnValue = (myObject.Value / 10);
Value is an int in the object.
I am getting a message that says Possible Loss of Fraction. However, when I change the double to an int, the message goes away.
Any thoughts on why this would happen?

When you divide two ints, the fractional portion is lost, even if you assign the result to a floating-point variable. If you make one of the operands a floating-point value, you won't get this warning.
So for example turn 10 into a 10.0
double returnValue = (myObject.Value / 10.0);
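For what it's worth, here is a minimal sketch of the difference; the MyObject class and the value 7 are just hypothetical stand-ins for the object in the question:
class MyObject { public int Value { get; set; } }
var myObject = new MyObject { Value = 7 };
double truncated = myObject.Value / 10;     // int / int => 0, then converted to 0.0
double fractional = myObject.Value / 10.0;  // int / double => 0.7
Console.WriteLine(truncated);   // 0
Console.WriteLine(fractional);  // 0.7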

You're doing integer division if myObject.Value is an int, since both sides of the / are of integer type.
To do floating-point division, one of the numbers in the expression must be of floating-point type. That would be true if myObject.Value were a double, or any of the following:
double returnValue = myObject.Value / 10.0;
double returnValue = myObject.Value / 10d; //"d" is the double suffix
double returnValue = (double)myObject.Value / 10;
double returnValue = myObject.Value / (double)10;

An integer divided by an integer will give you an integer. Cast Value to a double, or divide by 10.0 instead.

Assuming that myObject.Value is an int, the expression myObject.Value / 10 will be an integer division whose result is then converted to a double.
That means that myObject.Value being 12 will result in returnValue becoming 1, not 1.2.
You need to cast the value(s) first:
double returnValue = (double)(myObject.Value) / 10.0;
This would result in the correct value 1.2, or at least as correct as doubles allow given their limitations, but that's discussed elsewhere on SO, almost endlessly :-).

Since myObject.Value is an int, you should write:
double returnValue = (myObject.Value / 10.0);

Related

How do I divide two integers to get a double?
You want to cast the numbers:
double num3 = (double)num1/(double)num2;
Note: If any of the arguments in C# is a double, a double divide is used which results in a double. So, the following would work too:
double num3 = (double)num1/num2;
For more information see:
Dot Net Perls
Complementing @NoahD's answer
To have a greater precision you can cast to decimal:
(decimal)100/863
//0.1158748551564310544611819235
Or:
Decimal.Divide(100, 863)
//0.1158748551564310544611819235
Doubles are represented using 64 bits, while decimal uses 128.
(double)100/863
//0.11587485515643106
In depth explanation of "precision"
For more details about the floating point representation in binary and its precision take a look at this article from Jon Skeet where he talks about floats and doubles and this one where he talks about decimals.
Cast the integers to doubles.
Convert one of them to a double first. This form works in many languages:
real_result = (int_numerator + 0.0) / int_denominator
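In C# that idiom would look something like this (the variable names are just placeholders):
int intNumerator = 7, intDenominator = 12;
double realResult = (intNumerator + 0.0) / intDenominator;  // 0.58333...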
int firstNumber = 5000,
    secondNumber = 37;
var decimalResult = decimal.Divide(firstNumber, secondNumber);
Console.WriteLine(decimalResult);
var result = decimal.ToDouble(decimal.Divide(5, 2));
I have gone through most of the answers, and as far as I can tell the division itself will always be integer division as long as both operands are ints; you can't make int / int produce a double or a float on its own. But there are plenty of ways to make the calculation work: just cast one of the operands to float or double before dividing.
The easiest way to do that is to write your literals with decimal places.
Ex.:
var v1 = 1 / 30;        // the result is 0
var v2 = 1.00 / 30.00;  // the result is 0.033333333333333333
In the comment to the accepted answer there is a distinction made which seems to be worth highlighting in a separate answer.
The correct code:
double num3 = (double)num1/(double)num2;
is not the same as casting the result of integer division:
double num3 = (double)(num1/num2);
Given num1 = 7 and num2 = 12:
The correct code will result in num3 = 0.5833333
Casting the result of integer division will result in num3 = 0.00
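A quick sketch of that distinction, using the same num1 = 7 and num2 = 12:
int num1 = 7, num2 = 12;
double correct = (double)num1 / (double)num2;  // operands converted first => 0.58333...
double truncated = (double)(num1 / num2);      // integer division happens first => 0.0
Console.WriteLine(correct);
Console.WriteLine(truncated);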

Math.Round() while casting and calculating returning wrong result

I was just curious if anyone could explain to me why these different data types round differently in my code? (note: this is not how the variables are actually declared, it is how they are stored. I just displayed them like this for clarity)
double amount = -15;
double taxPercentage = 0.015;
decimal itemTax;
First, the un-rounded result:
itemTax = (decimal)(amount * taxPercentage);
// itemTax returns -0.225
If I round first, then cast to decimal:
itemTax = (decimal)(Math.Round(amount * taxPercentage, 2, MidpointRounding.AwayFromZero));
// itemTax returns -0.22
If I cast to decimal first, then round:
itemTax = (decimal)(amount * taxPercentage);
itemTax = Math.Round(itemTax,2,MidpointRounding.AwayFromZero);
// itemTax returns -0.23
Does this have something to do with the way double types round vs. decimal types?
Indeed. Double (like float) is a base-2 (binary) type and is only an approximation of a decimal value. Unlike decimal, it can't represent most decimal fractions exactly, so your -0.225 might really be stored as something like -0.224999999999...
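A small sketch that makes this visible, assuming the stored values really are -15 and 0.015 as shown above; the round-trip format string exposes the underlying double:
double amount = -15;
double taxPercentage = 0.015;
Console.WriteLine((amount * taxPercentage).ToString("R"));
// prints something like -0.22499999999999998, i.e. not exactly -0.225
decimal roundThenCast = (decimal)Math.Round(amount * taxPercentage, 2, MidpointRounding.AwayFromZero);   // -0.22
decimal castThenRound = Math.Round((decimal)(amount * taxPercentage), 2, MidpointRounding.AwayFromZero); // -0.23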

How to keep one decimal place without rounding

I need to convert my value 2.8634 to 2.8. I tried the following:
var no = Math.Round(2.8634, 2, MidpointRounding.AwayFromZero);
I'm getting 2.86.
Please suggest some ideas for how to do this.
Thanks
This might do the trick for you
decimal dsd = 2.8634m;
var no = Math.Truncate(dsd * 10) / 10;
Math.Truncate calculates the integral part of a specified decimal number. The number is rounded to the nearest integer towards zero.
You can also have a look on the difference between Math.Floor, Math.Ceiling, Math.Truncate, Math.Round with an amazing explanation.
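As a quick illustration of that difference, here is a sketch with a negative value (the numbers are arbitrary):
decimal d = -2.8634m;
Console.WriteLine(Math.Truncate(d * 10) / 10);  // -2.8 (towards zero)
Console.WriteLine(Math.Floor(d * 10) / 10);     // -2.9 (towards negative infinity)
Console.WriteLine(Math.Ceiling(d * 10) / 10);   // -2.8 (towards positive infinity)
Console.WriteLine(Math.Round(d * 10) / 10);     // -2.9 (to nearest)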
Use this one. Hope this will work for you.
var no = Math.Round(2.8634, 1, MidpointRounding.AwayFromZero);
It's a tad more cryptic (but more efficient) than calling a Math method, but you can simply multiply the value by 10, cast to an integer (which effectively truncates the decimal portion), and then divide by 10.0 (or 10d/10f, all just to ensure we don't get integer division) to get back the value you are after. I.e.:
float val = 2.8634f;
val = ((int)(val * 10)) / 10f;  // 2.8

Calculation of database cell values

I have a database with a table, containing two integer fields.
When I try to get a percentage (fieldA / (fieldA+fieldB)), the value is 0.
double percentage = fieldA + fieldB;            // WORKS; 5+5=10
double percentage = fieldA / (fieldA + fieldB); // DOES NOT WORK; 5/(5+5)=0
So what's the problem here? Thanks..
When you do fieldA / (fieldA+fieldB) you get 5/10, which as an integer is truncated to 0, you have to do double division if you want 0.5 as a result.
i.e. something like this:
double percentage = (double)fieldA / (fieldA + fieldB);
I assume fieldA and fieldB are integers? If so, you should be aware that dividing two integers results in an integer, too, no matter what is on the left-hand side of the assignment.
So dividing fieldA by (fieldA+fieldB) results in a value < 1 which results in zero. See Integer Division on Wikipedia.
To correct the issue, simply cast at least one operand to a floating point type like e.g.:
double percentage = fieldA / (double)(fieldA + fieldB);
Since fieldA and fieldB are integers, the expression fieldA / (fieldA+fieldB) is of the form int / int, which means integer division is used. In this case 5/10 = 0, since integer division solves x = a*m + b with 0 <= b < m; here 5 = a*10 + b, which gives a = 0, b = 5.
You can do however:
double percentage = (double)fieldA / (fieldA+fieldB);
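The quotient/remainder relationship above can be checked directly with the / and % operators; 5 and 5 are the example values from the question:
int fieldA = 5, fieldB = 5;
int a = fieldA / (fieldA + fieldB);  // quotient: 0
int b = fieldA % (fieldA + fieldB);  // remainder: 5
// x = a*m + b  =>  5 == 0*10 + 5
double percentage = (double)fieldA / (fieldA + fieldB);  // 0.5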
You may need to cast the fields as doubles before applying the operation
Try this: double percentage = (double)fieldA / (double)(fieldA + fieldB);
If both fields are integers, then the sum of the two integers is also an integer, and so is the division.
Try this
float result = (float)((A*100)/(B+A));
Answer: result = 50.0

How do you divide integers and get a double in C#?

int x = 73;
int y = 100;
double pct = x/y;
Why do I see 0 instead of .73?
Because the division is done with integers then converted to a double. Try this instead:
double pct = (double)x / (double)y;
It does the same in all C-like languages. If you divide two integers, the result is an integer. 0.73 is not an integer.
The common work-around is to multiply one of the two numbers by 1.0 to make it a floating point type, or just cast it.
Because the operation is still done on int operands. Try double pct = (double)x / (double)y;
Integer division drops the fractional portion of the result. See: http://mathworld.wolfram.com/IntegerDivision.html
It's important to understand the flow of execution in a line of code. You're correct to assume that assigning to a double on the left side will implicitly convert the result to a double. However, the flow of execution dictates that x/y is evaluated by itself before you even get to the double pct = portion of the code. Thus, since two ints are divided by each other, they evaluate to an int result (in this case, truncating towards zero) before being implicitly converted to a double.
As other have noted, you'll need to cast the int variables as doubles so the solution comes out as a double and not as an int.
That's because both operands of the division (x and y) are of type int, so the result of x / y is still an int. The fact that the destination variable is of type double doesn't affect the operation.
To get the intended result, you first have to cast (convert) x to double, as in:
double pct = (double)x / y;
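Putting the two common fixes side by side with the x = 73 and y = 100 from the question (just a sketch):
int x = 73;
int y = 100;
double pct1 = (double)x / y;  // cast one operand => 0.73
double pct2 = x * 1.0 / y;    // multiply by 1.0 => 0.73
Console.WriteLine(pct1);
Console.WriteLine(pct2);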
