This question already has answers here:
How can I divide two integers to get a double?
(9 answers)
Closed 7 years ago.
I'm trying to calculate the area of a sector, but when I divide angleParse by 360 and multiply it by radiusParse, I sometimes get an output of 0.
What is happening, and where do I need to fix it? (Sorry if this is a weird question, but I only started learning C# yesterday, and I just started using StackOverflow today.)
static void AoaSc()
{
    Console.WriteLine("Enter the radius of the circle in centimetres.");
    string radius = Console.ReadLine();
    int radiusParse;
    Int32.TryParse(radius, out radiusParse);

    Console.WriteLine("Enter the angle of the sector.");
    string sectorAngle = Console.ReadLine();
    int angleParse;
    Int32.TryParse(sectorAngle, out angleParse);

    double area = radiusParse * angleParse / 360;
    Console.WriteLine("The area of the sector is: " + area + "cm²");
    Console.ReadLine();
}
You've encountered integer division. If a and b are int, then a / b is also an int, where the non-integer part has been truncated (i.e. everything following the decimal point has been cut off).
If you want the "true" result, one or more of the operands in your division needs to be a floating point. Either of the following will work:
radiusParse * (double)angleParse / 360;
radiusParse * angleParse / 360.0;
Note that it's not sufficient to cast the result of the whole expression, as in (double)(radiusParse * angleParse / 360), because by the time the cast applies, the integer division has already happened and the fractional part is gone. Casting an operand before the division, as shown above, is what preserves it.
Finally, also note that decimal in .NET is its own type, and is distinct from float and double.
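For example, a quick demonstration of the difference (a minimal sketch; the radius and angle values are arbitrary):
int radiusParse = 5, angleParse = 90;
Console.WriteLine(radiusParse * angleParse / 360);   // 1    (integer division)
Console.WriteLine(radiusParse * angleParse / 360.0); // 1.25 (floating-point division)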
I think if you divide it by 360.0 it will work.
Alternatively, declare a variable of type decimal and set it to 360. Note that decimal does not convert implicitly to double, so the result has to be stored in a decimal as well:
private decimal degreesInCircle = 360;
// Other code removed...
decimal area = radiusParse * angleParse / degreesInCircle;
Related
I wanted to ask a question about a calculation I had today in C#.
double expenses = (pricePen + priceMark + priceLitres) - discount / 100*(pricePen + priceMark + priceLitres); //Incorrect
double expenses = (pricePen + priceMark + priceLitres) - (pricePen + priceMark + priceLitres)* discount/100; //Correct
As you can see, at the end of the equation I have to multiply the bracketed sum by the integer named "discount", which is a discount percentage.
When I move that value from behind the brackets to in front of them, the answer changes. In maths, multiplication is commutative, so I checked by hand that I should get the same answer either way, but C# doesn't agree.
I wanted to ask: how does C# actually evaluate this, and why am I getting different results at the end? (The result should be 28.5, not 38.)
[Data: pricePen = 11.6; priceMark = 21.6; priceLitres = 4.8; discount = 25;]
(I know this question is basic.)
In the first line, discount / 100 is evaluated first. Both operands are integers, so the result is an integer: 25 / 100 is 0 and the remainder is discarded. The whole discount term therefore becomes 0, which is why the result is too high.
In the second line, the sum (a double) is multiplied by discount first, so the expression is already in floating point when the division by 100 happens, and nothing is lost.
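A quick sketch with the question's data makes the difference visible (variable names taken from the question):
double pricePen = 11.6, priceMark = 21.6, priceLitres = 4.8;
int discount = 25;
double sum = pricePen + priceMark + priceLitres; // 38

// discount / 100 is integer division: 25 / 100 == 0
Console.WriteLine(sum - discount / 100 * sum); // 38

// sum * discount is already a double, so / 100 is a floating-point division
Console.WriteLine(sum - sum * discount / 100); // 28.5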
I know it's already answered, but if you want to learn more about division with int,
here it is:
For example:
float value = 3/4; — you would expect it to be 0.75, but that's not the case.
When the compiler sees the literals 3 and 4, it treats both of them as int, so the division is an integer division.
That means the result of 3/4 is 0: int has no fractional part, so the .75 is simply cut off. The program then takes that value and assigns it to the float variable,
so the result will be
3/4 → 0 → float value → 0.0
Some people before me already told you the solution, like making one operand a floating-point literal with .0 (note the f suffix as well, since a double literal does not implicitly convert to float):
float value = 3.0f/4;
Or you can tell the compiler to treat one operand as a float with a (float) cast:
float value = (float) 3/4;
I hope this helps explain why that happens :)
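Here is a minimal sketch of the three variants side by side:
float a = 3 / 4;         // integer division first: 0, then converted to 0.0f
float b = 3.0f / 4;      // float literal forces floating-point division: 0.75
float c = (float) 3 / 4; // the cast binds tighter than /, so this is also 0.75
Console.WriteLine($"{a} {b} {c}"); // 0 0.75 0.75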
To avoid these problems, make sure you are doing the math with floating-point types, not int. In your case discount is an int, and thus
x * (discount / 100) = x * <integer>
It's best to define a function that does the calculation and forces the types:
double DiscountedPrice(double price, double discount)
{
    return price - (discount / 100) * price;
}
and then call it as
var x = DiscountedPrice( pricePen + priceMark + priceLitres, 15);
In the above call, the compiler implicitly converts the integer 15 to a double. This is a widening conversion (every int value can be represented exactly as a double), so nothing is lost.
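As a usage sketch with the question's own numbers (discount 25 rather than the 15 above):
var expenses = DiscountedPrice(11.6 + 21.6 + 4.8, 25); // 28.5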
Here is the code which made me post this question.
// int integer;
// int fraction;
// double arg = 110.1;
this.integer = (int)(arg);
this.fraction = (int)((arg - this.integer) * 100);
The variable integer is getting 110. That's OK.
The variable fraction is getting 9, however I am expecting 10.
What is wrong?
Update
It seems I have discovered that the source of the problem is the subtraction
arg - this.integer
Its result is 0.099999999999994316.
Now I am wondering how I should subtract correctly so that the result is 0.1.
You have this:
fraction = (int)((110.1 - 110) * 100);
The inner part, (110.1 - 110) * 100, evaluates to roughly 9.9999999999994 rather than exactly 10.
When you cast it to int, the fractional part is truncated (a cast does not round), so you get 9.
This is because of "floating point" (see here) limitations:
Computers always need some way of representing data, and ultimately
those representations will always boil down to binary (0s and 1s).
Integers are easy to represent, but non-integers are a bit more
tricky. Consider the following var:
double x = 0.1d;
The variable x will actually store the closest available double to
that value. When you understand this, it becomes obvious why some
calculations seem to be "wrong".
If you were asked to add a third to a third, but could only use 3
decimal places, you'd get the "wrong" answer: the closest you could
get to a third is 0.333, and adding two of those together gives 0.666,
rather than 0.667 (which is closer to the exact value of two thirds).
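You can see the stored value directly by printing the subtraction result at full precision (a quick sketch; the G17 format prints enough digits to round-trip a double):
double arg = 110.1;
Console.WriteLine((arg - 110).ToString("G17")); // 0.099999999999994316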
Update:
In financial applications, or wherever the numbers must be exact, you can use the decimal data type:
(int)((110.1m - 110) * 100) // will be 10 (m is the decimal suffix)
or:
decimal arg = 110.1m;
int integer = (int)(arg); //110
int fraction = (int)((arg - integer) * 100); // will be 10
It is because you are using double, which cannot represent 110.1 exactly. If you want the result to be 10, use the decimal type.
check the following:
int integer;
int fraction;
decimal arg = 110.1M;
integer = (int)(arg);
decimal diff = arg - integer;
decimal multiply = diff * 100;
fraction = (int)multiply;//output will be 10 as you expect
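If you have to stay with double, an alternative (my own sketch, not from the answers above) is to round the scaled value to the nearest integer instead of truncating, which absorbs the tiny representation error:
double arg = 110.1;
int integer = (int)arg;                                // 110
int fraction = (int)Math.Round((arg - integer) * 100); // 10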
This question already has answers here:
Why does integer division in C# return an integer and not a float?
(8 answers)
Closed 6 years ago.
I want to calculate the average of two floating point numbers, but whatever the input, I am getting an integer returned.
What should I do to make this work?
public class Program
{
    public static float Average(int a, int b)
    {
        return (a + b) / 2;
    }

    public static void Main(string[] args)
    {
        Console.WriteLine(Average(2, 1));
    }
}
There are two problems with your code:
The evident one: integer division. For example, 1 / 2 == 0, not 0.5, since the result must be an integer.
The hidden one: integer overflow. a + b can exceed int.MaxValue, and you'll get a negative result.
The most accurate implementation is
public static float Average(int a, int b)
{
    return 0.5f * a + 0.5f * b;
}
Tests:
Average(1, 2); // 1.5
Average(int.MaxValue, int.MaxValue); // some large positive value
The trick is to write the expression as 0.5 * a + 0.5 * b, which also obviates the potential for int overflow (credit to Dmitry Bychenko).
Currently your expression is evaluated in integer arithmetic, which means that any fractional part is discarded.
By making one of the values in each term a floating-point literal, the entire expression is evaluated in floating point.
Finally, if you want the type of the expression to be a float, then use
0.5f * a + 0.5f * b
The f suffix is used to denote a float literal.
return (a + b) / 2F; tells the compiler to treat the 2 as a float, so the division is done in floating point; otherwise the whole expression is evaluated as an int.
Use this:
public static float Average(int a, int b)
{
    return (float)(a + b) / 2;
}
You can use:
(a + b) / 2.0f
This will return a float. (Note that (float)(a + b) / 2.0 would not: dividing a float by the double literal 2.0 produces a double.)
Sorry if anyone has already answered the same way (I did not read all the answers).
This question already has answers here:
How do I display a decimal value to 2 decimal places?
(19 answers)
Closed 9 years ago.
I need a way to round a float down to a specific number of decimal places. Math.Round rounds to the nearest value rather than truncating, and Math.Floor does not take a number of decimal places.
Basically, if I have 2.566321, I want the code to return 2.56. The only way I know to do this is to convert the float to a string and use string.Format, but I would rather not do that if possible.
Thanks.
A brute force way might be to multiply by 10^n where n is the number of decimal places you want, cast to int (which does truncation rather than rounding), then cast back to float and divide by 10^n again.
visually:
2.566321 * 10^2 = 2.566321 * 100 = 256.6321
(int) 256.6321 = 256
(float) 256 / 10^2 = (float) 256 / 100 = 2.56
Quick attempt at the code:
public float Truncate(float value, int decimalPlaces) {
    double factor = Math.Pow(10, decimalPlaces);
    int temp = (int)(value * factor); // the cast truncates toward zero
    return (float)(temp / factor);    // temp / factor is a double, so cast back to float
}
I haven't tested this, but that should get you going.
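A quick usage sketch with the value from the question:
Console.WriteLine(Truncate(2.566321f, 2)); // 2.56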
This question already has answers here:
Why does integer division in C# return an integer and not a float?
(8 answers)
Closed 9 years ago.
If I divide 150 by 100, I should get 1.5. But I am getting 1.0 when I divide as shown below:
double result = 150 / 100;
Can anyone tell me how to get 1.5?
try:
double result = (double)150/100;
When you perform the division like this:
double result = 150/100;
the division is first done in int arithmetic and only then converted to a double, hence you get 1.0. You need a double in the expression for it to divide as a double.
Cast one of the ints to a floating point type. You should look into the difference between decimal and double and decide which you want, but to use double:
double result = (double)150 / 100;
Make one of the numbers a float:
var result = 150/100f;
Or you can make either number a double by adding .0:
double result = 150.0/100;
or
double result = 150/100.0;
double result = (150.0/100.0);
One or both numbers should be a float/double on the right-hand side of the =.
If you're just using literal values like 150 and 100, C# treats them as integers, and integer division truncates toward zero (the fractional part is discarded). You can add a suffix like f for float or m for decimal to avoid integer math. So, for example, result = 150m/100m will give you a different answer than result = 150/100.
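A short sketch of the literal suffixes side by side:
Console.WriteLine(150 / 100);   // 1   (int division)
Console.WriteLine(150f / 100);  // 1.5 (float)
Console.WriteLine(150m / 100m); // 1.5 (decimal)
Console.WriteLine(150.0 / 100); // 1.5 (double)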