Rounding a number to the nearest 10 - C#

I am trying to round numbers to the nearest 10, e.g.:
6 becomes 10
4 becomes 0
11 becomes 10
14 becomes 10
17 becomes 20
How would I do this?
Math.Round doesn't do this on its own, as far as I know.

For double (float and decimal will require additional casting):
value = Math.Round(value / 10) * 10;
For int:
value = (int) (Math.Round(value / 10.0) * 10);
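A minimal runnable sketch of the int variant (RoundToTens is a hypothetical helper name; note that Math.Round on double uses banker's rounding by default, so 25 would become 20 — MidpointRounding.AwayFromZero makes midpoints like 15 and 25 round upward instead):
static int RoundToTens(int value)
{
    // Divide by 10, round the midpoints away from zero, scale back up.
    return (int)(Math.Round(value / 10.0, MidpointRounding.AwayFromZero) * 10);
}

Console.WriteLine(RoundToTens(6));   // 10
Console.WriteLine(RoundToTens(4));   // 0
Console.WriteLine(RoundToTens(11));  // 10
Console.WriteLine(RoundToTens(17));  // 20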

Math.Round for a decimal number

Code:
double PagesCount = 9 / 6;
Console.WriteLine(Math.Round(PagesCount, 0));
I'm trying to round the result of 9/6 (1.5) to 2, but this code snippet always results in 1.
How can I avoid that?
9 / 6 is an integer division which returns an integer, which is then cast to a double in your code.
To get double division behavior, try
double PagesCount = 9d / 6;
Console.WriteLine(Math.Round(PagesCount, 0));
(BTW, I'm being picky, but decimal in the .NET world refers to base-10 numbers, unlike the base-2 double in your code.)
Math.Round is not the problem here.
9.0 / 6.0 ==> 1.5 but 9 / 6 ==> 1 because in the second case an integer division is performed.
In the mixed cases 9.0 / 6 and 9 / 6.0, the int is converted to double and a double division is performed. If the numbers are given as int variables, this means it is enough to convert one of them to double:
(double)i / j ==> 1.5
Note that the integer division is complemented by the modulo operator % which yields the remainder of this division:
9 / 6 ==> 1
9 % 6 ==> 3
because 6 * 1 + 3 ==> 9
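A quick sketch of these cases (expected output in the comments):
int i = 9, j = 6;
Console.WriteLine(9 / 6);          // 1   - integer division truncates
Console.WriteLine(9.0 / 6);        // 1.5 - one double operand forces double division
Console.WriteLine((double)i / j);  // 1.5 - casting one int variable works the same way
Console.WriteLine(9 % 6);          // 3   - the remainder, since 6 * 1 + 3 == 9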

Why does print(70 * (50 / 100)) return 0? [duplicate]

This question already has answers here:
1/252 = 0 in c#?
(3 answers)
Closed 5 years ago.
Why does it return 0?
print(70 * (50 / 100));
I expected 70 * 0.5 = 35.
I don't know why this is happening. Did I make some stupid mistake somewhere? I don't know.
50/100 is a division between integers. Since 50 < 100, the result is 0, and consequently 70 * (50/100) results in 0.
If you want to avoid this, you have to cast one of them to a double.
70 * (50 / (double)100)
or
70 * ((double) 50/ 100)
When you divide two integers, the result is always an integer. For example, the result of 7 / 3 is 2.
So 50/100 would mathematically be 0.5, but because you are dividing two integers, the decimal places are discarded and the result is 0. The computer sees it as (ignoring the .5):
70 * 0 = 0
P.S.
Maybe you could explore Decimal.Divide a little: your int arguments get implicitly converted to decimal, so the .5 won't be ignored.
You can also enforce non-integer division on int arguments by explicitly casting at least one of the arguments to a floating-point type, e.g.:
int a = 42;
int b = 23;
double result = (double)a / b;
That's how it works. :)
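A minimal sketch of the Decimal.Divide suggestion above (output shown in comments):
// Decimal.Divide converts the int arguments to decimal, so the .5 survives.
Console.WriteLine(Decimal.Divide(50, 100));       // 0.5
Console.WriteLine(70 * Decimal.Divide(50, 100));  // 35.0
Console.WriteLine(70 * (50 / (double)100));       // 35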
Because / on two ints produces an int, not a double. The mathematical result would be 0.5, but it is truncated to the int 0, which is then multiplied by 70, so the result is 0.
You need to make a cast as follows:
double x = 50 / (double)100;
Then:
print(70 * x);
50 / 100 is 0 and 70 * 0 is 0.

Why is the division always ZERO? [duplicate]

This question already has answers here:
C# is rounding down divisions by itself
(10 answers)
Closed 6 years ago.
int value = 10 * (50 / 100);
The expected answer is 5, but it is always zero. Could anyone please give me a detailed explanation of why this happens?
Thanks much in advance.
Because the result of 50/100 is 0.
50/100 is an integer division, which returns 0.
Also, if you want to return 5, use this:
int value = (int)(10 * (50 / 100.0));
The result of (50/100.0) is 0.5.
Because you're doing an integer division: (50 / 100) gives 0
Try this:
int value = (int)(10 * (50 / 100.0));
Or reverse the multiplication and division:
int value = (10 * 50) / 100;
so that the multiplication happens before the division.
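A small sketch of the two orderings (note that multiplying first only helps while the intermediate product stays within int range):
int wrong = 10 * (50 / 100);  // 50 / 100 truncates to 0, so this is 0
int right = (10 * 50) / 100;  // 500 / 100 == 5
Console.WriteLine(wrong);     // 0
Console.WriteLine(right);     // 5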
You are performing operations on int values.
50/100 in integer arithmetic is 0.

Why does the Mod operator return the first number when the second number is larger than the first?

I hope I'm not asking a stupid question, but I can't find any good explanation of this result:
35 % 36 is equal to 35
https://www.google.com.ph/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=35%20%25%2036
But if I divide the two numbers, 35 / 36, the result is 0.97222222222, which made me assume the remainder would be 97.
Can anyone explain this?
When we compute 13 % 3 it gives 1 (the remainder);
similarly, when we do 35 % 36 it gives the first number as the remainder, because the dividend is less than the divisor.
When you divide 35/36, integer division gives you a quotient of 0.
Floating-point division gives you the fractional value, and that fraction encodes the remainder:
13 / 3 = 4.3333..., and 13 = 4 * 3 + 0.3333... * 3 = (integer quotient) * divisor + remainder,
so the remainder is 0.3333... * 3 = 1.
The result of 35 % 36 is equivalent to dividing 35 by 36, and returning the remainder. Since 36 goes into 35 exactly 0 times, you're left with a remainder of 35.
Similarly, let's assume you do 7 % 3. In this example, 3 goes into 7 twice and you're left with a remainder of 1. So 7 % 3 == 1.
I don't have the source code for the operation, but you could mimic it (I'm sure this isn't as efficient as whatever's built in!) with a small function like this:
public static class MyMath
{
    // Repeated subtraction; assumes operand1 >= 0 and operand2 > 0.
    public static int Mod(this int operand1, int operand2)
    {
        while (operand1 >= operand2)
            operand1 -= operand2;
        return operand1;
    }
}
And then call it like this:
var remainder1 = 7.Mod(3); // 1
var remainder2 = 35.Mod(36); // 35
mod gives you the remainder of an integer division. 36 fits 0 times into 35, so
35 / 36 is 0 and has a remainder of 35, which is the result of mod.
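If you want both the quotient and the remainder, the framework's Math.DivRem returns them in one call; a minimal sketch:
int remainder;
int quotient = Math.DivRem(35, 36, out remainder);
Console.WriteLine(quotient);   // 0  - same as 35 / 36
Console.WriteLine(remainder);  // 35 - same as 35 % 36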

How does this C# code work out the answer?

I have solved Project Euler problem 16, but I discovered this rather novel approach that I cannot get my head around (from http://www.mathblog.dk/project-euler-16/):
int result = 0;
BigInteger number = BigInteger.Pow(2, 1000);
while (number > 0) {
    result += (int)(number % 10);
    number /= 10;
}
My version seems more conventional but I think the above approach is cooler.
var result = BigInteger
    .Pow(2, 1000)
    .ToString()
    .Aggregate(0, (total, next) => total + (int)Char.GetNumericValue(next));
How does the mathematics work in the first approach? It is cool, but I need some explanation to help me understand, so if someone would be so kind as to explain it to me I would really appreciate it.
NOTE: If I have posted in the wrong section please let me know the better place to ask.
value % 10 will return the last digit (the remainder after dividing by 10). Dividing the integer by 10 will remove this digit.
Think of the number as a list, and you're just dequeuing the list and summing the values.
The modulus operator provides the remainder from the division. So, mod 10 is going to be the one's place in the number. The integer division by 10 will then shift everything so it can be repeated.
Example for the number 12345:
12345 % 10 = 5
12345 / 10 = 1234
1234 % 10 = 4
1234 / 10 = 123
123 % 10 = 3
123 / 10 = 12
12 % 10 = 2
12 / 10 = 1
1 % 10 = 1
1 / 10 = 0 (loop ends)
The addition is performed on the result of each modulus, so you'd get 5 + 4 + 3 + 2 + 1 = 15.
number % 10 extracts the least significant decimal digit. e.g. 12345 => 5
number / 10 removes the least significant decimal digit. This works because integer division in C# throws away the remainder. e.g. 12345 => 1234
Thus the above code extracts each digit, adds it to the sum, and then removes it. It repeats the process until all digits have been removed, and the number is 0.
It's quite simple. Imagine this:
number = 54
It uses modulo to get the remainder of this divided by 10,
e.g. 54 / 10 = 5 remainder 4.
It then adds this digit (4) to the result, then divides the number by 10 (storing into an int, which discards decimal places),
so then number = 5.
Same again: 5 / 10 = 0 remainder 5.
Add them together; the result is now 9,
and so on until the number is 0 :)
(in this case 9 is the answer)
They compute the number 2^1000.
Modulo 10 gets the least significant digit,
e.g. 12034 % 10 = 4.
Dividing by 10 strips the least significant digit,
e.g. 12034 / 10 = 1203.
They sum up these least significant digits.
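For completeness, here is a self-contained, runnable version of the first approach (the class name DigitSum is illustrative; System.Numerics is required for BigInteger):
using System;
using System.Numerics;

class DigitSum
{
    static void Main()
    {
        BigInteger number = BigInteger.Pow(2, 1000);
        int result = 0;
        while (number > 0)
        {
            result += (int)(number % 10); // peel off the least significant digit
            number /= 10;                 // drop that digit
        }
        Console.WriteLine(result);        // 1366
    }
}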
