Does anyone know why integer division in C# returns an integer and not a float?
What is the idea behind it? (Is it only a legacy of C/C++?)
In C#:
float x = 13 / 4;
// assume == is overridden here to use an epsilon comparison
if (x == 3.0)
    Console.WriteLine("Hello world");
Result of this code would be:
'Hello world'
Strictly speaking, there is no such thing as integer division (division, by definition, is an operation that produces a rational number, of which the integers are only a very small subset).
While it is common for new programmers to make the mistake of performing integer division when they actually meant to use floating-point division, in actual practice integer division is a very common operation. If you are assuming that people rarely use it, and that every time you do division you'll always need to remember to cast to a floating-point type, you are mistaken.
First off, integer division is quite a bit faster, so if you only need a whole-number result, you would want to use the more efficient operation.
Secondly, there are a number of algorithms that use integer division, and if the result of division were always a floating-point number you would be forced to round the result every time. One example off the top of my head is changing the base of a number: calculating each digit involves the integer division of a number along with the remainder, rather than floating-point division (sketched below).
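For illustration only, here is a minimal sketch of that base-conversion idea; ToDigits is a hypothetical helper, not something from the original answer:
using System;
using System.Collections.Generic;

static IReadOnlyList<int> ToDigits(int value, int radix)
{
    // Collect the digits least significant first using integer division and remainder.
    var digits = new List<int>();
    do
    {
        digits.Add(value % radix); // current lowest digit
        value /= radix;            // integer division moves on to the next digit
    } while (value > 0);

    digits.Reverse();              // most significant digit first
    return digits;
}

// ToDigits(13, 2) yields 1, 1, 0, 1 (13 in binary).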
Because of these (and other related) reasons, integer division results in an integer. If you want to get the floating point division of two integers you'll just need to remember to cast one to a double/float/decimal.
See the C# specification. There are three kinds of division operators:
Integer division
Floating-point division
Decimal division
In your case we have integer division, with the following rules applied:
The division rounds the result towards zero, and the absolute value of
the result is the largest possible integer that is less than the
absolute value of the quotient of the two operands. The result is zero
or positive when the two operands have the same sign and zero or
negative when the two operands have opposite signs.
I think the reason C# uses this kind of division for integers (some languages return a floating-point result) is hardware: integer division is faster and simpler.
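A quick sketch illustrating the "rounds towards zero" rule quoted above (the values are my own examples, not from the specification):
Console.WriteLine(13 / 4);   // 3   (the quotient 3.25 is truncated towards zero)
Console.WriteLine(-13 / 4);  // -3  (truncated towards zero, not rounded down to -4)
Console.WriteLine(13 % 4);   // 1   (the remainder keeps the sign of the dividend)
Console.WriteLine(-13 % 4);  // -1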
Each data type is capable of overloading each operator. If both the numerator and the denominator are integers, the integer type will perform the division operation and return an integer type. If you want floating-point division, you must cast one or more of the numbers to a floating-point type before dividing them. For instance:
int x = 13;
int y = 4;
float q = (float)x / (float)y;
or, if you are using literals:
float x = 13f / 4f;
Keep in mind that floating-point types are not exact. If you care about exact decimal results, use something like the decimal type instead.
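A small sketch of the difference (my own example; the exact float output can vary by a digit or two):
float fSum = 0f;
decimal dSum = 0m;
for (int i = 0; i < 1000; i++)
{
    fSum += 0.1f;  // 0.1 cannot be stored exactly in binary, so error accumulates
    dSum += 0.1m;  // decimal stores 0.1 exactly
}
Console.WriteLine(fSum); // close to, but not exactly, 100
Console.WriteLine(dSum); // 100.0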
Since you don't use any suffix, the literals 13 and 4 are interpreted as int:
Manual:
If the literal has no suffix, it has the first of these types in which its value can be represented: int, uint, long, ulong.
Thus, since you declare 13 as an int, integer division will be performed:
Manual:
For an operation of the form x / y, binary operator overload resolution is applied to select a specific operator implementation. The operands are converted to the parameter types of the selected operator, and the type of the result is the return type of the operator.
The predefined division operators are listed below. The operators all compute the quotient of x and y.
Integer division:
int operator /(int x, int y);
uint operator /(uint x, uint y);
long operator /(long x, long y);
ulong operator /(ulong x, ulong y);
And so truncation towards zero occurs:
The division rounds the result towards zero, and the absolute value of the result is the largest possible integer that is less than the absolute value of the quotient of the two operands. The result is zero or positive when the two operands have the same sign and zero or negative when the two operands have opposite signs.
If you do the following:
int x = 13f / 4f;
You'll receive a compiler error, since a floating-point division (the / operator of 13f) results in a float, which cannot be cast to int implicitly.
If you want the division to be a floating-point division, you'll have to make the result a float:
float x = 13 / 4;
Notice that you'll still be dividing integers, and the result is then implicitly converted to float: the result will be 3.0. To explicitly declare the operands as float, use the f suffix (13f, 4f).
Might be useful:
double a = 5.0/2.0;
Console.WriteLine (a); // 2.5
double b = 5/2;
Console.WriteLine (b); // 2
int c = 5/2;
Console.WriteLine (c); // 2
double d = 5f/2f;
Console.WriteLine (d); // 2.5
It's just a basic operation.
Remember when you learned to divide. In the beginning we solved 9/6 = 1 with remainder 3.
9 / 6 == 1 //true
9 % 6 == 3 // true
The /-operator in combination with the %-operator are used to retrieve those values.
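As a side note not in the original answer: the framework also exposes both results at once through Math.DivRem, which is essentially the / and % pair in one call:
int quotient = Math.DivRem(9, 6, out int remainder);
Console.WriteLine($"{quotient} remainder {remainder}"); // 1 remainder 3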
The result will always be of the type that has the greater range of the numerator and the denominator. The exceptions are byte and short, which produce int (Int32).
var a = (byte)5 / (byte)2; // 2 (Int32)
var b = (short)5 / (byte)2; // 2 (Int32)
var c = 5 / 2; // 2 (Int32)
var d = 5 / 2U; // 2 (UInt32)
var e = 5L / 2U; // 2 (Int64)
var f = 5L / 2UL; // 2 (UInt64)
var g = 5F / 2UL; // 2.5 (Single/float)
var h = 5F / 2D; // 2.5 (Double)
var i = 5.0 / 2F; // 2.5 (Double)
var j = 5M / 2; // 2.5 (Decimal)
var k = 5M / 2F; // Not allowed
There is no implicit conversion between the floating-point types and the decimal type, so division between them is not allowed. You have to cast explicitly and decide which one you want (decimal has more precision but a smaller range than the floating-point types).
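If you do need to mix them, here is a sketch of the explicit casts that make the last case compile; which operand you cast decides the result type (my own example):
var k1 = (float)5M / 2F;   // 2.5 (Single): the decimal is explicitly converted to float
var k2 = 5M / (decimal)2F; // 2.5 (Decimal): the float is explicitly converted to decimal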
As a little trick to find out what type you are obtaining, you can use var, so the compiler will tell you the type to expect:
int a = 1;
int b = 2;
var result = a/b;
The compiler will tell you that result is of type int here.
I'm trying to cast the result of a division to an int in C#.
This is my code:
decimal testDecimal = 5.00; // testDecimal is always divisible by 0.25 with no remainder
int times=0;
int times = testDecimal / Convert.ToDecimal(0.250);
// error returned -> Cannot implicitly convert type 'decimal' to 'int'.
If I change my cast to
int times = (int) testDecimal / Convert.ToDecimal(0.250);
//also returns an error: Cannot implicitly convert type 'decimal' to 'int'
How could I get the result (20) as an integer? What am I doing wrong?
Try this:
times = (int)(testDecimal / Convert.ToDecimal(0.250));
Without the extra parentheses, it is trying to convert ONLY testDecimal to an integer, then trying to implicitly convert the int/decimal result (a decimal) back to an integer, which is what causes the error.
On an unrelated note, you are trying to declare the variable 'times' twice.
As everybody answered, you have to add parentheses to cast the result of your division instead of just casting the first part and then getting the error after the division.
I also want to point out that it is not necessary to use Convert.ToDecimal just to write your constant as a decimal; you can use a C# suffix to do so:
int times = (int)(testDecimal / 0.250m);
You have to cast the whole division result. Try:
int times = (int) (testDecimal / Convert.ToDecimal(0.250));
Be careful, though, because depending on which values you use this could suffer from the usual floating-point rounding errors.
https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
You may avoid this by rounding the value first:
(int) Math.Round(testDecimal / Convert.ToDecimal(0.250));
First, you don't need Convert.ToDecimal; you can just write 0.25m,
and then you can do
times = (int)(testDecimal / 0.25m);
Do note that if the number is too big to fit in an int, the cast will throw an OverflowException.
If you want a numeric literal to be treated as decimal, use the suffix m or M. That way there is no need to use Convert.ToDecimal.
decimal testDecimal = 5.00M;
int times = (int)(testDecimal / 0.250M);
First of all, you should add the "M" suffix to your testDecimal declaration; otherwise 5.00 is a double literal, not a decimal, and the declaration won't even compile.
decimal testDecimal = 5.00M;
Now, the compiler is not aware that your division result is a whole number. Even though the result happens to fit in an int, to the compiler a decimal divided by another decimal is a decimal, not an integer. You have to cast it explicitly:
int times = (int)(testDecimal / 0.250M);
Works like a charm for me.
Wrap your expression in parentheses so that you can convert it:
// int times = (int)testDecimal / 0.250m;
int times = (int)(testDecimal / 0.250m);
Just a note: make sure you wrap the conversion in a try/catch, because not all decimal values can fit into an int.
try
{
times = (int)decimal.MaxValue;
}
catch(OverflowException ex)
{
// Value was either too large or too small for an Int32.
}
I'm not sure why you have Convert.ToDecimal(0.250). Why convert the double literal 0.250 to a decimal at run time? Why not just use a decimal literal (like 0.25M)? As other folks have noted, you need to cast the result of the division to an int, like:
decimal testDecimal = 5M;
int times = (int) (testDecimal / 0.25M);
Assert.AreEqual(20, times);
Also, as other folks have noted, you may want to think through how you do your conversion from decimal to int. Do you want the default behavior (what you get from a simple cast), or do you want to round up, round down, round to even, etc.? In this case, since the division yields an integral result, it doesn't matter, but in the general case you'll want to put some thought into it.
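For illustration, a sketch of a few of the options the framework offers when the quotient is not integral (the values here are my own, not from the question):
decimal q = 5.1M / 0.25M;                                                // 20.4
int truncated = (int)q;                                                  // 20 (a cast truncates towards zero)
int floored   = (int)Math.Floor(q);                                      // 20
int ceiled    = (int)Math.Ceiling(q);                                    // 21
int toEven    = (int)Math.Round(q, MidpointRounding.ToEven);             // 20
int awayZero  = (int)Math.Round(20.5M, MidpointRounding.AwayFromZero);   // 21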
By the way, the reason you have an error in this code:
int times = (int) testDecimal / Convert.ToDecimal(0.250);
is that you are casting testDecimal (5.0M) to an int (5). Then you are dividing it by a decimal (0.25M), which yields a decimal (20.0M). Finally, you are trying to assign that decimal to an integer, and the compiler signals an error.
HTH
I'm trying to subtract percentage in C# using:
n = n - (n * 0.25);
but I'm getting an error:
"Cannot implicitly 'double' to 'int'. An explicit conversions exists
(are you missing a cast?)"
Your value n is an int.
When you multiply it by 0.25 (which is a double), the resulting value is a double that you then try to assign to an int.
To solve it, you have to specify that you are aware that you will lose precision, using an explicit conversion:
n = n - (int)(n * 0.25);
Writing (Type)value is called casting value to Type. This is exactly what the error message suggests you do.
Or, if you don't want to lose the precision, declare n not as an int but as a double. In that case, you will not have to cast n * 0.25 to int.
If you don't want to switch back and forth between int and double types, you could just use:
n = (n * 75) / 100;
If your answer ever has a fractional part, it will be lost, though.
Your variable n must be an integer, but the result of your calculation is a double, since it involves multiplication by a double (0.25).
You can cast the result back to an int like this:
n = (int)(n - (n * 0.25));
I'm assuming that n is an integer type, say int, since you don't give a clue to that. In which case the easiest solution is to do:
n = Convert.ToInt32(n - (n * 0.25));
Or you can cast:
n = (int)(n - (n * 0.25));
Check the type of the variable n.
Either n should be of type double,
or
use an explicit cast to convert the result to int:
n = (int)(n - (n * 0.25));
You must cast the result to int:
n = (int)(n - (n * 0.25));
Try:
n = n - (int)((double)n * 0.25);
Note: by doing this, you won't have any digits after the decimal point in the resulting n.
I guess this is an issue with the type of n: it is an int, but it at least needs to be a double,
since when you have n = n - (n * 0.25) the result is a double.
If you want to cast it to int, then beware of rounding, since it will not always end in .00.
Also, I think n = n * 0.75 would be better.
Your n variable is an int. When you multiply it by 0.25, which is a double, the result will be a double. You have to cast it manually, because there is no implicit numeric conversion from double to int; you have to use an explicit numeric conversion.
From double, an explicit conversion exists to: sbyte, byte, short, ushort, int, uint, long, ulong, char, float, or decimal.
You should cast the expression on the right-hand side to int:
int n = 100;
n = (int) (n - (n * 0.25));
Console.WriteLine(n);
And remember:
Explicit numeric conversion may cause loss of precision.
When you convert from a double value to an integral type, the value is truncated. If the resulting integral value is outside the range of the destination value, the result depends on the overflow checking context.
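A small sketch of that quoted behavior (my own example; note that the exact out-of-range result in an unchecked context is unspecified):
double big = 1e10;                   // too large for an int

Console.WriteLine((int)-2.9);        // -2: in-range values are simply truncated towards zero

int silent = unchecked((int)big);    // unchecked context: some unspecified, platform-dependent value
Console.WriteLine(silent);

try
{
    int check = checked((int)big);   // checked context: throws
}
catch (OverflowException)
{
    Console.WriteLine("checked conversion threw OverflowException");
}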
The best way would be to do
n = n - n/4;
if you want the percentage to stay a whole number between 0 and 100; otherwise you should declare n to be a double by replacing int n with double n.
No costly conversion will occur in the proposed assignment. Note that n/4 is an integer because both operands (n and 4) are integers, causing no promotions and thus using integer division.
Explanation
This is type promotion: n is multiplied by a double, which automatically promotes n * 0.25 to a double. A primitive can only be promoted to a higher rank, not demoted to a lower rank. A primitive x is of a higher rank than another primitive y if it can hold all values of y without loss of precision. A double can hold all values of an int, but an int cannot, for example, hold 0.1. So you are trying to promote and then demote. See the MSDN library for more information.
Note:
Casting from a double to an int causes the value to be truncated, that is, everything after the decimal point is dropped, so -2.5 becomes -2 and 1.5 becomes 1. Integer division, as used above, also rounds towards zero, making this assignment equivalent to your assignment, while avoiding any costly conversions.
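A quick sketch checking that equivalence for a positive and a negative value (my own example):
int n1 = 10;
Console.WriteLine(n1 - (int)(n1 * 0.25)); // 8: (int)2.5 truncates to 2
Console.WriteLine(n1 - n1 / 4);           // 8: integer division 10 / 4 is 2

int n2 = -10;
Console.WriteLine(n2 - (int)(n2 * 0.25)); // -8: (int)(-2.5) truncates towards zero to -2
Console.WriteLine(n2 - n2 / 4);           // -8: integer division -10 / 4 is -2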
I have a double variable called x.
In the code, x gets assigned a value of 0.1, and I check it in an if statement comparing x and 0.1:
if (x==0.1)
{
----
}
Unfortunately it does not enter the if statement
Should I use Double or double?
What's the reason behind this? Can you suggest a solution for this?
It's a standard problem due to how the computer stores floating point values. Search here for "floating point problem" and you'll find tons of information.
In short – a float/double can't store 0.1 precisely. It will always be a little off.
You can try using the decimal type which stores numbers in decimal notation. Thus 0.1 will be representable precisely.
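A tiny sketch of the difference (my own example): the binary rounding error shows up with double but not with decimal, which stores these values exactly.
Console.WriteLine(0.1 + 0.2 == 0.3);    // False: each double holds only the nearest binary fraction
Console.WriteLine(0.1m + 0.2m == 0.3m); // True: decimal stores 0.1, 0.2 and 0.3 exactly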
You wanted to know the reason:
Float/double are stored as binary fractions, not decimal fractions. To illustrate:
12.34 in decimal notation (what we use) means
1 * 10^1 + 2 * 10^0 + 3 * 10^-1 + 4 * 10^-2
The computer stores floating point numbers in the same way, except it uses base 2: 10.01 means
1 * 2^1 + 0 * 2^0 + 0 * 2^-1 + 1 * 2^-2
Now, you probably know that there are some numbers that cannot be represented fully with our decimal notation. For example, 1/3 in decimal notation is 0.3333333…. The same thing happens in binary notation, except that the numbers that cannot be represented precisely are different. Among them is the number 1/10. In binary notation that is 0.000110011001100….
Since the binary notation cannot store it precisely, it is stored in a rounded-off way. Hence your problem.
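You can make the stored approximation visible by asking for all of a double's round-trip digits; the "G17" format typically shows something like the following (exact digits may differ slightly by runtime):
Console.WriteLine((0.1).ToString("G17"));       // e.g. 0.10000000000000001
Console.WriteLine((0.1 + 0.2).ToString("G17")); // e.g. 0.30000000000000004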
double and Double are the same (double is an alias for Double) and can be used interchangeably.
The problem with comparing a double with another value is that doubles are approximate values, not exact values. So when you set x to 0.1 it may in reality be stored as 0.100000001 or something like that.
Instead of checking for equality, you should check that the difference is less than a defined minimum difference (tolerance). Something like:
if (Math.Abs(x - 0.1) < 0.0000001)
{
...
}
You need a combination of Math.Abs on x - y and a value to compare with.
You can use the following extension-method approach:
public static class DoubleExtensions
{
const double _3 = 0.001;
const double _4 = 0.0001;
const double _5 = 0.00001;
const double _6 = 0.000001;
const double _7 = 0.0000001;
public static bool Equals3DigitPrecision(this double left, double right)
{
return Math.Abs(left - right) < _3;
}
public static bool Equals4DigitPrecision(this double left, double right)
{
return Math.Abs(left - right) < _4;
}
...
Since you rarely call methods on a double except ToString, I believe it's a pretty safe extension.
Then you can compare x and y like
if(x.Equals4DigitPrecision(y))
Comparing floating point numbers can't always be done precisely because of rounding. To compare
(x == .1)
the computer really compares
(x - .1) vs 0
The result of the subtraction cannot always be represented precisely because of how floating point numbers are represented on the machine. Therefore you get some small nonzero value and the condition evaluates to false.
To overcome this compare
Math.Abs(x - .1) against some very small threshold (like 1e-9).
From the documentation:
Precision in Comparisons
The Equals method should be used with caution, because two apparently equivalent values can be unequal due to the differing precision of the two values. The following example reports that the Double value .3333 and the Double returned by dividing 1 by 3 are unequal.
...
Rather than comparing for equality, one recommended technique involves defining an acceptable margin of difference between two values (such as .01% of one of the values). If the absolute value of the difference between the two values is less than or equal to that margin, the difference is likely to be due to differences in precision and, therefore, the values are likely to be equal. The following example uses this technique to compare .33333 and 1/3, the two Double values that the previous code example found to be unequal.
So if you really need a double, you should use the technique described in the documentation.
If you can, change it to a decimal. It will be slower, but you won't have this type of problem.
Use decimal. It doesn't have this "problem".
Exact comparison of floating point values is known to not always work due to rounding and internal representation issues.
Try imprecise comparison:
if (x >= 0.099 && x <= 0.101)
{
}
The other alternative is to use the decimal data type.
double (lowercase) is just an alias for System.Double, so they are identical.
For the reason, see Binary floating point and .NET.
In short: a double is not an exact type and a minute difference between "x" and "0.1" will throw it off.
Double (called float in some languages) is fraught with problems due to rounding issues; it's good only if you need approximate values.
The Decimal data type does what you want.
For reference, decimal and Decimal are the same in .NET C#, as are double and Double; each pair refers to the same type (decimal and double are very different from each other, though, as you've seen).
Beware that the Decimal data type has some costs associated with it, so use it with caution if you're looking at loops etc.
Official MS help; the "Precision in Comparisons" part is especially interesting in the context of the question.
https://learn.microsoft.com/en-us/dotnet/api/system.double.equals
// Initialize two doubles with apparently identical values
double double1 = .333333;
double double2 = (double) 1/3;
// Define the tolerance for variation in their values
double difference = Math.Abs(double1 * .00001);
// Compare the values
// The output to the console indicates that the two values are equal
if (Math.Abs(double1 - double2) <= difference)
Console.WriteLine("double1 and double2 are equal.");
else
Console.WriteLine("double1 and double2 are unequal.");
1) Should I use Double or double?
Double and double are the same thing. double is just a C# keyword that works as an alias for the type System.Double.
The most common convention is to use the aliases. The same goes for string (System.String) and int (System.Int32).
Also see Built-In Types Table (C# Reference)
Taking a tip from the Java code base, try using CompareTo and test for a zero result. This assumes the CompareTo implementation takes floating point equality into account in an accurate manner. For instance,
System.Math.PI.CompareTo(System.Math.PI) == 0
This predicate should return true.
// number of digits to be compared
public const int n = 12;
// n+1 because b/a tends to 1 with n leading digits
public double MyEpsilon { get; } = Math.Pow(10, -(n + 1));
public bool IsEqual(double a, double b)
{
    // avoiding division by zero
    if (Math.Abs(a) <= double.Epsilon || Math.Abs(b) <= double.Epsilon)
        return Math.Abs(a - b) <= double.Epsilon;
    // comparison
    return Math.Abs(1.0 - a / b) <= MyEpsilon;
}
Explanation
The main comparison is done using the division a/b, which should tend towards 1. But why division? It simply takes one number as the reference that defines the other. For example,
a = 0.00000012345
b = 0.00000012346
a/b = 0.999919002
b/a = 1.000081004
(a/b)-1 = 8.099789405475458e-5
1-(b/a) = 8.100445524503848e-5
or
a=12345*10^8
b=12346*10^8
a/b = 0.999919002
b/a = 1.000081004
(a/b)-1 = 8.099789405475458e-5
1-(b/a) = 8.100445524503848e-5
By division we get rid of trailing or leading zeros (or relatively small magnitudes) that pollute our judgement of the numbers' precision. In the example, the comparison is on the order of 10^-5, and we have 4 digits of accuracy; because of that, in the code at the beginning I wrote the comparison with 10^-(n+1), where n is the number of accurate digits.
Adding onto Valentin Kuzub's answer above:
we could use a single method that supports specifying the number of digits of precision:
public static bool EqualsNthDigitPrecision(this double value, double compareTo, int precisionPoint) =>
Math.Abs(value - compareTo) < Math.Pow(10, -Math.Abs(precisionPoint));
Note: This method is built for simplicity without added bulk and not with performance in mind.
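A quick usage sketch, assuming the method above lives in a static class so it can be called as an extension (the values are my own):
double a = 0.123451;
double b = 0.123449;

Console.WriteLine(a.EqualsNthDigitPrecision(b, 4)); // True:  |a - b| is about 0.000002, which is < 0.0001
Console.WriteLine(a.EqualsNthDigitPrecision(b, 6)); // False: 0.000002 is not < 0.000001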
As a general rule:
Double representation is good enough in most cases but can fail miserably in some situations. Use decimal values if you need complete decimal precision (as in financial applications).
Most problems with doubles don't come from direct comparison; they are usually the result of the accumulation of several math operations that progressively disturb the value due to rounding and fractional errors (especially with multiplications and divisions).
Check your logic, if the code is:
x = 0.1
if (x == 0.1)
it should not fail; it's too simple to fail. If the value of x is calculated by more complex means or operations, it's quite possible that the ToString method used by the debugger applies some smart rounding; maybe you can do the same (if that's too risky, go back to using decimal):
if (x.ToString() == "0.1")
Floating point number representations are notoriously inaccurate because of the way floats are stored internally. E.g., x may actually be 0.0999999999 or 0.100000001, and your condition will fail. If you want to determine whether floats are equal, you need to specify whether they're equal to within a certain tolerance.
I.e.:
if(Math.Abs(x - 0.1) < tol) {
// Do something
}
My extension method for double comparison:
public static bool IsEqual(this double value1, double value2, int precision = 2)
{
var dif = Math.Abs(Math.Round(value1, precision) - Math.Round(value2, precision));
while (precision > 0)
{
dif *= 10;
precision--;
}
return dif < 1;
}
To compare floating point (double or float) values, you can use the CompareTo method:
if (double1.CompareTo(double2) > 0)
{
// double1 is greater than double2
}
if (double1.CompareTo(double2) < 0)
{
// double1 is less than double2
}
if (double1.CompareTo(double2) == 0)
{
// double1 equals double2
}
https://learn.microsoft.com/en-us/dotnet/api/system.double.compareto?view=netcore-3.1
Forgive me if this is a naïve question, however I am at a loss today.
I have a simple division calculation such as follows:
double returnValue = (myObject.Value / 10);
Value is an int in the object.
I am getting a message that says Possible Loss of Fraction. However, when I change the double to an int, the message goes away.
Any thoughts on why this would happen?
When you divide two ints and assign the result to a floating-point variable, the fractional portion is lost. If you make one of the operands a floating-point value, you won't get this message.
So, for example, turn 10 into 10.0:
double returnValue = (myObject.Value / 10.0);
You're doing integer division if myObject.Value is an int, since both sides of the / are of integer type.
To do floating-point division, one of the numbers in the expression must be of floating-point type. That would be true if myObject.Value were a double, or any of the following:
double returnValue = myObject.Value / 10.0;
double returnValue = myObject.Value / 10d; //"d" is the double suffix
double returnValue = (double)myObject.Value / 10;
double returnValue = myObject.Value / (double)10;
An integer divided by an integer will return an integer. Cast either value to a double, or divide by 10.0.
Assuming that myObject.Value is an int, the equation myObject.Value / 10 will be an integer division which will then be cast to a double.
That means that myObject.Value being 12 will result in returnValue becoming 1, not 1.2.
You need to cast the value(s) first:
double returnValue = (double)(myObject.Value) / 10.0;
This would result in the correct value of 1.2, at least as correct as doubles allow given their limitations, but that's discussed elsewhere on SO, almost endlessly :-).
I think since myObject.Value is an int, you should do:
double returnValue = (myObject.Value / 10.0);