Why isn't an overflow exception raised? - C#

The following runs fine without error and diff is 1:
int max = int.MaxValue;
int min = int.MinValue;
//Console.WriteLine("Min is {0} and max is {1}", min, max);
int diff = min - max;
Console.WriteLine(diff);
Wouldn't all programs then be suspect? a+b is no longer the sum of a and b, where a and b are of type int. Sometimes it is, but sometimes it is the sum* of a, b and 2*int.MinValue.
* Sum in the ordinary English sense of addition, ignoring word size and any other computer-imposed limits
In PowerShell it looks better, but it is still not a hardware exception from the add operation. It appears to widen to a long before casting back to an int:
[int]$x = [int]::minvalue - [int]::maxvalue
Cannot convert value "-4294967295" to type "System.Int32". Error: "Value was either too large or too small for an Int32."
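What PowerShell seems to do can be sketched in C# (an illustrative guess at the widen-then-cast behaviour, not a claim about PowerShell's actual implementation):
long wide = (long)int.MinValue - int.MaxValue; // -4294967295, computed without wrapping
int narrow = checked((int)wide);               // throws OverflowException, matching the PowerShell error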

By default, overflow checking is turned off in C#. Values simply "wrap round" in the common way.
If you compiled the same code with /checked or used a checked { ... } block, it would throw an exception.
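For example, a minimal sketch of the same subtraction in a checked block; because the operands are variables rather than constants, the check happens at run time:
int max = int.MaxValue;
int min = int.MinValue;
checked
{
    int diff = min - max; // throws System.OverflowException
    Console.WriteLine(diff);
}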
Depending on what you're doing, you may want checking or explicitly not want it. For example, in Noda Time we have overflow checking turned on by default, but explicitly turn it off for GetHashCode computations (where we expect overflow and have no problem with it) and computations which we know won't overflow, and where we don't want the (very slight) performance penalty of overflow checking.
See the checked and unchecked pages in the C# reference, or section 7.6.12 of the C# language specification, for more details.

By default, .NET will not check for numeric overflow when doing operations on integral data types such as int.
You can enable checking either by building in checked mode (passing /checked to the compiler), or by using a checked block around a code segment:
checked
{
    int max = int.MaxValue;
    int i = max + max; // non-constant operands, so this throws System.OverflowException at run time
    Console.WriteLine(i);
}

Related

.NET vs Mono: different results for conversion from 2^32 as double to int

TL;DR Why does (int)Math.Pow(2,32) return 0 on Mono and Int32.MinValue on .NET?
While testing my .NET-written code on Mono I've stumbled upon the following line:
var i = X / ( (int)Math.Pow(2,32) );
That line doesn't make much sense, of course, and I've already changed it to use long.
I was curious, however, why my code didn't throw a DivideByZeroException on .NET, so I checked the return value of that expression on both Mono and .NET.
Can anyone please explain the results?
IMHO the question is academic; the documentation promises only that "the result is an unspecified value of the destination type", so each platform is free to do whatever it wants.
One should be very careful when casting results that might overflow. If overflow is possible, and it matters to get a specific result rather than whatever arbitrary value a given platform produces, one should use the checked keyword and catch any OverflowException that occurs, handling it with whatever explicit behavior is desired.
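A small sketch of that approach (the fallback value here is just an invented example):
double d = Math.Pow(2, 32); // 4294967296, out of range for Int32
int i;
try
{
    i = checked((int)d); // the checked conversion throws instead of producing an unspecified value
}
catch (OverflowException)
{
    i = int.MaxValue; // whatever explicit fallback behavior is desired
}
Console.WriteLine(i);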
"the result is an unspecified value of the destination type". I thought it would be interesting to see what's actually happening in the .NET implementation.
It's to do with the OpCodes.Conv_I4 field in IL: "Conversion from floating-point numbers to integer values truncates the number toward zero. When converting from a float64 to a float32, precision can be lost. If value is too large to fit in a float32 (F), positive infinity (if value is positive) or negative infinity (if value is negative) is returned. If overflow occurs converting one integer type to another, the high order bits are truncated." It does, however, once again say that overflow is unspecified.

Integer overflow in for-loop causing odd behaviour

Consider the following code. The first five iterations go well, and then it settles into an infinite loop spamming 0s.
for (int i = 2; i < 100000; i *= i)
{
    Console.WriteLine(i);
}
I suspect it's because the integer overflows, and I'm guessing that the value produced on overflow is 0.
How would you write this code so that it does what it is intended to do?
Edit: This is kind of a theoretical example, and I'm just playing around to learn. Is there no way to do this without changing the data type or checking for 0? It would be nice if it actually threw an exception.
I believe you're right that it overflows.
Rather than just telling you to use checked to force it to throw an exception, here is an explanation of WHY you get zeros after the 5th result.
Here is the series:
2
4
16
256
65536
4294967296 <- this won't fit into an int
in int arithmetic, 2,147,483,647 + 1 becomes -2,147,483,648
the 6th value overflows and becomes 0 (4,294,967,296 is exactly 2^32, so after the wrap-around only the low 32 bits survive, and those are all zero)
and your for loop's iterator clause will then run 0 *= 0 indefinitely, since 0 < 100000 remains true
EDIT: as per your update, use the checked keyword to get an exception on the overflow
checked
{
    for (int i = 2; i < 100000; i *= i)
    {
        Console.WriteLine(i);
    }
}
otherwise, use a long or ulong instead of int, as both of those can safely store the result of 100000 * 100000
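A quick sketch of the long variant; the sixth value (4,294,967,296) then simply fails the i < 100000 test and the loop terminates normally:
for (long i = 2; i < 100000; i *= i)
{
    Console.WriteLine(i); // prints 2, 4, 16, 256, 65536, then exits
}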
Use checked if you want an overflow exception:
checked
{
    for (int i = 2; i < 100000; i *= i)
    {
        Console.WriteLine(i);
    }
}
The checked keyword is used to explicitly enable overflow checking for integral-type arithmetic operations and conversions.
By default, an expression that contains only constant values causes a compiler error if the expression produces a value that is outside the range of the destination type. If the expression contains one or more non-constant values, the compiler does not detect the overflow.

Why int.MaxValue - int.MinValue = -1?

To my understanding, that should give you an overflow error and when I write it like this:
public static void Main()
{
    Console.WriteLine(int.MaxValue - int.MinValue);
}
it does correctly give me an overflow error.
However:
public static void Main()
{
    Console.WriteLine(test());
}

public static Int32 test(int minimum = int.MinValue, int maximum = int.MaxValue)
{
    return maximum - minimum;
}
will output -1
Why does it do this? It should throw an error, because it's clearly an overflow!
int.MaxValue - int.MinValue is a value that an int cannot hold, so the number wraps around back to -1.
It is like 2147483647 - (-2147483648) = 4294967295, which does not fit in an int.
Int32.MinValue Field
The value of this constant is -2,147,483,648; that is, hexadecimal 0x80000000.
And Int32.MaxValue Field
The value of this constant is 2,147,483,647; that is, hexadecimal 0x7FFFFFFF.
From MSDN
When integer overflow occurs, what happens depends on the execution context, which can be checked or unchecked. In a checked context, an OverflowException is thrown. In an unchecked context, the most significant bits of the result are discarded and execution continues.
Thus, C# gives you the choice of handling or ignoring overflow.
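A small sketch of the wrap-around (the unchecked keyword is needed here because the operands are compile-time constants):
long exact = (long)int.MaxValue - int.MinValue;       // 4294967295, the true difference
int wrapped = unchecked(int.MaxValue - int.MinValue); // only the low 32 bits survive
Console.WriteLine(exact);   // 4294967295
Console.WriteLine(wrapped); // -1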
This is because of compile-time overflow checking of your code. The line
Console.WriteLine(int.MaxValue - int.MinValue);
would not actually error at run time; it would simply write "-1". But due to the compile-time checking you get the compile error "The operation overflows at compile time in checked mode".
To get around the compile-time overflow checking in this case you can do:
unchecked
{
    Console.WriteLine(int.MaxValue - int.MinValue);
}
which will run fine and output "-1".
The project-level setting that controls this is "unchecked" by default. You can turn on overflow checking by going to the project properties, Build tab, Advanced button; the popup allows you to turn on overflow checking. The .NET Fiddle tool that you link to seems to perform some additional static analysis that is preventing you from seeing the true out-of-the-box runtime behavior. (The error for your first code snippet above is "The operation overflows at compile time in checked mode." You aren't seeing a runtime error.)
I think it goes even further than overflows.
If I look at this:
Int64 max = Int32.MaxValue;
Console.WriteLine(max.ToString("X16")); // 000000007FFFFFFF
Int64 min = Int32.MinValue;
Console.WriteLine(min.ToString("X16")); // FFFFFFFF80000000
Int64 subtract = max - min;
Console.WriteLine(subtract.ToString("X16")); // 00000000FFFFFFFF <- not an overflow, since it's a 64-bit number
Int32 neg = -1;
Console.WriteLine(neg.ToString("X")); // FFFFFFFF
Here you see that if you just subtract the hex values in two's complement, you get (after truncating the leading zeros) the bit pattern that represents -1 as a 32-bit number.
Two's complement arithmetic can be very fun: http://en.wikipedia.org/wiki/Two's_complement

In what situations would I specify operation as unchecked?

For example:
int value = Int32.MaxValue;
unchecked
{
    value += 1;
}
In what ways would this be useful? Can you think of any?
Use unchecked when:
You want to express a constant via overflow (this can be useful when specifying bit patterns)
You want arithmetic to overflow without causing an error
The latter is useful when computing a hash code. For example, in Noda Time the project is built with checked arithmetic for virtually everything apart from hash code generation. When computing a hash code, it's entirely normal for overflow to occur, and that's fine because we don't really care about the result as a number - we just want it as a bit pattern, really.
That's just a particularly common example, but there may well be other times where you're really happy for MaxValue + 1 to be MinValue.
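As an illustrative sketch (the type and member names here are invented, not taken from Noda Time), both uses might look like this:
public sealed class Point
{
    // A constant expressed via overflow: without unchecked, casting the
    // uint literal 0xFFFFFFFF to int is a compile-time error.
    private const int AllBitsSet = unchecked((int)0xFFFFFFFF);

    private readonly int x, y;

    public Point(int x, int y) { this.x = x; this.y = y; }

    public override int GetHashCode()
    {
        unchecked // overflow is expected and harmless; we only want a bit pattern
        {
            int hash = 17;
            hash = hash * 31 + x;
            hash = hash * 31 + y;
            return hash;
        }
    }
}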

When must we use checked operator in C#?

When must we use checked operator in C#?
Is it only suitable for exception handling?
You would use checked to guard against a (silent) overflow in an expression.
And use unchecked when you know a harmless overflow might occur.
You use both at places where you don't want to rely on the default (project-wide) compiler setting.
Both forms are pretty rare, but when doing critical integer arithmetic it is worth thinking about possible overflow.
Also note that they come in two forms:
x = unchecked(x + 1); // ( expression )
unchecked { x = x + 1; } // { statement(s) }
checked will help you to pick up a System.OverflowException which would otherwise go unnoticed:
int result = checked (1000000 * 10000000);
// Error: operation overflows at compile time
int result = unchecked (1000000 * 10000000);
// No problems, compiles fine
From The checked and unchecked operators
The checked and unchecked operators are used to control the overflow checking context for integral-type arithmetic operations and conversions.
In a checked context, if an expression produces a value that is outside the range of the destination type, the result depends on whether the expression is constant or non-constant. Constant expressions cause compile-time errors, while non-constant expressions are evaluated at run time and raise exceptions.
In an unchecked context, if an expression produces a value that is outside the range of the destination type, the result is truncated.
See also: checked, unchecked
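With non-constant operands the check moves from compile time to run time; a small sketch:
int a = 1000000;
int b = 10000000;
try
{
    int result = checked(a * b); // throws System.OverflowException at run time
    Console.WriteLine(result);
}
catch (OverflowException)
{
    Console.WriteLine("Overflow detected");
}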
checked vs. unchecked is also useful when you are doing integer math, especially incrementing operations where you know you will increment past UInt32.MaxValue and want it to harmlessly overflow back to 0.
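For instance, a hedged sketch of a deliberately wrapping counter (uint arithmetic on variables is unchecked by default, but the keyword makes the intent explicit and survives a project-wide /checked build):
uint sequence = uint.MaxValue;
sequence = unchecked(sequence + 1); // harmlessly wraps back to 0
Console.WriteLine(sequence);        // 0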
