I'm a little confused about the behavior of the ulong type. I understand it to be an unsigned 64-bit integer used for non-negative numbers (max value 18,446,744,073,709,551,615). I just started using it because I need really big positive numbers, but there's some strange behavior I don't understand around potential negative numbers.
ulong big = 1000000; //Perfectly legal
ulong negative = -1; //"Constant value '-1' cannot be converted to a 'ulong'" Makes sense
//Attempt to sneak around the -1 as a constant compile-time error
int negativeInt = -1;
ulong negativeUlongFail = negativeInt; //"Cannot implicitly convert type 'int' to 'ulong'.
//An explicit conversion exists (are you missing a cast?)"
//So I add casting
ulong negativeUlong = (ulong)negativeInt; //But this yields 18446744073709551615
//Try and get a negative number indirectly
ulong number = 0;
number = number - 10; //now number is 18446744073709551606, but no underflow error
What's going on? How come some underflow errors are caught and others aren't and don't even throw exceptions? Can I detect these?
I focused on getting negative numbers by underflowing, but I've seen similar things when getting numbers bigger than the max value and overflowing.
I'm not sure if they're technically errors or exceptions, so please forgive me if I've used incorrect terminology.
If you want the underflow (or overflow) exception to be thrown, try this instead:
checked
{
    ulong number = 0;
    number = number - 10;
}
By default no exception is thrown at all; the code runs in an unchecked context, so the value just wraps around.
See the official documentation:
https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/checked
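To answer the "Can I detect these?" part: in a checked context the wrap-around becomes a catchable OverflowException. A minimal sketch:
try
{
    checked
    {
        ulong number = 0;
        number = number - 10; // wraps silently in unchecked; throws here
    }
}
catch (OverflowException)
{
    Console.WriteLine("Underflow detected");
}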
Related
The following runs fine without error and diff is 1:
int max = int.MaxValue;
int min = int.MinValue;
//Console.WriteLine("Min is {0} and max is {1}", min, max);
int diff = min - max;
Console.WriteLine(diff);
Wouldn't all programs then be suspect? a+b is no longer the sum of a and b, where a and b are of type int. Sometimes it is, but sometimes it is the sum* of a, b, and 2*int.MinValue.
* Sum as in the ordinary English meaning of addition, ignoring any computer knowledge or word size
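For instance (values chosen to force a wrap in the default unchecked context):
int a = int.MaxValue; // 2147483647
int b = 1;
Console.WriteLine(a + b); // -2147483648, i.e. a + b + 2*int.MinValue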
In PowerShell, it looks better, but it still is not a hardware exception from the add operation. It appears to use a long before casting back to an int:
[int]$x = [int]::minvalue - [int]::maxvalue
Cannot convert value "-4294967295" to type "System.Int32". Error: "Value was either too large or too small for an Int32."
By default, overflow checking is turned off in C#. Values simply "wrap round" in the common way.
If you compiled the same code with /checked or used a checked { ... } block, it would throw an exception.
Depending on what you're doing, you may want checking or explicitly not want it. For example, in Noda Time we have overflow checking turned on by default, but explicitly turn it off for GetHashCode computations (where we expect overflow and have no problem with it) and computations which we know won't overflow, and where we don't want the (very slight) performance penalty of overflow checking.
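As an illustration of that last pattern, here is a minimal sketch; the type and fields are invented for the example, not Noda Time's actual code:
public struct SampleInstant
{
    private readonly int days;
    private readonly long nanoseconds;

    public SampleInstant(int days, long nanoseconds)
    {
        this.days = days;
        this.nanoseconds = nanoseconds;
    }

    public override int GetHashCode()
    {
        // Overflow while mixing hash values is expected and harmless,
        // so opt out of checking even if the project builds with /checked.
        unchecked
        {
            int hash = 17;
            hash = hash * 31 + days.GetHashCode();
            hash = hash * 31 + nanoseconds.GetHashCode();
            return hash;
        }
    }
}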
See the checked and unchecked pages in the C# reference, or section 7.6.12 of the C# language specification, for more details.
Unless specified otherwise, .NET will not check for numeric overflow when doing operations on numeric data types such as int.
You can enable checking either by building in checked mode (passing /checked to the compiler) or by using a checked block around a code segment:
checked
{
    int max = int.MaxValue;
    int i = max + max; // non-constant operands, so this compiles and throws OverflowException at runtime
    Console.WriteLine(i);
}
Consider the following code. The first five iterations go well, and then it goes into an infinite loop spamming 0's.
for (int i = 2; i < 100000; i *= i)
{
    Console.WriteLine(i);
}
I suspect it's because the integer overflows, and that the value it wraps to here happens to be 0.
How would you write this code so that it does what it is intended to do?
Edit: This is kind of a theoretical example; I'm just playing around to learn. Is there no way to do this without changing the datatype or checking for 0? It would be nice if it actually threw an exception.
I believe you're right that it overflows.
Rather than just telling you to use checked to force it to throw an exception, here is an explanation of WHY you get zeros after the 5th result.
Here is the series:
2
4
16
256
65536
4294967296 <- this won't fit into an int
In an int, 2,147,483,647 + 1 wraps around to -2,147,483,648.
The 6th value, 4,294,967,296, is exactly 2^32, so the low 32 bits of the result are all zero and it wraps to 0.
Your for loop's iterator clause then computes 0 *= 0 indefinitely, and since 0 < 100000 stays true, the loop never ends.
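You can verify that wrap directly. A constant 65536 * 65536 would be rejected at compile time, so use variables to stay in the default unchecked runtime context:
int x = 65536;
int y = x * x;        // 2^32: the low 32 bits are all zero, so the result wraps to 0
Console.WriteLine(y); // 0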
EDIT: As per your update, use the checked keyword to get an exception on the overflow:
checked
{
    for (int i = 2; i < 100000; i *= i)
    {
        Console.WriteLine(i);
    }
}
Otherwise, use a long or ulong instead of int, as both of those can safely store the result of 100000 * 100000.
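For instance, the same loop with a long exits normally, since the squares involved fit comfortably in 64 bits:
for (long i = 2; i < 100000; i *= i)
{
    Console.WriteLine(i); // 2, 4, 16, 256, 65536, then the condition fails and the loop ends
}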
Use checked if you want an overflow exception:
checked
{
    for (int i = 2; i < 100000; i *= i)
    {
        Console.WriteLine(i);
    }
}
The checked keyword is used to explicitly enable overflow checking for integral-type arithmetic operations and conversions.
By default, an expression that contains only constant values causes a compiler error if the expression produces a value that is outside the range of the destination type. If the expression contains one or more non-constant values, the compiler does not detect the overflow.
To my understanding, that should give you an overflow error, and when I write it like this:
public static void Main()
{
    Console.WriteLine(int.MaxValue - int.MinValue);
}
it does correctly give me an overflow error.
However:
public static void Main()
{
    Console.WriteLine(test());
}

public static Int32 test(int minimum = int.MinValue, int maximum = int.MaxValue)
{
    return maximum - minimum;
}
will output -1
Why does it do this? It should throw an error because it's clearly an overflow!
int.MaxValue - int.MinValue produces a value which an int cannot hold, so the number wraps around to -1.
It is like 2147483647 - (-2147483648) = 4294967295, which does not fit in an int.
Int32.MinValue Field
The value of this constant is -2,147,483,648; that is, hexadecimal 0x80000000.
And Int32.MaxValue Field
The value of this constant is 2,147,483,647; that is, hexadecimal 0x7FFFFFFF.
From MSDN
When integer overflow occurs, what happens depends on the execution context, which can be checked or unchecked. In a checked context, an OverflowException is thrown. In an unchecked context, the most significant bits of the result are discarded and execution continues. Thus, C# gives you the choice of handling or ignoring overflow.
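If you do want the method version to throw at runtime, wrapping the subtraction in a checked expression is enough; a minimal sketch:
public static Int32 test(int minimum = int.MinValue, int maximum = int.MaxValue)
{
    return checked(maximum - minimum); // throws OverflowException at runtime
}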
This is because of compile-time overflow checking of your code. The line
Console.WriteLine(int.MaxValue - int.MinValue);
would not actually error at runtime; it would simply write "-1". But because constant expressions are checked at compile time, you get the compile error "The operation overflows at compile time in checked mode".
To get around the compile-time overflow checking in this case you can do:
unchecked
{
    Console.WriteLine(int.MaxValue - int.MinValue);
}
which will run fine and output "-1".
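The same dodge works the other way: moving the constants into variables makes the expression non-constant, so it runs unchecked (and silently wraps) by default:
int max = int.MaxValue;
int min = int.MinValue;
Console.WriteLine(max - min); // -1 at runtime, no compile-time error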
The project-level setting that controls this is "unchecked" by default. You can turn on overflow checking by going to the project properties, Build tab, Advanced button; the popup allows you to turn on overflow checking. The .NET Fiddle tool that you link to seems to perform some additional static analysis that is preventing you from seeing the true out-of-the-box runtime behavior. (The error for your first code snippet above is "The operation overflows at compile time in checked mode." You aren't seeing a runtime error.)
I think it goes even further than overflows.
If I look at this:
Int64 max = Int32.MaxValue;
Console.WriteLine(max.ToString("X16")); // 000000007FFFFFFF
Int64 min = Int32.MinValue;
Console.WriteLine(min.ToString("X16")); // FFFFFFFF80000000
Int64 subtract = max - min;
Console.WriteLine(subtract.ToString("X16")); // 00000000FFFFFFFF <- not an overflow, since it's a 64-bit number
Int32 neg = -1;
Console.WriteLine(neg.ToString("X")); // FFFFFFFF
Here you see that if you just subtract the hex values in two's complement, you get (after truncating the leading 0's) the bit pattern that is -1 as a 32-bit number.
Two's complement arithmetic can be very fun: http://en.wikipedia.org/wiki/Two's_complement
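A small sketch of the same idea without the subtraction: the 32-bit pattern 0xFFFFFFFF reads as 4294967295 unsigned and as -1 signed:
uint bits = 0xFFFFFFFF;
Console.WriteLine(bits);                 // 4294967295
Console.WriteLine(unchecked((int)bits)); // -1, the same bits reinterpreted as signed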
I am doing:
Decimal production = 0;
Decimal expense = 5000;
Decimal.ToUInt64(production - expense);
But it throws an exception with the following error message:
"Value was either too large or too small for a UInt64."
Can someone give me a workaround for this?
Thanks!
Edit
In any case, I want the result as a positive number.
Problem: -5000m is a negative number, which is outside the range of UInt64 (an unsigned type).
Solution: use Int64 instead of UInt64 if you want to cope with negative numbers.
Note that you can just cast instead of calling Decimal.To...:
long x = (long) (production - expense);
Alternative: validate that the number is non-negative before trying to convert it, and deal with it however you deem appropriate.
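A sketch of that validation approach (the exception type here is just one reasonable choice):
decimal difference = production - expense;
if (difference < 0)
{
    throw new InvalidOperationException("Expense exceeds production.");
}
ulong result = Decimal.ToUInt64(difference);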
Very dodgy alternative: if you really just want the absolute value (which seems unlikely) you could use Math.Abs:
UInt64 alwaysNonNegative = Decimal.ToUInt64(Math.Abs(production - expense));
0 - 5000 will return -5000, and you are trying to convert to an unsigned integer, which cannot take negative values.
Try changing it to the signed version:
Decimal.ToInt64(production - expense);
UInt64 cannot store negative numbers, and the result of your calculation is negative; that's why the error occurs. Check the sign before using ToUInt64 and correct it via * -1, or use a signed Int64.
Use
var result = Decimal.ToInt64(production - expense);
Consider this
int i = 2147483647;
var n = i + 3;
i = n;
Console.WriteLine(i); // prints -2147483646 (1)
Console.WriteLine(n); // prints -2147483646 (2)
Console.WriteLine(n.GetType()); // prints System.Int32 (3)
I am confused by the following:
(1) How could int hold the value -2147483646? (int range = -2,147,483,648 to 2,147,483,647)
(2) Why does this print -2147483646 and not 2147483650? (The compiler should decide on a better type, as the int range is exceeded.)
(3) If it is converted somewhere, why does n.GetType() give System.Int32?
Edit 1: Made the correction, so now you will get what I am getting (sorry for that): changed
var n = i + 1;
to
var n = i + 3;
Edit 2: One more thing: if it is an overflow, why is an exception not raised?
Addition: since the overflow occurs, wouldn't it be right to set the type of var n in the statement var n = i + 3; to another type accordingly?
You are welcome to suggest a better title, as this is not making sense to... me, at least.
Thanks
Update: Poster fixed his question.
1) This output is expected because you added 3 to int.MaxValue, causing an overflow. In .NET, by default, this is a legal operation in unchecked code, giving a wrap-around to negative values, but if you add a checked block around the code it will throw an OverflowException instead.
2) The type of a variable declared with var is determined at compile time not runtime. It's a rule that adding two Int32s gives an Int32, not a UInt32, an Int64 or something else. So even though at runtime you can see that the result is too big for an Int32, it still has to return an Int32.
3) It's not converted to another type.
1) -2147483646 is bigger than -2,147,483,648, so it fits.
2) 2147483650 is out of range for an int.
3) int is an alias for Int32.
1)
There is no reason that an int could not hold the value -2147483646. It's within the range -2147483648..2147483647.
2)
The compiler chooses the data type of the variable to be the type of the result of the expression. The expression returns an int, and even if the compiler chose a larger data type for the variable, the expression would still return an int and you would get the same value as the result.
It's the operation in the expression that overflows; it's not the assignment of the result to the variable that overflows.
3)
It's not converted anywhere.
1) This is an overflow; your number wrapped around and went negative.
2) This isn't the compiler's job to catch, as a loop at runtime can cause the same thing.
3) int is an alias for System.Int32; they are equivalent in .NET.
This is because of the bit representation.
You use Int32, but the same goes for an 8-bit signed type (sbyte in C#):
the first bit holds the sign, and the remaining bits hold the value,
so with 7 value bits the largest positive number is 0111 1111 (127).
When you go one past that, you get 1000 0000: the sign bit is set, and in two's complement that pattern means -128, so the computer treats the number as negative.
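You can see that wrap-around at 8-bit width in C# with sbyte; a quick sketch:
sbyte small = sbyte.MaxValue;          // 127 = 0111 1111
small = unchecked((sbyte)(small + 1)); // small + 1 is computed as int, then truncated back to 8 bits
Console.WriteLine(small);              // -128 = 1000 0000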
Arithmetic operations in .NET don't change the actual type.
You start off with a (32-bit) int, and the + 3 isn't going to change that.
That's also why you get an unexpected round number when you do this:
int a = 2147483647;
double b = a / 4;
or
int a = 2147483647;
var b = a / 4;
for that matter.
EDIT:
There is no exception because .NET just lets the number overflow and wrap.
An OverflowException will only occur in a checked context, as Mark explains, when you set the compiler or the code up to generate the exception.
If you want an exception to be thrown, write
var n = checked(i + 3);
instead. That will check for overflow.
Also, in C#, the default setting is not to throw exceptions on overflow, but you can switch that option in your project's build properties.
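Putting that together, a minimal sketch of catching the overflow:
int i = 2147483647;
try
{
    var n = checked(i + 3); // throws instead of wrapping
    Console.WriteLine(n);
}
catch (OverflowException)
{
    Console.WriteLine("i + 3 overflowed an int");
}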
You could make this easier on us all by using hex notation.
Not everyone knows that the eighth Mersenne prime is 0x7FFFFFFF
Just sayin'