Issue with "arithmetic operation resulted in overflow" - c#

I have this expression
long balance = (long)answer.Find(DppGlobals.TAG_TI_BALANCE).Get_QWORD();
which raises an exception saying there was an overflow. The value on the right-hand side is of an unsigned type and has the value 18446744073708240732.
How do I avoid this exception - should I use unchecked?
PS: the equivalent C++ implementation returned balance = -1310984, and I need the same value here too.
Why is there such an exception?
By using unchecked I now indeed get -1310984 on the C# side as well. Can someone advise - am I losing data somehow?

Using unchecked will also crop your value, but you will not get an exception - you will silently get a wrong number. Consider changing your code rather than masking the exception.
long balance = unchecked((long)answer.Find(DppGlobals.TAG_TI_BALANCE).Get_QWORD());
Why is there such an exception?
It is a very good thing that you get an error. If you are working in a domain where everything is numbers, you can make a mistake and never notice it. So the runtime shows you your error.

Given the comments, it sounds like you do just need to use unchecked arithmetic - but you should be concerned about the use of ulong in your API. If your aim is to just propagate 8 bytes of data, and interpret it as either an unsigned integer or a signed integer depending on context, then you're fine - so long as nothing performs arithmetic on it in the "wrong" form.
It's important to understand why this happens though, and it's much easier to explain that with small numbers. I'll use byte and sbyte as an example. The range of byte is 0 to 255 inclusive. The range of sbyte is -128 to 127 inclusive. Both can store 256 different values. No problem.
Now, using unchecked conversions, it's fine to say:
byte a = GetByteFromSomewhere();
sbyte b = (sbyte) a;
StoreSByteSomewhere(b);
...
sbyte b = GetSByteFromStorage();
byte a = (byte) b;
If that's all you're doing, that's fine. An sbyte value of -1 will become a byte value of 255 and vice versa - basically it's just interpreting the same bit pattern in different ways.
But if you're performing other operations on the value when it's being handled as the "wrong" type, then you could get unexpected answers. For example, consider "dividing by 3". If you divide -9 by 3, you get -3, right? Whereas if you have:
sbyte original = -9;                   // bit pattern 11110111
byte converted = (byte) original;      // same bits, read as 247
byte divided = (byte) (converted / 3); // 247 / 3 == 82 in unsigned arithmetic
sbyte result = (sbyte) divided;        // 82
... then you end up with a result of 82 instead of the -3 you'd expect, because the division was performed on the unsigned interpretation of the bits.
Now that's all for unchecked conversions. When you have a checked conversion, it doesn't just interpret the bits - it treats the value as a number instead. So for example:
// In a checked context...
byte a = 128;
sbyte b = (sbyte) a; // Bang! Exception
That's because 128 (as a number) is outside the range of sbyte. This is what's happening in your case - the number 18446744073708240732 is outside the range of long, so you're getting an exception. The checked conversion is treating it as a number which can be range-checked, rather than an unchecked conversion just reinterpreting the bits as a long (which leads to the negative number you want).
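To see the two conversions side by side, here's a minimal sketch - using ulong.MaxValue purely for illustration rather than your actual value:
ulong big = ulong.MaxValue;           // all 64 bits set

long asSigned = unchecked((long)big); // same bits reinterpreted as signed
Console.WriteLine(asSigned);          // -1

try
{
    long boom = checked((long)big);   // range-checks the value as a number
}
catch (OverflowException)
{
    Console.WriteLine("Arithmetic operation resulted in an overflow.");
}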


Calculators working with larger numbers than 18446744073709551615

When I initialize a ulong with the value 18446744073709551615 and then add 1 to it and display it to the console, it displays 0, which is totally expected.
I know this question sounds stupid, but I have to ask it: if my computer has a 64-bit architecture CPU, how is my calculator able to work with larger numbers than 18446744073709551615?
I suppose floating-point has a lot to do here.
I would like to know exactly how this happens.
Thank you.
working with larger numbers than 18446744073709551615
"if my Computer has a 64-bit architecture CPU" --> The architecture bit size is largely irrelevant.
Consider how you are able to add 2 decimal digits whose sum is more than 9. There is a carry generated and then used when adding the next most significant decimal place.
The CPU can do the same but with base 18446744073709551616 instead of base 10. It uses a carry bit as well as a sign and overflow bit to perform extended math.
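To make the carry idea concrete, here is a small C# sketch (the Add128 helper and its (high, low) layout are purely illustrative, not how any real calculator is implemented) that adds two 128-bit numbers held as pairs of 64-bit words:
// Adds two 128-bit values, each held as a (high, low) pair of ulongs.
static (ulong High, ulong Low) Add128(ulong aHigh, ulong aLow, ulong bHigh, ulong bLow)
{
    ulong low = aLow + bLow;              // wraps modulo 2^64
    ulong carry = low < aLow ? 1UL : 0UL; // wrap-around means a carry was generated
    ulong high = aHigh + bHigh + carry;   // propagate the carry into the next word
    return (high, low);
}

var sum = Add128(0, ulong.MaxValue, 0, 1); // (1, 0): 18446744073709551616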
I suppose floating-point has a lot to do here.
This has nothing to do with floating point.
You say you're using ulong, which means you're using unsigned 64-bit arithmetic. The largest value you can store is therefore "all ones" for 64 bits - aka UInt64.MaxValue, which is what you've discovered: https://learn.microsoft.com/en-us/dotnet/api/system.uint64.maxvalue
If you want to store arbitrarily large numbers, there are APIs for that - for example BigInteger. However, arbitrary size comes at a cost, so it isn't the default, and it certainly isn't what you get when you use ulong (or double, or decimal, etc. - all the compiler-level numeric types have a fixed size).
So: consider using BigInteger
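A minimal sketch (assuming a project that references System.Numerics):
using System;
using System.Numerics;

BigInteger big = ulong.MaxValue; // implicit conversion from UInt64
big += 1;                        // one more than any ulong can hold
Console.WriteLine(big);          // 18446744073709551616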
Either way you have a 64-bit architecture processor that is limited to doing 64-bit math natively - your problem is a bit hard to explain without taking an explicit example of how this is solved with BigInteger, in the System.Numerics namespace (available in .NET Framework 4.8, for example). The basis is to 'decompose' the number into an array representation.
'Decompose' here has the mathematical meaning:
"express (a number or function) as a combination of simpler components."
Internally BigInteger uses an internal array (actually multiple internal constructs) and a helper class called BigIntegerBuilder. It can implicitly convert a UInt64 integer without problem; for even bigger numbers you can use the + operator, for example.
BigInteger bignum = new BigInteger(18446744073709551615);
bignum += 1;
You can read about the implicit operator here:
https://referencesource.microsoft.com/#System.Numerics/System/Numerics/BigInteger.cs
public static BigInteger operator +(BigInteger left, BigInteger right)
{
    left.AssertValid();
    right.AssertValid();

    if (right.IsZero) return left;
    if (left.IsZero) return right;

    int sign1 = +1;
    int sign2 = +1;

    BigIntegerBuilder reg1 = new BigIntegerBuilder(left, ref sign1);
    BigIntegerBuilder reg2 = new BigIntegerBuilder(right, ref sign2);

    if (sign1 == sign2)
        reg1.Add(ref reg2);
    else
        reg1.Sub(ref sign1, ref reg2);

    return reg1.GetInteger(sign1);
}
In the code above from ReferenceSource you can see that we use the BigIntegerBuilder to add the left and right parts, which are also BigInteger constructs.
Interestingly, it keeps its internal state in a private array called "_bits" - and that is the answer to your question: BigInteger keeps an array of 32-bit integers and is therefore able to handle big integers, even beyond 64 bits.
You can drop this code into a console application or LINQPad (which has the .Dump() method I use here) and inspect it (BindingFlags lives in System.Reflection):
BigInteger bignum = new BigInteger(18446744073709551615);
bignum.GetType().GetField("_bits",
    BindingFlags.NonPublic | BindingFlags.Instance).GetValue(bignum).Dump();
A detail about BigInteger is revealed in a comment in the source code of BigInteger on Reference Source:
// For values int.MinValue < n <= int.MaxValue, the value is stored in sign
// and _bits is null. For all other values, sign is +1 or -1 and the bits are in _bits
So for values that fit in an int, BigInteger stores the value in the _sign field; for all other values the _bits field is used.
Obviously, the internal array needs to be convertible into a representation in the decimal system (base-10) so humans can read it; the ToString() method converts the BigInteger to a string representation.
For a better in-depth understanding, consider doing .NET source stepping to walk through how the mathematics is carried out. But for a basic understanding: BigInteger uses an internal representation composed of an array of 32-bit values, which is transformed into a readable format, and that is what allows numbers bigger than even Int64.

Compiling in checked and unchecked context

What is the value of j?
Int32 i = Int32.MinValue;
Int32 j = -i;
Compiling in a checked context will throw an exception.
In an unchecked context we obtain the value Int32.MinValue.
But why is that?
Here are the min and max Int32 values:
                 Dec          Bin (first bit is the sign bit)
Int32.MinValue   -2147483648  10000000000000000000000000000000
Int32.MaxValue    2147483647  01111111111111111111111111111111
When you try to get -(-2147483648) with explicit overflow checking, you get an exception, because 2147483648 is bigger than the maximum allowed value for the int type.
Then why do you get MinValue when overflow is allowed? Because when Int32.MinValue is negated you get 2147483648, which has the binary representation 10000000000000000000000000000000 - but for a signed integer the first bit is the sign bit, so you get exactly Int32.MinValue back.
So the problem here is treating the first bit as the number's sign. If you assigned the result of negating Int32.MinValue to an unsigned integer, you would get the expected value of 2147483648:
Int32 i = Int32.MinValue;
UInt32 j = (UInt32)-i; // it has 32 bits and does not treat first bit as sign
This is an example of integer overflow. In a checked context, the integer overflow will be detected and converted to an exception, because the language designers decided so.
To explain integer overflow, you can do the calculation in binary by hand. To calculate -X, take X in binary, change all the 1's to 0's and 0's to 1's, then add 1 (the number 1, not a 1 bit).
Example:         5 = 00000000000000000000000000000101
flip all bits:       11111111111111111111111111111010
add one:             11111111111111111111111111111011  (which is -5)

Int32.MinValue     = 10000000000000000000000000000000
flip all bits:       01111111111111111111111111111111
add one:             10000000000000000000000000000000
If you take Int32.MinValue and negate it, it doesn't change. -Int32.MinValue can't fit in an int - if you do -(Int64)Int32.MinValue it will work as expected.
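As a small illustrative sketch (assuming the default unchecked compiler setting), here are the three cases together:
int i = int.MinValue;

int wrapped = unchecked(-i); // -2147483648: the bit pattern is unchanged
long widened = -(long)i;     //  2147483648: widen to 64 bits first, then negate

try
{
    int boom = checked(-i);  // throws: 2147483648 does not fit in an int
}
catch (OverflowException)
{
    Console.WriteLine("-(Int32.MinValue) overflows an int");
}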

What actually happens when a Byte overflows?

What actually happens when a Byte overflows?
Say we have
byte byte1 = 150; // 10010110
byte byte2 = 199; // 11000111
If we now do this addition
byte byte3 = byte1 + byte2;
I think we'll end up with byte3 = 93, but what actually happens? Did I overwrite some other memory somehow, or is this totally harmless?
It's quite simple. It just does the addition and ends up with a number wider than 8 bits. The ninth bit (a one) just 'falls off', and you are left with the remaining 8 bits, which form the number 93.
(yes it's harmless)
The top bits will be truncated. It is not harmful to any other memory, it is only harmful in terms of unintended results.
In C# if you have
checked { byte byte3 = (byte)(byte1 + byte2); }
It will throw an overflow exception. Code is compiled unchecked by default. As the other answers are saying, the value will 'wrap around', i.e. byte3 = (byte)((byte1 + byte2) & 0xFF);
The carry flag gets set... but besides the result not being what you expect, there should be no ill effects.
Typically (and the exact behaviour will depend on the language and platform), the result will be taken modulo-256. i.e. 150+199 = 349. 349 mod 256 = 93.
This shouldn't affect any other storage.
Since you have tagged your question C#, C++ and C, I'll answer about C and C++. In C++, overflow on signed types, including sbyte (which, I believe, corresponds to signed char in C/C++), results in undefined behavior. However, for unsigned types, such as byte (which is unsigned char in C++), the result is taken modulo 2^n, where n is the number of bits in the unsigned type. In C# the second rule holds, and the signed types generate an exception if they are in a checked block. I may be wrong in the C# part.
Overflow is harmless in C# - you won't overflow memory - you simply get only the last 8 bits of the result. If you want this to throw an exception, use the 'checked' keyword. Note also that byte + byte gives int, so you may need to cast back to byte.
The behavior depends on the language.
In C and C++, signed overflow is undefined and unsigned overflow has the behavior you mentioned (although there is no byte type).
In C#, you can use the checked keyword to explicitly say you want to receive an exception if there is overflow and the unchecked keyword to explicitly say you want to ignore it.
The leading bit just drops off.
An arithmetic overflow occurs: since 150+199=349, binary 1 0101 1101, the upper 1 bit is dropped and the byte becomes 0101 1101; i.e. the number of bits a byte can hold was exceeded.
No damage was done - e.g. the memory didn't overflow to another location.
Let's look at what actually happens (in C, assuming you've got an appropriate datatype - as some have pointed out, C doesn't have a "byte" datatype, but there are 8-bit datatypes which can be added). If these bytes are declared on the stack, they exist in main memory; at some point, the bytes will get copied to the processor for the operation (I'm skipping over several important steps, such as processor caching...). Once in the processor, they will be stored in registers; the processor will execute an add operation upon those two registers to add the data together. Here's where the cause of confusion occurs. The CPU will perform the add operation in the native (or sometimes, specified) datatype. Let's say the native type of the CPU is a 32-bit word (and that that datatype is what is used for the add operation); that means these bytes will be stored in 32-bit words with the upper 24 bits unset; the add operation will indeed do the overflow in the target 32-bit word. But (and here's the important bit) when the data is copied back from the register to the stack, only the lowest 8 bits (the byte) will be copied back to the target variable's location on the stack. (Note that there's some complexity involved with byte packing and the stack here as well.)
So, here's the upshot; the add causes an overflow (depending on the specific processor instruction chosen); the data, however, is copied out of the processor into a datatype of the appropriate size, so the overflow is unseen (and harmless, assuming a properly written compiler).
As far as C# goes, adding two values of type byte together results in a value of type int, which must then be cast back to byte.
Therefore your code sample will result in a compiler error without a cast back to byte, as in the following:
byte byte1 = 150; // 10010110
byte byte2 = 199; // 11000111
byte byte3 = (byte)(byte1 + byte2);
See MSDN for more details on this. Also, see the C# language specification, section 7.3.6 Numeric promotions.
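To tie the answers together, here is a short illustrative sketch of both behaviours (note the explicit cast, since byte1 + byte2 is an int):
byte byte1 = 150;
byte byte2 = 199;

byte wrapped = unchecked((byte)(byte1 + byte2));
Console.WriteLine(wrapped); // 93: only the low 8 bits of 349 survive

try
{
    byte boom = checked((byte)(byte1 + byte2)); // 349 is range-checked here
}
catch (OverflowException)
{
    Console.WriteLine("349 does not fit in a byte");
}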

int or uint or what

Consider this
int i = 2147483647;
var n = i + 3;
i = n;
Console.WriteLine(i); // prints -2147483646 (1)
Console.WriteLine(n); // prints -2147483646 (2)
Console.WriteLine(n.GetType()); // prints System.Int32 (3)
I am confused by the following:
(1) How can an int hold the value -2147483646? (int range = -2,147,483,648 to 2,147,483,647)
(2) Why does this print -2147483646 and not 2147483650? (The compiler should decide on a better type, as the int range is exceeded.)
(3) If it is converted somewhere, why does n.GetType() give System.Int32?
Edit1: Made the correction (sorry for that), so now you will get what I am getting: changed
var n = i + 1; to
var n = i + 3;
Edit2: One more thing - if it is an overflow, why is an exception not raised?
Addition: as the overflow occurs, would it not be right to set the type for
var n
in the statement var n = i + 3; to another type accordingly?
You are welcome to suggest a better title, as this one is not making sense to.... me at least.
Thanks
Update: Poster fixed his question.
1) This is output is expected because you added 3 to int.MaxValue causing an overflow. In .NET by default this is a legal operation in unchecked code giving a wrap-around to negative values, but if you add a checked block around the code it will throw an OverflowException instead.
2) The type of a variable declared with var is determined at compile time not runtime. It's a rule that adding two Int32s gives an Int32, not a UInt32, an Int64 or something else. So even though at runtime you can see that the result is too big for an Int32, it still has to return an Int32.
3) It's not converted to another type.
1) -2147483646 is bigger than -2,147,483,648
2) 2147483648 is out of range
3) int is an alias for Int32
1)
First of all, with your original i + 1 the value in the variable was -2147483648; it is the edited i + 3 that produces -2147483646. Run your test again and check the result.
There is no reason that an int could not hold the value -2147483646. It's within the range -2147483648..2147483647.
2)
The compiler chooses the data type of the variable to be the type of the result of the expression. The expression returns an int value, and even if the compiler chose a larger data type for the variable, the expression would still return an int and you would get the same value as the result.
It's the operation in the expression that overflows; the overflow doesn't happen when the result is assigned to the variable.
3)
It's not converted anywhere.
1) This is an overflow: your number wrapped around and went negative.
2) This isn't the compiler's job to catch, as a loop at runtime can cause the same thing.
3) int is an alias for System.Int32; they are equivalent in .NET.
This is because of the bit representation.
You use Int32, but the same goes for an 8-bit sbyte:
the first bit holds the sign, and the following bits hold the number,
so with the remaining 7 bits you can represent the numbers 0 through 127 (0111 1111).
When you go one step further you get 1000 0000: the sign bit is now set, and in two's complement the computer reads that bit pattern as -128 instead.
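You can watch the sign bit flip with a couple of lines - an illustrative sketch using sbyte, the 8-bit signed type:
sbyte s = 127;        // 0111 1111, sbyte.MaxValue
s++;                  // wraps around in unchecked code
Console.WriteLine(s); // -128
Console.WriteLine(Convert.ToString(unchecked((byte)s), 2)); // 10000000: the sign bit is now set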
Arithmetic operations in .NET don't change the actual type.
You start off with a (32-bit) int, and the +3 isn't going to change that.
That's also why you get an unexpectedly truncated number when you do this:
int a = 2147483647;
double b = a / 4;
or
int a = 2147483647;
var b = a / 4;
for that matter.
EDIT:
There is no exception because .NET simply wraps the number around.
An overflow exception will only occur if you opt in to overflow checking - as Mark explains, by creating a checked context.
If you want an exception to be thrown, write
var n = checked(i + 3);
instead. That will check for overflow.
Also, in C#, the default setting is to not throw exceptions on overflow, but you can switch that option in your project's build properties ("Check for arithmetic overflow/underflow", the /checked compiler option).
You could make this easier on us all by using hex notation.
Not everyone knows that the eighth Mersenne prime is 0x7FFFFFFF
Just sayin'

Why is the result of a subtraction of an Int16 parameter from an Int16 variable an Int32? [duplicate]

Possible Duplicate:
byte + byte = int… why?
I have a method like this:
void Method(short parameter)
{
    short localVariable = 0;
    var result = localVariable - parameter;
}
Why is the result an Int32 instead of an Int16?
It's not just subtraction: there simply exists no short (or byte/sbyte) arithmetic.
short a = 2, b = 3;
short c = a + b;
Will give the error that it cannot convert int (a+b) to short (c).
One more reason to almost never use short.
Additional: in any calculation, short and sbyte will always be 'widened' to int, and the same happens to ushort and byte. This behavior goes back to K&R C (and probably is even older than that).
The (old) reason for this was, afaik, efficiency and overflow problems when dealing with char. That last reason doesn't hold as strongly for C# anymore, where a char is 16 bits and an int is not implicitly convertible to it. But it is very fortunate that C# numerical expressions remain compatible with C and C++ to a very high degree.
All operations with integral types smaller than Int32 are widened to 32 bits before calculation by default. The reason the result is Int32 is simply to leave it as it is after calculation. If you check the MSIL arithmetic opcodes, the only integral numeric types they operate on are Int32 and Int64. It's "by design".
If you want the result back in Int16 format, it is irrelevant whether you perform the cast in code or the compiler (hypothetically) emits the conversion "under the hood".
Also, the example above can easily be solved with a cast:
short a = 2, b = 3;
short c = (short) (a + b);
The two numbers would expand to 32 bits, get subtracted, then truncated back to 16 bits, which is how MS intended it to be.
The advantage of using short (or byte) is primarily storage in cases where you have massive amounts of data (graphical data, streaming, etc.)
P.S. Oh, and the article is "a" for words whose pronunciation starts with a consonant, and "an" for words whose pronounced form starts with a vowel. A number, AN int. ;)
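As a quick check of the promotion described above (a small sketch):
short a = 2, b = 3;
object sum = a + b;               // the addition itself is performed on ints
Console.WriteLine(sum.GetType()); // System.Int32
short c = (short)(a + b);         // the cast narrows the result back to 16 bits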
The other answers given within this thread, as well as the discussions given here are instructive:
(1) Why is a cast required for byte subtraction in C#?
(2) byte + byte = int… why?
But just to throw another wrinkle into it, it can depend on which operators you use. The increment (++) and decrement (--) operators as well as the addition assignment (+=) and subtraction assignment (-=) operators are overloaded for a variety of numeric types, and they perform the extra step of converting the result back to the operand's type when returning the result.
For example, using short:
short s = 0;
s++; // <-- Ok
s += 1; // <-- Ok
s = s + 1; // <-- Compile time error!
s = s + s; // <-- Compile time error!
Using byte:
byte b = 0;
b++; // <-- Ok
b += 1; // <-- Ok
b = b + 1; // <-- Compile time error!
b = b + b; // <-- Compile time error!
If they didn't do it this way, calls using the increment operator (++) would be impossible and calls to the addition assignment operator would be awkward at best, e.g.:
short s;
s += (short)1;
Anyway, just another aspect to this whole discussion...
I think it's done automatically to avoid overflow;
let's say you do something like this:
short result = short.MaxValue + short.MaxValue;
The result clearly wouldn't fit in a short.
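For illustration, forcing the result back into a short shows the wrap-around (a minimal sketch relying on the default unchecked setting):
short max = short.MaxValue;                // 32767
short sum = unchecked((short)(max + max)); // 65534 truncated to 16 bits
Console.WriteLine(sum);                    // -2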
One thing I don't understand is why not do it for Int32 too, which would then automatically convert to long?
The effect you are seeing...
short - short = int
...is discussed extensively in this Stack Overflow question: byte + byte = int… why?
There is a lot of good information and some interesting discussions as to why it is that way.
Here is a highly-voted answer:
I believe it's basically for the sake of performance. (In terms of "why it happens at all", it's because there aren't any operators defined by C# for arithmetic with byte, sbyte, short or ushort, just as others have said. This answer is about why those operators aren't defined.)
Processors have native operations to do arithmetic with 32 bits very quickly. Doing the conversion back from the result to a byte automatically could be done, but would result in performance penalties in the case where you don't actually want that behaviour.
-- Jon Skeet
