strange no error within C# application [duplicate] - c#

I have a C# application containing this code:
public static void Main()
{
    int i = 2147483647;
    int j = i + 1;
    Console.WriteLine(j);
    Console.ReadKey();
}
The result is: -2147483648
I know that every int must be < 2147483648. So:
Why don't I get a compilation or runtime error, like in this example?
What is the reason for the negative sign?
Thanks.

The compiler defaults to unchecked arithmetic; you have simply overflowed and wrapped around, thanks to two's-complement storage.
This fails at runtime:
public static void Main()
{
    int i = 2147483647;
    int j = checked(i + 1); // <==== note "checked"
    Console.WriteLine(j);
    Console.ReadKey();
}
This can also be enabled globally with a compiler switch (the "check for arithmetic overflow" project setting).

As Christos says, the negative sign comes from integer overflow. The reason you do not get an error is that the compiler does not evaluate non-constant expressions to check for overflow.
 0111 1111 1111 1111 1111 1111 1111 1111   2^31 - 1
+0000 0000 0000 0000 0000 0000 0000 0001          1
=1000 0000 0000 0000 0000 0000 0000 0000      -2^31
The reason is that the leftmost bit is the sign bit: it determines whether the int is positive or negative (0 means positive, 1 means negative). If you add one to the largest representable number, you flip the sign bit and get the smallest representable number. This happens because integers use two's-complement storage.
To check if the value overflows, do:
int j = checked(i + 1);
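For instance, a minimal sketch (variable names are illustrative) that catches the resulting exception:

int i = int.MaxValue;
try
{
    int j = checked(i + 1); // throws OverflowException at runtime
    Console.WriteLine(j);
}
catch (OverflowException)
{
    Console.WriteLine("i + 1 overflowed");
}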

Why don't I get a compilation or runtime error?
Because the compiler can determine that you have assigned a value larger than int.MaxValue to the variable when the value is hard-coded. But the compiler does not execute your code, so for i + 1 it cannot determine that the result of the calculation would be greater than int.MaxValue.
What is the reason for the negative sign?
It is because of integer overflow.
See: checked (C# Reference)
By default, an expression that contains only constant values causes
a compiler error if the expression produces a value that is outside
the range of the destination type. If the expression contains one or
more non-constant values, the compiler does not detect the overflow.
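A short sketch of the difference that quote describes (names are illustrative):

// Constant expression: the compiler evaluates it and rejects the overflow.
// int a = 2147483647 + 1;    // compile-time error (CS0220)

// Non-constant expression: the overflow is not detected at compile time.
int i = 2147483647;
int j = i + 1;                // compiles; wraps to -2147483648 at runtime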

What is the reason for the negative sign?
You get a negative sign because you have exceeded the maximum integer value, and the next integer after it is the lowest integer that can be represented.
Why don't I get a compilation or runtime error?
You don't have a compilation error because this is not an error, and it is not a runtime error either. You just add one to i at runtime. Since the value of i is the maximum value that can be stored in a variable of type int, and because of the circular (wrap-around) nature of integer arithmetic, you get the lowest value that can be stored in a variable of type int.
(A variable of type int stores a 32-bit integer.)
Furthermore, by default, integer operations in C# don't throw exceptions on overflow. You can change this either in the project settings or with a checked statement, as has already been pointed out here.
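For example, the statement form wraps a whole block; this is a sketch, and the project-wide equivalent is the "check for arithmetic overflow" build option:

int i = int.MaxValue;
checked
{
    i = i + 1; // throws OverflowException instead of wrapping
}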

Related

How does logical XOR work? [duplicate]

I understand that for boolean values, exclusive OR says the output is on if the inputs are different.
But how does it work on non-boolean values? In C# or JavaScript, why is the value 10 for the code below? Can anyone explain this for me, please?
Console.WriteLine(9^3);
I get the impression you are thinking in purely logical terms, where the result must be true or false, 1 or 0. The ^ operator does act this way, but as a bitwise operator it does it per bit, one at a time, instead of on the whole value at once. It's not that 9 and 3 are both "true", so the result must be false. It's that 9 is 1001 and 3 is 0011, and when you XOR each pair of corresponding bits you get 1010, which is 10:
  1001 (9)
^ 0011 (3)
------
  1010 (10)
Bitwise operators perform their operation on the bits stored in memory.
Hence, take the equivalent binary value of each decimal number and perform the operation bit by bit:
9 --- binary value ---> 0000 1001
3 --- binary value ---> 0000 0011
Perform XOR (^) ------> 0000 1010 --- decimal value ---> 10
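A quick sketch verifying this in C#:

int a = 9;                                       // 1001 in binary
int b = 3;                                       // 0011 in binary
int result = a ^ b;                              // 1010 in binary
Console.WriteLine(result);                       // 10
Console.WriteLine(Convert.ToString(result, 2));  // "1010"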

Signed vs Unsigned numbers 0xFFFFFFFF ambiguity? [closed]

As you can see here, I have an unsigned variable and a signed one, both with the same binary value but different decimal results.
uint unsigned = 0xFFFFFFFF;              // 4,294,967,295
int signed = unchecked((int)0xFFFFFFFF); // -1 (the cast is needed for this to compile)
I am really confused! How does the machine tell the difference between signed and unsigned? After all, those are 32 bits, all turned on. Is there a CPU flag used exactly for this? How does the CPU know the difference between an unsigned and a signed number? I have been trying to understand the concept for hours!
Or... another example... let's take 4 bits:
// 1 1 0 1 <--- -3
// 0 0 1 0 <--- 2
If we subtract 2 from -3, the result will be:
// 1 0 1 1 <--- -5 or 11???
The CPU will yield that result, but how does my application know whether it is a -5 or an 11? Is the expected result type known at compile time?
another example...
int negative = -1;
Console.WriteLine(negative.ToString("X")); //output FFFFFFFF
uint positive = 0xFFFFFFFF;
Console.WriteLine(positive.ToString("X")); //output FFFFFFFF
Later edit:
I played around with this a little more and I think I've got it; please correct me if I'm wrong:
We have the following example:
sbyte positive = 15;               // 0x0F (0000 1111)
sbyte negative = (sbyte)-positive; // 0xF1 (1111 0001)
sbyte anotherPositive = 7;         // 0x07 (0000 0111)
sbyte result = (sbyte)(negative + anotherPositive);
// The CPU adds the raw bits:   1111 0001
//                            + 0000 0111
//                            = 1111 1000
// As an unsigned value the result is 248, but our variable is
// SIGNED and the sign bit is on (1), so the value is interpreted as -8.
I am really confused! How does the machine make the difference at a low level? After all, those are 32 bits, all turned on.
The machine doesn't; the compiler does. It is the compiler that knows the types of signed and unsigned variables (ignoring for a moment the fact that both signed and unsigned are keywords in C and C++). Therefore, the compiler knows what instructions to generate and what functions to call based on those types.
The distinction between the types is made at compile time, changing the interpretation of possibly identical raw data based on its compile-time type.
Consider an example of printing the value of a variable. When you write
cout << mySignedVariable << " " << myUnsignedVariable << endl;
the compiler sees two overloads of << operator being applied:
The first << makes a call to ostream& operator<< (int val);
The second << makes a call to ostream& operator<< (unsigned int val);
Once the code reaches the proper operator implementation, you are done: the code generated for the specific overload has the information on how to handle the signed or unsigned value "baked into" its machine code. The implementation that takes an int uses signed machine instructions for comparisons, divisions, multiplications, etc. while the unsigned implementation uses different instructions, producing the desired results.
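The same idea shows up in C# (a sketch): Console.WriteLine has separate int and uint overloads, and the compile-time type picks between them:

int s = unchecked((int)0xFFFFFFFF); // the same 32 bits in both variables
uint u = 0xFFFFFFFF;
Console.WriteLine(s);               // resolves to WriteLine(int)  -> -1
Console.WriteLine(u);               // resolves to WriteLine(uint) -> 4294967295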
It's the job of the language's type system to interpret what it sees.
All the CPU really sees is a bunch of zeros and ones.
Interestingly, two's complement arithmetic can use the same instructions as unsigned arithmetic. One's complement arithmetic requires an annoying complementing subtractor.
The computer doesn't care; as far as it is concerned, they are both the same. It is the compiler and the language runtime running on the computer that decide how to interpret and handle the bit array, and they know whether it is a uint, an int, a double, a float, a Unicode char, etc.
That's why managed memory is so important: if your program writes a uint to a memory address and another program reads it as a char, you get all sorts of strange behaviour. This can be seen when you open a binary file in Notepad.
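To see the two readings of the same bytes in C#, a minimal sketch using BitConverter:

// The same four bytes, interpreted two ways
byte[] bytes = { 0xFF, 0xFF, 0xFF, 0xFF };
Console.WriteLine(BitConverter.ToInt32(bytes, 0));  // -1
Console.WriteLine(BitConverter.ToUInt32(bytes, 0)); // 4294967295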

Why does a 32-bit signed integer's maximum value change to the minimum value after incrementing by 1 in C#?

If I have an integer variable assigned its maximum value, which is 2,147,483,647 for a 32-bit integer, and I increment it by 1, it turns into the negative value -2,147,483,648.
Code:
int i = int.MaxValue; // i = 2,147,483,647
i = i + 1;
Response.Write(i); // i = -2,147,483,648
Can anyone explain this to me?
I could not find the exact reason for this change in values.
This is just integer overflow, where the value effectively leaks into the sign bit. It's simpler to reason about with sbyte, for example. Think about the bitwise representations of 127 and -128 as signed bytes:
 127: 01111111
-128: 10000000
Basically the addition is performed as if with an infinite range, and then the result is truncated to the appropriate number of bits, and that value "interpreted" according to its type.
Note that this is all if you're executing in an unchecked context. In a checked context, an OverflowException will be thrown instead.
From section 7.8.4 of the C# 5 specification:
In a checked context, if the sum is outside the range of the result type, a System.OverflowException is thrown. In an unchecked context, overflows are not reported and any significant high-order bits outside the range of the result type are discarded.
In a signed int, the first bit shows the sign and the rest show the number.
So in a 32-bit int the first bit is the sign bit; int.MaxValue is 2147483647, or 01111111111111111111111111111111.
If we increment this number by 1, it becomes 10000000000000000000000000000000, which is -2147483648.
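A sketch that prints the bits before and after the increment:

int i = int.MaxValue;
Console.WriteLine(Convert.ToString(i, 2)); // 31 ones
i = i + 1;                                 // wraps in the default unchecked context
Console.WriteLine(Convert.ToString(i, 2)); // 10000000000000000000000000000000
Console.WriteLine(i);                      // -2147483648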

Compiling in checked and unchecked context

What is the value of j?
Int32 i = Int32.MinValue;
Int32 j = -i;
Compiling in a checked context will throw an exception.
In an unchecked context we obtain the value Int32.MinValue.
But why is that?
Here are the min and max Int32 values:
Dec Bin (first bit is a sign bit)
Int32.MinValue -2147483648 10000000000000000000000000000000
Int32.MaxValue 2147483647 01111111111111111111111111111111
When you try to compute -(-2147483648) with explicit overflow checking, you get an exception, because 2147483648 is bigger than the maximum allowed value for the int type.
Then why do you get MinValue when overflow is allowed? Because when Int32.MinValue is negated you get 2147483648, which has the binary representation 10000000000000000000000000000000; but with a signed integer the first bit is the sign bit, so you get exactly Int32.MinValue back.
So the problem here is treating the first bit as the number's sign. If you assign the result of negating Int32.MinValue to an unsigned integer, you get the value 2147483648, as expected:
Int32 i = Int32.MinValue;
UInt32 j = (UInt32)-i; // uint has 32 bits and does not treat the first bit as a sign bit
This is an example of integer overflow. In a checked context, the integer overflow is detected and converted to an exception, because the language designers decided so.
To explain integer overflow, you can do the calculation in binary by hand. To calculate -x, take x in binary, change all the 1s to 0s and the 0s to 1s, then add 1 (the number 1, not a 1 bit).
Example: 5 = 00000000000000000000000000000101
flip all bits: 11111111111111111111111111111010
add one: 11111111111111111111111111111011 which is -5
Int32.MinValue = 10000000000000000000000000000000
flip all bits: 01111111111111111111111111111111
add one: 10000000000000000000000000000000
If you take Int32.MinValue and negate it, it doesn't change. -Int32.MinValue can't fit in an int; if you compute -(Int64)Int32.MinValue instead, it works as expected.
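A sketch of that widening workaround:

int i = int.MinValue;
// -i would overflow int, so widen to long before negating:
long j = -(long)i;
Console.WriteLine(j); // 2147483648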

How to convert negative number to positive by |= Operator in C#?

We all know that the highest bit of an Int32 defines its sign: 1 indicates that it's negative and 0 that it's positive. Can I convert a negative number to a positive one by changing its highest bit?
I tried to do that using the following code:
i |= Int32.MaxValue;
But it doesn't work.
Why don't you just use the Math.Abs(yourInt) method? I don't see the necessity to use bitwise operations here.
If you are just looking for a bitwise way to do this (like an interview question, etc), you need to negate the number (bitwise) and add 1:
int x = -13;
int positiveX = ~x + 1;
This will flip the sign whether it's positive or negative. As a small caveat, this will NOT work if x is int.MinValue, since the negative range is one larger than the positive range.
Of course, in real world code I'd just use Math.Abs() as already mentioned...
The most significant bit defines its sign, true. But that's not everything:
To convert a positive number to a negative one, you have to:
Invert the bits (for example, +1, which is 0000 0001 in binary, turns into 1111 1110)
Add 1 (1111 1110 turns into 1111 1111, which is -1)
That process is known as two's complement.
Reversing the process is equally simple:
Subtract 1 (for example, -1, which is 1111 1111, turns into 1111 1110)
Invert the bits (1111 1110 turns into 0000 0001, which is +1 again)
As you can see, this operation is impossible to implement using the binary OR operator. You need bitwise NOT and an add or subtract.
The above examples use 8-bit integers, but the process works exactly the same for all integers. Floating-point numbers, however, use only a sign bit.
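Both directions as a sketch in C#, where ~ is the bitwise NOT operator:

int x = 13;
int negated = ~x + 1;          // invert the bits, then add 1  -> -13
int restored = ~(negated - 1); // subtract 1, then invert back -> 13
Console.WriteLine(negated);
Console.WriteLine(restored);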
If you're talking about using bitwise operations, it won't work like that. I'm assuming you were thinking you'd flip the highest bit (the sign flag) to get the negative, but it won't work as you expect.
The number 6 is represented as 00000000 00000000 00000000 00000110 in a 32-bit signed integer. If you flip the highest bit (the sign bit) to 1, you'll get 10000000 00000000 00000000 00000110, which is -2147483642 in decimal. Not exactly what you expected, I should imagine. This is because negative numbers are stored in "negative logic", which means that 11111111 11111111 11111111 11111111 is -1.
If you flip every bit in a positive value x, you get -(x+1). For example:
00000000 00000000 00000000 00000110 = 6
11111111 11111111 11111111 11111001 = -7
You've still got to use an addition (or subtraction) to produce the correct result, though.
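A one-line check of that identity:

int x = 6;
Console.WriteLine(~x);     // -7, i.e. -(x + 1)
Console.WriteLine(~x + 1); // -6, the actual negation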
I just solved this by doing:
int a = intToSolve; // whatever int you want
int b = a < 0 ? a * -1 : a;
Doing this will output only positive ints.
The other way around would be:
int a = intToSolve; // same, but from positive to negative
int b = a < 0 ? a : a * -1;
What's wrong with Math.Abs(i) if you want to go from negative to positive, or -1 * i if you want to go both ways?
It's impossible with the |= operator. It cannot unset bits. And since the sign bit is set on negative numbers you can't unset it.
Your quest is, sadly, futile. The bitwise OR operator will not be able to arbitrarily flip a bit. You can set a bit, but if that bit is already set, OR will not be able to clear it.
You absolutely cannot, since the negative is the two's complement of the original. So even though it is true that the MSB of a negative number is 1, setting that bit alone is not enough to obtain the negative. You must invert all the bits and add one.
You can also try a simpler way: changeTime = changeTime >= 0 ? changeTime : -changeTime;
