I would expect that, due to a NOT, 00000001 would turn into 11111110, otherwise known as 254, or -126 if counting the far-left bit as the sign when sign-and-magnitude is used.
Even in the case of two's complement, I would expect the answer to be -127.
Why is it that the result is -2?
In two's complement:
-x = ~x + 1
By subtracting one from both sides we can see that:
~x = -x - 1
And so in your example, if we set x = 1 we get:
~1 = -1 - 1 = -2
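You can verify the identity directly in C# (a throwaway check of my own, not part of the original answer):
// Check that ~x == -x - 1 holds for a few sample values.
foreach (int x in new[] { 0, 1, 42, -7 })
{
    Console.WriteLine($"x={x}: ~x={~x}, -x-1={-x - 1}"); // the two columns always match
}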
Consider how the numbers wrap around.
If we start with 00000010 (2) and take away one then it is:
00000010
- 00000001
---------
00000001
Which is 1. We "borrow 1" from the column to the left just as we do with decimal subtraction, except that because it's binary 10 - 1 is 1 rather than 9.
Take 1 away again and we of course get zero:
00000001
- 00000001
---------
00000000
Now, take 1 away from that, and we're borrowing 1 from the column to the left every time, and that borrowing wraps us around, so 0 - 1 = -1 is:
00000000
- 00000001
---------
11111111
So -1 is all-ones.
This is even easier to see in the other direction, in that 11111111 plus one must be 00000000 as it keeps carrying one until it is lost to the left, so if x is 11111111 then it must be the case that x + 1 == 0, so it must be -1.
Take away another one and we have:
11111111
- 00000001
---------
11111110
So -2 is 11111110, and of course ~1 means flipping every bit of 00000001, which is also 11111110. So ~1 must be -2.
Another factor to note here, though, is that arithmetic and complements in C# always promote anything smaller than int up to int. For a byte the value 11111110 is 254, but because ~ promotes to int first you get -2 rather than 254.
byte b = 1;
var i = ~b; // i is an int, and is -2
b = unchecked((byte)~b); // Forced back into a byte, now is 254
To convert a negative two's-complement number to its decimal representation we have to:
start scanning the bitstring from right to left, until the first '1' is encountered
start inverting every bit to the left of that first '1'
Thus, in 11111110 we see the sign bit is 1 (negative number), and the above method yields the number 00000010, which is a decimal 2. In total, we thus get -2.
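As a rough sketch (my own illustration of the rule above; the helper name is hypothetical), the scan-and-invert method for an 8-bit pattern could look like this in C#:
// Decode an 8-bit two's-complement pattern by scanning right-to-left for the
// first 1 and inverting every bit to the left of it.
static int DecodeTwosComplement8(byte pattern)
{
    if ((pattern & 0x80) == 0)
        return pattern;                           // sign bit clear: the pattern is the value
    int firstOne = 0;
    while (((pattern >> firstOne) & 1) == 0)      // scan from the right for the first 1
        firstOne++;
    int invertMask = ~((1 << (firstOne + 1)) - 1) & 0xFF;  // every bit left of that 1
    int magnitude = (pattern ^ invertMask) & 0xFF;
    return -magnitude;                            // sign bit was 1, so the value is negative
}
For example, DecodeTwosComplement8(0b11111110) returns -2, matching the worked example above.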
Note: I think some commenters misunderstand my question as saying I don't understand integer division versus floating-point division. More clarification: I expected -1/2 == -1 >> 1 == 0, but in fact -1 >> 1 == -1.
I'm learning two's complement. I understand that a special thing about bit shifting in the two's-complement context is that right shifting needs to preserve the sign bit, so right shifting a negative number should fill in 1s instead of 0s, while left shifting should always fill in 0s. This is explained in the Wikipedia article.
According to the article, the motivation behind this is to maintain the equivalence between the bit-shifting operations and the corresponding multiplication or division by 2. However, a special case I immediately noticed is -1. Under the above-mentioned rule, -1 >> 1 does not equal -1/2.
My question is how should I understand this? And what precautions should I take when applying bit shifts in optimization of multiplication and division?
Here's some C# code (it should be equivalent in other languages) illustrating what I mean:
class Program
{
    static void Main(string[] args)
    {
        foreach (int x in new[] { 0, 1, 2, 3, -1, -2, -3 })
        {
            int a = x >> 1;
            int b = x / 2;
            Console.WriteLine($"Number:{x}, x>>1: {a}, x/2: {b}");
        }
    }
}
This produces the output of:
Number:0, x>>1: 0, x/2: 0
Number:1, x>>1: 0, x/2: 0
Number:2, x>>1: 1, x/2: 1
Number:3, x>>1: 1, x/2: 1
Number:-1, x>>1: -1, x/2: 0
Number:-2, x>>1: -1, x/2: -1
Number:-3, x>>1: -2, x/2: -1
You should not use shift operations in your code if you are trying to do division. The optimizer can figure this out better than you can. Of course, if you really know what you are doing, you are using unsigned integers exclusively, and you are writing in assembly, go ahead. Until then, just use the normal math operators for normal math operations.
If you are asking why -1 >> 1 == -1, well, that's easy. The value negative one looks like all ones in binary, or 0xFFFFFFFF in hex. Shift the bits to the right, shift a new 1 into the empty hole on the left, and you are left with exactly what you started with!
The results are still consistent. The effect that a right shift by 1 has is division by 2 rounded down.
In the case of an odd positive value, the "extra" 0.5 gets dropped. In the case of an odd negative value, it goes the other way.
In the examples you give above, half of -1 is -0.5, rounded down to -1. Similarly, half of -3 is -1.5, rounded down to -2.
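One quick way to convince yourself (my own check, not from the answer above) is to compare the shift against an explicit floor:
// x >> 1 matches Math.Floor(x / 2.0), while x / 2 truncates toward zero.
foreach (int x in new[] { 3, -1, -3 })
{
    Console.WriteLine($"{x}: x>>1 = {x >> 1}, floor = {(int)Math.Floor(x / 2.0)}, x/2 = {x / 2}");
}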
Why does -1 >> 1 == -1?
Whenever you shift, the machine must fill in a missing value. We'll use four bits just to keep things simple. When you shift left, the bit that must be replaced is the trailing bit:
0101 << 1 // equivalent of 5 * 2
101_ // This is filled in with a zero
1010 // 5 * 2 == 10
When you shift right, the leading bit must be replaced. But since the leading bit determines the sign in signed two's complement, we don't want to lose that sign (shifting right, or integer-dividing by some power of 2, should never cause a negative number to become positive or vice versa). So the replacement value is whatever the leading (sign) bit already was:
0111 >> 1 // equivalent of 7 intdiv 2
_011 // Signed, so this is filled in with a zero
0011 // 7 intdiv 2 == 3
1111 >> 1 // equivalent of -1 intdiv 2, kinda
_111 // Signed, so this is filled in with a 1
1111 // -1 intdiv 2 == -1
However, if this were an unsigned representation, the leading bit would simply be filled in with a zero:
1111 >> 1 // equivalent of 15 intdiv 2
_111 // Unsigned, so this is filled in with a 0
0111 // 15 intdiv 2 == 7
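In C# you can see both fill rules by switching between int and uint; the compiler picks an arithmetic or logical shift based on the operand type (a small illustration of my own):
int s = -1;                 // bit pattern 11111111 11111111 11111111 11111111
uint u = 0xFFFFFFFF;        // the same bit pattern, but unsigned
Console.WriteLine(s >> 1);  // -1: arithmetic shift, fills with the sign bit
Console.WriteLine(u >> 1);  // 2147483647: logical shift, fills with zero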
Further reading: https://msdn.microsoft.com/en-us/library/336xbhcz.aspx
The Wikipedia article is wrong for C. This statement: "These rules preserve the common semantics that left shifts multiply the number by two and right shifts divide the number by two." is not correct for C when looking at odd negative integers.
For integer division, C uses rounding toward zero (or "truncation toward zero", as the standard puts it), i.e. if the real result is 1.5 the integer result will be 1. If the real result is -1.5 the integer result will be -1.
For C, the standard doesn't specify what should happen to negative integers when doing a right shift - it is implementation-defined.
If your system uses two's complement, right shift doesn't work the same way as division. In general, shifting has nothing to do with rounding. However, if you want to look at a two's-complement right shift as a division, you have to notice that it rounds toward minus infinity. That is: if the real result is 1.5 the integer result will be 1. If the real result is -1.5 the integer result will be -2.
Conclusion: Right shift is not the same as division by 2 for odd negative integers.
Your direct question:
-1 >> 1 is ??
If you consider right shift as a division by 2, the real result would be -0.5, but since it rounds toward minus infinity, the result will be -1.
But as stated - right shift is not a division by 2. You have to look at the bit level.
At the bit level it is simply because the bit pattern for -1 doesn't change.
In two's complement -1 is an all-ones bit pattern, e.g.
int a = -1; // Pattern 11111111.11111111.11111111.11111111
When you right shift a two's-complement integer holding -1, you simply get the same bit pattern and therefore the same value. You shift out a 1 at the LSB and shift in a 1 at the MSB, so the binary pattern stays the same.
So what happens to -3:
int a = -3; // Pattern 11111111.11111111.11111111.11111101
a = a >> 1; // Pattern 11111111.11111111.11111111.11111110 = -2
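If you want to see those patterns for yourself, Convert.ToString with base 2 prints the 32-bit two's-complement pattern (a small aside of my own, not from the answer above):
int a = -3;
Console.WriteLine(Convert.ToString(a, 2));      // 11111111111111111111111111111101
Console.WriteLine(Convert.ToString(a >> 1, 2)); // 11111111111111111111111111111110, i.e. -2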
Consider this case:
-1 = 0b11 (with many more 1s to the left)
-1 >> 1 => 0b11 >> 1 => 0b11 (still many 1s on the left) (= -1)
If you look back at the -3 >> 1 case (have your computer's calculator ready in programmer mode),
you should see 0b111101 >> 1 become 0b111110 (-2).
Hello everyone, I need a little help understanding the logic behind a bit-range-swapping algorithm.
The "program" swaps given number of consecutive bits in a given positions and It works perfectly , but I need to understand the logic behind it in order to move on to other topics.
Here is the source code for the full "program": http://pastebin.com/ihVpseE1 . I need someone to tell me if I am on the right track so far and to clarify one part of the code that I find difficult to understand.
temp = ((number >> firstPosition) ^ (number >> secondPosition)) & ((1U << numberOfBits) - 1);
result = number ^ ((temp << firstPosition) | (temp << secondPosition));
(number >> firstPosition) moves the binary representation of the given uint number (5351) to the right (>>) 3 times (firstPosition).
So 00000000 00000000 00010100 11100111 (5351) becomes 00000000 00000000 00000010 10011100, because to my understanding when you shift the bits you lose the digits that fall out of range. Is that correct? Or do the bits from the right-most side appear on the left side?
(number >> secondPosition): I apply the same logic as in 1., but in my case secondPosition is 27, so the number consists of only zeroes: 00000000 00000000 00000000 00000000 (which is the number 0).
I move the bits of the number 5351 to the right 27 times and that results in only zeroes.
((number >> firstPosition) ^ (number >> secondPosition))
I use the ^ operator on 00000000 00000000 00000010 10011100 and 00000000 00000000 00000000 00000000,
which results in the number 00000000 00000000 00000010 10011100, aka
((number >> firstPosition) ^ (number >> secondPosition))
((1U << numberOfBits) - 1) THIS is the part I find difficult (if my understanding of 1., 2., and 3. is correct). Does ((1U << numberOfBits) - 1) mean:
1) Put a 1 at position 3 (numberOfBits) and fill the rest with zeroes (0), then subtract 1 from the decimal representation of that number
OR
2) Move the binary representation of the number 1 to the left 3 times (numberOfBits), then subtract 1 from the decimal representation of that number
IF my logic so far is correct, then we apply the & operator to the result of ((number >> firstPosition) ^ (number >> secondPosition)) and ((1U << numberOfBits) - 1),
and I follow the same logic for
result = number ^ ((temp << firstPosition) | (temp << secondPosition));
in order to get the result.
Sorry for the long and probably stupid question, but I really can't ask anyone for help except you guys. Thank you all in advance.
The two alternatives you put up for 4. are effectively the same :)
The trick is that this produces a string of binary 1s, up to the given numberOfBits - i.e. (1 << 3) - 1 produces 7, or 111 in binary - in other words, "give me only the numberOfBits least significant bits".
Basically, you've described this well, if a little verbosely.
The result of the first line is a sequence of numberOfBits bits. The value is an XOR between the bit sequences starting at the two different indices, each numberOfBits long. The AND then simply discards the bits above numberOfBits.
The second line then exploits the fact that a ^ b ^ a == b, and b ^ a ^ b == a, and the order of operations doesn't matter - the xor operation is commutative.
As long as the two ranges don't overlap and don't run off the end of the integer, it should work just fine :)
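Putting it all together, a self-contained version might look like this (the method name and scaffolding are mine; the two core lines come from the question):
// XOR-based swap of two numberOfBits-wide bit ranges inside a uint.
static uint SwapBitRanges(uint number, int firstPosition, int secondPosition, int numberOfBits)
{
    // XOR of the two ranges, masked down to the lowest numberOfBits bits.
    uint temp = ((number >> firstPosition) ^ (number >> secondPosition))
                & ((1U << numberOfBits) - 1);
    // XORing each range with that difference swaps them, since a ^ b ^ a == b.
    return number ^ ((temp << firstPosition) | (temp << secondPosition));
}
For example, SwapBitRanges(5351, 3, 27, 3) swaps bits 3-5 with bits 27-29.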
My question is: how does this conversion happen in C#? I mean, how does it calculate the answer 1 (with 257), and how does it calculate 0 (with 256)?
The code is:
int intnumber = 257;
byte bytenumber = (byte)intnumber; // the output of this code is 1

int intnumber = 256;
byte bytenumber = (byte)intnumber; // the output of this code is 0
My question is: what happens so that the output of the first snippet is 1 and of the second one is 0?
A byte only occupies one byte in memory. An int occupies 4 bytes in memory. Here is the binary representation of some int values you've mentioned:
most significant least significant
255: 00000000 00000000 00000000 11111111
256: 00000000 00000000 00000001 00000000
257: 00000000 00000000 00000001 00000001
You can also see how this works when casting negative int values to a byte. An int value of -255, when cast to a byte, is 1.
-255: 11111111 11111111 11111111 00000001
When you cast an int to a byte, only the least significant byte is assigned to the byte value. The three higher significance bytes are ignored.
A single byte only goes up to 255. The code wraps around to 0 for 256 and 1 for 257, etc...
The most significant bits are discarded and you're left with the rest.
255 is the maximum value that can be represented in a single byte:
Hex code: FF
256 does not fit in 1 byte. It takes 2 bytes to represent that:
01 00
Since you're trying to put that value in a variable of type byte (which of course can only contain 1 byte), the second byte is "cropped" away, leaving only:
00
The same happens for 257, and indeed for any value.
1 is assigned because 257 overflows the byte range (max 255) by two units, so it wraps around to 1.
0 is assigned because 256 overflows by one unit, wrapping to 0.
The byte data type contains a number between 0 and 255. When converting a non-negative int to a byte, it effectively calculates the number modulo 256:
byte = int % 256
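More precisely, the cast keeps only the low 8 bits, which matches the modulo for non-negative values; a quick illustration (my own, not from the answer above):
int i = 257;
Console.WriteLine((byte)i);               // 1
Console.WriteLine((byte)(i & 0xFF));      // 1: the cast keeps only the low 8 bits
Console.WriteLine(unchecked((byte)-255)); // 1: the low byte of 11111111 ... 00000001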
I am trying some bitwise operators in C#; I'm not sure how the complement produces -2 as the output for 1.
If I represent 1 in 8-bit binary:
1 = 00000001
~1 = 11111110 = How come this evaluates to -2?
Sample code that I am using in C#:
//Bitwise complement
//1 = 00000001
//~1 = 11111110 = -2
Console.WriteLine(~1);
Well... what do you expect it to be? Since we are using the two's-complement representation, this is simply how it is:
00000011 = 3
00000010 = 2
00000001 = 1
00000000 = 0
11111111 = -1
11111110 = -2
11111101 = -3
11111100 = -4
If we used the one's-complement representation, we would have this list, and then you would be right:
00000011 = 3
00000010 = 2
00000001 = 1
00000000 = 0
11111111 = -0 <== Watch this!!!
11111110 = -1
11111101 = -2
11111100 = -3
Since the computer builders decided not to have a negative zero, they created the two's-complement representation.
If you do a bitwise complement, all bits are reversed. So 00000001 will result in 11111110, and that is simply -2 (when using two's complement).
Are you looking for the negation operator -?
Console.WriteLine(-1);
BTW: The negation operator is the same as the complement operator plus one (when using the two's-complement representation).
So:
-x == ~x + 1;
For more info: http://en.wikipedia.org/wiki/Signed_number_representations
Negative int numbers in .NET are treated as two's-complement. That means that:
1111....111111 = -1
1111....111110 = -2
1111....111101 = -3
1111....111100 = -4
etc; basically, negative x is stored as 2^bits - x (2^32 - x for a 32-bit int)
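For example (a quick check of my own, assuming a 32-bit int):
int x = 2;
uint bits = unchecked((uint)-x); // reinterpret the bit pattern of -2
Console.WriteLine(bits);         // 4294967294, which is 2^32 - 2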
If you don't want negatives, use uint instead of int
.NET (and most languages) use two's complement to represent negative numbers. In the simplest explanation, this is found by taking the one's complement (which involves reversing each bit) and then adding 1.
Your inversion creates the one's complement, which, read as two's complement, is one lower than the negated value.
If your eight bits represent a signed value, then only bits 1-7 represent the number; bit 8 indicates whether the number is positive (0) or negative (1).
To get your desired behaviour you must use an unsigned type.
Because of two's complement: to read a negative pattern, you invert all of the bits and add one to get its magnitude. So this is the entire process:
11111110 ; start
00000001 ; invert
00000010 ; add one
Source: link.
First of all, the ~ operator works like:
~x = -x - 1
In .NET, negative integers are represented with two's complement:
11111111 = -1
11111110 = -2
Signed integers in most computing contexts (including signed integers in C#: sbyte, short, int, and long) are represented using two's complement. In this, the first bit indicates the sign, and the remaining bits determine the value. (0..127 counts from 0000 0000..0111 1111, and -128..-1 counts from 1000 0000..1111 1111)
What you might've been expecting is what you'd get with the bitwise complement of an unsigned int:
uint b = 1;
uint a = (uint)~b;
// a == 4294967294, which is 2^32-2, or in binary,
// 11111111 11111111 11111111 11111110
We all know that the highest bit of an Int32 defines its sign. 1 indicates that it's negative and 0 that it's positive (possibly reversed). Can I convert a negative number to a positive one by changing its highest bit?
I tried to do that using the following code:
i |= Int32.MaxValue;
But it doesn't work.
Why don't you just use the Math.Abs(yourInt) method? I don't see the necessity to use bitwise operations here.
If you are just looking for a bitwise way to do this (like an interview question, etc), you need to negate the number (bitwise) and add 1:
int x = -13;
int positiveX = ~x + 1;
This will flip the sign whether it's positive or negative. As a small caveat, this will NOT work if x is int.MinValue, though, since the negative range is one larger than the positive range.
Of course, in real world code I'd just use Math.Abs() as already mentioned...
The most-significant bit defines its sign, true. But that's not everything:
To convert a positive number to a negative one, you have to:
Invert the bits (for example, +1, which is 0000 0001 in binary, turns into 1111 1110)
Add 1 (1111 1110 turns into 1111 1111, which is -1)
That process is known as Two's complement.
Inverting the process is equally simple:
Subtract 1 (for example, -1: 1111 1111 turns into 1111 1110)
Invert the bits (1111 1110 turns into 0000 0001, which is +1 again).
As you can see, this operation is impossible to implement using the binary OR operator. You need bitwise NOT plus an add/subtract.
The above examples use 8-bit integers, but the process works exactly the same for all integers. Floating-point numbers, however, use only a sign bit.
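Both directions of that process are easy to check in C# (my own illustration, not part of the answer above):
int positive = 1;
int negative = ~positive + 1;  // negate: invert the bits, then add one   -> -1
int back = ~(negative - 1);    // reverse: subtract one, then invert bits -> 1
Console.WriteLine($"{negative}, {back}");  // prints -1, 1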
If you're talking about using bitwise operations, it won't work like that. I'm assuming you were thinking you'd flip the highest bit (the sign flag) to get the negative, but it won't work as you expect.
The number 6 is represented as 00000000 00000000 00000000 00000110 in a 32-bit signed integer. If you flip the highest bit (the signing bit) to 1, you'll get 10000000 00000000 00000000 00000110, which is -2147483642 in decimal. Not exactly what you expected, I should imagine. This is because negative numbers are stored in "negative logic", which means that 11111111 11111111 11111111 11111111 is -1.
If you flip every bit in a positive value x, you get -(x+1). For example:
00000000 00000000 00000000 00000110 = 6
11111111 11111111 11111111 11111001 = -7
You've still got to use an addition (or subtraction) to produce the correct result, though.
I just solved this by doing:
int a = intToSolve; // whatever int you want
int b = a < 0 ? -a : a;
Doing this will output only positive ints.
The other way around would be:
int a = intToSolve; // same, but from positive to negative
int b = a < 0 ? a : -a;
What's wrong with Math.Abs(i) if you want to go from negative to positive, or -1*i if you want to go both ways?
It's impossible with the |= operator. It cannot unset bits. And since the sign bit is set on negative numbers you can't unset it.
Your quest is, sadly, futile. The bitwise OR operator will not be able to arbitrarily flip a bit. You can set a bit, but if that bit is already set, OR will not be able to clear it.
You absolutely cannot, since a negative number is the two's complement of the original. So even though it is true that the MSB of a negative number is 1, simply setting that bit is not enough to obtain the negative. You must negate all the bits and add one.
You can try a simpler way: changeTime = changeTime >= 0 ? changeTime : -changeTime;