Hello everyone, I need a little help understanding the logic behind a bit-range swapping algorithm.
The "program" swaps a given number of consecutive bits at two given positions, and it works perfectly, but I need to understand the logic behind it in order to move on to other topics.
Here is the source code for the full "program": http://pastebin.com/ihVpseE1. I need someone to tell me if I am on the right track so far, and to clarify one part of the code that I find difficult to understand.
temp = ((number >> firstPosition) ^ (number >> secondPosition)) & ((1U << numberOfBits) - 1);
result = number ^ ((temp << firstPosition) | (temp << secondPosition));
1. (number >> firstPosition) moves the binary representation of the given uint number (5351) to the right (>>) by 3 places (firstPosition).
So 00000000 00000000 00010100 11100111 (5351) becomes 00000000 00000000 00000010 10011100, because to my understanding when you shift the bits you lose the digits that fall out of range. Is that correct? Or do the bits from the far right side reappear on the left side?
2. (number >> secondPosition) I apply the same logic as in 1., but in my case secondPosition is 27, so the result is comprised of only zeroes: 00000000 00000000 00000000 00000000 (which is the number 0).
I move the bits of the number 5351 to the right 27 times and that results in only zeroes.
3. ((number >> firstPosition) ^ (number >> secondPosition))
I apply the ^ operator to 00000000 00000000 00000010 10011100 and 00000000 00000000 00000000 00000000,
which results in 00000000 00000000 00000010 10011100, i.e. the value of ((number >> firstPosition) ^ (number >> secondPosition)).
4. ((1U << numberOfBits) - 1) THIS is the part I find difficult (if my understanding of steps 1-3 is correct). Does ((1U << numberOfBits) - 1) mean that I should:
1) put a 1 at position 3 (numberOfBits), fill the rest with zeroes, and then subtract 1 from the decimal representation of that number,
OR
2) move the binary representation of the number 1 to the left 3 times (numberOfBits) and then subtract 1 from the decimal representation of that number?
IF my logic so far is correct, then we apply the & operator to the result of ((number >> firstPosition) ^ (number >> secondPosition)) and ((1U << numberOfBits) - 1),
and I follow the same logic for
result = number ^ ((temp << firstPosition) | (temp << secondPosition));
in order to get the result.
Sorry for the long and probably stupid question, but I really can't ask anyone for help except you guys. Thank you all in advance.
The two alternatives you put up for 4. are effectively the same :)
The trick is that this produces a string of binary 1s, numberOfBits long; i.e. (1 << 3) - 1 produces 7, or 111 in binary. In other words, "give me only the numberOfBits least significant bits".
Basically, you've described this well, if overly wordy.
The result of the first line is a sequence of numberOfBits bits. The value is a xor between the bit sequences starting from the two different indices and numberOfBits long. The and then simply discards the bits higher than numberOfBits.
The second line then exploits the fact that a ^ b ^ a == b and b ^ a ^ b == a, and that the order of operations doesn't matter: the xor operation is commutative and associative.
As long as the two bit ranges don't overlap and don't run off the top of the integer, it should work just fine :)
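If it helps to see it all in one place, here is a minimal, self-contained C# sketch of the two lines from the question wrapped in a method; the parameter names follow the question, and the demo values (5351, 3, 27, 3) are the ones from the walkthrough above:
using System;

class BitSwapDemo
{
    // Swaps the numberOfBits consecutive bits starting at firstPosition with the
    // numberOfBits consecutive bits starting at secondPosition.
    static uint SwapBitRanges(uint number, int firstPosition, int secondPosition, int numberOfBits)
    {
        // XOR of the two ranges, masked down to the lowest numberOfBits bits.
        uint temp = ((number >> firstPosition) ^ (number >> secondPosition)) & ((1U << numberOfBits) - 1);
        // XOR-ing each range with that difference swaps them, because a ^ (a ^ b) == b.
        return number ^ ((temp << firstPosition) | (temp << secondPosition));
    }

    static void Main()
    {
        uint result = SwapBitRanges(5351, 3, 27, 3);
        Console.WriteLine(Convert.ToString((long)result, 2).PadLeft(32, '0'));
    }
}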
Related
I have the below code and I can't understand why the last line doesn't return 77594624. Can anyone help me write the inverse bitwise operation to go from 77594624 to 4 and back to 77594624?
Console.WriteLine(77594624);
Console.WriteLine((77594624 >> 24) & 0x1F);
Console.WriteLine((4 & 0x1F) << 24);
When you bit shift a value you might "lose" bits during that operation. If you right shift the value 16, which is 0b10000 in binary, by 4, you will get 1.
0b10000 = 16
0b00001 = 1
But this is also the case for other numbers like 28, which is 0b11100.
0b11100 = 28 = 16 + 8 + 4
0b00001 = 1
So starting from 1 you cannot get "back" to the original number by left shifting again, because there is not enough information (bits) left; you don't know whether you want to go back to 16 or 28.
77594624 looks like this in binary, and the x's mark the part that is extracted by the right shift and bitwise AND:
00000100101000000000000000000000
   xxxxx
Clearly some information is lost.
If the other parts of the number were available as well, then they could be reassembled.
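A small sketch of that idea: keep the other bits around and the original value can be rebuilt. (The "5-bit field at offset 24" layout here is just what the mask in the question implies, nothing more.)
int original = 77594624;
int field = (original >> 24) & 0x1F;       // 4, the five x-marked bits
int rest = original & ~(0x1F << 24);       // everything outside that field
int rebuilt = ((field & 0x1F) << 24) | rest;
Console.WriteLine(rebuilt == original);    // True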
I would expect that a, being NOT 00000001, would turn into 11111110, otherwise known as 254, or -126 if counting the far-left bit as the sign, if sign-and-magnitude were used.
Even in the case of two's complement, I would expect the answer to come out as -127.
Why is it that the result is -2?
In two's complement:
-x = ~x + 1
By subtracting one from both sides we can see that:
~x = -x - 1
And so in your example, if we set x = 1 we get:
~1 = -1 - 1 = -2
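A quick throwaway check of that identity in C# (not part of the original question):
int x = 1;
Console.WriteLine(~x);       // -2
Console.WriteLine(-x - 1);   // -2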
Consider how the numbers wrap around.
If we start with 00000010 (2) and take away one then it is:
00000010
- 00000001
---------
00000001
Which is 1. We "borrow 1" from the column to the left just as we do with decimal subtraction, except that because it's binary, 10 - 1 is 1 rather than 9.
Take 1 away again and we of course get zero:
00000001
- 00000001
---------
00000000
Now, take 1 away from that, and we're borrowing 1 from the column to the left every time, and that borrowing wraps us around, so 0 - 1 = -1 is:
00000000
- 00000001
-----------
11111111
So -1 is all-ones.
This is even easier to see in the other direction, in that 11111111 plus one must be 00000000 as it keeps carrying one until it is lost to the left, so if x is 11111111 then it must be the case that x + 1 == 0, so it must be -1.
Take away another one and we have:
11111111
- 00000001
----------
11111110
So -2 is 11111110, and of course ~1 means flipping every bit of 00000001, which is also 11111110. So ~1 must be -2.
Another factor to note here is that arithmetic and bitwise complement in C# always promote anything smaller up to int. For a byte the value 11111110 is 254, but because ~ promotes to int first, you get -2 rather than 254.
byte b = 1;
var i = ~b; // i is an int, and is -2
b = unchecked((byte)~b); // Forced back into a byte, now is 254
To convert a negative two's complement number to its decimal representation we have to:
start scanning the bit string from right to left, until the first '1' is encountered
invert every bit to the left of that first '1'
Thus, in 11111110 we see the sign bit is 1 (negative number), and the above method yields 00000010, which is a decimal 2. In total, we thus get -2.
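A one-line check of that reading in C# (the 0b binary literal needs C# 7 or later; this is just a throwaway verification):
Console.WriteLine(unchecked((sbyte)0b11111110));   // -2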
I've had a good search, wasted a few hours, and I still can't do a simple bit shift in reverse :(
Dim result = VALUE >> 8 And &HFF
I have existing code that reads VALUE (a UInt16) from a file and applies the bit shift to it. What I am trying to do is the reverse, so the value can be saved and then read back using the existing code above.
I've read up on bit shifting and read this great Code Project article but it may as well be in Latin.
UInt16 tt = 12123; //10111101011011
int aa = tt >> 8 & 0xFF; //101111 = 47
The low 8 bits have disappeared; you can never get them back.
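If the goal is to write a value out so that the existing read code still works, the low byte has to be stored as well. A rough sketch of the idea, with purely illustrative variable names:
ushort value = 12123;                             // 0b10111101011011
byte high = (byte)((value >> 8) & 0xFF);          // 47, what the existing read code extracts
byte low = (byte)(value & 0xFF);                  // 91, the 8 bits that would otherwise be lost
ushort roundTrip = (ushort)((high << 8) | low);   // 12123 again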
If you have the value 54, in binary 110110
If you shift 54 >> 2, it moves the bits to the right:
00110110
00011011 (shift once)
00001101 (shift twice)
You end up with 13. If you then shift 13 back to the left, 13 << 2:
00001101
00011010 (shift once)
00110100 (shift twice)
You will end up with 52, not the original 54, because the two lowest bits were lost in the right shift.
First an explanation of why:
I have a list of links to a variety of MP3 files and I'm trying to read the ID3 information for these files quickly. I'm only downloading the first 1500 or so bytes and trying to analyze the data within this chunk. I came across ID3Lib, but I could only get it to work on completely downloaded files and didn't notice any support for Streams. (If I'm wrong in this, feel free to point that out.)
So basically, I'm left trying to parse the ID3 tag by myself. The size of the tag can be determined from four bytes near the start of the file. From the ID3 site:
The ID3v2 tag size is encoded with four bytes where the most
significant bit (bit 7) is set to zero in every byte, making a total
of 28 bits. The zeroed bits are ignored, so a 257 bytes long tag is
represented as $00 00 02 01.
So basically:
00000000 00000000 00000010 00000001
becomes
0000 00000000 00000001 00000001
I'm not too familiar with bit-level operations and was wondering if someone could shed some insight on an elegant solution to ignore the leftmost bit of each of these four bytes? I'm trying to pull a base-10 integer out of it, so an answer in that form works as well.
If you've got the four individual bytes, you'd want:
int value = ((byte1 & 0x7f) << 21) |
            ((byte2 & 0x7f) << 14) |
            ((byte3 & 0x7f) << 7) |
            ((byte4 & 0x7f) << 0);
If you've got it in a single int already (call it rawValue, holding the four raw bytes):
int value = ((rawValue & 0x7f000000) >> 3) |
            ((rawValue & 0x7f0000) >> 2) |
            ((rawValue & 0x7f00) >> 1) |
            (rawValue & 0x7f);
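A quick sanity check of the four-byte form against the example quoted from the ID3 site above ($00 00 02 01 should come out as 257):
byte byte1 = 0x00, byte2 = 0x00, byte3 = 0x02, byte4 = 0x01;
int size = ((byte1 & 0x7f) << 21) |
           ((byte2 & 0x7f) << 14) |
           ((byte3 & 0x7f) << 7) |
           ((byte4 & 0x7f) << 0);
Console.WriteLine(size);   // 257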
To clear the most significant bit, AND with 127 (0x7F); this keeps all bits apart from the MSB.
int tag1 = tag1Byte & 0x7F; // this is the first one read from the file
int tag2 = tag2Byte & 0x7F;
int tag3 = tag3Byte & 0x7F;
int tag4 = tag4Byte & 0x7F; // this is the last one
To convert this into a single number, realize that each tag value is a base-128 digit. So the least significant is multiplied by 128^0 (1), the next by 128^1 (128), the third by 128^2, and so on.
int tagLength = tag4 + (tag3 << 7) + (tag2 << 14) + (tag1 << 21);
You mention you want to convert this to base 10, say for printing; use the normal int-to-string conversion:
string base10 = tagLength.ToString();
I am working on a little Hardware interface project based on the Velleman k8055 board.
The example code comes in VB.Net and I'm rewriting this into C#, mostly to have a chance to step through the code and make sense of it all.
One thing has me baffled though:
At one stage they read all digital inputs, and then set each checkbox based on the answer (which comes back as an Integer) ANDed with a number:
i = ReadAllDigital
cbi(1).Checked = (i And 1)
cbi(2).Checked = (i And 2) \ 2
cbi(3).Checked = (i And 4) \ 4
cbi(4).Checked = (i And 8) \ 8
cbi(5).Checked = (i And 16) \ 16
I have not done digital systems in a while, and I understand what they are trying to do, but what effect does ANDing two numbers have? Doesn't everything above 0 equate to true?
How would you translate this to C#?
This is doing a bitwise AND, not a logical AND.
Each of those basically determines whether a single bit in i is set, for instance:
5 AND 4 = 4
5 AND 2 = 0
5 AND 1 = 1
(Because 5 = binary 101, and 4, 2 and 1 are the decimal values of binary 100, 010 and 001 respectively.)
I think you'll have to translate it to this:
(i & 1) == 1
(i & 2) == 2
(i & 4) == 4
etc... (The parentheses are needed in C#, because == binds more tightly than &.)
This is using the bitwise AND operator.
When you use the bitwise AND operator, it compares the binary representations of the two operands and returns a value in which only those bits are set that are set in both operands.
For instance, when you do this:
2 & 2
It will do this:
0010 & 0010
And this will result in:
0010
0010
&----
0010
Then if you compare this result with 2 (0010), it will of course return true.
Just to add:
It's called bitmasking
http://en.wikipedia.org/wiki/Mask_(computing)
A boolean only requires 1 bit. In most programming language implementations, though, a boolean takes up more than a single bit. On a PC this isn't a big waste, but embedded systems usually have very limited memory, so the waste is significant. To save space, booleans are packed together, so that each boolean variable takes up only 1 bit.
You can think of it as doing something like an array indexing operation, with a byte (= 8 bits) becoming like an array of 8 boolean variables, so maybe that's your answer: use an array of booleans.
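As a rough illustration of that packing idea (purely an example; none of these names come from the original post):
// Pack 8 boolean flags into one byte, then read one of them back out.
bool[] flags = { true, false, true, false, true, false, false, false };
byte packed = 0;
for (int n = 0; n < 8; n++)
    if (flags[n]) packed |= (byte)(1 << n);   // packed is now 0b00010101 = 21
bool third = (packed & (1 << 2)) != 0;        // true: bit 2 is set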
Think of this in binary e.g.
10101010
AND
00000010
yields 00000010
i.e. not zero. Now if the first value was
10101000
you'd get
00000000
i.e. zero.
Note the further division to reduce everything to 1 or 0.
(i and 16) / 16 extracts the value (1 or 0) of the 5th bit.
1xxxx AND 16 = 16, and 16 / 16 = 1
0xxxx AND 16 = 0, and 0 / 16 = 0
The And operator performs "...bitwise conjunction on two numeric expressions", which maps to '&' in C#. The '\' is integer division, and the equivalent in C# is /, provided that both operands are integer types.
The constant numbers are masks (think of them in binary). So what the code does is apply the bitwise AND operator on the byte and the mask and divide by the number, in order to get the bit.
For example:
xxxxxxxx & 00000100 = 00000x00
if x == 1
00000x00 / 00000100 = 00000001
else if x == 0
00000x00 / 00000100 = 00000000
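The same mask-and-divide trick written out in C#, where i stands for the value returned by ReadAllDigital as in the VB sample (comparing against 0 is the more usual C# idiom):
int i = ReadAllDigital();                  // assumed to exist, as in the VB sample
bool bit2Set = ((i & 4) / 4) == 1;         // literal translation of (i And 4) \ 4
bool sameThing = (i & 4) != 0;             // the more idiomatic C# form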
In C# use the BitArray class to directly index individual bits.
To set an individual bit i is straightforward:
b |= 1 << i;
To reset an individual bit i is a little more awkward:
b &= ~(1 << i);
Be aware that both the bitwise operators and the shift operators tend to promote everything to int which may unexpectedly require casting.
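A minimal sketch showing both approaches side by side (BitArray lives in System.Collections):
using System;
using System.Collections;

class BitIndexDemo
{
    static void Main()
    {
        // BitArray: index individual bits directly.
        var bits = new BitArray(new[] { 21 });   // 21 = 10101 in binary
        Console.WriteLine(bits[0]);              // True  (bit 0 of 21 is set)
        Console.WriteLine(bits[1]);              // False (bit 1 is clear)

        // Manual masking on an int.
        int b = 0;
        b |= 1 << 4;            // set bit 4
        Console.WriteLine(b);   // 16
        b &= ~(1 << 4);         // reset bit 4
        Console.WriteLine(b);   // 0
    }
}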
As said this is a bitwise AND, not a logical AND. I do see that this has been said quite a few times before me, but IMO the explanations are not so easy to understand.
I like to think of it like this:
Write up the binary numbers under each other (here I'm doing 5 and 1):
101
001
Now we need to turn this into a binary number in which a bit is 1 only where both numbers have a 1, which in this case gives:
001
In this case we see it gives the same number as the 2nd number, so this operation (in VB) returns True. Let's look at the other examples (using 5 as i):
(5 and 2)
101
010
----
000
(false)
(5 and 4)
101
100
---
100
(true)
(5 and 8)
0101
1000
----
0000
(false)
(5 and 16)
00101
10000
-----
00000
(false)
EDIT: and obviously I missed the entire point of the question; here's the translation to C#:
cbi[1].Checked = (i & 1) == 1;
cbi[2].Checked = (i & 2) == 2;
cbi[3].Checked = (i & 4) == 4;
cbi[4].Checked = (i & 8) == 8;
cbi[5].Checked = (i & 16) == 16;
I prefer to use hexadecimal notation when bit twiddling (e.g. 0x10 instead of 16). It scales better as the bit positions grow: 0x20000 is clearer than 131072.