C# similar bitwise operations resulting in different answers - c#

I'm currently writing a solution to a project in my CS class, and I'm getting conflicting results.
Basically, I have to read a single BIT from a BYTE that is read from a file.
These are the relevant methods (ignore the naming standards, I didn't make them and I hate them too):
static bool no_more_bytes()
/**********************/
/*** NO MORE BYTES? ***/
/**********************/
{
    return (in_stream.PeekChar() == -1);
}

static byte read_a_byte()
/********************************************************************************/
/*** Function to read a single byte from the input stream and set end_of_file ***/
/********************************************************************************/
{
    byte abyte;
    if (!no_more_bytes())
        abyte = in_stream.ReadByte();
    else
    {
        abyte = 0;
        end_of_file = true;
    }
    return abyte;
}

static byte getbit()
/**********************************************************/
/*** Function to get a single BIT from the input stream ***/
/**********************************************************/
{
    byte mask;
    if (current_byte == 0 || current_bit_position == 8)
    {
        current_byte = read_a_byte();
        current_bit_position = 0;
    }
    mask = current_bit_position;
    current_bit_position++;
    //Your job is to fill in this function so it returns
    //either a zero or a one for each bit in the file.
    bool set = (current_byte & (128 >> current_bit_position - 1)) != 0; // This is the line in question
    if (set)
    {
        return 1;
    }
    else
    {
        return 0;
    }
}
getbit() is the method I wrote, and it works properly. However, my first solution didn't work.
When the input file contains "ABC", it correctly outputs 01000001 01000010 01000011 (65, 66, 67) by reading 1 bit at a time.
However, my original solution was
bool set = (current_byte & (1 << current_bit_position - 1)) != 0;
So the question is: why does shifting 128 right by (current_bit_position - 1) give a different result than shifting 1 left by (current_bit_position - 1)?

I'm going to interpret this question as one about the order of the bits because, as the comments suggest, it doesn't make much sense to expect different operations on the same data to return the same result in most cases.
So why are we starting with 128 and shifting right instead of starting with 1 and shifting left? Both methods would be valid for enumerating through each bit within the byte, but they operate in reverse order.
If you want to shift 1 left (<<) instead of shifting 128 right (>>) you would have to run current_bit_position from 7 to 0 instead of from 0 to 7.
Shifting 1
1 << 7 == 10000000
1 << 6 == 01000000
1 << 5 == 00100000
1 << 4 == 00010000
1 << 3 == 00001000
1 << 2 == 00000100
1 << 1 == 00000010
1 << 0 == 00000001
Shifting 128
128 >> 0 == 10000000
128 >> 1 == 01000000
128 >> 2 == 00100000
128 >> 3 == 00010000
128 >> 4 == 00001000
128 >> 5 == 00000100
128 >> 6 == 00000010
128 >> 7 == 00000001
Since we generally represent numbers with the most significant digits on the left and the least significant on the right, the above sequences are what you would need to use to get the bits in the right order.
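For reference, here is a minimal sketch (my own demo, not the assignment's code) showing that 128 >> pos and 1 << (7 - pos) build the same mask, and therefore read the bits in the same most-significant-bit-first order:

byte b = 65; // 'A' == 01000001
for (int pos = 0; pos < 8; pos++)
{
    int viaRightShift = (b & (128 >> pos)) != 0 ? 1 : 0;     // start at 128 and shift right
    int viaLeftShift  = (b & (1 << (7 - pos))) != 0 ? 1 : 0; // start at 1 and shift left from 7
    Console.Write(viaRightShift);                            // prints 01000001; viaLeftShift is identical
}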

Related

Make a 16 bit integer from 3 bytes with 31 max value (5 bits)

I want to make a 16-bit value from 3 little-endian bytes, each with a max value of 31 (meaning each has at most 5 set bits). How would I get the last 5 bits of each byte, then put them all together?
e.g. bytes : 0011111 0010101 0011100 into 1111110101111000
I tried this but I think I'm just overwriting my old bits
cp = (bar << 3) | (bag >> 2) | (bab >> 7);
You are not overwriting bits, but you are shifting bits out of the values before even putting them together. bag >> 2 leaves only three bits of the original and bab >> 7 shifts out all five bits plus two more.
Shift the values to the left instead:
cp = (bar << 10) | (bag << 5) | bab;
You want to make room on the right for the other values:
bar << 10   -11111----------
bag << 5    ------10101-----
bab         -----------11100
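Putting that together, a minimal sketch (the three values are taken from the question's example; the & 0x1F masking is my own defensive addition to drop anything above bit 4):

byte bar = 0b11111, bag = 0b10101, bab = 0b11100;
ushort cp = (ushort)(((bar & 0x1F) << 10) | ((bag & 0x1F) << 5) | (bab & 0x1F));
Console.WriteLine(Convert.ToString(cp, 2).PadLeft(16, '0')); // 0111111010111100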

How am I getting a single bit from an int?

I understand that:
int bit = (number >> 3) & 1;
Will give me the bit 3 places from the right, so let's say 8 is 1000; that would give 0001.
What I don't understand is how "& 1" removes everything but the last bit to produce an output of simply "1". I know that this works, and I know how to get a bit from an int, but how is the code extracting the single bit?
Code...
int number = 8;
int bit = (number >> 3) & 1;
Console.WriteLine(bit);
Unless my boolean algebra from school fails me, what's happening should be equivalent to the following:
  1100110101101 // last bit is 1
& 0000000000001 // & 1
= 0000000000001 // = 1

  1100110101100 // last bit is 0
& 0000000000001 // & 1
= 0000000000000 // = 0
So when you do & 1, what you're basically doing is zeroing out all other bits except for the last one, which remains whatever it was. More technically speaking, you do a bitwise AND operation between two numbers, where one of them happens to be a 1 with all leading bits set to 0.
8      = 00001000
8 >> 1 = 00000100
8 >> 2 = 00000010
8 >> 3 = 00000001
If you use the mask 1 = 00000001, then you have:
8 >> 3       = 00000001
1            = 00000001
(8 >> 3) & 1 = 00000001
Actually this is not hard to understand.
The "& 1" operation just sets every bit of the value to 0, except the bit that is in the same position as the set bit of the value 1.
The previous operation shifts all the bits to the right, placing the bit you want to check into the position that will not be zeroed out by the "& 1".
For example:
number is 1011101
number >> 3 makes it 0001011
and (number >> 3) & 1 makes it 0000001
When you right shift 8 by 3 you get 0001.
0001 & 0001 = 0001, which converted to an int gives you 1.
So, when the value 0001 is assigned to an int, it will print 1 and not 0001 or 0000 0001; all the leading zeroes are discarded.
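If it helps, the whole idea can be wrapped in a small helper (the GetBit name is mine, not from the question):

static int GetBit(int value, int position)
{
    // Shift the wanted bit into the least significant position, then mask everything else off.
    return (value >> position) & 1;
}

Console.WriteLine(GetBit(8, 3)); // 1
Console.WriteLine(GetBit(8, 2)); // 0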

Merge first n bits of a byte with last 8-n bits of another byte

How can I merge first n bits of a byte with last 8-n bits of another byte?
I know something like the below for picking 3 bits from the first and 5 from the second (which I have observed in the DES encryption algorithm):
zByte = (xByte & 0xE0) | (yByte & 0x1F);
But I don't know the maths behind why we need to use 0xE0 and 0x1F in this case, so I am trying to understand the details with regard to each bit.
In C#, that would be something like:
int mask = ~((-1) << n);
var result = (x & ~mask) | (y & mask);
i.e. we build a mask that is (for n = 5) : 000....0011111, then we combine (&) one operand with that mask, the other operand with the inverse (~) of the mask, and compose them (|).
You could also probably do something more quickly just using shift operations (avoiding a mask completely) - but only if the data can be treated as unsigned (so Java might struggle here).
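For illustration, here is a usage sketch of that mask with n = 5 (the x and y values are placeholders of my own, not from the question):

int n = 5;
int mask = ~((-1) << n);                // 000...0011111 == 0x1F
int x = 0b10100000, y = 0b00010111;
int result = (x & ~mask) | (y & mask);  // first 3 bits from x, last 5 bits from y
Console.WriteLine(Convert.ToString(result, 2).PadLeft(8, '0')); // 10110111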
It just sounds like you don't understand how boolean arithmetic works. If that is your question, it works like this:
0xE0 and 0x1F are hexadecimal representations of numbers. If we convert these numbers to binary they would be:
0xE0 = 11100000
0x1F = 00011111
Additionally, & (and) and | (or) are bitwise logical operators. To understand logical operators, first remember that 1 = true and 0 = false.
The truth table for & is:
0 & 0 = 0
0 & 1 = 0
1 & 0 = 0
1 & 1 = 1
The truth table for | is:
0 | 0 = 0
0 | 1 = 1
1 | 0 = 1
1 | 1 = 1
So let's break down your equation piece by piece. First we evaluate the code in the parentheses. We walk through each number in binary: for the & operator, if both operands have a 1 in the same bit position we return 1; if either has a 0 we return 0. After evaluating the operands in the parentheses, we take the two resulting numbers and apply the | operator bit by bit: if either number has a 1 in the same bit position we return 1; if both have a 0 in the same bit position we return 0.
For the sake of discussion, let's say that
xByte = 255 or (FF in hex and 11111111 in binary)
yByte = 0 or (00 in hex and 00000000 in binary)
When you apply the & and | operators we are going to compare each bit one at a time:
zByte = (xByte & 0xE0) | (yByte & 0x1F)
becomes:
zByte = (11111111 & 11100000) | (00000000 & 00011111)
zByte = 11100000 | 00000000
zByte = 11100000
If you understand this and how boolean logic works then you can use Marc Gravell's answer.
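If you want to check that walkthrough in code, a quick sketch (using the same sample values, xByte = 0xFF and yByte = 0x00):

byte xByte = 0xFF, yByte = 0x00;
byte zByte = (byte)((xByte & 0xE0) | (yByte & 0x1F));
Console.WriteLine(Convert.ToString(zByte, 2).PadLeft(8, '0')); // 11100000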
The math behind those numbers (0xE0 and 0x1F) is quite simple. First we are exploiting the fact that 0 & <bit> always equals 0 and 1 & <bit> always equals <bit>.
0x1F is 00011111 in binary, which means that the first 3 bits will always be 0 after an & operation with another byte, and the last 5 bits will be the same as they were in the other byte. Remember that every 1 in a binary number represents a power of 2, so if you want to find the mask mathematically it is the sum of 2^x for x = 0 to n-1, which is 2^n - 1 (for n = 5 that is 31, i.e. 0x1F). To find the opposite mask (the one that is 11100000) that extracts the first 3 bits, you simply subtract the mask from 11111111, and you get 11100000 (0xE0).
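As a sketch of that arithmetic in C# (MergeBytes and its parameter names are my own, not from the question; n is the number of bits taken from the first byte):

static byte MergeBytes(byte x, byte y, int n)
{
    byte lowMask  = (byte)((1 << (8 - n)) - 1); // last 8-n bits, e.g. 0x1F for n = 3
    byte highMask = (byte)~lowMask;             // first n bits,  e.g. 0xE0 for n = 3
    return (byte)((x & highMask) | (y & lowMask));
}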
In Java, the following function gets the first n bits of the first byte and the last 8-n bits of the second byte.
public class BitExample {
    public static void main(String[] args) {
        Byte a = 15;
        Byte b = 16;
        String mergedValue = merge(4, a, b);
        System.out.println(mergedValue);
    }

    public static String merge(int n, Byte a, Byte b) {
        String mergedString = "";
        String sa = Integer.toBinaryString(a);
        String sb = Integer.toBinaryString(b);
        // First n bits come from a (left-padded with zeroes if a has fewer than n bits).
        if (n > sa.length()) {
            for (int i = 0; i < (n - sa.length()); i++) {
                mergedString += "0";
            }
            mergedString += sa;
        } else {
            mergedString += sa.substring(0, n);
        }
        // Last 8-n bits come from b (left-padded with zeroes if b has fewer than 8-n bits).
        if (8 - n > sb.length()) {
            for (int i = 0; i < (8 - n - sb.length()); i++) {
                mergedString += "0";
            }
            mergedString += sb;
        } else {
            mergedString += sb.substring(sb.length() - (8 - n));
        }
        return mergedString;
    }
}

Strange behavior with << operator

When dealing with the C# shift operators, I encountered an unexpected behavior of the shift-left operator.
So I tried this simple loop:
for (int i = 5; i >= -10; i--) {
    int s = 0x10 << i;
    Debug.WriteLine(i.ToString().PadLeft(3) + " " + s.ToString("x8"));
}
With this result:
5 00000200
4 00000100
3 00000080
2 00000040
1 00000020
0 00000010
-1 00000000 -> 00000008 expected
-2 00000000 -> 00000004 expected
-3 00000000 -> 00000002 expected
-4 00000000 -> 00000001 expected
-5 80000000
-6 40000000
-7 20000000
-8 10000000
-9 08000000
-10 04000000
Until today, I expected that the << operator could deal with negative values of the second operand.
MSDN says nothing about the behavior when using negative values of the second operand. But MSDN does say that only the lower 5 bits (0-31) are used by the operator, which should also cover negative values.
I also tried with long values: long s = 0x10L << i;, but with the same result.
So what happens here?
EDIT
As stated in your answers, the negative value representation is not the reason for this.
I got the same wrong result for all cases:
0x10 << -3                = 0x00000000 (wrong!)
0x10 << (int)(0xfffffffd) = 0x00000000 (wrong!)
0x10 << 0x0000001d        = 0x00000000 (wrong!)
expected = 0x00000002
EDIT #2
One of these two should be true:
1) The shift operator is a real shift-operator, so the results should be:
1a) 0x10 << -3 = 00000002
1b) 0x10 << -6 = 00000000
2) The shift operator is a rotate-operator, so the results should be:
2a) 0x10 << -3 = 00000002 (same as 1a)
2b) 0x10 << -6 = 40000000
But the results shown fit neither 1) nor 2)!
Negative numbers are stored in two's complement, so -1 == 0xFFFFFFFF and 0xFFFFFFFF & 31 == 31, -2 == 0xFFFFFFFE and 0xFFFFFFFE & 31 == 30, and so on.
-10 == 0xFFFFFFF6, and 0xFFFFFFF6 & 31 == 22; in fact:
(0x10 << 22) == 04000000
Some code to show:
const int num = 0x10;
int maxShift = 31;
for (int i = 5; i >= -10; i--)
{
int numShifted = num << i;
uint ui = (uint)i;
int uiWithMaxShift = (int)(ui & maxShift);
int numShifted2 = num << uiWithMaxShift;
Console.WriteLine("{0,3}: {1,8:x} {2,2} {3,8:x} {4,8:x} {5}",
i,
ui,
uiWithMaxShift,
numShifted,
numShifted2,
numShifted == numShifted2);
}
With long it's the same, but now instead of & 31 you have & 63: -1 becomes 63, -2 becomes 62 and -10 becomes 54.
Some example code:
const long num = 0x10;
int maxShift = 63;

for (int i = 5; i >= -10; i--)
{
    long numShifted = num << i;

    uint ui = (uint)i;
    int uiWithMaxShift = (int)(ui & maxShift);
    long numShifted2 = num << uiWithMaxShift;

    Console.WriteLine("{0,3}: {1,8:x} {2,2} {3,16:x} {4,16:x} {5}",
        i,
        ui,
        uiWithMaxShift,
        numShifted,
        numShifted2,
        numShifted == numShifted2);
}
Just to be clear:
(int x) << y == (int x) << (int)(((uint)y) & 31)
(long x) << y == (long x) << (int)(((uint)y) & 63)
and not
(int x) << y == (int x) << (Math.Abs(y) & 31)
(long x) << y == (long x) << (Math.Abs(y) & 63)
And what you think "should be", "would be beautiful if it was" or "has to be" is irrelevant. While 1 and 0 are "near" (their binary representations differ in a single bit), 0 and -1 are "far" (their binary representations differ in all 32 or 64 bits).
You think you should get this:
-1 00000000 -> 00000008 expected
-2 00000000 -> 00000004 expected
-3 00000000 -> 00000002 expected
-4 00000000 -> 00000001 expected
but in truth what you don't see is that you are getting this:
-1 (00000008) 00000000
-2 (00000004) 00000000
-3 (00000002) 00000000
-4 (00000001) 00000000
-5 (00000000) 80000000 <-- To show that "symmetry" and "order" still exist
-6 (00000000) 40000000 <-- To show that "symmetry" and "order" still exist
where the part in (...) is the part that is to the "left" of the int and that doesn't exist.
It has to do with the representation of negative numbers. -1 corresponds to all ones, so its five least significant bits give a shift count of 31, and left-shifting 0x10 by 31 bits gives all zeroes (the bit that is set in 0x10 is shifted out and discarded, as per the documentation).
Increasingly larger negative numbers correspond to shifting by 30, 29, etc. bits. The bit that is set in 0x10 is in zero-based position 4, so for it not to be discarded the shift has to be at most 31 - 4 = 27 bits, which happens when i == -5.
You can easily see what's going on if you try e.g. Console.WriteLine((-1).ToString("x8")):
ffffffff
Update: when the first operand is a long you see similar behavior, because now the six least significant bits of the second operand are used: 0x10L << -1 shifts left by 63 bits, etc.
The left shift operator doesn't see a negative second operand as a right shift. It will just use the lower five bits of the value and make a left shift using that.
The lower five bits of the value -1 (0xFFFFFFFF) will be 31 (0x0000001F), so the first operand 0x10 is shifted to the left 31 steps leaving just the least significant bit in the most significant bit of the result.
In other words, 0x10 << -1 is the same as 0x10 << 31 which will be 0x800000000, but the result is just 32 bits so it will be truncated to 0x00000000.
When you are using a long value, the six least significant bits of the second operand are used. The value -1 becomes 63, and the bits are still shifted out of the range of the long.
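To see the masking of the shift count in action, a short sketch (my own check, not from the answers above):

int value = 0x10;
int count = -1;                                        // -1 & 31 == 31
Console.WriteLine((value << count) == (value << 31));  // True

int count2 = -5;                                       // -5 & 31 == 27
Console.WriteLine((value << count2).ToString("x8"));   // 80000000, matching the output above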

Bit manipulation in C# using a mask

I need a little help with bitmask operations in C#.
I want to take a UInt16, isolate an arbitrary number of bits, and set them using another UInt16 value.
Example:
10101010 -- Original Value
00001100 -- Mask - isolates bits 2 and 3

Input       Output
00000000 -- 10100010
00000100 -- 10100110
00001000 -- 10101010
00001100 -- 10101110
                ^^
(the ^^ marks bits 2 and 3, which are taken from the input; all other bits keep the original value)
It seems like you want:
(orig & ~mask) | (input & mask)
The first half zeroes the bits of orig which are in mask. Then you do a bitwise OR against the bits from input that are in mask.
newValue = (originalValue & ~mask) | (inputValue & mask);
originalValue -> 10101010
inputValue    -> 00001000
mask          -> 00001100
~mask         -> 11110011

(originalValue & ~mask)
    10101010
  & 11110011
  ----------
    10100010
        ^^
    (the isolated bits have been cleared from the original value)

(inputValue & mask)
    00001000
  & 00001100
  ----------
    00001000

newValue =
    10100010
  | 00001000
  ----------
    10101010
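The same steps in runnable form, using the question's sample values (a quick sketch, not part of the original answer):

ushort original = 0b10101010;
ushort mask     = 0b00001100;
ushort input    = 0b00000100;
ushort newValue = (ushort)((original & ~mask) | (input & mask));
Console.WriteLine(Convert.ToString(newValue, 2).PadLeft(8, '0')); // 10100110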
Something like this?
static ushort Transform(ushort value){
    return (ushort)(value & 0x0C/*00001100*/ | 0xA2/*10100010*/);
}
This will convert all your sample inputs to your sample outputs. To be more general, you'd want something like this:
static ushort Transform(ushort input, ushort mask, ushort bitsToSet){
    return (ushort)(input & mask | bitsToSet & ~mask);
}
And you would call this with:
Transform(input, 0x0C, 0xA2);
For the equivalent behavior of the first function.
A number of the terser solutions here look plausible, especially JS Bangs', but don't forget that you also have a handy BitArray collection to use in the System.Collections namespace: http://msdn.microsoft.com/en-us/library/system.collections.bitarray.aspx
If you want to do bitwise manipulations, I have written a very versatile method to copy any number of bits from one byte (source byte) to another byte (target byte). The bits can be put to another starting bit in the target byte.
In this example, I want to copy 3 bits (bitCount = 3) starting at bit #4 of the source (sourceStartBit) into the destination starting at bit #1 (destStartBit). Please note that the numbering of bits starts with "0" and that in my method bit 0 is the most significant bit (reading from left to right).
byte source = 0b10001110;
byte destination = 0b10110001;
byte result = CopyByteIntoByte(source, destination, 4, 1, 3);
Console.WriteLine("The binary result: " + Convert.ToString(result, toBase: 2));
//The binary result: 11110001

byte CopyByteIntoByte(byte sourceByte, byte destinationByte, int sourceStartBit, int destStartBit, int bitCount)
{
    int[] mask = { 0, 1, 3, 7, 15, 31, 63, 127, 255 };
    byte sourceMask = (byte)(mask[bitCount] << (8 - sourceStartBit - bitCount));
    byte destinationMask = (byte)(~(mask[bitCount] << (8 - destStartBit - bitCount)));
    byte destinationToCopy = (byte)(destinationByte & destinationMask);
    int diff = destStartBit - sourceStartBit;
    byte sourceToCopy;
    if (diff > 0)
    {
        sourceToCopy = (byte)((sourceByte & sourceMask) >> diff);
    }
    else
    {
        sourceToCopy = (byte)((sourceByte & sourceMask) << (diff * (-1)));
    }
    return (byte)(sourceToCopy | destinationToCopy);
}
