I need a little help with bitmask operations in C#.
I want to take a UInt16, isolate an arbitrary number of bits, and set them using another UInt16 value.
Example:
10101010 -- Original Value
00001100 -- Mask - Isolates bits 2 and 3
Input       Output
00000000 -- 10100010
00000100 -- 10100110
00001000 -- 10101010
00001100 -- 10101110
                ^^
It seems like you want:
(orig & ~mask) | (input & mask)
The first half clears the bits of orig that are covered by mask; the bitwise OR then brings in the corresponding bits from input.
newValue = (originalValue & ~mask) | (inputValue & mask);
originalValue -> 10101010
inputValue    -> 00001000
mask          -> 00001100
~mask         -> 11110011

(originalValue & ~mask)

  10101010
& 11110011
----------
  10100010
      ^^
Cleared isolated bits from the original value

(inputValue & mask)

  00001000
& 00001100
----------
  00001000

newValue =

  10100010
| 00001000
----------
  10101010
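The same formula wrapped in a small helper, purely as an illustrative sketch (the name SetMaskedBits is mine, not from the answer):

static ushort SetMaskedBits(ushort original, ushort input, ushort mask)
{
    // Clear the masked bits of the original, then copy those bits in from the input.
    return (ushort)((original & ~mask) | (input & mask));
}

// For the sample above:
// SetMaskedBits(0xAA /*10101010*/, 0x08 /*00001000*/, 0x0C /*00001100*/) == 0xAA /*10101010*/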
Something like this?
static ushort Transform(ushort value){
    return (ushort)(value & 0x0C/*00001100*/ | 0xA2/*10100010*/);
}
This will convert all your sample inputs to your sample outputs. To be more general, you'd want something like this:
static ushort Transform(ushort input, ushort mask, ushort bitsToSet){
    return (ushort)(input & mask | bitsToSet & ~mask);
}
And you would call this with:
Transform(input, 0x0C, 0xA2);
to get the equivalent behavior of the first function.
A number of the terser solutions here look plausible, especially JS Bangs', but don't forget that you also have a handy BitArray collection to use in the System.Collections namespace: http://msdn.microsoft.com/en-us/library/system.collections.bitarray.aspx
If you want to do bitwise manipulation, I have written a very versatile method that copies any number of bits from one byte (the source byte) into another byte (the target byte). The bits can be placed at a different starting bit in the target byte.
In this example, I want to copy 3 bits (bitCount = 3) starting at bit #4 of the source (sourceStartBit) into the destination starting at bit #1 (destStartBit). Please note that the numbering of bits starts with "0" and that in my method, the numbering starts with the most significant bit = 0 (reading from left to right).
byte source = 0b10001110;
byte destination = 0b10110001;
byte result = CopyByteIntoByte(source, destination, 4, 1, 3);
Console.WriteLine("The binary result: " + Convert.ToString(result, toBase: 2));
//The binary result: 11110001

byte CopyByteIntoByte(byte sourceByte, byte destinationByte, int sourceStartBit, int destStartBit, int bitCount)
{
    // mask[n] has its n lowest bits set: 0, 1, 11, 111, ... (in binary).
    int[] mask = { 0, 1, 3, 7, 15, 31, 63, 127, 255 };
    // Bits to take from the source, and bits to keep in the destination.
    byte sourceMask = (byte)(mask[bitCount] << (8 - sourceStartBit - bitCount));
    byte destinationMask = (byte)(~(mask[bitCount] << (8 - destStartBit - bitCount)));
    byte destinationToCopy = (byte)(destinationByte & destinationMask);
    // Shift the selected source bits so they line up with the destination position.
    int diff = destStartBit - sourceStartBit;
    byte sourceToCopy;
    if (diff > 0)
    {
        sourceToCopy = (byte)((sourceByte & sourceMask) >> diff);
    }
    else
    {
        sourceToCopy = (byte)((sourceByte & sourceMask) << (diff * (-1)));
    }
    return (byte)(sourceToCopy | destinationToCopy);
}
I'm currently writing a solution to a project in my CS class, and I'm getting conflicting results.
Basically, I have to read a single BIT from a BYTE that is read from a file.
These are the relevant methods (ignore the naming standards, I didn't make them and I hate them too):
static bool no_more_bytes()
/**********************/
/*** NO MORE BYTES? ***/
/**********************/
{
    return (in_stream.PeekChar() == -1);
}

static byte read_a_byte()
/********************************************************************************/
/*** Function to read a single byte from the input stream and set end_of_file ***/
/********************************************************************************/
{
    byte abyte;
    if (!no_more_bytes())
        abyte = in_stream.ReadByte();
    else
    {
        abyte = 0;
        end_of_file = true;
    }
    return abyte;
}

static byte getbit()
/**********************************************************/
/*** Function to get a single BIT from the input stream ***/
/**********************************************************/
{
    byte mask;
    if (current_byte == 0 || current_bit_position == 8)
    {
        current_byte = read_a_byte();
        current_bit_position = 0;
    }
    mask = current_bit_position;
    current_bit_position++;
    //Your job is to fill in this function so it returns
    //either a zero or a one for each bit in the file.
    bool set = (current_byte & (128 >> current_bit_position - 1)) != 0; // This is the line in question
    if (set)
    {
        return 1;
    }
    else
    {
        return 0;
    }
}
getbit() is the method I wrote, and it works properly. However, my first solution didn't work.
When the input file contains "ABC", it correctly outputs 01000001 01000010 01000011 (65, 66, 67) by reading 1 bit at a time.
However, my original solution was
bool set = (current_byte & (1 << current_bit_position - 1)) != 0;
So the question is: why does shifting 128 right by current_bit_position give a different result than shifting 1 left by current_bit_position?
I'm going to interpret this question as one about the order of the bits because, as comments suggest, it doesn't make much sense to expect different operations on the same data to return the same result, in most cases.
So why are we starting with 128 and shifting right instead of starting with 1 and shifting left? Both methods would be valid for enumerating through each bit within the byte, but they operate in reverse order.
If you want to shift 1 left (<<) instead of shifting 128 right (>>) you would have to run current_bit_position from 7 to 0 instead of from 0 to 7.
Shifting 1
1 << 7 == 10000000
1 << 6 == 01000000
1 << 5 == 00100000
1 << 4 == 00010000
1 << 3 == 00001000
1 << 2 == 00000100
1 << 1 == 00000010
1 << 0 == 00000001
Shifting 128
128 >> 0 == 10000000
128 >> 1 == 01000000
128 >> 2 == 00100000
128 >> 3 == 00010000
128 >> 4 == 00001000
128 >> 5 == 00000100
128 >> 6 == 00000010
128 >> 7 == 00000001
Since we generally represent numbers with the most significant digits on the left and the least significant on the right, the above sequences are what you would need to use to get the bits in the right order.
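As a small illustrative sketch (the variable names are mine, not from the thread), both forms select the same bit once the position is mirrored:

byte currentByte = 0x41; // 'A' = 01000001
for (int pos = 0; pos < 8; pos++)
{
    bool fromLeft  = (currentByte & (128 >> pos)) != 0;     // walk a fixed 10000000 mask to the right
    bool fromRight = (currentByte & (1 << (7 - pos))) != 0; // build the same mask by shifting 1 left
    Console.Write(fromLeft ? 1 : 0);                        // prints 01000001; fromRight gives the same bits
}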
I am working with a proprietary binary messaging protocol, where in one message a single byte is used to store 3 different values, like so:
Bit
7      IsArray (1 bit)
6 - 3  ArrayLength (4 bits, MSB at bit 6, max size 2^4 = 16)
2 - 0  DataType (3 bits, MSB at bit 2, max size 2^3 = 8)
I want to extract these three values and store them in three different properties in an object: bool IsArray, byte ArrayLength and byte DataType. I also need to go back from these three properties to a single byte.
I seldom work at this level, and things got a bit messy when I went beyond setting or getting the single bit for IsArray, to trying to set several at once. I created three different masks that I thought would help me:
var IsArrayMask = 0x80; // 1000 0000
var ArrayLengthMask = 0x78; // 0111 1000
var DataTypeMask = 0x07; // 0000 0111
Is there an elegant way to achieve what I'm going for?
Edit: With some help from #stefankmitph, I discovered I had my shifting all messed up. This is how I go from 3 properties to a single byte now:
bool IsArray = true;
byte ArrayLength = 6;
byte DataType = 3;
byte serialized = 0x00; // Should end up as 1011 0011 / 0xB3
serialized |= (byte)((IsArray ? 1 : 0) << 7 & IsArrayMask);
serialized |= (byte)(ArrayLength << 3 & ArrayLengthMask);
serialized |= (byte)(DataType & DataTypeMask);
And back again, as per the answer below:
bool isArray = (serialized & IsArrayMask) == IsArrayMask;
int arrayLength = (serialized & ArrayLengthMask) >> 3;
int dataType = (serialized & DataTypeMask);
int val = 0xBE; // e.g. 1011 1110
var IsArrayMask = 0x80; // 1000 0000
var ArrayLengthMask = 0x78; // 0111 1000
var DataTypeMask = 0x07; // 0000 0111
bool isArray = ((val & IsArrayMask) >> 7) == 1; // output: true
// as pointed out by #PeterSchneider & #knittl
// you can get isArray in a probably more elegant way:
isArray = (val & IsArrayMask) == IsArrayMask;
// I keep both ways in my answer, because I think
// the first one illustrates that you have to shift by 7 to get 1 (true) or 0 (false)
int arrayLength = (val & ArrayLengthMask) >> 3; // output: 7
int dataType = (val & DataTypeMask); // output: 6
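Putting the two directions together, a hypothetical helper pair (the names and signatures are mine, purely illustrative) might look like this:

static byte Pack(bool isArray, byte arrayLength, byte dataType)
{
    // Bit 7: IsArray; bits 6-3: ArrayLength; bits 2-0: DataType.
    return (byte)(((isArray ? 1 : 0) << 7) | ((arrayLength & 0x0F) << 3) | (dataType & 0x07));
}

static (bool IsArray, byte ArrayLength, byte DataType) Unpack(byte value)
{
    return ((value & 0x80) == 0x80, (byte)((value & 0x78) >> 3), (byte)(value & 0x07));
}

// Pack(true, 6, 3) == 0xB3, and Unpack(0xB3) returns the same three values.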
How can I efficiently get n bits, starting at startPos, from a UInt64 number?
I know how to get it bit by bit, but I want to do it in a more efficient way.
public static ulong GetBits(ulong value, int startPos)
{
    // This only gets a single bit; 1UL keeps the shift valid for positions above 31.
    ulong mask = 1UL << startPos;
    ulong masked_n = value & mask;
    ulong thebit = masked_n >> startPos;
    return thebit;
}
// assuming bit numbers start with 0, and that
// startPos is the position of the desired
// least-significant (lowest numbered) bit
public static ulong GetBits( ulong value, int startPos, int bits )
{
    ulong mask = ( ( 1UL << bits ) - 1 ) << startPos;
    return ( value & mask ) >> startPos;
}
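For example, with some sample values of my own (not from the question):

ulong middleBits = GetBits(0xABCD, 4, 8); // 0xBC: the eight bits starting at bit 4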
Ok - so let's say (for sanity's sake, let's talk 8 bits) you have:
10101010
And you want 3 (m) bits starting at bit 2 (n). You'll need a mask like this:
source:  10101010
mask:    00011100
&result: 00001000
So how do we generate the mask? We start with 1 and shift it left by the number of bits we want (m):
start:      00000001
start << 3: 00001000
Now we need three 1s in our mask, so we simply subtract one from the result of the last step:
00001000 - 1 = 00000111
So we almost have our mask; now we just need to line it up by shifting it left by 2 (n):
00000111 << 2 = 00011100
And we have our answer.
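Those steps as code, in an illustrative sketch (the helper name ExtractBits is mine, not from the answer):

static byte ExtractBits(byte source, int n, int m)
{
    // Build a mask of m ones starting at bit n: ((1 << m) - 1) << n.
    int mask = ((1 << m) - 1) << n;
    // Select the bits and shift them down so the result is the raw value.
    return (byte)((source & mask) >> n);
}

// ExtractBits(0xAA /*10101010*/, 2, 3) == 0b010 (decimal 2): the three bits starting at bit 2.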
How can I merge the first n bits of one byte with the last 8-n bits of another byte?
I know something like the line below for picking 3 bits from the first and 5 from the second (which I have observed in the DES encryption algorithm):
zByte = (xByte & 0xE0) | (yByte & 0x1F);
But I don't know the maths behind why we need to use 0xE0 and 0x1F in this case, so I am trying to understand the details with regard to each bit.
In C#, that would be something like:
int mask = ~((-1) << n);
var result = (x & ~mask) | (y & mask);
i.e. we build a mask that is (for n = 5): 000....0011111, then we combine (&) one operand with that mask, the other operand with the inverse (~) of the mask, and compose them (|).
You could also probably do something more quickly just using shift operations (avoiding a mask completely) - but only if the data can be treated as unsigned (so Java might struggle here).
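Going back to the mask-based form, here is a short byte-oriented sketch (the name MergeBits and its signature are mine, purely illustrative; note that here n counts the bits taken from the first byte, unlike the answer above where n counts the bits taken from the second):

static byte MergeBits(byte x, byte y, int n)
{
    // mask has the low 8 - n bits set; ~mask (within a byte) selects the top n bits.
    int mask = (1 << (8 - n)) - 1;
    return (byte)((x & ~mask) | (y & mask));
}

// MergeBits(x, y, 3) is equivalent to (xByte & 0xE0) | (yByte & 0x1F)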
It just sounds like you don't understand how boolean arithmetic works? If this is your question, it works like this:
0xE0 and 0x1F are hexadecimal representations of numbers. If we convert these numbers to binary they would be:
0xE0 = 11100000
0x1F = 00011111
Additionally, & (and) and | (or) are bitwise logical operators. To understand logical operators, first remember that 1 = true and 0 = false.
The truth table for & is:
0 & 0 = 0
0 & 1 = 0
1 & 0 = 0
1 & 1 = 1
The truth table for | is:
0 | 0 = 0
0 | 1 = 1
1 | 0 = 1
1 | 1 = 1
So let's break down your equation piece by piece. First we evaluate the code in the parentheses. We walk through each number in binary, and for the & operator, if both operands have a 1 in the same bit position we return 1; if either number has a 0, we return 0. After we finish evaluating the operands in the parentheses, we take the 2 resulting numbers and apply the | operator bit by bit: if either number has a 1 in the same bit position we return 1, and if both numbers have a 0 in the same bit position we return 0.
For the sake of discussion, let's say that
xByte = 255 or (FF in hex and 11111111 in binary)
yByte = 0 or (00 in hex and 00000000 in binary)
When you apply the & and | operators we are going to compare each bit one at a time:
zByte = (xByte & 0xE0) | (yByte & 0x1F)
becomes:
zByte = (11111111 & 11100000) | (00000000 & 00011111)
zByte = 11100000 | 00000000
zByte = 11100000
If you understand this and how boolean logic works then you can use Marc Gravell's answer.
The math behind those numbers (0xE0 and 0x1F) is quite simple. First we are exploiting the fact that 0 & <bit> always equals 0 and 1 & <bit> always equals <bit>.
0x1F is 00011111 in binary, which means that the first 3 bits will always be 0 after an & operation with another byte, and the last 5 bits will be the same as they were in the other byte. Remember that every 1 in a binary number represents a power of 2, so if you want to find the mask mathematically it is the sum of 2^x for x = 0 to n-1, which equals 2^n - 1. To find the opposite mask (the one that is 11100000) to extract the first 3 bits, you simply subtract the mask from 11111111, and you get 11100000 (0xE0).
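A quick check of that arithmetic in C# (illustrative values only, not from the answer):

int n = 5;
int lowMask  = (1 << n) - 1;   // sum of 2^x for x = 0..4 = 2^5 - 1 = 31 = 0x1F (00011111)
int highMask = 0xFF - lowMask; // 255 - 31 = 224 = 0xE0 (11100000)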
In Java,
by using the following function we can get the first n bits of the first byte followed by the last 8-n bits of the second byte.
public class BitExample {
    public static void main(String[] args) {
        byte a = 15;
        byte b = 16;
        String mergedValue = merge(4, a, b);
        System.out.println(mergedValue);
    }

    // Returns the first n bits of a followed by the last 8-n bits of b,
    // as an 8-character binary string.
    public static String merge(int n, byte a, byte b) {
        String sa = Integer.toBinaryString(a & 0xFF);
        String sb = Integer.toBinaryString(b & 0xFF);
        // Pad both strings with leading zeros to a full 8 bits.
        while (sa.length() < 8) {
            sa = "0" + sa;
        }
        while (sb.length() < 8) {
            sb = "0" + sb;
        }
        return sa.substring(0, n) + sb.substring(n);
    }
}
I have to get values from a byte that stores three parts as a bit combination.
The bit combination is as follows:
| - - | - - - | - - - |
The first portion contains 2 bits.
The second portion contains 3 bits.
The third portion contains 3 bits.
A sample value is:
11010001 = 209 decimal
What I want is to create three different properties which give me the decimal value of each of the three portions of the byte, as defined above.
How can I get the bit values from this decimal number, and then get the decimal value back from the respective bits?
Just use shifting and masking. Assuming that the two-bit value is in the high bits of the byte:
int value1 = (value >> 6) & 3; // 3 = binary 11
int value2 = (value >> 3) & 7; // 7 = binary 111
int value3 = (value >> 0) & 7;
The final line doesn't have to use the shift operator of course - shifting by 0 bits does nothing. I think it adds to the consistency though.
For your sample value, that would give value1 = 3, value2 = 2, value3 = 1.
Reversing:
byte value = (byte) ((value1 << 6) | (value2 << 3) | (value3 << 0));
You can extract the different parts using bit-masks, like this:
int part1 = (b >> 6) & 0x3;  // first portion: the two highest bits
int part2 = (b >> 3) & 0x7;  // second portion: the middle three bits
int part3 = b & 0x7;         // third portion: the three lowest bits
This shifts each part into the least-significant-bits, and then uses binary and to mask all other bits away.
And I assume you don't want the decimal value of these bits, but an int containing their value. An integer is still represented as a binary number internally. Representing the int in base 10/decimal only happens once you convert to string.
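Since the question asks for properties, here is a minimal sketch (the class and property names are mine, not from the answers) that wraps the same shifting and masking behind three properties backed by a single byte:

class PackedByte
{
    public byte Value { get; set; }

    // First portion: the two highest bits (7-6).
    public int Part1
    {
        get { return (Value >> 6) & 0x3; }
        set { Value = (byte)((Value & 0x3F) | ((value & 0x3) << 6)); }
    }

    // Second portion: the middle three bits (5-3).
    public int Part2
    {
        get { return (Value >> 3) & 0x7; }
        set { Value = (byte)((Value & 0xC7) | ((value & 0x7) << 3)); }
    }

    // Third portion: the lowest three bits (2-0).
    public int Part3
    {
        get { return Value & 0x7; }
        set { Value = (byte)((Value & 0xF8) | (value & 0x7)); }
    }
}

// new PackedByte { Value = 0xD1 /* 11010001 */ } gives Part1 = 3, Part2 = 2, Part3 = 1.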