How to clear a bit in a byte? - C#

I need to set and clear some bits in the bytes of a byte array.
Setting works fine, but clearing doesn't compile (I guess because the negation produces an int, which is negative by then, and ...):
public const byte BIT_1 = 0x01;
public const byte BIT_2 = 0x02;
byte[] bytes = new byte[] {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15};
// set 1st bit
for (int i = 0; i < bytes.Length; i++)
    bytes[i] |= BIT_1;
// clear 2nd bit (set it to 0)
for (int i = 0; i < bytes.Length; i++)
    bytes[i] &= (~BIT_2); // does not compile: ~BIT_2 is an int (-3), which cannot be implicitly converted back to byte
How can I clear bits in a byte, preferably in a simple way?

Try this:
int i = 0;
// Set 5th bit
i |= 1 << 5;
Console.WriteLine(i); // 32
// Clear 5th bit
i &= ~(1 << 5);
Console.WriteLine(i); // 0
A version on bytes (this compiles only because the right-hand operands are constant expressions, so the compound assignments can implicitly narrow the result to byte):
byte b = 0;
// Set 5th bit
b |= 1 << 5;
Console.WriteLine(b);
// Clear 5th bit
b &= byte.MaxValue ^ (1 << 5);
Console.WriteLine(b);

It appears that you can't do this without converting the byte to an int first.
byte ClearBit(byte b, int i)
{
    int x = Convert.ToInt32(b);
    x &= ~(1 << i);
    return Convert.ToByte(x);
}

The compile error can be avoided by converting to an int and back again:
bytes[i] = (byte)(bytes[i] & ~(BIT_2));

Bitwise operations are not defined on bytes; using them on bytes will always return an int. So when you use the bitwise NOT on your byte, it returns an int, and you get an error when you try to AND it with your other byte.
To solve this, you need to explicitly cast the result back to a byte:
byte bit = 0x02;
byte x = 0x0;
// set
x |= bit;
Console.WriteLine(x); // 2
// clear
x &= (byte)~bit;
Console.WriteLine(x); // 0
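If you set and clear bits in several places, it may be convenient to wrap the logic in small helpers. A minimal sketch (the SetBit/ClearBit helper names are just for illustration):
static byte SetBit(byte b, int bit) => (byte)(b | (1 << bit));
static byte ClearBit(byte b, int bit) => (byte)(b & ~(1 << bit));
// usage on the array from the question:
// bytes[i] = SetBit(bytes[i], 0);   // set bit 0 (BIT_1)
// bytes[i] = ClearBit(bytes[i], 1); // clear bit 1 (BIT_2)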

Related

How to perform bit copy in C# or byte array left shift operation

I want to store information in concatenated fashion in a byte array (2 bits + 6 bytes + 14 bits), but I have no idea how to do it. With Buffer.BlockCopy I can work with bytes, not bits. Moreover, the 6-byte data is already part of a data structure and referenced everywhere, so I don't wish to change that variable. I was thinking of declaring a byte, then building a binary string by cutting and concatenating, but I think that would be a very bad implementation. There must be a better and more efficient way of doing this.
Here is the C# code that I am currently using, but I think there must be a better and more efficient way to do it:
byte[] ARC = new byte[7]{0x00,0xB1,0x1C,0x3D,0x4C,0x1A,0xEF};
RotateLeft(ARC);
RotateLeft(ARC);
RotateLeft(ARC);
RotateLeft(ARC);
RotateLeft(ARC);
RotateLeft(ARC);
//temp = Convert.ToString(ARC[1], 2).PadLeft(8,'0');
//MessageBox.Show(temp.ToString());
//MessageBox.Show(ByteArrayToString(ARC));
ARC[0] |= 1 << 6;
ARC[0] |= 1 << 7;
byte SB = 0x02;
string Hex = Convert.ToString(SB, 2);
Hex = Hex.Substring(0, 2);
Hex += "000000";
byte SB_processed = Convert.ToByte(Hex,2);
SB_processed |= 1 << 0;
SB_processed |= 1 << 1;
SB_processed |= 1 << 2;
SB_processed |= 1 << 3;
SB_processed |= 1 << 4;
SB_processed |= 1 << 5;
ARC[0] &= SB_processed;
MessageBox.Show(ByteArrayToString(ARC));
The logic for left-shifting was taken from this reference:
c# - left shift an entire byte array
I can't quite understand your question, but here is how to concatenate bits:
byte bits2 = 0b_000000_10;
byte bits6 = 0b_00_101100;
ushort bits14 = 0b_00_10110010001110;
uint concatenated = ((uint)bits2 << 20) + ((uint)bits6 << 14) + (uint)bits14;
uint test = 0b_00_10_101100_10110010001110;
if (concatenated == test)
{
Console.WriteLine("Ok");
}
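If the concatenated value then needs to land in a byte array, one option is to peel it off a byte at a time. A hedged sketch, assuming the 22 bits should be left-aligned in 3 bytes (the question doesn't pin down a layout):
uint packed = concatenated << 2; // left-align 22 bits within 24
byte[] outBytes =
{
    (byte)(packed >> 16), // most significant byte first
    (byte)(packed >> 8),
    (byte)packed
};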

Split a 32-bit integer into 4 parts

I have a byte array of size 16384 (SourceArray). I want to split it into 4 parts: four arrays of size 4096. Each 32 bits from SourceArray will be divided into 10 bits, 10 bits, 11 bits and 1 bit (10+10+11+1 = 32 bits), taking the first 10 bits, then the next 10 bits, then 11 bits, and then 1 bit.
For example, say the first 4 bytes from SourceArray give the value 9856325. Now 9856325 (dec) = 00000000100101100110010101000101 (binary), which splits into 0000000010 (10 bits), 0101100110 (10 bits), 01010100010 (11 bits), 1 (1 bit). I cannot store 10, 10 or 11 bits in a byte array, so I'm storing those in Int16 arrays; the 1 bit can go in a byte array.
I hope the problem statement is clear. I've created some logic:
Int16[] Array1 = new Int16[4096];
Int16[] Array2 = new Int16[4096];
Int16[] Array3 = new Int16[4096];
byte[] Array4 = new byte[4096];
byte[] SourceArray = File.ReadAllBytes(@"Reading an image file from directory");
//Size of source Array = 16384
for (int i = 0; i < 4096; i++)
{
    uint temp = BitConverter.ToUInt32(SourceArray, i * 4);
    uint t = temp;
    Array1[i] = (Int16)(t >> 22);
    t = temp << 10;
    Array2[i] = (Int16)(t >> 22);
    t = temp << 20;
    Array3[i] = (Int16)(t >> 21);
    t = temp << 31;
    Array4[i] = (byte)(t >> 31);
}
But I'm not getting the expected results; I might be missing something in my logic. Please point out what's wrong, or if you have better logic, please suggest it.
In my opinion it is much easier to process the bit groups starting from the last (least significant) bit:
uint t = BitConverter.ToUInt32(SourceArray, i * 4);
Array4[i] = (byte) (t & 0x1);
t >>= 1;
Array3[i] = (Int16) (t & 0x7ff); // 0x7ff - mask for 11 bits
t >>= 11;
Array2[i] = (Int16) (t & 0x3ff); // 0x3ff - mask for 10 bits
t >>= 10;
Array1[i] = (Int16) t; // we don't need mask here
Or, with the bit groups assigned in reverse order:
uint t = BitConverter.ToUInt32(SourceArray, i * 4);
Array1[i] = (Int16) (t & 0x3ff); // 0x3ff - mask for 10 bits
t >>= 10;
Array2[i] = (Int16) (t & 0x3ff); // 0x3ff - mask for 10 bits
t >>= 10;
Array3[i] = (Int16) (t & 0x7ff); // 0x7ff - mask for 11 bits
t >>= 11;
Array4[i] = (byte) (t & 0x1); // we don't need mask here
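If the original attempt's results look scrambled, one hypothesis is byte order: BitConverter.ToUInt32 reads the 4 bytes using the machine's endianness (little-endian on typical hardware), while image data is often stored big-endian. A hedged sketch that assembles each 32-bit word explicitly, so it is independent of machine endianness:
for (int i = 0; i < 4096; i++)
{
    // big-endian assembly: first byte of the group is the most significant
    uint t = ((uint)SourceArray[i * 4] << 24)
           | ((uint)SourceArray[i * 4 + 1] << 16)
           | ((uint)SourceArray[i * 4 + 2] << 8)
           | (uint)SourceArray[i * 4 + 3];
    Array1[i] = (Int16)(t >> 22);           // first 10 bits
    Array2[i] = (Int16)((t >> 12) & 0x3ff); // next 10 bits
    Array3[i] = (Int16)((t >> 1) & 0x7ff);  // next 11 bits
    Array4[i] = (byte)(t & 0x1);            // last bit
}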

Convert a signed int to two unsigned shorts for the purpose of reconstruction

I am currently using BitConverter to package two unsigned shorts inside a signed int. This code executes millions of times for different values, and I am thinking it could be optimized further. Here is what I am currently doing -- you can assume the code is C#/.NET.
// convert one signed int to two unsigned shorts:
int xy = 343423;
byte[] bytes = BitConverter.GetBytes(xy);
ushort m_X = BitConverter.ToUInt16(bytes, 0);
ushort m_Y = BitConverter.ToUInt16(bytes, 2);
// convert two unsigned shorts to one signed int
byte[] xBytes = BitConverter.GetBytes(m_X);
byte[] yBytes = BitConverter.GetBytes(m_Y);
byte[] bytes = new byte[] {
xBytes[0],
xBytes[1],
yBytes[0],
yBytes[1],
};
return BitConverter.ToInt32(bytes, 0);
So it occurs to me that I can avoid the overhead of constructing arrays if I bitshift. But for the life of me I can't figure out what the correct shift operation is. My first pathetic attempt involved the following code:
int xy = 343423;
const int mask = 0x00000000;
byte b1, b2, b3, b4;
b1 = (byte)((xy >> 24));
b2 = (byte)((xy >> 16));
b3 = (byte)((xy >> 8) & mask);
b4 = (byte)(xy & mask);
ushort m_X = (ushort)((xy << b4) | (xy << b3));
ushort m_Y = (ushort)((xy << b2) | (xy << b1));
Could someone help me? I am thinking I need to mask the upper and lower bytes before shifting. Some of the examples I see include subtraction with type.MaxValue or an arbitrary number, like negative twelve, which is pretty confusing.
** Update **
Thank you for the great answers. Here are the results of a benchmark test:
// 34ms for bit shift with 10M operations
// 959ms for BitConverter with 10M operations
static void Main(string[] args)
{
    Stopwatch stopWatch = new Stopwatch();
    stopWatch.Start();
    for (int i = 0; i < 10000000; i++)
    {
        ushort x = (ushort)i;
        ushort y = (ushort)(i >> 16);
        int result = (y << 16) | x;
    }
    stopWatch.Stop();
    Console.WriteLine((int)stopWatch.Elapsed.TotalMilliseconds + "ms");
    stopWatch.Restart(); // reset, so the second timing doesn't include the first
    for (int i = 0; i < 10000000; i++)
    {
        byte[] bytes = BitConverter.GetBytes(i);
        ushort x = BitConverter.ToUInt16(bytes, 0);
        ushort y = BitConverter.ToUInt16(bytes, 2);
        byte[] xBytes = BitConverter.GetBytes(x);
        byte[] yBytes = BitConverter.GetBytes(y);
        bytes = new byte[] {
            xBytes[0],
            xBytes[1],
            yBytes[0],
            yBytes[1],
        };
        int result = BitConverter.ToInt32(bytes, 0);
    }
    stopWatch.Stop();
    Console.WriteLine((int)stopWatch.Elapsed.TotalMilliseconds + "ms");
    Console.ReadKey();
}
The simplest way is to do it using two shifts:
int xy = -123456;
// Split...
ushort m_X = (ushort) xy;
ushort m_Y = (ushort)(xy>>16);
// Convert back...
int back = (m_Y << 16) | m_X;
int xy = 343423;
ushort low = (ushort)(xy & 0x0000ffff);
ushort high = (ushort)((xy & 0xffff0000) >> 16);
int xxyy = low + (((int)high) << 16);
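Either variant round-trips correctly, including for negative values; a quick sketch to verify:
int xy = -123456;
ushort x = (ushort)xy;          // low 16 bits
ushort y = (ushort)(xy >> 16);  // high 16 bits
int back = (y << 16) | x;
Console.WriteLine(back == xy);  // True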

Converting 3 bytes into signed integer in C#

I'm trying to convert 3 bytes to a signed integer (big-endian) in C#.
I've tried to use the BitConverter.ToInt32 method, but my problem is what value the last (fourth) byte should have.
Can anybody suggest how I can do it a different way?
I also need to convert 5 (or 6 or 7) bytes to a signed long; is there any general rule for how to do it?
Thanks in advance for any help.
As a last resort you could always shift+add yourself:
byte b1, b2, b3;
int r = b1 << 16 | b2 << 8 | b3;
Just swap b1/b2/b3 until you have the desired result.
On second thought, this will never produce negative values.
What result do you want when the msb >= 0x80 ?
Part 2, brute force sign extension:
private static int Bytes2Int(byte b1, byte b2, byte b3)
{
    int r = 0;
    byte b0 = 0xff;
    if ((b1 & 0x80) != 0) r |= b0 << 24; // sign-extend when the msb of b1 is set
    r |= b1 << 16;
    r |= b2 << 8;
    r |= b3;
    return r;
}
I've tested this with:
byte[] bytes = BitConverter.GetBytes(p);
int r = Bytes2Int(bytes[2], bytes[1], bytes[0]);
Console.WriteLine("{0} == {1}", p, r);
for several p.
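For the 5-, 6- or 7-byte cases the question also asks about, the same idea generalizes: accumulate the bytes with shifts, then sign-extend by shifting the value up to the top of the 64-bit word and back down (a right shift on long is arithmetic, so it fills with the sign bit). A minimal sketch, assuming big-endian input as in the question:
static long ToSignedBigEndian(byte[] bytes) // 1 to 8 bytes
{
    long r = 0;
    foreach (byte b in bytes)
        r = (r << 8) | b;
    int unused = 64 - bytes.Length * 8;
    return (r << unused) >> unused; // arithmetic shift sign-extends
}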
The last (most significant) byte should be 0 for a positive number and 255 (0xFF) for a negative one.
To know what you should pass in, you can try converting it the other way:
var bytes = BitConverter.GetBytes(i);
int x = BitConverter.ToInt32(bytes, 0);
To add to the existing answers here, there's a bit of a gotcha in that BitConverter.ToInt32() will throw an ArgumentException if the array is less than sizeof(int) (4) bytes in size:
Destination array is not long enough to copy all the items in the collection. Check array index and length.
Given an array less than sizeof(int) (4) bytes in size, you can compensate with left/right padding like so:
Right-pad
Results in positive Int32 numbers
int intByteSize = sizeof(int);
byte[] padded = new byte[intByteSize];
Array.Copy(sourceBytes, 0, padded, 0, sourceBytes.Length);
sourceBytes = padded;
Left-pad
Results in negative Int32 numbers, assuming the most significant bit is set in the byte at index sourceBytes.Length - 1.
int intByteSize = sizeof(int);
byte[] padded = new byte[intByteSize];
Array.Copy(sourceBytes, 0, padded, intByteSize - sourceBytes.Length, sourceBytes.Length);
sourceBytes = padded;
Once padded, you can safely call int myValue = BitConverter.ToInt32(sourceBytes, 0);.
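For example, converting a 3-byte little-endian value with right-padding (a sketch; the sample bytes are hypothetical, and it assumes a little-endian machine, which is what BitConverter reads on typical hardware):
byte[] sourceBytes = { 0x45, 0x65, 0x96 }; // 0x966545 = 9856325, low byte first
byte[] padded = new byte[sizeof(int)];
Array.Copy(sourceBytes, 0, padded, 0, sourceBytes.Length);
int myValue = BitConverter.ToInt32(padded, 0); // 9856325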

Converting a range into a bit array

I'm writing a time-critical piece of code in C# that requires me to convert two unsigned integers that define an inclusive range into a bit field. Ex:
uint x1 = 3;
uint x2 = 9;
// defines the range [3-9]
// must be converted to:
//        98 7654 3      (bit positions)
// 0000 0011 1111 1000
(It may help to visualize the bit positions in reverse order, since bit 0 is the rightmost.)
The maximum value for this range is a parameter given at run-time, which we'll call max_val. Therefore, the bit field variable ought to be defined as a UInt32 array with max_val/32 elements (rounded up):
UInt32 MAX_DIV_32 = max_val / 32;
UInt32[] bitArray = new UInt32[MAX_DIV_32];
Given a range defined by the variables x1 and x2, what is the fastest way to perform this conversion?
Try this: calculate the range of array items that must be filled with all ones, fill them by iterating over that range, and finally fix up the items at both borders.
Int32 startIndex = (Int32)(x1 >> 5);
Int32 endIndex = (Int32)(x2 >> 5);
bitArray[startIndex] = UInt32.MaxValue << (Int32)(x1 & 31);
for (Int32 i = startIndex + 1; i <= endIndex; i++)
{
    bitArray[i] = UInt32.MaxValue;
}
bitArray[endIndex] &= UInt32.MaxValue >> (Int32)(31 - (x2 & 31));
Maybe the code is not 100% correct, but the idea should work. I just tested it and found three bugs: the calculation at the start index required a mod 32; at the end index, the 32 had to be 31; and it needed a logical AND instead of an assignment to handle the case of the start and end index being the same. The code above already includes these fixes, and it should be quite fast.
Just benchmarked it with x1 and x2 equally distributed over the array, on an Intel Core 2 Duo E8400 3.0 GHz (MS VirtualPC with Server 2003 R2 on a Windows XP host):

Array length [bits]         320          160          64
Performance [executions/s]  33 million   43 million   54 million

One more optimization: x % 32 == x & 31, but I am unable to measure a performance gain; with only 10,000,000 iterations in my test, the fluctuations are quite high. And I am running in VirtualPC, making the situation even more unpredictable.
My solution for setting a whole range of bits in a BitArray to true or false:
public static BitArray SetRange(BitArray bitArray, Int32 offset, Int32 length, Boolean value)
{
    Int32[] ints = new Int32[(bitArray.Count >> 5) + 1];
    bitArray.CopyTo(ints, 0);
    var firstInt = offset >> 5;
    var lastInt = (offset + length) >> 5;
    Int32 mask = 0;
    if (value)
    {
        // set first and last int
        mask = (-1 << (offset & 31));
        if (lastInt != firstInt)
            ints[lastInt] |= ~(-1 << ((offset + length) & 31));
        else
            mask &= ~(-1 << ((offset + length) & 31));
        ints[firstInt] |= mask;
        // set all ints in between
        for (Int32 i = firstInt + 1; i < lastInt; i++)
            ints[i] = -1;
    }
    else
    {
        // set first and last int
        mask = ~(-1 << (offset & 31));
        if (lastInt != firstInt)
            ints[lastInt] &= -1 << ((offset + length) & 31);
        else
            mask |= -1 << ((offset + length) & 31);
        ints[firstInt] &= mask;
        // set all ints in between
        for (Int32 i = firstInt + 1; i < lastInt; i++)
            ints[i] = 0;
    }
    return new BitArray(ints) { Length = bitArray.Length };
}
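Usage might look like this (a sketch; BitArray is System.Collections.BitArray):
BitArray bits = new BitArray(320);
bits = SetRange(bits, 3, 7, true); // sets 7 bits starting at offset 3, i.e. bits [3-9]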
You could try:
UInt32 x1 = 3;
UInt32 x2 = 9;
UInt32 newInteger = (UInt32)(Math.Pow(2, x2 + 1) - 1) &
~(UInt32)(Math.Pow(2, x1)-1);
Is there a reason not to use the System.Collections.BitArray class instead of a UInt32[]? Otherwise, I'd try something like this:
int minIndex = (int)x1 / 32;
int maxIndex = (int)x2 / 32;
// first handle the all zero regions and the all one region (if any)
for (int i = 0; i < minIndex; i++) {
    bitArray[i] = 0;
}
for (int i = minIndex + 1; i < maxIndex; i++) {
    bitArray[i] = UInt32.MaxValue; // set to all 1s
}
for (int i = maxIndex + 1; i < MAX_DIV_32; i++) {
    bitArray[i] = 0;
}
// now handle the tricky parts
uint maxBits = (2u << ((int)x2 - 32 * maxIndex)) - 1; // set to 1s up to max
uint minBits = ~((1u << ((int)x1 - 32 * minIndex)) - 1); // set to 1s after min
if (minIndex == maxIndex) {
    bitArray[minIndex] = maxBits & minBits;
}
else {
    bitArray[minIndex] = minBits;
    bitArray[maxIndex] = maxBits;
}
I was bored enough to try doing it with a char array and using Convert.ToUInt32(string, int) to convert to a uint from base 2.
uint Range(int l, int h)
{
    char[] buffer = new char[h];
    for (int i = 0; i < buffer.Length; i++)
    {
        buffer[i] = i < h - l ? '1' : '0';
    }
    return Convert.ToUInt32(new string(buffer), 2);
}
A simple benchmark shows that my method is about 5% faster than Angrey Jim's (even if you replace the second Pow with a bit shift).
It is probably easiest to convert this to produce a uint array if the upper bound is too big to fit into a single uint. It's a little cryptic, but I believe it works:
uint[] Range(int l, int h)
{
    char[] buffer = new char[h];
    for (int i = 0; i < buffer.Length; i++)
    {
        buffer[i] = i < h - l ? '1' : '0';
    }
    int bitsInUInt = sizeof(uint) * 8;
    int numNeededUInts = (int)Math.Ceiling((decimal)buffer.Length /
                                           (decimal)bitsInUInt);
    uint[] uints = new uint[numNeededUInts];
    for (int j = uints.Length - 1, s = buffer.Length - bitsInUInt;
         j >= 0 && s >= 0;
         j--, s -= bitsInUInt)
    {
        uints[j] = Convert.ToUInt32(new string(buffer, s, bitsInUInt), 2);
    }
    int remainder = buffer.Length % bitsInUInt;
    if (remainder > 0)
    {
        uints[0] = Convert.ToUInt32(new string(buffer, 0, remainder), 2);
    }
    return uints;
}
Try this:
uint x1 = 3;
uint x2 = 9;
int cbToShift = (int)(x2 - x1 + 1); // 7 bits in the inclusive range [3-9]
int nResult = ((1 << cbToShift) - 1) << (int)x1;
/*
(1<<7)-1 gives you 127 = 1111111, then you shift it 3 bits left
*/
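To sanity-check the mask against the expected output from the question (a quick sketch):
Console.WriteLine(Convert.ToString(nResult, 2)); // 1111111000, i.e. bits 3-9 set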
