I have a byte array of size 16384 (SourceArray). I want to split it into 4 parts: 4 arrays of 4096 elements each. Each 32 bits of SourceArray will be divided into 10 bits, 10 bits, 11 bits and 1 bit (10+10+11+1 = 32 bits): the first 10 bits, then the next 10 bits, then 11 bits and then 1 bit.
Let's say, for example, the first 4 bytes of SourceArray give the value 9856325. Now 9856325 (dec) = 00000000100101100110010101000101 (binary), which splits into
0000000010 (10 bits), 0101100110 (10 bits), 01010100010 (11 bits), 1 (1 bit). I cannot store 10, 10 or 11 bits in a byte array, so I'm storing those in Int16 arrays; the 1 bit I can store in a byte array.
I hope the problem statement is clear. I've written some logic:
Int16[] Array1 = new Int16[4096];
Int16[] Array2 = new Int16[4096];
Int16[] Array3 = new Int16[4096];
byte[] Array4 = new byte[4096];
byte[] SourceArray = File.ReadAllBytes(@"Reading an image file from directory");
// Size of SourceArray = 16384
for (int i = 0; i < 4096; i++)
{
    uint temp = BitConverter.ToUInt32(SourceArray, i * 4);
    uint t = temp;
    Array1[i] = (Int16)(t >> 22);
    t = temp << 10;
    Array2[i] = (Int16)(t >> 22);
    t = temp << 20;
    Array3[i] = (Int16)(t >> 21);
    t = temp << 31;
    Array4[i] = (byte)(t >> 31);
}
But I'm not getting the expected results. I might be missing something in my logic. Please verify what's wrong, or if you have some better logic, please suggest it.
In my opinion it's much easier to process the bits starting from the last one (the least significant bit):
uint t = BitConverter.ToUInt32(SourceArray, i * 4);
Array4[i] = (byte) (t & 0x1);
t >>= 1;
Array3[i] = (Int16) (t & 0x7ff); // 0x7ff - mask for 11 bits
t >>= 11;
Array2[i] = (Int16) (t & 0x3ff); // 0x3ff - mask for 10 bits
t >>= 10;
Array1[i] = (Int16) t; // we don't need mask here
Or in reverse order of bit groups:
uint t = BitConverter.ToUInt32(SourceArray, i * 4);
Array1[i] = (Int16) (t & 0x3ff); // 0x3ff - mask for 10 bits
t >>= 10;
Array2[i] = (Int16) (t & 0x3ff); // 0x3ff - mask for 10 bits
t >>= 10;
Array3[i] = (Int16) (t & 0x7ff); // 0x7ff - mask for 11 bits
t >>= 11;
Array4[i] = (byte) (t & 0x1); // we don't need mask here
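As a quick sanity check (a small sketch added here, not part of the original answer), applying the first variant above to the sample value 9856325 from the question reproduces the breakdown given there (2, 358, 674 and 1 in decimal):
uint sample = 9856325;                        // 0000000010 0101100110 01010100010 1
byte g4 = (byte)(sample & 0x1);               // 1
Int16 g3 = (Int16)((sample >> 1) & 0x7ff);    // 674 = 01010100010
Int16 g2 = (Int16)((sample >> 12) & 0x3ff);   // 358 = 0101100110
Int16 g1 = (Int16)(sample >> 22);             // 2   = 0000000010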
I have two bytes of data and I have to extract fields from them in the following manner.
From data[0] I have to extract:
id (5 bits)
Sequence (2 bits)
HashAppData (1 bit)
From data[1] I have to extract:
id (6 bits)
offset (2 bits)
The required function is below; the byte array length is 2 and I have to extract the fields in the above manner.
public static int ParseData(byte[] data)
{
    // All code goes here
}
I couldn't find any suitable solution for how to do this. Can you please show how to extract these fields?
EDIT: The extracted fragment datatype should be int.
Something like this?
int id = (data[0] >> 3) & 31;
int sequence = (data[0] >> 1) & 3;
int hashAppData = data[0] & 1;
int id2 = (data[1] >> 2) & 63;
int offset = data[1] & 3;
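A quick usage sketch (added here; the sample byte values are made up, not from the question) showing what these expressions yield:
byte[] data = { 0b1001_1011, 0b1101_0110 }; // arbitrary sample values
int id = (data[0] >> 3) & 31;          // 10011  -> 19
int sequence = (data[0] >> 1) & 3;     // 01     -> 1
int hashAppData = data[0] & 1;         // 1      -> 1
int id2 = (data[1] >> 2) & 63;         // 110101 -> 53
int offset = data[1] & 3;              // 10     -> 2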
This is how I'd do it for the first byte:
byte value = 155;
byte maskForHighest5 = 128+64+32+16+8;
byte maskForNext2 = 4+2;
byte maskForLast = 1;
byte result1 = (byte)((value & maskForHighest5) >> 3); // shift right 3 bits
byte result2 = (byte)((value & maskForNext2) >> 1); // shift right 1 bit
byte result3 = (byte)(value & maskForLast);
Working demo (.NET Fiddle):
https://dotnetfiddle.net/lNZ9TR
Code for the 2nd byte will be very similar.
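For completeness, a hedged sketch of what that similar code for the second byte might look like (not from the original answer; the sample value is arbitrary):
byte value2 = 214;                                   // arbitrary sample value
byte maskForHighest6 = 128 + 64 + 32 + 16 + 8 + 4;
byte maskForLast2 = 2 + 1;
byte id2 = (byte)((value2 & maskForHighest6) >> 2);  // shift right 2 bits -> 53
byte offset2 = (byte)(value2 & maskForLast2);        // -> 2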
If you're uncomfortable with bit manipulation, use an extension method to keep the intent of ParseData clear. This extension can be adapted for other integers by replacing both uses of byte with the necessary type.
public static int GetBitValue(this byte b, int offset, int length)
{
    const int ByteWidth = sizeof(byte) * 8;
    // System.Diagnostics validation - excluded in release builds
    Debug.Assert(offset >= 0);
    Debug.Assert(offset < ByteWidth);
    Debug.Assert(length > 0);
    Debug.Assert(length <= ByteWidth);
    Debug.Assert(offset + length <= ByteWidth);
    var shift = ByteWidth - offset - length;
    var mask = (1 << length) - 1;
    return (b >> shift) & mask;
}
Usage in this case:
public static int ParseData(byte[] data)
{
    { // data[0]
        var id = data[0].GetBitValue(0, 5);
        var sequence = data[0].GetBitValue(5, 2);
        var hashAppData = data[0].GetBitValue(7, 1);
    }
    { // data[1]
        var id = data[1].GetBitValue(0, 6);
        var offset = data[1].GetBitValue(6, 2);
    }
    // ... return necessary data
}
I use a BigInteger class (from an external source), and I want to randomly generate a BigInteger number between two values min and max, so I used this method found on Stack Overflow:
public BigInteger getRandom(int n)
{
    var rng = new RNGCryptoServiceProvider();
    byte[] bytes = new byte[n / 8];
    rng.GetBytes(bytes);
    return new BigInteger(bytes);
}
But I cannot generate numbers between min and max because the parameter of this function represents the number of bits. Can someone help me? Thank you in advance!
min and max are also BigIntegers.
Try this one:
// max exclusive (not included!)
public static BigInteger GetRandom(RNGCryptoServiceProvider rng, BigInteger min, BigInteger max)
{
    // shift to 0...max-min
    BigInteger max2 = max - min;
    int bits = max2.bitCount();
    // 1 bit for the sign (that we will ignore, we only want positive numbers!)
    bits++;
    // we round up to the next byte
    int bytes = (bits + 7) / 8;
    int uselessBits = bytes * 8 - bits;
    var bytes2 = new byte[bytes];
    while (true)
    {
        rng.GetBytes(bytes2);
        // The maximum number of useless bits is 1 (sign) + 7 (rounding) == 8
        if (uselessBits == 8)
        {
            // and it is exactly one byte!
            bytes2[0] = 0;
        }
        else
        {
            // Remove the sign and the useless bits
            for (int i = 0; i < uselessBits; i++)
            {
                // Equivalent to
                // byte bit = (byte)(1 << (7 - (i % 8)));
                byte bit = (byte)(1 << (7 & (~i)));
                // Equivalent to
                // bytes2[i / 8] &= (byte)~bit;
                bytes2[i >> 3] &= (byte)~bit;
            }
        }
        var bi = new BigInteger(bytes2);
        // If it is too big, then retry!
        if (bi >= max2)
        {
            continue;
        }
        // unshift the number
        bi += min;
        return bi;
    }
}
There are some comments in the code that explain a little how it works.
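A hypothetical usage sketch (added here, not part of the original answer; it assumes the custom BigInteger class can be constructed from a plain integer, which may differ in your copy of the class):
var rng = new RNGCryptoServiceProvider();
BigInteger min = new BigInteger(1000);          // assumed integer constructor
BigInteger max = new BigInteger(2000);
BigInteger value = GetRandom(rng, min, max);    // value lies in [1000, 2000)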
I'm having some difficulty reversing the following calculations, which were done before storing data to a device.
ss = enum + 16
u32ts = number << 8
u32timestamp = ss+u32ts
enum and number are the two values I'm trying to get back, but I'm unaware of what either of them is when I start with u32timestamp.
What I have tried to do is
uint temp = u32timestamp;
number = 0;
if (u32timestamp > 100)
{
    number = (u32timestamp >> 8 & 8);
    temp = u32timestamp - number;
}
enum = temp - 16;
But I keep getting incorrect values. Please help me fix this. enum is always between 16 and 21, but number can be any positive value.
// sample a and b
int a = 5, b = 7;
int ss = a + 16;
int u32ts = b << 8;
int u32timestamp = ss + u32ts;
// reversing...
int rev_b = u32timestamp >> 8;
int rev_ss = u32timestamp - (rev_b << 8);
int rev_a = rev_ss - 16;
var a = u32timestamp&0xFF;
var number= u32timestamp >>8;
var en = a-16;
The encoding places "number" shifted eight bits to the left, so a plain right shift by 8 recovers it by discarding the low bits. We then mask off the low byte and subtract 16 to recover en.
public void Decode()
{
    uint u32timestamp = 31778;
    var number = u32timestamp >> 8;           // 31778 >> 8 = 124
    var temp = u32timestamp - (number << 8);  // 31778 - 31744 = 34
    var en = temp - 16;                       // 34 - 16 = 18
}
I need to set and clear some bits in bytes of a byte array.
Setting works fine but clearing doesn't compile (I guess because the negation has int as its result, which is negative by then, and ...).
public const byte BIT_1 = 0x01;
public const byte BIT_2 = 0x02;
byte[] bytes = new byte[] {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15};
// set 1st bit
for (int i = 0; i < bytes.Length; i++)
    bytes[i] |= BIT_1;
// clear 2nd bit (set it to 0)
for (int i = 0; i < bytes.Length; i++)
    bytes[i] &= (~BIT_2);
How can I clear bits in a byte in a preferably simple way?
Try this:
int i = 0;
// Set 5th bit
i |= 1 << 5;
Console.WriteLine(i); // 32
// Clear 5th bit
i &= ~(1 << 5);
Console.WriteLine(i); // 0
Version on bytes (and on bytes only in this case):
byte b = 0;
// Set 5th bit
b |= 1 << 5;
Console.WriteLine(b);
// Clear 5th bit
b &= byte.MaxValue ^ (1 << 5);
Console.WriteLine(b);
It appears that you can't do this without converting the byte to an int first.
byte ClearBit(byte b, int i)
{
    int x = Convert.ToInt32(b);
    x &= ~(1 << i);
    return Convert.ToByte(x);
}
The compile error can be avoided by converting to an int and back again:
bytes[i] = (byte)(bytes[i] & ~(BIT_2));
Bitwise operations are not defined on bytes; using them with bytes will always return an int. So when you use the bitwise NOT on your byte, it returns an int, and you therefore get an error when you try to AND it with your other byte.
To solve this, you need to explicitly cast the result back to a byte.
byte bit = 0x02;
byte x = 0x0;
// set
x |= bit;
Console.WriteLine(x); // 2
// clear
x &= (byte)~bit;
Console.WriteLine(x); // 0
I'm writing a time-critical piece of code in C# that requires me to convert two unsigned integers that define an inclusive range into a bit field. Ex:
uint x1 = 3;
uint x2 = 9;
//defines the range [3-9]
//                              98 7654 3
// must be converted to: 0000 0011 1111 1000
It may help to visualize the bits in reverse order
The maximum value for this range is a parameter given at run-time which we'll call max_val. Therefore, the bit field variable ought to be defined as a UInt32 array with size equal to max_val/32:
UInt32 MAX_DIV_32 = max_val / 32;
UInt32[] bitArray = new UInt32[MAX_DIV_32];
Given a range defined by the variables x1 and x2, what is the fastest way to perform this conversion?
Try this. Calculate the range of array items that must be filled with all ones and do this by iterating over this range. Finally set the items at both borders.
Int32 startIndex = (Int32)(x1 >> 5);
Int32 endIndex = (Int32)(x2 >> 5);
bitArray[startIndex] = UInt32.MaxValue << (Int32)(x1 & 31);
for (Int32 i = startIndex + 1; i <= endIndex; i++)
{
    bitArray[i] = UInt32.MaxValue;
}
bitArray[endIndex] &= UInt32.MaxValue >> (31 - (Int32)(x2 & 31));
Maybe the code is not 100% correct, but the idea should work.
Just tested it and found three bugs: the calculation at the start index required a mod 32, at the end index the 32 must be 31, and a logical AND instead of an assignment is needed to handle the case of the start and end index being the same. It should be quite fast.
Just benchmarked it with equal distribution of x1 and x2 over the array.
Intel Core 2 Duo E8400 3.0 GHz, MS VirtualPC with Server 2003 R2 on Windows XP host.
Array length [bits]           320          160          64
Performance [executions/s]    33 million   43 million   54 million
One more optimization: x % 32 == x & 31, but I am unable to measure a performance gain. Because of only 10,000,000 iterations in my test the fluctuations are quite high, and I am running in VirtualPC, making the situation even more unpredictable.
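As a quick check of the idea (a sketch added here, not part of the original answer), plugging the x1 = 3, x2 = 9 example from the question into the code above produces the expected bit field:
uint x1 = 3, x2 = 9;                        // the example range from the question
UInt32[] bitArray = new UInt32[1];          // enough for max_val = 32 in this small check
Int32 startIndex = (Int32)(x1 >> 5);        // 0
Int32 endIndex = (Int32)(x2 >> 5);          // 0
bitArray[startIndex] = UInt32.MaxValue << (Int32)(x1 & 31);        // 0xFFFFFFF8
for (Int32 i = startIndex + 1; i <= endIndex; i++)
{
    bitArray[i] = UInt32.MaxValue;
}
bitArray[endIndex] &= UInt32.MaxValue >> (31 - (Int32)(x2 & 31));  // leaves 0x000003F8
// 0x3F8 == 0000 0011 1111 1000, i.e. bits 3 through 9 are set, as required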
My solution for setting a whole range of bits in a BitArray to true or false:
public static BitArray SetRange(BitArray bitArray, Int32 offset, Int32 length, Boolean value)
{
    Int32[] ints = new Int32[(bitArray.Count >> 5) + 1];
    bitArray.CopyTo(ints, 0);
    var firstInt = offset >> 5;
    var lastInt = (offset + length) >> 5;
    Int32 mask = 0;
    if (value)
    {
        // set first and last int
        mask = (-1 << (offset & 31));
        if (lastInt != firstInt)
            ints[lastInt] |= ~(-1 << ((offset + length) & 31));
        else
            mask &= ~(-1 << ((offset + length) & 31));
        ints[firstInt] |= mask;
        // set all ints in between
        for (Int32 i = firstInt + 1; i < lastInt; i++)
            ints[i] = -1;
    }
    else
    {
        // set first and last int
        mask = ~(-1 << (offset & 31));
        if (lastInt != firstInt)
            ints[lastInt] &= -1 << ((offset + length) & 31);
        else
            mask |= -1 << ((offset + length) & 31);
        ints[firstInt] &= mask;
        // set all ints in between
        for (Int32 i = firstInt + 1; i < lastInt; i++)
            ints[i] = 0;
    }
    return new BitArray(ints) { Length = bitArray.Length };
}
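A brief usage sketch (added here, under the assumption that the [3-9] example from the question maps to offset 3 and length 7):
var bits = new BitArray(16);
bits = SetRange(bits, 3, 7, true);   // sets bits 3 through 9
// bits now corresponds to 0000 0011 1111 1000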
You could try:
UInt32 x1 = 3;
UInt32 x2 = 9;
UInt32 newInteger = (UInt32)(Math.Pow(2, x2 + 1) - 1) &
~(UInt32)(Math.Pow(2, x1)-1);
Is there a reason not to use the System.Collections.BitArray class instead of a UInt32[]? Otherwise, I'd try something like this:
int minIndex = (int)x1 / 32;
int maxIndex = (int)x2 / 32;
// first handle the all zero regions and the all one region (if any)
for (int i = 0; i < minIndex; i++) {
    bitArray[i] = 0;
}
for (int i = minIndex + 1; i < maxIndex; i++) {
    bitArray[i] = UInt32.MaxValue; // set to all 1s
}
for (int i = maxIndex + 1; i < MAX_DIV_32; i++) {
    bitArray[i] = 0;
}
// now handle the tricky parts
uint maxBits = (2u << ((int)x2 - 32 * maxIndex)) - 1;    // set to 1s up to max
uint minBits = ~((1u << ((int)x1 - 32 * minIndex)) - 1); // set to 1s after min
if (minIndex == maxIndex) {
    bitArray[minIndex] = maxBits & minBits;
}
else {
    bitArray[minIndex] = minBits;
    bitArray[maxIndex] = maxBits;
}
I was bored enough to try doing it with a char array and using Convert.ToUInt32(string, int) to convert to a uint from base 2.
uint Range(int l, int h)
{
    char[] buffer = new char[h];
    for (int i = 0; i < buffer.Length; i++)
    {
        buffer[i] = i < h - l ? '1' : '0';
    }
    return Convert.ToUInt32(new string(buffer), 2);
}
A simple benchmark shows that my method is about 5% faster than Angrey Jim's (even if you replace the second Pow with a bit shift).
It is probably easiest to convert it to produce a uint array if the upper bound is too big to fit into a single int. It's a little cryptic, but I believe it works.
uint[] Range(int l, int h)
{
    char[] buffer = new char[h];
    for (int i = 0; i < buffer.Length; i++)
    {
        buffer[i] = i < h - l ? '1' : '0';
    }
    int bitsInUInt = sizeof(uint) * 8;
    int numNeededUInts = (int)Math.Ceiling((decimal)buffer.Length / (decimal)bitsInUInt);
    uint[] uints = new uint[numNeededUInts];
    for (int j = uints.Length - 1, s = buffer.Length - bitsInUInt;
         j >= 0 && s >= 0;
         j--, s -= bitsInUInt)
    {
        uints[j] = Convert.ToUInt32(new string(buffer, s, bitsInUInt), 2);
    }
    int remainder = buffer.Length % bitsInUInt;
    if (remainder > 0)
    {
        uints[0] = Convert.ToUInt32(new string(buffer, 0, remainder), 2);
    }
    return uints;
}
Try this:
uint x1 = 3;
uint x2 = 9;
int cbToShift = (int)(x2 - x1) + 1; // 7 bits in the inclusive range [3-9]
int nResult = ((1 << cbToShift) - 1) << (int)x1;
/*
(1<<7)-1 gives you 127 = 1111111, then you shift it 3 bits left
to get 0000 0011 1111 1000
*/