Byte shift issue in an integer conversion - c#

I read 3 bytes in a binary file which I need to convert into an integer.
I use this code to read the bytes:
LastNum last1Hz = new LastNum();
last1Hz.Freq = 1;
Byte[] LastNumBytes1Hz = new Byte[3];
Array.Copy(lap_info, (8 + (32 * k)), LastNumBytes1Hz, 0, 3);
last1Hz.NumData = LastNumBytes1Hz[2] << 16 + LastNumBytes1Hz[1] << 8 + LastNumBytes1Hz[0];
last1Hz.NumData is an integer.
This seems to be the right way to convert bytes into an integer, according to the posts I have seen.
But the integer last1Hz.NumData is always 0.
I'm missing something but can't figure out what.

You need to use brackets (because addition has higher precedence than bit shifting):
int a = 0x87;
int b = 0x00;
int c = 0x00;
int x = c << 16 + b << 8 + a; // result 0
int z = (c << 16) + (b << 8) + a; // result 135
Your code should look like this:
last1Hz.NumData = (LastNumBytes1Hz[2] << 16) + (LastNumBytes1Hz[1] << 8) + LastNumBytes1Hz[0];

I think the problem is an operator precedence issue: + is evaluated before <<.
Put brackets in to force the bit shifts to be evaluated first.
last1Hz.NumData = (LastNumBytes1Hz[2] << 16) + (LastNumBytes1Hz[1] << 8) + LastNumBytes1Hz[0];
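If the three bytes are little-endian (lowest byte first, as the corrected expression assumes), another option is to pad them into a 4-byte buffer and let BitConverter do the conversion. This is only a sketch, not tested against the original file format:
// Sketch: pad the 3 little-endian bytes into 4 and let BitConverter build the int.
// BitConverter follows the machine's endianness; check BitConverter.IsLittleEndian
// if that assumption matters for your data.
byte[] padded = new byte[4];
Array.Copy(LastNumBytes1Hz, padded, 3);            // fourth (highest) byte stays 0
last1Hz.NumData = BitConverter.ToInt32(padded, 0);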

Related

Storing 4x 0-16 numbers in a short (or 2 numbers in a byte)

I'm packing some binary data into a short, but want to have 4 values in the 0-F range, and would like to do this without a bunch of switch() cases reading the string.Split of a hex string.
Does someone have a clever, elegant solution for this, or should I just long-hand it?
e.g. 1C4A = (1, 12, 4, 10)
Shift in and out
var a = 1;
var b = 12;
var c = 4;
var d = 10;
// in
var packed = (short) ((a << 12) | (b << 8) | (c << 4) | d);
// out
a = (packed >> 12) & 0xf;
b = (packed >> 8) & 0xf;
c = (packed >> 4) & 0xf;
d = packed & 0xF;
Console.WriteLine(a);
Console.WriteLine(b);
Console.WriteLine(c);
Console.WriteLine(d);
Output
1
12
4
10
You can shift by 4 (or divide and multiply by 16) to move numbers into different place values. Then mask and shift your packed number to get your original numbers back.
E.g. if you want to store 1 and 2 you could do:
int packed = (1 << 4) + 2;
int v1 = (packed & 0xF0) >> 4;
int v2 = packed & 0x0F;
Console.WriteLine($"{v1}, {v2}");
Output: 1, 2
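If this packing is needed in more than one place, the same shift-and-mask logic can live in small helpers. A sketch with made-up names (not from the answers above):
// Hypothetical helpers wrapping the shift-and-mask logic; each value is masked to 4 bits.
static short PackNibbles(int a, int b, int c, int d) =>
    (short)(((a & 0xF) << 12) | ((b & 0xF) << 8) | ((c & 0xF) << 4) | (d & 0xF));

static (int a, int b, int c, int d) UnpackNibbles(short packed) =>
    ((packed >> 12) & 0xF, (packed >> 8) & 0xF, (packed >> 4) & 0xF, packed & 0xF);
For example, var (a, b, c, d) = UnpackNibbles(0x1C4A); gives (1, 12, 4, 10) again.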

How do I uniquely store 4 ints in one int (C#)? [duplicate]

This question already has answers here:
How can I store 4 8 bit coordinates into one integer (C#)?
(4 answers)
Closed 2 years ago.
Let's say that I have the following four ints:
int a = 4;
int b = 5;
int c = 6;
int d = 7;
I want to store these values in one int:
int whole;
How would I do this using bitwise / shift operations? I tried:
int whole = a + (b << 8) + (c << 16) + (d << 24);
But I'm not sure if this will create unique values for whole, because I also want to retrieve the ints back from whole. So if I have, for example, whole = 5919835 I want to get the value of c back.
You can compress a, b, c and d into a single int if a, b, c and d are all in the range [0..255], i.e. if they are in fact bytes:
int whole = unchecked(a | (b << 8) | (c << 16) | (d << 24));
Note the unchecked (when d > 127 the result would overflow a checked signed int, since int is signed). Technically + would do as well, but | (bitwise OR) seems more readable.
Reverse:
a = whole & 0xFF;
b = (whole >> 8) & 0xFF;
c = (whole >> 16) & 0xFF;
d = (whole >> 24) & 0xFF;
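A quick round trip with the question's values, just to illustrate (the whole = 5919835 in the question was only an example number):
int a = 4, b = 5, c = 6, d = 7;
int whole = unchecked(a | (b << 8) | (c << 16) | (d << 24));
int cBack = (whole >> 16) & 0xFF;   // recovers 6
Console.WriteLine(cBack);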

How to perform bit copy in C# or byte array left shift operation

I want to store information in concatenated fashion in a byte array (2 bits + 6 bytes + 14 bits), but I have no idea how to do it. With Buffer.BlockCopy I can work with bytes, not bits. Moreover, the 6-byte data is already part of a data structure and referenced everywhere, so I don't wish to change that variable. I was thinking of declaring a byte, then building a binary string and cutting and concatenating, but I think that would be a very bad implementation. There must be a better and more efficient way of doing this.
Here is the C# script that I am currently using, but I think there must be a better and more efficient way to do this.
byte[] ARC = new byte[7]{0x00,0xB1,0x1C,0x3D,0x4C,0x1A,0xEF};
RotateLeft(ARC);
RotateLeft(ARC);
RotateLeft(ARC);
RotateLeft(ARC);
RotateLeft(ARC);
RotateLeft(ARC);
//temp = Convert.ToString(ARC[1], 2).PadLeft(8,'0');
//MessageBox.Show(temp.ToString());
//MessageBox.Show(ByteArrayToString(ARC));
ARC[0] |= 1 << 6;
ARC[0] |= 1 << 7;
byte SB = 0x02;
string Hex = Convert.ToString(SB, 2);
Hex = Hex.Substring(0, 2);
Hex += "000000";
byte SB_processed = Convert.ToByte(Hex,2);
SB_processed |= 1 << 0;
SB_processed |= 1 << 1;
SB_processed |= 1 << 2;
SB_processed |= 1 << 3;
SB_processed |= 1 << 4;
SB_processed |= 1 << 5;
ARC[0] &= SB_processed;
MessageBox.Show(ByteArrayToString(ARC));
Logic for Left shifting taken from this reference.
c# - left shift an entire byte array
I can't quite understand your question, but here is how to concatenate bits:
byte bits2 = 0b_000000_10;
byte bits6 = 0b_00_101100;
ushort bits14 = 0b_00_10110010001110;
uint concatenated = ((uint)bits2 << 20) + ((uint)bits6 << 14) + (uint)bits14;
uint test = 0b_00_10_101100_10110010001110;
if (concatenated == test)
{
Console.WriteLine("Ok");
}
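Reading the question's layout as 2 bits + 6 bytes + 14 bits (which is exactly 64 bits), a ulong makes a convenient intermediate. A rough sketch under that assumption, with the 2-bit field placed in the most significant bits and the names invented here:
// Hypothetical packing helper: 2 bits + 6 bytes + 14 bits into a 64-bit value.
static byte[] Pack(byte twoBits, byte[] sixBytes, ushort fourteenBits)
{
    ulong mid = 0;
    for (int i = 0; i < 6; i++)
        mid = (mid << 8) | sixBytes[i];              // big-endian read of the 6 bytes

    ulong packed = ((ulong)(twoBits & 0x3) << 62)    // 2 bits at the top
                 | (mid << 14)                       // 48 bits in the middle
                 | (ulong)(fourteenBits & 0x3FFF);   // 14 bits at the bottom

    byte[] result = BitConverter.GetBytes(packed);
    if (BitConverter.IsLittleEndian) Array.Reverse(result);  // keep most significant byte first
    return result;
}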

How to get amount of 1s from 64 bit number [duplicate]

This question already has answers here:
Count number of bits in a 64-bit (long, big) integer?
(3 answers)
Closed 9 years ago.
For an image comparison algorithm I get a 64-bit number as a result. The amount of 1s in the number (ulong) (101011011100...) tells me how similar two images are, so I need to count them. How would I best do this in C#?
I'd like to use this in a WinRT & Windows Phone App, so I'm also looking for a low-cost method.
EDIT: As I have to count the bits for a large number of images, I'm wondering if the lookup-table approach might be best. But I'm not really sure how that works...
Sean Eron Anderson's Bit Twiddling Hacks has this trick, among others:
Counting bits set, in parallel
unsigned int v; // count bits set in this (32-bit value)
unsigned int c; // store the total here
static const int S[] = {1, 2, 4, 8, 16}; // Magic Binary Numbers
static const int B[] = {0x55555555, 0x33333333, 0x0F0F0F0F, 0x00FF00FF, 0x0000FFFF};
c = v - ((v >> 1) & B[0]);
c = ((c >> S[1]) & B[1]) + (c & B[1]);
c = ((c >> S[2]) + c) & B[2];
c = ((c >> S[3]) + c) & B[3];
c = ((c >> S[4]) + c) & B[4];
The B array, expressed as binary, is:
B[0] = 0x55555555 = 01010101 01010101 01010101 01010101
B[1] = 0x33333333 = 00110011 00110011 00110011 00110011
B[2] = 0x0F0F0F0F = 00001111 00001111 00001111 00001111
B[3] = 0x00FF00FF = 00000000 11111111 00000000 11111111
B[4] = 0x0000FFFF = 00000000 00000000 11111111 11111111
We can adjust the method for larger integer sizes by continuing with the patterns for the Binary Magic Numbers, B and S. If there are k bits, then we need the arrays S and B to be ceil(lg(k)) elements long, and we must compute the same number of expressions for c as S or B are long. For a 32-bit v, 16 operations are used.
The best method for counting bits in a 32-bit integer v is the following:
v = v - ((v >> 1) & 0x55555555); // reuse input as temporary
v = (v & 0x33333333) + ((v >> 2) & 0x33333333); // temp
c = ((v + (v >> 4) & 0xF0F0F0F) * 0x1010101) >> 24; // count
The best bit counting method takes only 12 operations, which is the same as the lookup-table method, but avoids the memory and potential cache misses of a table. It is a hybrid between the purely parallel method above and the earlier methods using multiplies (in the section on counting bits with 64-bit instructions), though it doesn't use 64-bit instructions. The counts of bits set in the bytes is done in parallel, and the sum total of the bits set in the bytes is computed by multiplying by 0x1010101 and shifting right 24 bits.
A generalization of the best bit counting method to integers of bit-widths up to 128 (parameterized by type T) is this:
v = v - ((v >> 1) & (T)~(T)0/3); // temp
v = (v & (T)~(T)0/15*3) + ((v >> 2) & (T)~(T)0/15*3); // temp
v = (v + (v >> 4)) & (T)~(T)0/255*15; // temp
c = (T)(v * ((T)~(T)0/255)) >> (sizeof(T) - 1) * CHAR_BIT; // count
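Since the question asks for C#, the same parallel-counting pattern translated to a 64-bit ulong looks roughly like this; a sketch following the formulas above (on newer .NET versions System.Numerics.BitOperations.PopCount does this for you):
// SWAR population count for a 64-bit value, following the Bit Twiddling Hacks pattern above.
static int CountBits(ulong v)
{
    v = v - ((v >> 1) & 0x5555555555555555UL);                           // count per bit pair
    v = (v & 0x3333333333333333UL) + ((v >> 2) & 0x3333333333333333UL);  // count per nibble
    v = (v + (v >> 4)) & 0x0F0F0F0F0F0F0F0FUL;                           // count per byte
    return (int)(unchecked(v * 0x0101010101010101UL) >> 56);             // sum the byte counts
}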
Something along these lines would do (note that this isn't tested code, I just wrote it here, so it may and probably will require tweaking).
int numberOfOnes = 0;
for (int i = 63; i >= 0; i--)
{
if (((yourUInt64 >> i) & 1) == 1) numberOfOnes++;
}
Option 1 - fewer iterations if the 64-bit result < 2^63:
byte numOfOnes = 0;
while (result != 0)
{
numOfOnes += (byte)(result & 0x1);
result = (result >> 1);
}
return numOfOnes;
Option 2 - constant number of iterations - can use loop unrolling:
byte numOfOnes = 0;
for (int i = 0; i < 64; i++)
{
numOfOnes += (byte)(result & 0x1);
result = (result >> 1);
}
This is a 32-bit version of BitCount; you could easily extend it to a 64-bit version by adding one more right shift by 32, and it would be very efficient.
int bitCount(int x) {
/* First let res = ((x >> 1) & 0x55555555) + (x & 0x55555555);
 * after that, the (2k)th and (2k+1)th bits of res together
 * hold the number of 1s contained in the (2k)th and
 * (2k+1)th bits of x.
 * The same idea is then used to count the 1s in each group
 * of 4 bits, then 8, 16 and 32.
 */
int varA = (85 << 8) | 85;
varA = (varA << 16) | varA;
int res = ((x>>1) & varA) + (x & varA);
varA = (51 << 8) | 51;
varA = (varA << 16) | varA;
res = ((res>>2) & varA) + (res & varA);
varA = (15 << 8) | 15;
varA = (varA << 16) | varA;
res = ((res>>4) & varA) + (res & varA);
varA = (255 << 16) | 255;
res = ((res>>8) & varA) + (res & varA);
varA = (255 << 8) | 255;
res = ((res>>16) & varA) + (res & varA);
return res;
}
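On the lookup-table approach the question's edit asks about: precompute the bit count of every possible byte value once, then sum eight table lookups per 64-bit number. A minimal sketch (names invented here):
// 256-entry table holding the bit count of every possible byte value, built once.
static readonly byte[] BitCountTable = BuildTable();

static byte[] BuildTable()
{
    var table = new byte[256];
    for (int i = 1; i < 256; i++)
        table[i] = (byte)(table[i >> 1] + (i & 1));   // bit count of i reuses the count of i/2
    return table;
}

static int CountBitsWithTable(ulong v)
{
    int count = 0;
    for (int i = 0; i < 8; i++)                       // one lookup per byte of the ulong
    {
        count += BitCountTable[(byte)v];
        v >>= 8;
    }
    return count;
}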

Perform signed arithmetic on numbers defined as bit ranges of unsigned bytes

I have two bytes. I need to turn them into two integers, where the first 12 bits make one int and the last 4 make the other. I figure I can AND (&) the 2nd byte with 0x0f to get the 4 bits, but I'm not sure how to make that into a byte with the correct sign.
update:
just to clarify I have 2 bytes
byte1 = 0xab
byte2 = 0xcd
and I need to do something like this with it
var value = 0xabc * 10 ^ 0xd;
sorry for the confusion.
thanks for all of the help.
Try to use this code:
int a = 10;
int a1 = a & 0x000F;
int a2 = a & 0xFFF0;
For kicks:
public static partial class Levitate
{
public static Tuple<int, int> UnPack(this int value)
{
uint sign = (uint)value & 0x80000000;
int small = ((int)sign >> 28) | (value & 0x0F);
int big = value & 0xFFF0;
return new Tuple<int, int>(small, big);
}
}
int a = 10;
a.UnPack();
Ok, let's try this again knowing what we're shooting for. I tried the following out in VS2008 and it seems to work fine, that is, both outOne and outTwo = -1 at the end. Is that what you're looking for?
byte b1 = 0xff;
byte b2 = 0xff;
ushort total = (ushort)((b1 << 8) + b2);
short outOne = (short)((short)(total & 0xFFF0) >> 4);
sbyte outTwo = (sbyte)((sbyte)((total & 0xF) << 4) >> 4);
Assuming you have the following two bytes:
byte a = 0xab;
byte b = 0xcd;
and consider 0xab the first 8 bits and 0xcd the second 8 bits, or 0xabc the first 12 bits and 0xd the last four bits. Then you can get these bits as follows:
int x = (a << 4) | (b >> 4); // x == 0x0abc
int y = b & 0x0f; // y == 0x000d
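If those two fields are meant to be signed (two's complement), one way to get the sign right, building on x and y above, is to shift left and then arithmetic-shift right. A small sketch, not part of the original answer:
int signedX = (x << 20) >> 20;   // sign-extends the 12-bit value (0xabc becomes -1348)
int signedY = (y << 28) >> 28;   // sign-extends the 4-bit value (0xd becomes -3)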
Edited to take into account the clarification of the "signing" rules:
public void unpack( byte[] octets , out int hiNibbles , out int loNibble )
{
if ( octets == null ) throw new ArgumentNullException("octets");
if ( octets.Length != 2 ) throw new ArgumentException("octets") ;
int value = (int) BitConverter.ToInt16( octets , 0 ) ;
// since the value is signed, right shifts sign-extend
hiNibbles = value >> 4 ;
loNibble = ( value << 28 ) >> 28 ;
return ;
}
