How to get an unsigned long from a byte array - c#

I have an incoming byte array from a piece of test equipment. The byte array can be either two or four bytes long. I wrote the following code to convert these byte arrays into unsigned longs:
private ulong GetUlongFrom2Bytes(byte MSB, byte LSB)
{
    return (ulong)((MSB << 8) + (LSB));
}

private ulong GetUlongFrom4Bytes(byte MSB, byte msb, byte lsb, byte LSB)
{
    return (ulong)((MSB << 24) + (msb << 16) + (lsb << 8) + (LSB));
}
Going the opposite direction, I use the following code:
private byte[] Get4Bytes(ulong parm1)
{
    byte[] retVal = new byte[4];
    retVal[0] = (byte)((parm1 >> 24) & 0xFF);
    retVal[1] = (byte)((parm1 >> 16) & 0xFF);
    retVal[2] = (byte)((parm1 >> 8) & 0xFF);
    retVal[3] = (byte)(parm1 & 0xFF);
    return retVal;
}

private byte[] Get8Bytes(ulong parm1, ulong parm2)
{
    byte[] retVal = new byte[8];
    Array.Copy(Get4Bytes(parm1), 0, retVal, 0, 4);
    Array.Copy(Get4Bytes(parm2), 0, retVal, 4, 4);
    return retVal;
}
I'm trying to debug my code for controlling this piece of equipment and I'd just like a sanity check from you guys here on SO to confirm that this code is written correctly for what I'm trying to do.

Assuming you want big-endian encoding, then yes: that'll be fine. You can also use BitConverter, but I think you are right not to - it involves extra array allocations, and forces the system's endianness on you (often little-endian).
Generally, though, I would recommend that such code work against a buffer/offset API, for simplicity and efficiency - i.e.
private void Write32(ulong value, byte[] buffer, int offset)
{
    buffer[offset++] = (byte)((value >> 24) & 0xFF);
    buffer[offset++] = (byte)((value >> 16) & 0xFF);
    buffer[offset++] = (byte)((value >> 8) & 0xFF);
    buffer[offset] = (byte)(value & 0xFF);
}

This would do it:
static ulong SliceValue(byte[] bytes, int start, int length)
{
    var slice = bytes.Skip(start).Take(length); // requires using System.Linq
    ulong acc = 0;
    foreach (var b in slice) acc = (acc * 0x100) + b; // big-endian accumulate
    return acc;
}
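Usage for the two- and four-byte cases would then be, for example (with a hypothetical data array):
ulong twoByteValue = SliceValue(data, 0, 2);   // two-byte reading
ulong fourByteValue = SliceValue(data, 0, 4);  // four-byte reading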

Related

Extract specific bits from a byte

I have bytes of data from which I have to extract fields in the following manner:
data[0] contains:
id (5 bits)
Sequence (2 bits)
HashAppData (1 bit)
data[1] contains:
id (6 bits)
offset (2 bits)
The required function is below; the byte array length is 2 and I have to extract the fields in the manner above.
public static int ParseData(byte[] data)
{
    // All code goes here
}
I couldn't find a suitable solution for this. How can I extract these fields?
EDIT: The extracted fields should be of type int.
Something like this?
int id = (data[0] >> 3) & 31;       // top 5 bits
int sequence = (data[0] >> 1) & 3;  // next 2 bits
int hashAppData = data[0] & 1;      // last bit
int id2 = (data[1] >> 2) & 63;      // top 6 bits
int offset = data[1] & 3;           // last 2 bits
This is how I'd do it for the first byte:
byte value = 155;
byte maskForHighest5 = 128+64+32+16+8;
byte maskForNext2 = 4+2;
byte maskForLast = 1;
byte result1 = (byte)((value & maskForHighest5) >> 3); // shift right 3 bits
byte result2 = (byte)((value & maskForNext2) >> 1); // shift right 1 bit
byte result3 = (byte)(value & maskForLast);
Working demo (.NET Fiddle):
https://dotnetfiddle.net/lNZ9TR
Code for the 2nd byte will be very similar.
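A sketch along the same lines for the second byte (mask values assumed from the question's layout, sample value hypothetical):
byte value2 = 99; // hypothetical sample byte
byte maskForHighest6 = 128+64+32+16+8+4; // 0xFC
byte maskForLast2 = 2+1;                 // 0x03
byte result4 = (byte)((value2 & maskForHighest6) >> 2); // shift right 2 bits
byte result5 = (byte)(value2 & maskForLast2);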
If you're uncomfortable with bit manipulation, use an extension method to keep the intent of ParseData clear. This extension can be adapted for other integers by replacing both uses of byte with the necessary type.
public static int GetBitValue(this byte b, int offset, int length)
{
    const int ByteWidth = sizeof(byte) * 8;
    // System.Diagnostics validation - excluded in release builds
    Debug.Assert(offset >= 0);
    Debug.Assert(offset < ByteWidth);
    Debug.Assert(length > 0);
    Debug.Assert(length <= ByteWidth);
    Debug.Assert(offset + length <= ByteWidth);
    var shift = ByteWidth - offset - length;
    var mask = (1 << length) - 1;
    return (b >> shift) & mask;
}
Usage in this case:
public static int ParseData(byte[] data)
{
    { // data[0]
        var id = data[0].GetBitValue(0, 5);
        var sequence = data[0].GetBitValue(5, 2);
        var hashAppData = data[0].GetBitValue(7, 1);
    }
    { // data[1]
        var id = data[1].GetBitValue(0, 6);
        var offset = data[1].GetBitValue(6, 2);
    }
    // ... return necessary data
}

C# Convert long to bytes gives error: & cannot be applied to long and ulong

I want to convert a long to 8 bytes in C# with 100% reliability.
public static byte[] ToBytes(this long thisLong)
{
    byte[] bytes = new byte[8];
    bytes[0] = (byte)(thisLong & 0xFF);
    bytes[1] = (byte)((thisLong & 0xFF00) >> 8);
    bytes[2] = (byte)((thisLong & 0xFF0000) >> 16);
    bytes[3] = (byte)((thisLong & 0xFF000000) >> 24);
    bytes[4] = (byte)((thisLong & 0xFF00000000) >> 32);
    bytes[5] = (byte)((thisLong & 0xFF0000000000) >> 40);
    bytes[6] = (byte)((thisLong & 0xFF000000000000) >> 48);
    bytes[7] = (byte)((thisLong & 0xFF00000000000000) >> 56);
    return bytes;
}
but I get an error on the last line at "thisLong & 0xFF00000000000000":
CS0019: Operator '&' cannot be applied to operands of type 'long' and 'ulong' (CS0019)
It works OK for int, so why not for long?
public static byte[] ToBytes(this int thisInt)
{
    byte[] bytes = new byte[4];
    bytes[0] = (byte)(thisInt & 0xFF);
    bytes[1] = (byte)((thisInt & 0xFF00) >> 8);
    bytes[2] = (byte)((thisInt & 0xFF0000) >> 16);
    bytes[3] = (byte)((thisInt & 0xFF000000) >> 24);
    return bytes;
}
I don't want to use BitConverter, to avoid issues with endianness.
I don't want to use ulong, as the conversion may fail.
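One common way around the error is to shift first and mask afterwards, so the only constant needed is 0xFF, which fits in an int (a minimal sketch, not from the original thread):
public static byte[] ToBytes(this long thisLong)
{
    byte[] bytes = new byte[8];
    // shift before masking: no constant ever exceeds the range of int
    for (int i = 0; i < 8; i++)
        bytes[i] = (byte)((thisLong >> (8 * i)) & 0xFF);
    return bytes;
}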

C# - Fastest way to convert an int and put it in a byte array with an offset

I'm writing a custom byte stream and I want the write/read methods to work as fast as possible. This is my current implementation of the write and read methods for int32:
public void write(int value)
{
    unchecked
    {
        bytes[index++] = (byte)(value);
        bytes[index++] = (byte)(value >> 8);
        bytes[index++] = (byte)(value >> 16);
        bytes[index++] = (byte)(value >> 24);
    }
}

public int readInt()
{
    unchecked
    {
        return bytes[index++] |
               (bytes[index++] << 8) |
               (bytes[index++] << 16) |
               (bytes[index++] << 24);
    }
}
But what I really want to do is cast the "int" into a byte pointer (or something like that) and copy the memory into the "bytes" array with the given "index" as the offset. Is that even possible in C#?
The goal is to:
Avoid creating new arrays.
Avoid loops.
Avoid multiple assignment of the "index" variable.
Reduce the number of instructions.
unsafe
{
    fixed (byte* pbytes = &bytes[index])
    {
        *(int*)pbytes = value;  // write: reinterpret the location as an int
        value = *(int*)pbytes;  // read it back the same way
    }
}
But be careful with possible array index overflow: writing four bytes through the pointer bypasses bounds checking past the first element, and the write uses the machine's native byte order.
Your code is very fast, but you can speed it even more (almost 2x faster) with the following change:
bytes[index] = (byte)(value);
bytes[index+1] = (byte)(value >> 8);
bytes[index+2] = (byte)(value >> 16);
bytes[index+3] = (byte)(value >> 24);
index = index + 4;
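As a side note, on newer runtimes (a sketch assuming .NET Core 2.1+ or the System.Memory package; this was not part of the original answers), System.Buffers.Binary.BinaryPrimitives can do the same little-endian write without unsafe code:
using System.Buffers.Binary;

public void write(int value)
{
    // writes the int little-endian into the backing array at index
    BinaryPrimitives.WriteInt32LittleEndian(bytes.AsSpan(index), value);
    index += 4;
}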

Porting CRC16 Code in C to C# .NET

So I have this C code that I need to port to C#:
C Code:
uint16 crc16_calc(volatile uint8* bytes, uint32 length)
{
    uint32 i;
    uint32 j;
    uint16 crc = 0xFFFF;
    uint16 word;
    for (i = 0; i < length/2; i++)
    {
        word = ((uint16*)bytes)[i];
        // upper byte
        j = (uint8)((word ^ crc) >> 8);
        crc = (crc << 8) ^ crc16_table[j];
        // lower byte
        j = (uint8)((word ^ (crc >> 8)) & 0x00FF);
        crc = (crc << 8) ^ crc16_table[j];
    }
    return crc;
}
Ported C# Code:
public ushort CalculateChecksum(byte[] bytes)
{
    uint j = 0;
    ushort crc = 0xFFFF;
    ushort word;
    for (uint i = 0; i < bytes.Length / 2; i++)
    {
        word = bytes[i];
        // Upper byte
        j = (byte)((word ^ crc) >> 8);
        crc = (ushort)((crc << 8) ^ crc16_table[j]);
        // Lower byte
        j = (byte)((word ^ (crc >> 8)) & 0x00FF);
        crc = (ushort)((crc << 8) ^ crc16_table[j]);
    }
    return crc;
}
This C algorithm calculates the CRC16 of the supplied bytes using a lookup table, crc16_table[j].
However, the ported C# code does not produce the same results as the C code. Am I doing something wrong?
word = ((uint16*)bytes)[i];
reads two bytes from bytes into a uint16, whereas
word = bytes[i];
just reads a single byte.
Assuming you're running on a little-endian machine, your C# code could change to:
word = bytes[i++];
word += bytes[i] << 8;
Or, probably better, as suggested by MerickOWA:
word = BitConverter.ToUInt16(bytes, (int)i++);
Note that you could avoid the odd-looking extra increment of i by changing your loop:
for (int i = 0; i + 1 < bytes.Length; i += 2)
{
    word = BitConverter.ToUInt16(bytes, i);
    // ... upper and lower byte steps as before ...
}
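Putting the pieces together, a corrected port might look like this (a sketch; it assumes little-endian input, an even byte count, and the same crc16_table the C code uses):
public ushort CalculateChecksum(byte[] bytes)
{
    ushort crc = 0xFFFF;
    for (int i = 0; i + 1 < bytes.Length; i += 2)
    {
        // read two bytes as one little-endian word, as the C pointer cast does
        ushort word = BitConverter.ToUInt16(bytes, i);
        // upper byte
        uint j = (byte)((word ^ crc) >> 8);
        crc = (ushort)((crc << 8) ^ crc16_table[j]);
        // lower byte
        j = (byte)((word ^ (crc >> 8)) & 0x00FF);
        crc = (ushort)((crc << 8) ^ crc16_table[j]);
    }
    return crc;
}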

Convert a signed int to two unsigned shorts for purposes of reconstruction

I am currently using BitConverter to pack two unsigned shorts into one signed int. This code executes millions of times for different values, and I think it could be optimized further. Here is what I am currently doing; you can assume the code is C#/.NET.
// convert one signed int to two unsigned shorts:
int xy = 343423;
byte[] bytes = BitConverter.GetBytes(xy);
ushort m_X = BitConverter.ToUInt16(bytes, 0);
ushort m_Y = BitConverter.ToUInt16(bytes, 2);

// convert two unsigned shorts back to one signed int:
byte[] xBytes = BitConverter.GetBytes(m_X);
byte[] yBytes = BitConverter.GetBytes(m_Y);
byte[] bytes = new byte[] {
    xBytes[0],
    xBytes[1],
    yBytes[0],
    yBytes[1],
};
return BitConverter.ToInt32(bytes, 0);
So it occurs to me that I can avoid the overhead of constructing arrays if I bitshift. But for the life of me I can't figure out what the correct shift operation is. My first pathetic attempt involved the following code:
int xy = 343423;
const int mask = 0x00000000;
byte b1, b2, b3, b4;
b1 = (byte)((xy >> 24));
b2 = (byte)((xy >> 16));
b3 = (byte)((xy >> 8) & mask);
b4 = (byte)(xy & mask);
ushort m_X = (ushort)((xy << b4) | (xy << b3));
ushort m_Y = (ushort)((xy << b2) | (xy << b1));
Could someone help me? I am thinking I need to mask the upper and lower bytes before shifting. Some of the examples I see include subtraction with type.MaxValue or an arbitrary number, like negative twelve, which is pretty confusing.
** Update **
Thank you for the great answers. Here are the results of a benchmark test:
// 34ms for bit shift with 10M operations
// 959ms for BitConverter with 10M operations
static void Main(string[] args)
{
    Stopwatch stopWatch = new Stopwatch();
    stopWatch.Start();
    for (int i = 0; i < 10000000; i++)
    {
        ushort x = (ushort)i;
        ushort y = (ushort)(i >> 16);
        int result = (y << 16) | x;
    }
    stopWatch.Stop();
    Console.WriteLine((int)stopWatch.Elapsed.TotalMilliseconds + "ms");
    stopWatch.Restart(); // reset so the second timing doesn't include the first loop
    for (int i = 0; i < 10000000; i++)
    {
        byte[] bytes = BitConverter.GetBytes(i);
        ushort x = BitConverter.ToUInt16(bytes, 0);
        ushort y = BitConverter.ToUInt16(bytes, 2);
        byte[] xBytes = BitConverter.GetBytes(x);
        byte[] yBytes = BitConverter.GetBytes(y);
        bytes = new byte[] {
            xBytes[0],
            xBytes[1],
            yBytes[0],
            yBytes[1],
        };
        int result = BitConverter.ToInt32(bytes, 0);
    }
    stopWatch.Stop();
    Console.WriteLine((int)stopWatch.Elapsed.TotalMilliseconds + "ms");
    Console.ReadKey();
}
The simplest way is to do it using two shifts:
int xy = -123456;
// Split...
ushort m_X = (ushort) xy;
ushort m_Y = (ushort)(xy>>16);
// Convert back...
int back = (m_Y << 16) | m_X;
Demo on ideone: link.
int xy = 343423;
ushort low = (ushort)(xy & 0x0000ffff);          // low 16 bits
ushort high = (ushort)((xy & 0xffff0000) >> 16); // high 16 bits
int xxyy = low + (((int)high) << 16);            // recombine
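Wrapped as helpers, the same idea might look like this (a sketch, names hypothetical):
static void Split(int xy, out ushort x, out ushort y)
{
    x = (ushort)xy;          // low 16 bits
    y = (ushort)(xy >> 16);  // high 16 bits
}

static int Combine(ushort x, ushort y)
{
    return (y << 16) | x;    // reassemble the original int
}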
