Improving upon a bit masking and shifting function - C#

Can this function be improved to make it more efficient?
private unsafe uint GetValue(uint value, int bitsToGrab, int bitsToMoveOver)
{
    byte[] bytes = BitConverter.GetBytes(value);
    uint myBitMask = 0x80; // MSB of 8 bits (byte)
    int arrayIndex = 0;
    for (int i = 0; i < bitsToMoveOver; i++)
    {
        if (myBitMask == 0)
        {
            arrayIndex++;
            myBitMask = 0x80;
        }
        myBitMask >>= 1;
    }
    uint outputMask1 = (uint)(1 << (bitsToGrab - 1));
    uint returnVal = 0;
    for (int i = 0; i < bitsToGrab; i++)
    {
        if (myBitMask == 0)
        {
            arrayIndex++;
            myBitMask = 0x80;
        }
        if ((bytes[arrayIndex] & myBitMask) > 0)
        {
            returnVal |= outputMask1;
        }
        outputMask1 >>= 1;
        myBitMask >>= 1;
    }
    return returnVal;
}
I have an array of uints. Each uint contains multiple pieces of data. To get a piece out, I pass in the number of bits and the offset of those bits, and use that information to build an output value.
The offset is generally on a byte boundary, but I cannot guarantee that it will be.
I'm really looking to see whether I can simplify the code. Am I being unnecessarily verbose, or could it be done a bit cleaner?
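For a concrete picture of the intended behavior, here is an illustrative call (the values are my own, not from the original post):

    // BitConverter.GetBytes(0x12345678) yields { 0x78, 0x56, 0x34, 0x12 } on a
    // little-endian machine, and the loop reads bits MSB-first starting at
    // bytes[0], so grabbing 8 bits at offset 0 returns 0x78.
    uint field = GetValue(0x12345678, 8, 0); // 0x78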
Updated function: How do you guys feel about this?
private unsafe uint GetValue(uint value, int bitsToGrab, int bitsToMoveOver)
{
    if (bitsToGrab + bitsToMoveOver >= 32)
    {
        return 0;
    }
    byte[] bytes = BitConverter.GetBytes(value);
    Array.Reverse(bytes);
    uint newValue = BitConverter.ToUInt32(bytes, 0);
    uint grabMask = (0xFFFFFFFF << (32 - bitsToGrab));
    grabMask >>= bitsToMoveOver;
    uint returnVal = (newValue & grabMask) >> (32 - bitsToMoveOver - bitsToGrab);
    return returnVal;
}

This needs testing (and assumes that bitsToGrab + bitsToMoveOver <= 32), but I think you can do this:
uint grabMask = ~(0xFFFFFFFF << (bitsToGrab + bitsToMoveOver));
return (value & grabMask) >> bitsToMoveOver;
Since the OP has indicated that the function samples bits from the internal binary representation of the number (including its endian encoding), you can swap the bytes first like this:
uint reorderedValue = ((value << 8) & 0xFF00FF00) | ((value >> 8) & 0x00FF00FF);
reorderedValue = (reorderedValue << 16) | (reorderedValue >> 16); // swap the 16-bit halves too, completing the 4-byte reversal that Array.Reverse performs
uint grabMask = ~(0xFFFFFFFF << (bitsToGrab + bitsToMoveOver));
return (reorderedValue & grabMask) >> bitsToMoveOver;
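Note that these snippets count bitsToMoveOver from the least significant bit, while the OP's loop (and updated function) skip bits starting from the most significant end of the reversed value. A reconciled, untested sketch that mirrors the OP's updated function:

    private static uint GetValue(uint value, int bitsToGrab, int bitsToMoveOver)
    {
        if (bitsToGrab + bitsToMoveOver >= 32)
            return 0;

        // Full 4-byte reversal, equivalent to the OP's Array.Reverse round trip.
        uint reordered = ((value << 8) & 0xFF00FF00) | ((value >> 8) & 0x00FF00FF);
        reordered = (reordered << 16) | (reordered >> 16);

        // The offset counts from the most significant bit, so shift the field
        // down to bit 0 and mask off everything above it.
        int shift = 32 - bitsToMoveOver - bitsToGrab;
        uint fieldMask = (1u << bitsToGrab) - 1;
        return (reordered >> shift) & fieldMask;
    }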

Related

Extract a byte into a specific bit

I have one byte of data, and I have to extract fields from it in the following manner.
data[0] has to yield:
id (5 bits)
Sequence (2 bits)
HashAppData (1 bit)
data[1] has to yield:
id (6 bits)
offset (2 bits)
The required function is below, where the byte array length is 2 and I have to extract the fields in the above manner.
public static int ParseData(byte[] data)
{
    // All code goes here
}
I couldn't find any suitable solution for how to do this. Can you please help me extract these values?
EDIT: The extracted fields should be integers.
Something like this?
int id = (data[0] >> 3) & 31;      // top 5 bits
int sequence = (data[0] >> 1) & 3; // next 2 bits
int hashAppData = data[0] & 1;     // lowest bit
int id2 = (data[1] >> 2) & 63;     // top 6 bits of the second byte
int offset = data[1] & 3;          // lowest 2 bits of the second byte
This is how I'd do it for the first byte:
byte value = 155;
byte maskForHighest5 = 128+64+32+16+8;
byte maskForNext2 = 4+2;
byte maskForLast = 1;
byte result1 = (byte)((value & maskForHighest5) >> 3); // shift right 3 bits
byte result2 = (byte)((value & maskForNext2) >> 1); // shift right 1 bit
byte result3 = (byte)(value & maskForLast);
Working demo (.NET Fiddle):
https://dotnetfiddle.net/lNZ9TR
Code for the 2nd byte will be very similar.
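For completeness, a sketch of the second byte in the same style (untested; the masks follow the question's 6-bit id and 2-bit offset layout):

    byte value2 = 155;
    byte maskForHighest6 = 128+64+32+16+8+4;
    byte maskForLast2 = 2+1;
    byte id2 = (byte)((value2 & maskForHighest6) >> 2); // shift right 2 bits
    byte offset2 = (byte)(value2 & maskForLast2);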
If you're uncomfortable with bit manipulation, use an extension method to keep the intent of ParseData clear. This extension can be adapted for other integers by replacing both uses of byte with the necessary type.
public static int GetBitValue(this byte b, int offset, int length)
{
    const int ByteWidth = sizeof(byte) * 8;

    // System.Diagnostics validation - excluded in release builds
    Debug.Assert(offset >= 0);
    Debug.Assert(offset < ByteWidth);
    Debug.Assert(length > 0);
    Debug.Assert(length <= ByteWidth);
    Debug.Assert(offset + length <= ByteWidth);

    var shift = ByteWidth - offset - length;
    var mask = (1 << length) - 1;
    return (b >> shift) & mask;
}
Usage in this case:
public static int ParseData(byte[] data)
{
    { // data[0]
        var id = data[0].GetBitValue(0, 5);
        var sequence = data[0].GetBitValue(5, 2);
        var hashAppData = data[0].GetBitValue(7, 1);
    }
    { // data[1]
        var id = data[1].GetBitValue(0, 6);
        var offset = data[1].GetBitValue(6, 2);
    }
    // ... return necessary data
}

What's the importance of the Offset variable in this algorithm?

What's the meaning of the variable named offset in this algorithm?
It's declared as the second parameter of CalcCrc16.
To me it seems useless, because it's always zero and it's only used in a sum.
This algorithm generates a CRC-16. I'm trying to understand it because I have to create an algorithm that verifies a CRC-16, and I want to use this code as a base.
public sealed class CRC
{
    private readonly int _polynom;

    public static readonly CRC Default = new CRC(0xA001);

    public CRC(int polynom)
    {
        _polynom = polynom;
    }

    public int CalcCrc16(byte[] buffer)
    {
        return CalcCrc16(buffer, 0, buffer.Length, _polynom, 0);
    }

    public static int CalcCrc16(byte[] buffer, int offset, int bufLen, int polynom, int preset)
    {
        preset &= 0xFFFF;
        polynom &= 0xFFFF;
        var crc = preset;
        for (var i = 0; i < (bufLen + 2); i++)
        {
            var data = buffer[(i + offset) % buffer.Length] & 0xFF;
            crc ^= data;
            for (var j = 0; j < 8; j++)
            {
                if ((crc & 0x0001) != 0)
                {
                    crc = (crc >> 1) ^ polynom;
                }
                else
                {
                    crc = crc >> 1;
                }
            }
        }
        return crc & 0xFFFF;
    }
}
I created a simple example, using a small 4 byte message (in a 6 byte buffer):
using System;

namespace crc16
{
    class Program
    {
        private static ushort Crc16(byte[] bfr, int bfrlen)
        {
            ushort crc = 0;
            for (int i = 0; i < bfrlen; i++)
            {
                crc ^= bfr[i];
                for (int j = 0; j < 8; j++)
                    // assumes twos complement math
                    crc = (ushort)((crc >> 1) ^ ((0 - (crc & 1)) & 0xa001));
            }
            return crc;
        }

        static void Main(string[] args)
        {
            ushort crc;
            byte[] data = new byte[6] { 0x11, 0x22, 0x33, 0x44, 0x00, 0x00 };
            crc = Crc16(data, 4);         // generate crc
            data[4] = (byte)(crc & 0xff); // append crc (lsb first)
            data[5] = (byte)(crc >> 8);
            crc = Crc16(data, 6);         // verify crc; an intact message yields 0x0000
            Console.WriteLine("{0:X4}", crc);
            return;
        }
    }
}
The offset is part of the signature of the public static method; it's useful whenever you want to calculate a CRC over part of a buffer rather than the whole thing.
Sure, most of the time you may just use the simple overload, and in that case the parameter is always zero. But hashing and CRC implementations are typically built with an API like this, allowing you to calculate your CRC in chunks if you'd like.
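To make the chunking idea concrete, here is a minimal sketch (a hypothetical helper, not part of the original class) of how offset and preset work together; it assumes a loop over exactly bufLen bytes, unlike the bufLen + 2 in the class above:

    // Starts from 'preset' and processes 'bufLen' bytes beginning at 'offset'.
    // Passing one chunk's result as the next chunk's preset yields the same
    // CRC as a single pass over the whole buffer.
    static int CalcCrc16Chunk(byte[] buffer, int offset, int bufLen, int polynom, int preset)
    {
        var crc = preset & 0xFFFF;
        for (var i = 0; i < bufLen; i++)
        {
            crc ^= buffer[offset + i] & 0xFF;
            for (var j = 0; j < 8; j++)
                crc = (crc & 1) != 0 ? (crc >> 1) ^ polynom : crc >> 1;
        }
        return crc & 0xFFFF;
    }

    // Usage: CRC of the first 4 bytes, then the rest, chained via preset.
    // int part = CalcCrc16Chunk(data, 0, 4, 0xA001, 0);
    // int full = CalcCrc16Chunk(data, 4, data.Length - 4, 0xA001, part);
    // 'full' now equals CalcCrc16Chunk(data, 0, data.Length, 0xA001, 0).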

Conversion of CRC function from C to C# yields wrong values

I'm trying to convert a couple of simple CRC calculating functions from C to C#, but I seem to be getting incorrect results.
The C functions are:
#define CRC32_POLYNOMIAL 0xEDB88320

unsigned long CRC32Value(int i)
{
    int j;
    unsigned long ulCRC;

    ulCRC = i;
    for (j = 8; j > 0; j--)
    {
        if (ulCRC & 1)
            ulCRC = (ulCRC >> 1) ^ CRC32_POLYNOMIAL;
        else
            ulCRC >>= 1;
    }
    return ulCRC;
}

unsigned long CalculateBlockCRC32(
    unsigned long ulCount,
    unsigned char *ucBuffer)
{
    unsigned long ulTemp1;
    unsigned long ulTemp2;
    unsigned long ulCRC = 0;

    while (ulCount-- != 0)
    {
        ulTemp1 = (ulCRC >> 8) & 0x00FFFFFFL;
        ulTemp2 = CRC32Value(((int)ulCRC ^ *ucBuffer++) & 0xff);
        ulCRC = ulTemp1 ^ ulTemp2;
    }
    return (ulCRC);
}
These are well defined; they are taken from a user manual. My C# versions of these functions are:
private ulong CRC32POLYNOMIAL = 0xEDB88320L;

private ulong CRC32Value(int i)
{
    int j;
    ulong ulCRC = (ulong)i;
    for (j = 8; j > 0; j--)
    {
        if (ulCRC % 2 == 1)
        {
            ulCRC = (ulCRC >> 1) ^ CRC32POLYNOMIAL;
        }
        else
        {
            ulCRC >>= 1;
        }
    }
    return ulCRC;
}

private ulong CalculateBlockCRC32(ulong ulCount, byte[] ucBuffer)
{
    ulong ulTemp1;
    ulong ulTemp2;
    ulong ulCRC = 0;
    int bufind = 0;
    while (ulCount-- != 0)
    {
        ulTemp1 = (ulCRC >> 8) & 0x00FFFFFFL;
        ulTemp2 = CRC32Value(((int)ulCRC ^ ucBuffer[bufind]) & 0xFF);
        ulCRC = ulTemp1 ^ ulTemp2;
        bufind++;
    }
    return ulCRC;
}
As I mentioned, there are discrepancies between the C version and the C# version. One possible source is my understanding of the C expression ulCRC & 1, which I believe will only be true for odd numbers.
I call the C# function like this:
string contents = "some data";
byte[] toBeHexed = Encoding.ASCII.GetBytes(contents);
ulong calculatedCRC = this.CalculateBlockCRC32((ulong)toBeHexed.Length, toBeHexed);
And the C function is called like this:
char *Buff="some data";
unsigned long iLen = strlen(Buff);
unsigned long CRC = CalculateBlockCRC32(iLen, (unsigned char*) Buff);
I believe that I am calling the functions with the same data in each language. Is that correct? If anyone could shed some light on this, I would be very grateful.
As has already been pointed out by @Adriano Repetti, you should use the UInt32 datatype in place of ulong (ulong is the 64-bit unsigned UInt64, whereas in VC++ unsigned long is only a 32-bit unsigned type):
private UInt32 CRC32POLYNOMIAL = 0xEDB88320;

private UInt32 CRC32Value(int i)
{
    int j;
    UInt32 ulCRC = (UInt32)i;
    for (j = 8; j > 0; j--)
    {
        if (ulCRC % 2 == 1)
        {
            ulCRC = (ulCRC >> 1) ^ CRC32POLYNOMIAL;
        }
        else
        {
            ulCRC >>= 1;
        }
    }
    return ulCRC;
}

private UInt32 CalculateBlockCRC32(UInt32 ulCount, byte[] ucBuffer)
{
    UInt32 ulTemp1;
    UInt32 ulTemp2;
    UInt32 ulCRC = 0;
    int bufind = 0;
    while (ulCount-- != 0)
    {
        ulTemp1 = (ulCRC >> 8) & 0x00FFFFFF;
        ulTemp2 = CRC32Value(((int)ulCRC ^ ucBuffer[bufind]) & 0xFF);
        ulCRC = ulTemp1 ^ ulTemp2;
        bufind++;
    }
    return ulCRC;
}

string contents = "12";
byte[] toBeHexed = Encoding.ASCII.GetBytes(contents);
UInt32 calculatedCRC = CalculateBlockCRC32((UInt32)toBeHexed.Length, toBeHexed);
Usually in C# it doesn't matter whether you use the C# data type name (recommended by Microsoft) or the ECMA type name. But in this and similar cases of bit-level manipulation, it can greatly clarify the intent and prevent mistakes.
In C it is always a good idea to use the typedefs from stdint.h. They do the same job as the ECMA types in C# - clarifying intent - and also guarantee the length and signedness of the types used (C compilers may use different lengths for the same types, because the standard doesn't specify exact sizes):
#include <stdint.h>
#include <stdio.h>  /* added for printf */
#include <string.h> /* added for strlen */

#define CRC32_POLYNOMIAL ((uint32_t)0xEDB88320)

uint32_t CRC32Value(uint32_t i)
{
    uint32_t j;
    uint32_t ulCRC;

    ulCRC = i;
    for (j = 8; j > 0; j--)
    {
        if (ulCRC & 1)
            ulCRC = (ulCRC >> 1) ^ CRC32_POLYNOMIAL;
        else
            ulCRC >>= 1;
    }
    return ulCRC;
}

uint32_t CalculateBlockCRC32(
    size_t ulCount,
    uint8_t *ucBuffer)
{
    uint32_t ulTemp1;
    uint32_t ulTemp2;
    uint32_t ulCRC = 0;

    while (ulCount-- != 0)
    {
        ulTemp1 = (ulCRC >> 8) & ((uint32_t)0x00FFFFFF);
        ulTemp2 = CRC32Value((ulCRC ^ *ucBuffer++) & 0xff);
        ulCRC = ulTemp1 ^ ulTemp2;
    }
    return (ulCRC);
}

char *Buff = "12";
size_t iLen = strlen(Buff);
uint32_t CRC = CalculateBlockCRC32(iLen, (uint8_t *) Buff);
printf("%u", CRC);

Function to calculate CRC16 (Modbus) value

Using a C#.NET WPF application, I'm going to connect to a device (Modbus protocol), and I have to calculate a CRC (CRC16).
The function I use calculates a normal CRC16 and its value is correct, but I want the CRC16 (Modbus) value.
Help me sort this out.
There are a lot of resources online about calculating the CRC16 for the Modbus protocol.
For example:
http://www.ccontrolsys.com/w/How_to_Compute_the_Modbus_RTU_Message_CRC
http://www.modbustools.com/modbus_crc16.htm
I think that translating that code to C# should be simple.
You can use this library:
https://github.com/meetanthony/crccsharp
It contains several CRC algorithms, including Modbus.
Usage:
Download source code and add it to your project:
public byte[] CalculateCrc16Modbus(byte[] bytes)
{
    CrcStdParams.StandartParameters.TryGetValue(CrcAlgorithms.Crc16Modbus, out Parameters crc_p);
    Crc crc = new Crc(crc_p);
    crc.Initialize();
    var crc_bytes = crc.ComputeHash(bytes);
    return crc_bytes;
}
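A quick usage sketch (assuming the method above is in scope; the frame bytes are made up for illustration):

    byte[] message = { 0x01, 0x04, 0x02, 0x00, 0x0A };
    byte[] crcBytes = CalculateCrc16Modbus(message); // the 16-bit CRC as a byte array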
Just use:
public static ushort Modbus(byte[] buf)
{
    ushort crc = 0xFFFF;
    int len = buf.Length;
    for (int pos = 0; pos < len; pos++)
    {
        crc ^= buf[pos];
        for (int i = 8; i != 0; i--)
        {
            if ((crc & 0x0001) != 0)
            {
                crc >>= 1;
                crc ^= 0xA001;
            }
            else
            {
                crc >>= 1;
            }
        }
    }
    // lo-hi
    //return crc;
    // ..or
    // hi-lo reordered
    return (ushort)((crc >> 8) | (crc << 8));
}
(courtesy of https://www.cyberforum.ru/csharp-beginners/thread2329096.html)
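A usage sketch for appending the CRC to a frame (hypothetical request bytes; since the function above returns the CRC already swapped to hi-lo, writing its high byte first puts the low CRC byte on the wire first, as Modbus RTU requires):

    byte[] frame = { 0x01, 0x04, 0x00, 0x00, 0x00, 0x01 }; // request without CRC
    ushort crc = Modbus(frame);
    byte[] withCrc = new byte[frame.Length + 2];
    Array.Copy(frame, withCrc, frame.Length);
    withCrc[frame.Length] = (byte)(crc >> 8);       // low CRC byte (wire order)
    withCrc[frame.Length + 1] = (byte)(crc & 0xFF); // high CRC byte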
Boost CRC (added because of the question's title):
#include <boost/crc.hpp>
#include <cstdint>
#include <vector>

auto v = std::vector<std::uint8_t>{ 0x12, 0x34, 0x56, 0x78 };
auto result = boost::crc_optimal<16, 0x8005, 0xFFFF, 0, true, true>{};
result.process_bytes(v.data(), v.size());
auto checksum = result.checksum(); // the Modbus CRC16 of the buffer

jBCrypt 0.3 C# Port (BCrypt.net)

After looking into a bug in the original jBCrypt v0.1 C# port, BCrypt.net (see the related question), I decided to compare the new jBCrypt code against the old C# port to look for discrepancies and potential issues like the related question's bug.
Here is what I've found:
// original Java (jBCrypt v0.3):
private static int streamtoword(byte data[], int offp[]) {
    int i;
    int word = 0;
    int off = offp[0];

    for (i = 0; i < 4; i++) {
        word = (word << 8) | (data[off] & 0xff);
        off = (off + 1) % data.length;
    }

    offp[0] = off;
    return word;
}
// port to C#:
private static uint StreamToWord(byte[] data, ref int offset)
{
    uint word = 0;
    for (int i = 0; i < 4; i++)
    {
        // note the difference with the omission of "& 0xff"
        word = (word << 8) | data[offset];
        offset = (offset + 1) % data.Length;
    }
    return word;
}
If the prior is incorrect, would the following fix it?
private static uint StreamToWord(byte[] data, ref int[] offsetp)
{
    uint word = 0;
    int offset = offsetp[0];
    for (int i = 0; i < 4; i++)
    {
        word = (word << 8) | (uint)(data[offset] & 0xff);
        offset = (offset + 1) % data.Length;
    }
    offsetp[0] = offset;
    return word;
}
The & 0xff is required in the Java version because in Java, bytes are signed. (Some argue that this is a bug.)
In C#, bytes are unsigned, so the & 0xff is unnecessary.
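A small illustration of the difference (C#'s sbyte behaves like Java's signed byte; the variable names here are mine):

    sbyte javaStyle = unchecked((sbyte)0xFF); // Java's byte is signed: this is -1
    int signExtended = javaStyle;             // widens to -1 (0xFFFFFFFF)
    int masked = javaStyle & 0xFF;            // 255, which is why Java needs the mask

    byte csharpByte = 0xFF;                   // C#'s byte is unsigned
    int zeroExtended = csharpByte;            // widens to 255; no mask needed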
