I have the following CRC function:
public static ushort ComputeCRC16(byte[] data)
{
    ushort i, j, crc = 0;
    int size = data.Length;
    for (i = 0; i < size - 2; i++)
    {
        crc ^= (ushort)(data[i] << 8);
        for (j = 0; j < 8; j++)
        {
            if ((crc & 0x8000) != 0) /* Test for bit 15 */
            {
                crc = (ushort)((crc << 1) ^ 0x1234); /* POLYNOMIAL */
            }
            else
            {
                crc <<= 1;
            }
        }
    }
    return crc;
}
I have been trying to use it to calculate a CRC16 of a file that is around 800 KB, but it takes forever; after five minutes the value of i was still at around 2,000, and it should go up to about 800,000.
Can someone explain why it is so slow and what I can do to solve this issue?
I'm working with Visual Studio 2015 on an i7 processor, and the computer is neither old nor broken.
Replace the first lines with:
int i, j;
ushort crc = 0;
You were using a ushort for the for counter, but if size > 65535, the for loop will never end.
The reason is that, by default, C# does not throw an exception when an integer overflows; the value silently wraps around. Check out the following code for a demonstration:
ushort i = ushort.MaxValue; //65535
i++; //0
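For completeness, here is the function with the fix applied (a sketch based on the change above; the `size - 2` bound is kept exactly as it appears in the question):

public static ushort ComputeCRC16(byte[] data)
{
    int i, j;          // int counters cannot wrap around at 65535
    ushort crc = 0;
    int size = data.Length;
    for (i = 0; i < size - 2; i++)
    {
        crc ^= (ushort)(data[i] << 8);
        for (j = 0; j < 8; j++)
        {
            if ((crc & 0x8000) != 0)
                crc = (ushort)((crc << 1) ^ 0x1234);
            else
                crc <<= 1;
        }
    }
    return crc;
}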
I have this code which calculates CRC-32. I need to edit it to use: polynomial 0x04C11DB7, initial value 0xFFFFFFFF, final XOR 0.
The CRC32 of the string "123456789" should then be "0376E6E7". I found this code; it's very slow, but it works anyway.
```
internal static class Crc32
{
    internal static uint[] MakeCrcTable()
    {
        uint c;
        uint[] crcTable = new uint[256];
        for (uint n = 0; n < 256; n++)
        {
            c = n;
            for (int k = 0; k < 8; k++)
            {
                var res = c & 1;
                c = (res == 1) ? (0xEDB88320 ^ (c >> 1)) : (c >> 1);
            }
            crcTable[n] = c;
        }
        return crcTable;
    }

    internal static uint CalculateCrc32(byte[] str)
    {
        uint[] crcTable = Crc32.MakeCrcTable();
        uint crc = 0xffffffff;
        for (int i = 0; i < str.Length; i++)
        {
            byte c = str[i];
            crc = (crc >> 8) ^ crcTable[(crc ^ c) & 0xFF];
        }
        return ~crc; //(crc ^ (-1)) >> 0;
    }
}
```
Based on the added comments, what you are looking for is CRC-32/MPEG-2, which reverses the direction of the CRC, and eliminates the final exclusive-or, compared to the implementation you have, which is a CRC-32/ISO-HDLC.
To get there, you need to flip the CRC from reflected to forward. You bit-flip the polynomial to get 0x04c11db7, check the high bit instead of the low bit, reverse the shifts, both in the table generation and use of the table, and exclusive-or with the high byte of the CRC instead of the low byte.
To remove the final exclusive-or, remove the tilde at the end.
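A minimal sketch of those changes applied to the code above (my illustration of the described steps, not a tested reference implementation); for "123456789" it should produce the desired 0x0376E6E7:

internal static class Crc32Mpeg2
{
    internal static uint[] MakeCrcTable()
    {
        uint[] crcTable = new uint[256];
        for (uint n = 0; n < 256; n++)
        {
            uint c = n << 24;               // byte enters at the high end
            for (int k = 0; k < 8; k++)
            {
                c = (c & 0x80000000) != 0   // test the high bit instead of the low bit
                    ? (c << 1) ^ 0x04C11DB7 // bit-flipped (forward) polynomial
                    : c << 1;               // shifts go left instead of right
            }
            crcTable[n] = c;
        }
        return crcTable;
    }

    internal static uint CalculateCrc32(byte[] str)
    {
        uint[] crcTable = MakeCrcTable();
        uint crc = 0xffffffff;
        for (int i = 0; i < str.Length; i++)
        {
            // exclusive-or with the high byte of the CRC instead of the low byte
            crc = (crc << 8) ^ crcTable[((crc >> 24) ^ str[i]) & 0xFF];
        }
        return crc;                          // no final exclusive-or (tilde removed)
    }
}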
I'm trying to port an old code from C to C# which basically receives a string and returns a CRC16 of it...
The C method is as follow:
#define CRC_MASK 0x1021 /* x^16 + x^12 + x^5 + x^0 */
UINT16 CRC_Calc (unsigned char *pbData, int iLength)
{
    UINT16 wData, wCRC = 0;
    int i;
    for ( ; iLength > 0; iLength--, pbData++) {
        wData = (UINT16) (((UINT16) *pbData) << 8);
        for (i = 0; i < 8; i++, wData <<= 1) {
            if ((wCRC ^ wData) & 0x8000)
                wCRC = (UINT16) ((wCRC << 1) ^ CRC_MASK);
            else
                wCRC <<= 1;
        }
    }
    return wCRC;
}
My ported C# code is this:
private static ushort Calc(byte[] data)
{
    ushort wData, wCRC = 0;
    for (int i = 0; i < data.Length; i++)
    {
        wData = Convert.ToUInt16(data[i] << 8);
        for (int j = 0; j < 8; j++, wData <<= 1)
        {
            var a = (wCRC ^ wData) & 0x8000;
            if (a != 0)
            {
                var c = (wCRC << 1) ^ 0x1021;
                wCRC = Convert.ToUInt16(c);
            }
            else
            {
                wCRC <<= 1;
            }
        }
    }
    return wCRC;
}
The test string is "OPN"... It must return a ushort which is (of course) the 2 bytes A8 A9, and CRC_MASK is the polynomial for that calculation. I found several examples of CRC16 here and around the web, but none of them achieves this result, and this CRC calculation must match the one on the device we are connecting to.
Where is the mistake? I really appreciate any help.
Thanks! Best regards,
Gutemberg
UPDATE
Following the answer from @rcgldr, I put together the following sample:
_serial = new SerialPort("COM6", 19200, Parity.None, 8, StopBits.One);
_serial.Open();
_serial.Encoding = Encoding.GetEncoding(1252);
_serial.DataReceived += Serial_DataReceived;
var msg = "OPN";
var data = Encoding.GetEncoding(1252).GetBytes(msg);
var crc = BitConverter.GetBytes(Calc(data));
var msb = crc[0].ToString("X");
var lsb = crc[1].ToString("X");
//The following line must be something like: \x16OPN\x17\xA8\xA9
var cmd = string.Format(@"{0}{1}{2}\x{3}\x{4}", SYN, msg, ETB, msb, lsb);
//var cmd = "\x16OPN\x17\xA8\xA9";
_serial.Write(cmd);
The value of the cmd variable is what I'm trying to send to the device. If you have a look at the commented cmd value, that is a working string. The 2 bytes of the CRC16 go in the last two parameters (msb and lsb). So, in the sample here, msb MUST be "\xA8" and lsb MUST be "\xA9" in order for the command to work (the CRC16 must match on the device).
Any clues?
Thanks again.
UPDATE 2
For those who run into the same situation, where you need to format the string with \x, this is what I did to get it working:
protected string ToMessage(string data)
{
    var msg = data + ETB;
    var crc = CRC16.Compute(msg);
    var fullMsg = string.Format(@"{0}{1}{2:X}{3:X}", SYN, msg, crc[0], crc[1]);
    return fullMsg;
}
This returns the full message that I need, including the \x on it. The SYN variable is '\x16' and ETB is '\x17'.
Thank you all for the help!
Gutemberg
The problem here is that the message including the ETB (\x17) is 4 bytes long (the leading sync byte isn't used for the CRC): "OPN\x17" == {'O', 'P', 'N', 0x17}, which results in a CRC of {0xA8, 0xA9} to be appended to the message. So the CRC function is correct, but the original test data wasn't including the 4th byte which is 0x17.
This is a working example (at least with VS2015 express).
private static ushort Calc(byte[] data)
{
    ushort wCRC = 0;
    for (int i = 0; i < data.Length; i++)
    {
        wCRC ^= (ushort)(data[i] << 8);
        for (int j = 0; j < 8; j++)
        {
            if ((wCRC & 0x8000) != 0)
                wCRC = (ushort)((wCRC << 1) ^ 0x1021);
            else
                wCRC <<= 1;
        }
    }
    return wCRC;
}
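As a quick check (my addition, not part of the original answer), feed the message bytes including the ETB into the function:

var data = Encoding.GetEncoding(1252).GetBytes("OPN\x17");
ushort crc = Calc(data);
Console.WriteLine(crc.ToString("X4")); // prints "A8A9"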
Using C# .NET in a WPF application, I'm going to connect to a device (MODBUS protocol), and I have to calculate a CRC (CRC16).
The function I use calculates a normal CRC16 and the value is correct, but I want the CRC16 (Modbus) value.
Help me to sort it out.
There are a lot of resources online about the calculation of the crc16 for the modbus protocol.
For example:
http://www.ccontrolsys.com/w/How_to_Compute_the_Modbus_RTU_Message_CRC
http://www.modbustools.com/modbus_crc16.htm
I think that translating that code to C# should be simple.
You can use this library:
https://github.com/meetanthony/crccsharp
It contains several CRC algorithms, including Modbus.
Usage:
Download source code and add it to your project:
public byte[] CalculateCrc16Modbus(byte[] bytes)
{
    // note: "StandartParameters" is spelled this way in the library itself
    CrcStdParams.StandartParameters.TryGetValue(CrcAlgorithms.Crc16Modbus, out Parameters crc_p);
    Crc crc = new Crc(crc_p);
    crc.Initialize();
    var crc_bytes = crc.ComputeHash(bytes);
    return crc_bytes;
}
Just use:
public static ushort Modbus(byte[] buf)
{
    ushort crc = 0xFFFF;
    int len = buf.Length;
    for (int pos = 0; pos < len; pos++)
    {
        crc ^= buf[pos];
        for (int i = 8; i != 0; i--)
        {
            if ((crc & 0x0001) != 0)
            {
                crc >>= 1;
                crc ^= 0xA001;
            }
            else
                crc >>= 1;
        }
    }
    // lo-hi
    //return crc;
    // ..or
    // hi-lo reordered
    return (ushort)((crc >> 8) | (crc << 8));
}
(courtesy of https://www.cyberforum.ru/csharp-beginners/thread2329096.html)
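As a sanity check (my addition): the standard check value for CRC-16/MODBUS over the ASCII string "123456789" is 0x4B37, so with the hi-lo reorder this function returns it byte-swapped:

var data = Encoding.ASCII.GetBytes("123456789");
Console.WriteLine(Modbus(data).ToString("X4")); // "374B", i.e. 0x4B37 with bytes swapped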
Boost CRC (Added due to title)
auto v = std::vector< std::uint8_t > { 0x12, 0x34, 0x56, 0x78 };
auto result = boost::crc_optimal<16, 0x8005, 0xFFFF, 0, true, true> {};
result.process_bytes(v.data(), v.size());
I am looking for a faster algorithm than the below for the following. Given a sequence of 64-bit unsigned integers, return a count of the number of times each of the sixty-four bits is set in the sequence.
Example:
4608 = 0000000000000000000000000000000000000000000000000001001000000000
4097 = 0000000000000000000000000000000000000000000000000001000000000001
2048 = 0000000000000000000000000000000000000000000000000000100000000000
counts 0000000000000000000000000000000000000000000000000002101000000001
Example:
2560 = 0000000000000000000000000000000000000000000000000000101000000000
530 = 0000000000000000000000000000000000000000000000000000001000010010
512 = 0000000000000000000000000000000000000000000000000000001000000000
counts 0000000000000000000000000000000000000000000000000000103000010010
Currently I am using a rather obvious and naive approach:
static int bits = sizeof(ulong) * 8;

public static int[] CommonBits(params ulong[] values) {
    int[] counts = new int[bits];
    int length = values.Length;
    for (int i = 0; i < length; i++) {
        ulong value = values[i];
        for (int j = 0; j < bits && value != 0; j++, value = value >> 1) {
            counts[j] += (int)(value & 1UL);
        }
    }
    return counts;
}
A small speed improvement might be achieved by first OR'ing the integers together, then using the result to determine which bits you need to check. You would still have to iterate over each bit, but only once over bits where there are no 1s, rather than values.Length times.
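A sketch of that idea (my illustration, not code from the answer): OR everything once, collect the positions that can possibly be set, and only visit those per value:

// requires: using System.Collections.Generic;
public static int[] CommonBits(params ulong[] values)
{
    int[] counts = new int[64];
    ulong any = 0;
    foreach (ulong v in values)
        any |= v;                        // a 0 bit here is 0 in every value

    var positions = new List<int>();     // bit positions worth checking
    for (int j = 0; j < 64; j++)
        if ((any & (1UL << j)) != 0)
            positions.Add(j);

    foreach (ulong v in values)
        foreach (int j in positions)
            counts[j] += (int)((v >> j) & 1UL);
    return counts;
}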
I'll direct you to the classical: Bit Twiddling Hacks, but your goal seems slightly different than just typical counting (i.e. your 'counts' variable is in a really weird format), but maybe it'll be useful.
The best I can do here is just get silly with it and unroll the inner loop... it seems to have cut the running time in half (roughly 4 seconds as opposed to the 8 in yours to process 100 ulongs 100,000 times)... I used a quick command-line app to generate the following code:
for (int i = 0; i < length; i++)
{
    ulong value = values[i];
    if (0ul != (value & 1ul)) counts[0]++;
    if (0ul != (value & 2ul)) counts[1]++;
    if (0ul != (value & 4ul)) counts[2]++;
    //etc...
    if (0ul != (value & 4611686018427387904ul)) counts[62]++;
    if (0ul != (value & 9223372036854775808ul)) counts[63]++;
}
That was the best I can do... As per my comment, you'll waste some amount (I know not how much) running this in a 32-bit environment. If you're that concerned about performance, it may benefit you to first convert the data to uint.
Tough problem... may even benefit you to marshal it into C++ but that entirely depends on your application. Sorry I couldn't be more help, maybe someone else will see something I missed.
Update, a few more profiler sessions showing a steady 36% improvement. shrug I tried.
OK, let me try again :D
Expand each byte of the 64-bit integer into a 64-bit integer of its own, by moving bit n of the byte into the low bit of byte n (i.e. shifting each bit left by n*8).
For instance:
10110101 -> 0000000100000000000000010000000100000000000000010000000000000001
(use a lookup table for that translation)
Then just sum everything together the right way, and you get an array of unsigned chars holding the per-bit counts.
You have to make 8 * (number of 64-bit integers) summations.
Code in C:

// LOOKUPTABLE is external, declared as: int64 LOOKUPTABLE[256];
// entry i spreads the 8 bits of i into the 8 bytes of an int64
unsigned char* bitcounts(int64* int64array, int len)
{
    int64* array64;
    int64 tmp;
    array64 = (int64*)malloc(8 * sizeof(int64)); // 8 accumulators, one per byte lane
    for (int i = 0; i < 8; i++) array64[i] = 0;  // set to 0
    for (int j = 0; j < len; j++)
    {
        tmp = int64array[j];
        for (int i = 7; tmp; i--) // stop early once the remaining high bytes are zero
        {
            array64[i] += LOOKUPTABLE[tmp & 0xFF];
            tmp = tmp >> 8;
        }
    }
    return (unsigned char*)array64; // 64 single-byte counters
}
This reduces the work compared to the naive implementation by a factor of 8, because it counts 8 bits at a time.
EDIT:
I fixed the code to break out early on smaller integers, but I am still unsure about endianness.
Also, this only works for up to 255 inputs, because it uses an unsigned char to store each count. If you have a longer input sequence, you can change this code to hold counts up to 2^16, at half the speed.
const unsigned int BYTESPERVALUE = 64 / 8;
unsigned int bcount[BYTESPERVALUE][256];
memset(bcount, 0, sizeof bcount);
/* values_length = number of 64-bit values in values[] */
for (int i = values_length; --i >= 0; )
    for (int j = BYTESPERVALUE; --j >= 0; ) {
        const unsigned int jth_byte = (values[i] >> (j * 8)) & 0xff;
        bcount[j][jth_byte]++; /* count byte value (0..255) instances */
    }

unsigned int count[64];
memset(count, 0, sizeof count);
for (int i = BYTESPERVALUE; --i >= 0; )
    for (int j = 256; --j >= 0; )      /* check each byte value instance */
        for (int k = 8; --k >= 0; )    /* for each bit in a given byte */
            if (j & (1 << k))          /* if bit was set, then add its count */
                count[i * 8 + k] += bcount[i][j];
Another approach that might be profitable, would be to build an array of 256 elements,
which encodes the actions that you need to take in incrementing the count array.
Here is a sample for a 4 element table, which does 2 bits instead of 8 bits.
int bitToSubscript[4][3] =
{
    {0},       // No bits set
    {1, 0},    // Bit 0 set
    {1, 1},    // Bit 1 set
    {2, 0, 1}  // Bit 0 and bit 1 set
};
The algorithm then degenerates to:

1. Pick the 2 right-hand bits off of the number.
2. Use that as a small integer to index into the bitToSubscript array.
3. In that array, pull off the first integer. That is the number of elements in the count array that you need to increment.
4. Based on that count, iterate through the remainder of the row, incrementing count based on the subscripts you pull out of the bitToSubscript array.
5. Once that loop is done, shift your original number two bits to the right... Rinse, repeat as needed.

Now, there is one issue I ignored in that description: the subscripts are relative. You need to keep track of where you are in the count array. Every time you loop, you add two to an offset, and to that offset you add the relative subscript from the bitToSubscript array.
It should be possible to scale up to the size you want, based on this small example. I would think that another program could be used, to generate the source code for the bitToSubscript array, so that it can be simply hard coded in your program.
There are other variation on this scheme, but I would expect it to run faster on average than anything that does it one bit at a time.
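Here is a small sketch of that scheme (my interpretation, in C#, 2 bits at a time as in the sample table), with the running offset handling the relative subscripts:

// Row format: { how many counters to bump, then the relative subscripts }.
static readonly int[][] bitToSubscript =
{
    new[] { 0 },       // 00: no bits set
    new[] { 1, 0 },    // 01: bit 0 set
    new[] { 1, 1 },    // 10: bit 1 set
    new[] { 2, 0, 1 }, // 11: bit 0 and bit 1 set
};

public static int[] CommonBits(params ulong[] values)
{
    int[] counts = new int[64];
    foreach (ulong v in values)
    {
        ulong rest = v;
        for (int offset = 0; rest != 0; offset += 2, rest >>= 2)
        {
            int[] row = bitToSubscript[(int)(rest & 3)];
            for (int k = 1; k <= row[0]; k++)
                counts[offset + row[k]]++; // relative subscript + running offset
        }
    }
    return counts;
}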
Good Hunting.
Evil.
I believe this should give a nice speed improvement:
const ulong mask = 0x1111111111111111; // the low bit of every 4-bit lane

public static int[] CommonBits(params ulong[] values)
{
    int[] counts = new int[64];
    // Each accumulator holds sixteen 4-bit counters; accumN covers bit
    // positions N, N+4, N+8, ..., N+60.
    ulong accum0 = 0, accum1 = 0, accum2 = 0, accum3 = 0;
    int i = 0;
    foreach( ulong v in values ) {
        if (i == 15) {
            // Flush before any 4-bit counter can exceed 15. The sixteen
            // 4-bit shifts below also clear the accumulators back to zero.
            for( int j = 0; j < 64; j += 4 ) {
                counts[j] += ((int)accum0) & 15;
                counts[j+1] += ((int)accum1) & 15;
                counts[j+2] += ((int)accum2) & 15;
                counts[j+3] += ((int)accum3) & 15;
                accum0 >>= 4;
                accum1 >>= 4;
                accum2 >>= 4;
                accum3 >>= 4;
            }
            i = 0;
        }
        accum0 += (v) & mask;      // lane j of accum0 gathers bit 4*j of v
        accum1 += (v >> 1) & mask; // lane j of accum1 gathers bit 4*j+1, etc.
        accum2 += (v >> 2) & mask;
        accum3 += (v >> 3) & mask;
        i++;
    }
    // Drain whatever is left in the accumulators.
    for( int j = 0; j < 64; j += 4 ) {
        counts[j] += ((int)accum0) & 15;
        counts[j+1] += ((int)accum1) & 15;
        counts[j+2] += ((int)accum2) & 15;
        counts[j+3] += ((int)accum3) & 15;
        accum0 >>= 4;
        accum1 >>= 4;
        accum2 >>= 4;
        accum3 >>= 4;
    }
    return counts;
}
Demo: http://ideone.com/eNn4O (needs more test cases)
http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetNaive
One of them
unsigned int v; // count the number of bits set in v
unsigned int c; // c accumulates the total bits set in v
for (c = 0; v; c++)
{
    v &= v - 1; // clear the least significant bit set
}
Keep in mind that this method does one iteration per set bit, so it takes at most about log2(n) loops for the number n; for 10 (binary 1010) it needs only 2 loops.
You could also take the method for counting 32 bits with 64-bit arithmetic and apply it to each half of the word, which would take about 2*15 + 4 instructions:
// option 3, for at most 32-bit values in v:
c =  ((v & 0xfff) * 0x1001001001001ULL & 0x84210842108421ULL) % 0x1f;
c += (((v & 0xfff000) >> 12) * 0x1001001001001ULL & 0x84210842108421ULL) % 0x1f;
c += ((v >> 24) * 0x1001001001001ULL & 0x84210842108421ULL) % 0x1f;
If you have an SSE4.2-capable processor, you can use the POPCNT instruction.
http://en.wikipedia.org/wiki/SSE4
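In C# (an assumption beyond the original answer: this requires .NET Core 3.0 or later), the same instruction is reachable through System.Numerics.BitOperations:

using System.Numerics;

ulong v = 0b1010;
int setBits = BitOperations.PopCount(v); // 2 - JIT-compiled to POPCNT where the CPU supports it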
Hello, a quick question regarding bit shifting.
I have a value in hex: new byte[] { 0x56, 0xAF };
which is 0101 0110 1010 1111.
I want to keep the first N bits, for example 12.
Then I must right-shift off the lowest 4 bits (16 - 12) to get 0000 0101 0110 1010 (1386 decimal).
I can't wrap my head around it and make it scalable for n bits.
Some time ago I coded these two functions; the first one shifts a byte[] a specified number of bits to the left, the second does the same to the right:
Left Shift:
public byte[] ShiftLeft(byte[] value, int bitcount)
{
    byte[] temp = new byte[value.Length];
    if (bitcount >= 8)
    {
        Array.Copy(value, bitcount / 8, temp, 0, temp.Length - (bitcount / 8));
    }
    else
    {
        Array.Copy(value, temp, temp.Length);
    }
    if (bitcount % 8 != 0)
    {
        for (int i = 0; i < temp.Length; i++)
        {
            temp[i] <<= bitcount % 8;
            if (i < temp.Length - 1)
            {
                temp[i] |= (byte)(temp[i + 1] >> (8 - bitcount % 8));
            }
        }
    }
    return temp;
}
Right Shift:
public byte[] ShiftRight(byte[] value, int bitcount)
{
    byte[] temp = new byte[value.Length];
    if (bitcount >= 8)
    {
        Array.Copy(value, 0, temp, bitcount / 8, temp.Length - (bitcount / 8));
    }
    else
    {
        Array.Copy(value, temp, temp.Length);
    }
    if (bitcount % 8 != 0)
    {
        for (int i = temp.Length - 1; i >= 0; i--)
        {
            temp[i] >>= bitcount % 8;
            if (i > 0)
            {
                temp[i] |= (byte)(temp[i - 1] << (8 - bitcount % 8));
            }
        }
    }
    return temp;
}
If you need further explanation, please comment on this; I will then edit my post for clarification...
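For the example from the question, a quick check (my addition):

byte[] hex = { 0x56, 0xAF };         // 0101 0110 1010 1111
byte[] shifted = ShiftRight(hex, 4); // { 0x05, 0x6A } == 0000 0101 0110 1010
Console.WriteLine((shifted[0] << 8) | shifted[1]); // 1386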
You can use a BitArray and then easily copy each bit to the right, starting from the right.
http://msdn.microsoft.com/en-us/library/system.collections.bitarray_methods.aspx
you want something like...
var HEX = new byte[] { 0x56, 0xAF };
var bits = new BitArray(HEX);
int bitstoShiftRight = 4;
for (int i = 0; i < bits.Length; i++)
{
    bits[i] = i < (bits.Length - bitstoShiftRight) ? bits[i + bitstoShiftRight] : false;
}
bits.CopyTo(HEX, 0);
If you have k total bits, and you want the "first" (as in most significant) n bits, you can simply right shift k-n times. The last k-n bits will be removed, by sort of "falling" off the end, and the first n will be moved to the least significant side.
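For the question's 16-bit example this is a one-liner (my illustration):

ushort value = 0x56AF;                     // k = 16 bits total
int n = 12;                                // keep the 12 most significant bits
ushort kept = (ushort)(value >> (16 - n)); // 0x056A == 1386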
Answering using C-like notation, assuming bits_in_byte is the number of bits in a byte determined elsewhere:
int remove_bits_count = HEX.count * bits_in_byte - bits_to_keep;
int remove_bits_in_byte_count = remove_bits_count % bits_in_byte;
if (remove_bits_count > 0)
{
    for (int iteration = 0; iteration < min(HEX.count, (bits_to_keep + bits_in_byte - 1)/bits_in_byte); ++iteration)
    {
        int write_index = HEX.count - iteration - 1;
        int read_index_lo = write_index - remove_bits_count/bits_in_byte;
        int read_index_hi = read_index_lo - 1; /* the next more significant byte */
        if (read_index_lo >= 0)
        {
            HEX[write_index] = HEX[read_index_lo] >> remove_bits_in_byte_count;
            if (read_index_hi >= 0 && remove_bits_in_byte_count != 0)
                HEX[write_index] |= HEX[read_index_hi] << (bits_in_byte - remove_bits_in_byte_count);
        }
        else
        {
            HEX[write_index] = 0;
        }
    }
}
Assuming you are overwriting the original array, you basically take every byte you write to and figure out which bytes it gets its shifted bits from. You go from the end of the array to the front to ensure you never overwrite data you still need to read.