C# to F# CRC16: ^ and ^^^ work differently. How does ^ work?

I found this C# code for CRC16, but I need it in F#:
using System;

public class Crc16 {
    const ushort polynomial = 0xA001;
    ushort[] table = new ushort[256];

    public ushort ComputeChecksum(byte[] bytes) {
        ushort crc = 0;
        for (int i = 0; i < bytes.Length; ++i) {
            byte index = (byte)(crc ^ bytes[i]);
            crc = (ushort)((crc >> 8) ^ table[index]);
        }
        return crc;
    }

    public byte[] ComputeChecksumBytes(byte[] bytes) {
        ushort crc = ComputeChecksum(bytes);
        return BitConverter.GetBytes(crc);
    }

    public Crc16() {
        ushort value;
        ushort temp;
        for (ushort i = 0; i < table.Length; ++i) {
            value = 0;
            temp = i;
            for (byte j = 0; j < 8; ++j) {
                if (((value ^ temp) & 0x0001) != 0) {
                    value = (ushort)((value >> 1) ^ polynomial);
                } else {
                    value >>= 1;
                }
                temp >>= 1;
            }
            table[i] = value;
        }
    }
}
Here is where I started:
let ComputeChecksum (bytes : byte array) =
    let mutable crc = 0us
    for i = 0 to bytes.Length - 1 do
        let index = (crc ^^^ bytes.[i]) // ? uint16 and byte
I think the C# version takes either the first or the second byte here. So I want to know how the C# '^' operator works in this line, and how I can translate it to F#.

This computes the same result as your C# code.
type Crc16() =
    let polynomial = 0xA001us
    let table = Array.init 256 (fun i ->
        ((0us, uint16 i), [0y..7y])
        ||> Seq.fold (fun (value, temp) j ->
            let newValue =
                match (value ^^^ temp) &&& 0x0001us with
                | 0us -> value >>> 1
                | _ -> (value >>> 1) ^^^ polynomial
            newValue, temp >>> 1)
        |> fst)

    member __.ComputeChecksum(bytes : byte[]) =
        (0us, bytes) ||> Seq.fold (fun crc byt ->
            let index = byte (crc ^^^ uint16 byt)
            (crc >>> 8) ^^^ table.[int index])

C# ^ and F# ^^^ are both the XOR operator and produce the same results. The difference is that C# implicitly promotes a byte operand to int, so crc ^ bytes[i] compiles, while F# requires both operands of ^^^ to have the same type, hence the uint16 conversion above. Is that what you're asking?
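For reference, here is a small C# sketch (mine, not from the thread) showing the promotion rule that makes the mixed-type XOR legal in C#:
```
ushort crc = 0x1234;
byte b = 0xAB;
// C# promotes both operands to int before the XOR, then we truncate back:
byte index = (byte)(crc ^ b);            // 0x1234 ^ 0x00AB = 0x129F, truncated to 0x9F
Console.WriteLine(index.ToString("X2")); // prints 9F
```
In F# the equivalent line needs the byte widened first, e.g. byte (crc ^^^ uint16 b), which is exactly what the answer above does.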

Related

How to edit this code to calculate CRC-32/MPEG in C#

I have this code which calculates CRC-32. I need to edit it to use: polynomial 0x04C11DB7, initial value 0xFFFFFFFF, final XOR 0.
So the CRC-32 for the string "123456789" should be "0376E6E7". I found a code; it's very slow, but it works anyway.
```
internal static class Crc32
{
    internal static uint[] MakeCrcTable()
    {
        uint c;
        uint[] crcTable = new uint[256];
        for (uint n = 0; n < 256; n++)
        {
            c = n;
            for (int k = 0; k < 8; k++)
            {
                var res = c & 1;
                c = (res == 1) ? (0xEDB88320 ^ (c >> 1)) : (c >> 1);
            }
            crcTable[n] = c;
        }
        return crcTable;
    }

    internal static uint CalculateCrc32(byte[] str)
    {
        uint[] crcTable = Crc32.MakeCrcTable();
        uint crc = 0xffffffff;
        for (int i = 0; i < str.Length; i++)
        {
            byte c = str[i];
            crc = (crc >> 8) ^ crcTable[(crc ^ c) & 0xFF];
        }
        return ~crc; //(crc ^ (-1)) >> 0;
    }
}
```
Based on the added comments, what you are looking for is CRC-32/MPEG-2, which reverses the direction of the CRC, and eliminates the final exclusive-or, compared to the implementation you have, which is a CRC-32/ISO-HDLC.
To get there, you need to flip the CRC from reflected to forward. You bit-flip the polynomial to get 0x04c11db7, check the high bit instead of the low bit, reverse the shifts, both in the table generation and use of the table, and exclusive-or with the high byte of the CRC instead of the low byte.
To remove the final exclusive-or, remove the tilde at the end.
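To make the recipe concrete, here is a minimal sketch of the forward-table CRC-32/MPEG-2 described above (my illustration of those steps, not code from the original answer):
```
using System;
using System.Text;

internal static class Crc32Mpeg2
{
    static readonly uint[] Table = MakeTable();

    static uint[] MakeTable()
    {
        var table = new uint[256];
        for (uint n = 0; n < 256; n++)
        {
            uint c = n << 24;                  // byte enters at the high end
            for (int k = 0; k < 8; k++)
                c = (c & 0x80000000) != 0      // check the high bit
                    ? (c << 1) ^ 0x04C11DB7    // forward (bit-flipped) polynomial
                    : c << 1;                  // shifts run left instead of right
            table[n] = c;
        }
        return table;
    }

    internal static uint Compute(byte[] data)
    {
        uint crc = 0xFFFFFFFF;                 // initial value
        foreach (byte b in data)
            crc = (crc << 8) ^ Table[((crc >> 24) ^ b) & 0xFF]; // high byte indexes the table
        return crc;                            // no final exclusive-or
    }

    static void Main()
    {
        // Should print 0376E6E7 for "123456789", matching the target in the question.
        Console.WriteLine(Compute(Encoding.ASCII.GetBytes("123456789")).ToString("X8"));
    }
}
```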

Converting CRC16 calculation from C to C#

I have to translate a CRC16 calculation from C to C#, but I get the message that I cannot implicitly convert type 'int' to 'bool' on (crc & 0x8000), and a similar conversion error on return (crc & 0xFFFF).
The code so far:
public unsafe short Crc16(string str)
{
    short crc = 0;
    for (int i = 0; i < str.Length; i++)
    {
        crc = (crc << 1) ^ str[i] ^ ((crc & 0x8000) ? 0x1021 : 0);
    }
    return (crc & 0xFFFF);
}
EDIT: Changed char parameter to string
Original C code
short Crc16(char *str)
{
    short crc = 0;
    unsigned int i;
    for (i = 0; i < strlen(str); i++)
        crc = (crc << 1) ^ *(str + i) ^ ((crc & 0x8000) ? 0x1021 : 0);
    return (crc & 0xffff);
}
In C, 0 and FALSE are synonymous, and any number that is not 0 is true. (reference)
To make the conversion you would do it something like this:
public short CalcCrc16(string str)
{
    short crc = 0;
    unchecked
    {
        foreach (char c in str)
        {
            short exponent = (short)((crc & 0x8000) != 0 ? 0x1021 : 0);
            crc = (short)((crc << 1) ^ (short)c ^ exponent);
        }
        return (short)(crc & 0xFFFF);
    }
}
So now that we have the C code to work with, I changed the code sample here to match. Below is an explanation of the changes:
char* is the C equivalent of a string.
The for loop iterates over the characters; *(str + i) can be rewritten as str[i] in C, and maps directly to C#'s string indexer str[i].
The ternary condition determines whether the exponent is 0x1021 or 0.
I broke up the lines so you can see the algorithm a bit more clearly.
I changed for(int i = 0; i < str.Length; i++) to a foreach over the characters because it is easier to understand.
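As an aside, an unsigned variant (my own sketch, not part of the answer above; CalcCrc16U is a hypothetical name) sidesteps most of the sign-related casts by using ushort throughout:
```
public static ushort CalcCrc16U(string str)
{
    ushort crc = 0;
    foreach (char c in str)
    {
        // Bit 15 decides whether the polynomial 0x1021 gets XORed in.
        int exponent = (crc & 0x8000) != 0 ? 0x1021 : 0;
        crc = (ushort)((crc << 1) ^ c ^ exponent);
    }
    return crc;
}
```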

Calculating CRC16 in C#

I'm trying to port some old code from C to C# which basically receives a string and returns its CRC16...
The C method is as follow:
#define CRC_MASK 0x1021 /* x^16 + x^12 + x^5 + x^0 */

UINT16 CRC_Calc(unsigned char *pbData, int iLength)
{
    UINT16 wData, wCRC = 0;
    int i;
    for (; iLength > 0; iLength--, pbData++) {
        wData = (UINT16)(((UINT16)*pbData) << 8);
        for (i = 0; i < 8; i++, wData <<= 1) {
            if ((wCRC ^ wData) & 0x8000)
                wCRC = (UINT16)((wCRC << 1) ^ CRC_MASK);
            else
                wCRC <<= 1;
        }
    }
    return wCRC;
}
My ported C# code is this:
private static ushort Calc(byte[] data)
{
    ushort wData, wCRC = 0;
    for (int i = 0; i < data.Length; i++)
    {
        wData = Convert.ToUInt16(data[i] << 8);
        for (int j = 0; j < 8; j++, wData <<= 1)
        {
            var a = (wCRC ^ wData) & 0x8000;
            if (a != 0)
            {
                var c = (wCRC << 1) ^ 0x1021;
                wCRC = Convert.ToUInt16(c);
            }
            else
            {
                wCRC <<= 1;
            }
        }
    }
    return wCRC;
}
The test string is "OPN"... It must return a ushort which is (of course) the 2 bytes A8 A9, and CRC_MASK is the polynomial for that calculation. I did find several examples of CRC16 here and around the web, but none of them achieves this result, and this CRC calculation must match the one computed by the device we are connecting to.
Where is the mistake? I really appreciate any help.
Thanks! Best regards,
Gutemberg
UPDATE
Following the answer from @rcgldr, I put together the following sample:
_serial = new SerialPort("COM6", 19200, Parity.None, 8, StopBits.One);
_serial.Open();
_serial.Encoding = Encoding.GetEncoding(1252);
_serial.DataReceived += Serial_DataReceived;
var msg = "OPN";
var data = Encoding.GetEncoding(1252).GetBytes(msg);
var crc = BitConverter.GetBytes(Calc(data));
var msb = crc[0].ToString("X");
var lsb = crc[1].ToString("X");
// The following line must be something like: \x16OPN\x17\xA8\xA9
var cmd = string.Format(@"{0}{1}{2}\x{3}\x{4}", SYN, msg, ETB, msb, lsb);
//var cmd = "\x16OPN\x17\xA8\xA9";
_serial.Write(cmd);
The value of the cmd variable is what I'm trying to send to the device. If you have a look at the commented-out cmd value, that is a working string. The two bytes of the CRC16 go in the last two parameters (msb and lsb). So, in this sample, msb MUST be "\xA8" and lsb MUST be "\xA9" for the command to work (i.e. the CRC16 must match on the device).
Any clues?
Thanks again.
UPDATE 2
For those who hit the same case where you need to format the string with \x, this is what I did to get it working:
protected string ToMessage(string data)
{
    var msg = data + ETB;
    var crc = CRC16.Compute(msg);
    var fullMsg = string.Format(@"{0}{1}{2:X}{3:X}", SYN, msg, crc[0], crc[1]);
    return fullMsg;
}
This returns the full message that I need, including the \x on it. The SYN variable is '\x16' and ETB is '\x17'.
Thank you all for the help!
Gutemberg
The problem here is that the message including the ETB (\x17) is 4 bytes long (the leading sync byte isn't used for the CRC): "OPN\x17" == {'O', 'P', 'N', 0x17}, which results in a CRC of {0xA8, 0xA9} to be appended to the message. So the CRC function is correct, but the original test data didn't include the 4th byte, which is 0x17.
This is a working example (at least with VS2015 express).
private static ushort Calc(byte[] data)
{
    ushort wCRC = 0;
    for (int i = 0; i < data.Length; i++)
    {
        wCRC ^= (ushort)(data[i] << 8);
        for (int j = 0; j < 8; j++)
        {
            if ((wCRC & 0x8000) != 0)
                wCRC = (ushort)((wCRC << 1) ^ 0x1021);
            else
                wCRC <<= 1;
        }
    }
    return wCRC;
}
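To tie it back to the question, here is a hedged usage sketch (assuming the Calc method above): the CRC is taken over the message plus the ETB byte, not over "OPN" alone.
```
// "OPN" followed by ETB (0x17); Calc over these 4 bytes yields 0xA8A9
// per the discussion above.
byte[] data = System.Text.Encoding.ASCII.GetBytes("OPN\x17");
ushort crc = Calc(data);
Console.WriteLine(crc.ToString("X4")); // A8A9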

How to Implement CRC-16-DNP using C#?

I'm trying to implement a CRC-16 [DNP] using C#; the generator polynomial is given as x^16 + x^13 + x^12 + x^11 + x^10 + x^8 + x^6 + x^5 + x^2 + 1.
I found a standard solution for CRC-16: [ Source ]
public class Crc16
{
    const ushort polynomial = 0xA001;
    ushort[] table = new ushort[256];

    public ushort ComputeChecksum(byte[] bytes)
    {
        ushort crc = 0;
        for (int i = 0; i < bytes.Length; ++i)
        {
            byte index = (byte)(crc ^ bytes[i]);
            crc = (ushort)((crc >> 8) ^ table[index]);
        }
        return crc;
    }

    public byte[] ComputeChecksumBytes(byte[] bytes)
    {
        ushort crc = ComputeChecksum(bytes);
        return BitConverter.GetBytes(crc);
    }

    public Crc16()
    {
        ushort value;
        ushort temp;
        for (ushort i = 0; i < table.Length; ++i)
        {
            value = 0;
            temp = i;
            for (byte j = 0; j < 8; ++j)
            {
                if (((value ^ temp) & 0x0001) != 0)
                {
                    value = (ushort)((value >> 1) ^ polynomial);
                }
                else
                {
                    value >>= 1;
                }
                temp >>= 1;
            }
            table[i] = value;
        }
    }
}
Now, if I convert my polynomial I get 1 0011 1101 0110 0101 => (3D65)h, and my question is: what do I need to change to make the above solution work for the given polynomial?
Edit: I also need to consider two things:
1) The initial value will be 0, and
2) the final CRC has to be complemented.
This was actually very helpful for me. However, I did not use the solution SanVEE posted; I modified the code from the original post as described by Mark Adler, and it works great. At least, so far the result matches up with the DNP3 checksum calculator found here: http://www.lammertbies.nl/comm/info/crc-calculation.html
The code SanVEE posted as the answer looks like it might be very inefficient (e.g. using bools to store each bit), though I have not tested them to compare. Anyone facing the same question may want to examine both answers to see which works better for them.
public class Crc16DNP3
{
    const ushort polynomial = 0xA6BC; //0xA001;
    ushort[] table = new ushort[256];

    public ushort ComputeChecksum(byte[] bytes)
    {
        ushort crc = 0;
        for (int i = 0; i < bytes.Length; ++i)
        {
            byte index = (byte)(crc ^ bytes[i]);
            crc = (ushort)((crc >> 8) ^ table[index]);
        }
        crc = SwapBytes((ushort)(crc ^ 0xffff));
        return crc;
    }

    public byte[] ComputeChecksumBytes(byte[] bytes)
    {
        ushort crc = ComputeChecksum(bytes);
        return BitConverter.GetBytes(crc);
    }

    // SwapBytes taken from http://stackoverflow.com/questions/19560436/bitwise-endian-swap-for-various-types
    private ushort SwapBytes(ushort x)
    {
        return (ushort)((ushort)((x & 0xff) << 8) | ((x >> 8) & 0xff));
    }

    public Crc16DNP3()
    {
        ushort value;
        ushort temp;
        for (ushort i = 0; i < table.Length; ++i)
        {
            value = 0;
            temp = i;
            for (byte j = 0; j < 8; ++j)
            {
                if (((value ^ temp) & 0x0001) != 0)
                {
                    value = (ushort)((value >> 1) ^ polynomial);
                }
                else
                {
                    value >>= 1;
                }
                temp >>= 1;
            }
            table[i] = value;
        }
    }
}
What's wrong with the code at your first link? That also specifies how the CRC bytes are ordered in the message.
You need to reverse the polynomial below x16. The polynomial in bit form is 10011110101100101. Drop the leading 1 (x16), and you have in groups of four: 0011 1101 0110 0101. Reversed that is: 1010 0110 1011 1100. So you should set polynomial = 0xA6BC.
The initial value is already zero. Complementing the final CRC can be done simply with ^ 0xffff.
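As a sanity check, here is a small helper (my sketch, not from the answer) that bit-reverses the low 16 bits of a polynomial; it should map 0x3D65 to 0xA6BC as described:
```
static ushort Reverse16(ushort value)
{
    ushort result = 0;
    for (int i = 0; i < 16; i++)
    {
        result = (ushort)((result << 1) | (value & 1)); // shift the low bit in from the right
        value >>= 1;
    }
    return result;
}
// Reverse16(0x3D65) == 0xA6BC
```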
Finally, I ended up using the following solution and thought it was worth sharing; it may be useful for someone.
private static int GetCrc(string BitString)
{
    bool[] Res = new bool[17];
    bool[] CRC = new bool[16];
    int i;
    bool DoInvert = false;
    string crcBits = string.Empty;

    for (i = 0; i < 16; ++i) // Init before calculation
        CRC[i] = false;

    for (i = 0; i < BitString.Length; ++i)
    {
        DoInvert = ('1' == BitString[i]) ^ CRC[15]; // XOR required?
        CRC[15] = CRC[14];
        CRC[14] = CRC[13];
        CRC[13] = CRC[12] ^ DoInvert;
        CRC[12] = CRC[11] ^ DoInvert;
        CRC[11] = CRC[10] ^ DoInvert;
        CRC[10] = CRC[9] ^ DoInvert;
        CRC[9] = CRC[8];
        CRC[8] = CRC[7] ^ DoInvert;
        CRC[7] = CRC[6];
        CRC[6] = CRC[5] ^ DoInvert;
        CRC[5] = CRC[4] ^ DoInvert;
        CRC[4] = CRC[3];
        CRC[3] = CRC[2];
        CRC[2] = CRC[1] ^ DoInvert;
        CRC[1] = CRC[0];
        CRC[0] = DoInvert;
    }

    for (i = 0; i < 16; ++i)
        Res[15 - i] = CRC[i];
    Res[16] = false;

    // The final result must be complemented
    for (i = 0; i < 16; i++)
    {
        if (Res[i])
            crcBits += "0";
        else
            crcBits += "1";
    }
    return Convert.ToInt32(crcBits, 2);
}
The above C# solution was converted from the auto-generated C code found here.

Porting CRC16 Code in C to C# .NET

So I have this C code that I need to port to C#:
C Code:
uint16 crc16_calc(volatile uint8* bytes, uint32 length)
{
    uint32 i;
    uint32 j;
    uint16 crc = 0xFFFF;
    uint16 word;
    for (i = 0; i < length/2; i++)
    {
        word = ((uint16*)bytes)[i];
        // upper byte
        j = (uint8)((word ^ crc) >> 8);
        crc = (crc << 8) ^ crc16_table[j];
        // lower byte
        j = (uint8)((word ^ (crc >> 8)) & 0x00FF);
        crc = (crc << 8) ^ crc16_table[j];
    }
    return crc;
}
Ported C# Code:
public ushort CalculateChecksum(byte[] bytes)
{
    uint j = 0;
    ushort crc = 0xFFFF;
    ushort word;
    for (uint i = 0; i < bytes.Length / 2; i++)
    {
        word = bytes[i];
        // Upper byte
        j = (byte)((word ^ crc) >> 8);
        crc = (ushort)((crc << 8) ^ crc16_table[j]);
        // Lower byte
        j = (byte)((word ^ (crc >> 8)) & 0x00FF);
        crc = (ushort)((crc << 8) ^ crc16_table[j]);
    }
    return crc;
}
This C algorithm calculates the CRC16 of the supplied bytes using a lookup table, crc16_table[j].
However, the ported C# code does not produce the same results as the C code. Am I doing something wrong?
word = ((uint16*)bytes)[i];
reads two bytes from bytes into a uint16, whereas
word = bytes[i];
just reads a single byte.
Assuming you're running on a little endian machine, your C# code could change to
word = bytes[i++];
word += bytes[i] << 8;
Or, probably better, as suggested by MerickOWA:
word = BitConverter.ToUInt16(bytes, i++);
Note that you could avoid the odd-looking extra increment of i by changing your loop:
for (int i = 0; i + 1 < bytes.Length; i += 2)
{
    word = BitConverter.ToUInt16(bytes, i);
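Putting the pieces together, a complete hedged sketch of the port (assuming little-endian data, using System;, and a crc16_table populated exactly as in the C code) might look like this:
```
public static ushort CalculateChecksum(byte[] bytes, ushort[] crc16_table)
{
    ushort crc = 0xFFFF;
    for (int i = 0; i + 1 < bytes.Length; i += 2)
    {
        // Consume two bytes per iteration, matching the C code's uint16* read.
        ushort word = BitConverter.ToUInt16(bytes, i);
        // Upper byte
        uint j = (byte)((word ^ crc) >> 8);
        crc = (ushort)((crc << 8) ^ crc16_table[j]);
        // Lower byte
        j = (byte)((word ^ (crc >> 8)) & 0x00FF);
        crc = (ushort)((crc << 8) ^ crc16_table[j]);
    }
    return crc;
}
```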
