I'm using C#/.NET in a WPF application. I'm going to connect to a device that uses the Modbus protocol, so I have to calculate a CRC (CRC16).
The function I currently use calculates a normal CRC16 and its value is correct, but I want the Modbus variant of CRC16.
Help me to sort this out.
There are a lot of resources online about calculating the CRC16 for the Modbus protocol.
For example:
http://www.ccontrolsys.com/w/How_to_Compute_the_Modbus_RTU_Message_CRC
http://www.modbustools.com/modbus_crc16.htm
I think that translating that code to C# should be simple.
You can use this library:
https://github.com/meetanthony/crccsharp
It contains several CRC algorithms, including Modbus.
Usage:
Download source code and add it to your project:
public byte[] CalculateCrc16Modbus(byte[] bytes)
{
    CrcStdParams.StandartParameters.TryGetValue(CrcAlgorithms.Crc16Modbus, out Parameters crc_p);
    Crc crc = new Crc(crc_p);
    crc.Initialize();
    var crc_bytes = crc.ComputeHash(bytes);
    return crc_bytes;
}
Just use:
public static ushort Modbus(byte[] buf)
{
    ushort crc = 0xFFFF;
    int len = buf.Length;
    for (int pos = 0; pos < len; pos++)
    {
        crc ^= buf[pos];
        for (int i = 8; i != 0; i--)
        {
            if ((crc & 0x0001) != 0)
            {
                crc >>= 1;
                crc ^= 0xA001;
            }
            else
                crc >>= 1;
        }
    }
    // lo-hi
    //return crc;
    // ..or
    // hi-lo reordered
    return (ushort)((crc >> 8) | (crc << 8));
}
(courtesy of https://www.cyberforum.ru/csharp-beginners/thread2329096.html)
Boost CRC (added due to the title)

#include <boost/crc.hpp>
#include <cstdint>
#include <vector>

auto v = std::vector<std::uint8_t>{ 0x12, 0x34, 0x56, 0x78 };
auto result = boost::crc_optimal<16, 0x8005, 0xFFFF, 0, true, true>{};
result.process_bytes(v.data(), v.size());
// result.checksum() now holds the CRC value
Related
What is the meaning of the variable named offset in this algorithm? It's declared as the second parameter of CalcCrc16.
To me it looks useless, because it's always zero and it's only used in a sum.
This algorithm generates a CRC-16. I'm trying to understand it because I have to create an algorithm that verifies a CRC-16, and I want to use this code as a base.
public sealed class CRC
{
    private readonly int _polynom;

    public static readonly CRC Default = new CRC(0xA001);

    public CRC(int polynom)
    {
        _polynom = polynom;
    }

    public int CalcCrc16(byte[] buffer)
    {
        return CalcCrc16(buffer, 0, buffer.Length, _polynom, 0);
    }

    public static int CalcCrc16(byte[] buffer, int offset, int bufLen, int polynom, int preset)
    {
        preset &= 0xFFFF;
        polynom &= 0xFFFF;
        var crc = preset;
        for (var i = 0; i < (bufLen + 2); i++)
        {
            var data = buffer[(i + offset) % buffer.Length] & 0xFF;
            crc ^= data;
            for (var j = 0; j < 8; j++)
            {
                if ((crc & 0x0001) != 0)
                {
                    crc = (crc >> 1) ^ polynom;
                }
                else
                {
                    crc = crc >> 1;
                }
            }
        }
        return crc & 0xFFFF;
    }
}
I created a simple example, using a small 4-byte message (in a 6-byte buffer):
using System;

namespace crc16
{
    class Program
    {
        private static ushort Crc16(byte[] bfr, int bfrlen)
        {
            ushort crc = 0;
            for (int i = 0; i < bfrlen; i++)
            {
                crc ^= bfr[i];
                for (int j = 0; j < 8; j++)
                    // assumes two's complement math
                    crc = (ushort)((crc >> 1) ^ ((0 - (crc & 1)) & 0xa001));
            }
            return crc;
        }

        static void Main(string[] args)
        {
            ushort crc;
            byte[] data = new byte[6] { 0x11, 0x22, 0x33, 0x44, 0x00, 0x00 };
            crc = Crc16(data, 4);         // generate crc
            data[4] = (byte)(crc & 0xff); // append crc (lsb first)
            data[5] = (byte)(crc >> 8);
            crc = Crc16(data, 6);         // verify crc: result is 0 if the message is intact
            Console.WriteLine("{0:X4}", crc);
            return;
        }
    }
}
It's part of the signature of a public method, useful whenever you want to calculate a CRC over part of a buffer rather than the whole thing.
Sure, most of the time you may just use the simple version of the method, in which case the parameter is always zero; but hashing and CRC implementations are typically built with an API like this, allowing you to calculate your CRC in chunks if you'd like.
I have a terminal that communicates with the computer through an RS232 COM port. The protocol I was given says that I have to send a certain combination of bytes, followed at the end by the CRC16-IBM calculation of the data sent.
I was also given a C application that I can test with; it writes a log of sent and received data. In that log I see that if I send the terminal this string hexString = "02 00 04 a0 00 01 01 03", I must also send this CRC16-IBM result of the data: 06 35.
I have managed to somehow translate the C method that was given as an example into C#, but my result is far away from what I know I must receive.
I have tested sending the data from the log and everything is fine, so I must have my calculation done wrong. Am I doing anything wrong here?
Here is my code:
CRC class:
public enum Crc16Mode : ushort
{
    ARINC_NORMAL = 0xA02B, ARINC_REVERSED = 0xD405, ARINC_REVERSED_RECIPROCAL = 0xD015,
    CCITT_NORMAL = 0x1021, CCITT_REVERSED = 0x8408, CCITT_REVERSED_RECIPROCAL = 0x8810,
    CDMA2000_NORMAL = 0xC867, CDMA2000_REVERSED = 0xE613, CDMA2000_REVERSED_RECIPROCAL = 0xE433,
    DECT_NORMAL = 0x0589, DECT_REVERSED = 0x91A0, DECT_REVERSED_RECIPROCAL = 0x82C4,
    T10_DIF_NORMAL = 0x8BB7, T10_DIF_REVERSED = 0xEDD1, T10_DIF_REVERSED_RECIPROCAL = 0xC5DB,
    DNP_NORMAL = 0x3D65, DNP_REVERSED = 0xA6BC, DNP_REVERSED_RECIPROCAL = 0x9EB2,
    IBM_NORMAL = 0x8005, IBM_REVERSED = 0xA001, IBM_REVERSED_RECIPROCAL = 0xC002,
    OPENSAFETY_A_NORMAL = 0x5935, OPENSAFETY_A_REVERSED = 0xAC9A, OPENSAFETY_A_REVERSED_RECIPROCAL = 0xAC9A,
    OPENSAFETY_B_NORMAL = 0x755B, OPENSAFETY_B_REVERSED = 0xDDAE, OPENSAFETY_B_REVERSED_RECIPROCAL = 0xBAAD,
    PROFIBUS_NORMAL = 0x1DCF, PROFIBUS_REVERSED = 0xF3B8, PROFIBUS_REVERSED_RECIPROCAL = 0x8EE7
}
public class Crc16
{
    readonly ushort[] table = new ushort[256];

    public ushort ComputeChecksum(params byte[] bytes)
    {
        ushort crc = 0;
        for (int i = 0; i < bytes.Length; ++i)
        {
            byte index = (byte)(crc ^ bytes[i]);
            crc = (ushort)((crc >> 8) ^ table[index]);
        }
        return crc;
    }

    public byte[] ComputeChecksumBytes(params byte[] bytes)
    {
        ushort crc = ComputeChecksum(bytes);
        return BitConverter.GetBytes(crc);
    }

    public Crc16(Crc16Mode mode)
    {
        ushort polynomial = (ushort)mode;
        ushort value;
        ushort temp;
        for (ushort i = 0; i < table.Length; ++i)
        {
            value = 0;
            temp = i;
            for (byte j = 0; j < 8; ++j)
            {
                if (((value ^ temp) & 0x0001) != 0)
                {
                    value = (ushort)((value >> 1) ^ polynomial);
                }
                else
                {
                    value >>= 1;
                }
                temp >>= 1;
            }
            table[i] = value;
        }
    }
}
Method to process the bytes received:
public ushort CalculateCRC(byte[] data)
{
    Crc16 crcCalc = new Crc16(Crc16Mode.IBM_NORMAL);
    ushort crc = crcCalc.ComputeChecksum(data);
    return crc;
}
In this method you can select the polynomial from the enum.
Main in Program Class:
static void Main(string[] args)
{
    try
    {
        Metode m = new Metode();
        string hexString = "02 00 04 a0 00 01 01 03";
        byte[] bytes = m.HexStringToByteArray(hexString);
        ushort crc = m.CalculateCRC(bytes);
        string hexResult;
        int myInt = crc;
        hexResult = myInt.ToString("X");
        //Console.WriteLine(crc.ToString());
        Console.WriteLine(hexResult);
        Console.ReadLine();
    }
    catch (Exception ex)
    {
        Metode m = new Metode();
        m.writeError(ex.Message);
    }
}
Convert from hexstring to byte array:
public byte[] HexStringToByteArray(string hexString)
{
    hexString = hexString.Replace(" ", "");
    return Enumerable.Range(0, hexString.Length)
                     .Where(x => x % 2 == 0)
                     .Select(x => Convert.ToByte(hexString.Substring(x, 2), 16))
                     .ToArray();
}
Convert from byte array to hex string:
public string ByteArrayToHexString(byte[] byteArray)
{
    return BitConverter.ToString(byteArray);
}
What am I doing wrong here?
UPDATE:
Thanks to @MarkAdler I have managed to translate the calculation. What I didn't notice until late was that the CRC calculation should cover only the DATA sent to the terminal, NOT the entire message!
So the hexString should in fact have been "a0 00 01 01": the data without the STX/length/ETX.
Here is the code for this particular CRC16 calculation in C#:
public ushort CalculateCRC(byte[] data, int len)
{
    int crc = 0, i = 0;
    while (len-- != 0)
    {
        crc ^= data[i++] << 8;
        for (int k = 0; k < 8; k++)
            crc = ((crc & 0x8000) != 0) ? (crc << 1) ^ 0x8005 : (crc << 1);
    }
    return (ushort)(crc & 0xffff);
}
You'd need to provide more information on the specification you are trying to implement. However I can tell right away that you are using the wrong polynomial. The CRC routines are shifting right, which means that the polynomial should be bit-reversed. As a result, IBM_NORMAL cannot be correct.
While IBM_REVERSED would be an appropriate polynomial for shifting right, that may or may not be the polynomial you need to meet your specification. Also there could be exclusive-or's coming into or leaving the CRC routine that are needed.
Update:
The linked documentation provides actual code to compute the CRC. Why aren't you looking at that? Finding random code on the interwebs to compute a CRC without looking at what's in the documentation is not likely to get you far. And it didn't.
The documented code shifts the CRC left, opposite of the code that you posted in the question. You need to shift left. The polynomial is 0x8005. There is no final exclusive-or, and the initial CRC value is zero.
Here is a simplified version of the code in the document, written in C (this code avoids the little-endian assumption that is built into the code in the document):
#include <stddef.h>

typedef unsigned char byte;
typedef unsigned short ushort;

ushort crc16ecr(byte data[], int len) {
    ushort crc = 0;
    for (int i = 0; i < len; i++) {
        crc ^= (ushort)(data[i]) << 8;
        for (int k = 0; k < 8; k++)
            crc = crc & 0x8000 ? (crc << 1) ^ 0x8005 : crc << 1;
    }
    return crc;
}
Per the document, the CRC is computed on the tag, len, and data, which for your message is a0 00 01 01. Not the whole thing. (Reading the documentation thoroughly is always an excellent first step.) Running that through the CRC code in the document, you get 0x0635. The document says that that is transmitted most significant byte first, so 0x06 0x35.
I'm trying to port the CRC calculation function for Modbus RTU from C# to Python.
C#
private static ushort CRC(byte[] data)
{
    ushort crc = 0xFFFF;
    for (int pos = 0; pos < data.Length; pos++)
    {
        crc ^= (UInt16)data[pos];
        for (int i = 8; i != 0; i--)
        {
            if ((crc & 0x0001) != 0)
            {
                crc >>= 1;
                crc ^= 0xA001;
            }
            else
            {
                crc >>= 1;
            }
        }
    }
    return crc;
}
Which I run like this:
byte[] array = { 0x01, 0x03, 0x00, 0x01, 0x00, 0x01 };
ushort u = CRC(array);
Console.WriteLine(u.ToString("X4"));
Python
def CalculateCRC(data):
    crc = 0xFFFF
    for pos in data:
        crc ^= pos
        for i in range(len(data)-1, -1, -1):
            if ((crc & 0x0001) != 0):
                crc >>= 1
                crc ^= 0xA001
            else:
                crc >>= 1
    return crc
Which I run like this:
data = bytearray.fromhex("010300010001")
crc = CalculateCRC(data)
print("%04X"%(crc))
The result from the C# example is: 0xCAD5.
The result from the Python example is: 0x8682.
I know from fact by other applications that the CRC should be 0xCAD5, as the C#-example provides.
When I debug both examples step by step, the variable 'crc' has different values after these code lines:
crc ^= (UInt16)data[pos];
VS
crc ^= pos
What am I missing?
/Mc_Topaz
Your inner loop uses the size of the data array instead of a fixed 8 iterations. Try this:
def calc_crc(data):
    crc = 0xFFFF
    for pos in data:
        crc ^= pos
        for i in range(8):
            if ((crc & 1) != 0):
                crc >>= 1
                crc ^= 0xA001
            else:
                crc >>= 1
    return crc

data = bytearray.fromhex("010300010001")
crc = calc_crc(data)
print("%04X" % (crc))
I'm trying to port some old code from C to C#; it basically receives a string and returns its CRC16...
The C method is as follows:
#define CRC_MASK 0x1021 /* x^16 + x^12 + x^5 + x^0 */

UINT16 CRC_Calc (unsigned char *pbData, int iLength)
{
    UINT16 wData, wCRC = 0;
    int i;
    for ( ; iLength > 0; iLength--, pbData++) {
        wData = (UINT16) (((UINT16) *pbData) << 8);
        for (i = 0; i < 8; i++, wData <<= 1) {
            if ((wCRC ^ wData) & 0x8000)
                wCRC = (UINT16) ((wCRC << 1) ^ CRC_MASK);
            else
                wCRC <<= 1;
        }
    }
    return wCRC;
}
My ported C# code is this:
private static ushort Calc(byte[] data)
{
    ushort wData, wCRC = 0;
    for (int i = 0; i < data.Length; i++)
    {
        wData = Convert.ToUInt16(data[i] << 8);
        for (int j = 0; j < 8; j++, wData <<= 1)
        {
            var a = (wCRC ^ wData) & 0x8000;
            if (a != 0)
            {
                var c = (wCRC << 1) ^ 0x1021;
                wCRC = Convert.ToUInt16(c);
            }
            else
            {
                wCRC <<= 1;
            }
        }
    }
    return wCRC;
}
The test string is "OPN"... It must return a ushort whose two bytes are (of course) A8 A9, and CRC_MASK is the polynomial for that calculation. I did find several examples of CRC16 here and around the web, but none of them achieves this result, and this CRC calculation must match the one on the device we are connecting to.
Where is the mistake? I really appreciate any help.
Thanks! Best regards,
Gutemberg
UPDATE
Following the answer from @rcgldr, I put together the following sample:
_serial = new SerialPort("COM6", 19200, Parity.None, 8, StopBits.One);
_serial.Open();
_serial.Encoding = Encoding.GetEncoding(1252);
_serial.DataReceived += Serial_DataReceived;

var msg = "OPN";
var data = Encoding.GetEncoding(1252).GetBytes(msg);
var crc = BitConverter.GetBytes(Calc(data));
var msb = crc[0].ToString("X");
var lsb = crc[1].ToString("X");
//The following line must be something like: \x16OPN\x17\xA8\xA9
var cmd = string.Format(@"{0}{1}{2}\x{3}\x{4}", SYN, msg, ETB, msb, lsb);
//var cmd = "\x16OPN\x17\xA8\xA9";
_serial.Write(cmd);
The value of the cmd variable is what I'm trying to send to the device. If you have a look at the commented cmd value, that is a working string. The two bytes of the CRC16 go in the last two parameters (msb and lsb). So, in this sample, msb MUST be "\xA8" and lsb MUST be "\xA9" in order for the command to work (the CRC16 must match on the device).
Any clues?
Thanks again.
UPDATE 2
For those who fall into the same situation, where you need to format the string with \x, this is what I did to get it working:
protected string ToMessage(string data)
{
    var msg = data + ETB;
    var crc = CRC16.Compute(msg);
    var fullMsg = string.Format(@"{0}{1}{2:X}{3:X}", SYN, msg, crc[0], crc[1]);
    return fullMsg;
}
This returns the full message that I need, including the \x on it. The SYN variable is '\x16' and ETB is '\x17'.
Thank you all for the help!
Gutemberg
The problem here is that the message including the ETB (\x17) is 4 bytes long (the leading sync byte isn't used for the CRC): "OPN\x17" == {'O', 'P', 'N', 0x17}, which results in a CRC of {0xA8, 0xA9} to be appended to the message. So the CRC function is correct, but the original test data wasn't including the 4th byte which is 0x17.
This is a working example (at least with VS2015 express).
private static ushort Calc(byte[] data)
{
    ushort wCRC = 0;
    for (int i = 0; i < data.Length; i++)
    {
        wCRC ^= (ushort)(data[i] << 8);
        for (int j = 0; j < 8; j++)
        {
            if ((wCRC & 0x8000) != 0)
                wCRC = (ushort)((wCRC << 1) ^ 0x1021);
            else
                wCRC <<= 1;
        }
    }
    return wCRC;
}
So I have this C code that I need to port to C#:
C Code:
uint16 crc16_calc(volatile uint8* bytes, uint32 length)
{
    uint32 i;
    uint32 j;
    uint16 crc = 0xFFFF;
    uint16 word;

    for (i = 0; i < length/2; i++)
    {
        word = ((uint16*)bytes)[i];
        // upper byte
        j = (uint8)((word ^ crc) >> 8);
        crc = (crc << 8) ^ crc16_table[j];
        // lower byte
        j = (uint8)((word ^ (crc >> 8)) & 0x00FF);
        crc = (crc << 8) ^ crc16_table[j];
    }
    return crc;
}
Ported C# Code:
public ushort CalculateChecksum(byte[] bytes)
{
    uint j = 0;
    ushort crc = 0xFFFF;
    ushort word;
    for (uint i = 0; i < bytes.Length / 2; i++)
    {
        word = bytes[i];
        // Upper byte
        j = (byte)((word ^ crc) >> 8);
        crc = (ushort)((crc << 8) ^ crc16_table[j]);
        // Lower byte
        j = (byte)((word ^ (crc >> 8)) & 0x00FF);
        crc = (ushort)((crc << 8) ^ crc16_table[j]);
    }
    return crc;
}
This C algorithm calculates the CRC16 of the supplied bytes using a lookup table crc16_table[j].
However, the ported C# code does not produce the same results as the C code. Am I doing something wrong?
word = ((uint16*)bytes)[i];
reads two bytes from bytes into a uint16, whereas
word = bytes[i];
just reads a single byte.
Assuming you're running on a little-endian machine, your C# code could change to

word = bytes[i++];
word += (ushort)(bytes[i] << 8);

Or, probably better, as suggested by MerickOWA:

word = BitConverter.ToUInt16(bytes, (int)i++);

(BitConverter.ToUInt16 returns a ushort directly; ToInt16 would need an extra cast.)
Note that you could avoid the odd-looking extra increment of i by changing your loop:

for (uint i = 0; i < bytes.Length; i += 2)
{
    word = BitConverter.ToUInt16(bytes, (int)i);
    // ... rest of the loop body as before
}