Converting CRC16 calculation from C to C#

I have to translate a CRC16 calculation from C to C#, but I get the error that I cannot implicitly convert type 'int' to 'bool' on (crc & 0x8000) and on return (crc & 0xFFFF).
The code so far:
public unsafe short Crc16(string str)
{
    short crc = 0;
    for (int i = 0; i < str.Length; i++)
    {
        crc = (crc << 1) ^ str[i] ^ ((crc & 0x8000) ? 0x1021 : 0);
    }
    return (crc & 0xFFFF);
}
EDIT: Changed char parameter to string
Original C code
short Crc16( char *str )
{
    short crc = 0;
    unsigned int i;
    for (i = 0; i < strlen(str); i++)
        crc = (crc << 1) ^ *(str+i) ^ ((crc & 0x8000) ? 0x1021 : 0);
    return (crc & 0xffff);
}

In C, 0 is false and any non-zero value is true, which is why the C code can use (crc & 0x8000) directly as a condition; in C# a condition must be a bool, so you compare against zero explicitly.
To make the conversion, you would do something like this:
public short CalcCrc16(string str)
{
    short crc = 0;
    unchecked
    {
        foreach (char c in str)
        {
            short exponent = (short)((crc & 0x8000) != 0 ? 0x1021 : 0);
            crc = (short)((crc << 1) ^ (short)c ^ exponent);
        }
        return (short)(crc & 0xFFFF);
    }
}
So now that we have the C code to work with, I changed the code sample I have here to match. Below is the explanation of the changes:
char* is the C equivalent of a string.
The for loop iterates over the characters; *(str + i) can be rewritten as str[i], which corresponds to C#'s str[i] indexer.
The ternary condition determines whether the exponent is 0x1021 or 0.
I broke up the lines so you could see the algorithm a bit more clearly.
I changed for(int i = 0; i < str.Length; i++) to a foreach on the characters because it was easier to understand.
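For reference, a minimal call site might look like this (CalcCrc16 is the method from the answer above, placed in whatever class you use; the sample input is only illustrative):
```
string message = "123456789";           // sample input, not a reference vector
short crc = CalcCrc16(message);
Console.WriteLine(crc.ToString("X4"));  // print the 16-bit CRC as four hex digits
```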

Related

How to edit this code to calculate CRC-32/MPEG in C#

I have this code which calculates CRC-32. I need to edit it to use polynomial 0x04C11DB7, initial value 0xFFFFFFFF, and final XOR 0.
The CRC-32 for the string "123456789" should then be "0376E6E7". I found the code below; it's very slow, but it works anyway.
```
internal static class Crc32
{
    internal static uint[] MakeCrcTable()
    {
        uint c;
        uint[] crcTable = new uint[256];
        for (uint n = 0; n < 256; n++)
        {
            c = n;
            for (int k = 0; k < 8; k++)
            {
                var res = c & 1;
                c = (res == 1) ? (0xEDB88320 ^ (c >> 1)) : (c >> 1);
            }
            crcTable[n] = c;
        }
        return crcTable;
    }

    internal static uint CalculateCrc32(byte[] str)
    {
        uint[] crcTable = Crc32.MakeCrcTable();
        uint crc = 0xffffffff;
        for (int i = 0; i < str.Length; i++)
        {
            byte c = str[i];
            crc = (crc >> 8) ^ crcTable[(crc ^ c) & 0xFF];
        }
        return ~crc; //(crc ^ (-1)) >> 0;
    }
}
```
Based on the added comments, what you are looking for is CRC-32/MPEG-2, which reverses the direction of the CRC, and eliminates the final exclusive-or, compared to the implementation you have, which is a CRC-32/ISO-HDLC.
To get there, you need to flip the CRC from reflected to forward. You bit-reverse the polynomial to get 0x04c11db7, check the high bit instead of the low bit, reverse the shifts, both in the table generation and in the use of the table, and exclusive-or with the high byte of the CRC instead of the low byte.
To remove the final exclusive-or, remove the tilde at the end.
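As an illustration of those changes, here is a minimal C# sketch of the forward, table-driven variant; the class and method names are placeholders, not from the original answer. For Encoding.ASCII.GetBytes("123456789") it should yield the 0376E6E7 value mentioned in the question:
```
using System;
using System.Text;

internal static class Crc32Mpeg2
{
    // Forward (non-reflected) table for polynomial 0x04C11DB7:
    // work from the high bit and shift left instead of right.
    internal static uint[] MakeCrcTable()
    {
        uint[] crcTable = new uint[256];
        for (uint n = 0; n < 256; n++)
        {
            uint c = n << 24;
            for (int k = 0; k < 8; k++)
                c = (c & 0x80000000) != 0 ? 0x04C11DB7u ^ (c << 1) : c << 1;
            crcTable[n] = c;
        }
        return crcTable;
    }

    // Same initial value 0xFFFFFFFF, but index with the high byte of the CRC
    // and drop the final exclusive-or (no tilde at the end).
    internal static uint CalculateCrc32Mpeg2(byte[] data)
    {
        uint[] crcTable = MakeCrcTable();
        uint crc = 0xFFFFFFFF;
        foreach (byte b in data)
            crc = (crc << 8) ^ crcTable[((crc >> 24) ^ b) & 0xFF];
        return crc;
    }

    // Example: Console.WriteLine(CalculateCrc32Mpeg2(Encoding.ASCII.GetBytes("123456789")).ToString("X8"));
}
```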

Conversion of CRC function from C to C# yields wrong values

I'm trying to convert a couple of simple CRC calculating functions from C to C#, but I seem to be getting incorrect results.
The C functions are:
#define CRC32_POLYNOMIAL 0xEDB88320
unsigned long CRC32Value(int i)
{
    int j;
    unsigned long ulCRC;
    ulCRC = i;
    for (j = 8; j > 0; j--)
    {
        if (ulCRC & 1)
            ulCRC = (ulCRC >> 1) ^ CRC32_POLYNOMIAL;
        else
            ulCRC >>= 1;
    }
    return ulCRC;
}

unsigned long CalculateBlockCRC32(
    unsigned long ulCount,
    unsigned char *ucBuffer)
{
    unsigned long ulTemp1;
    unsigned long ulTemp2;
    unsigned long ulCRC = 0;
    while (ulCount-- != 0)
    {
        ulTemp1 = (ulCRC >> 8) & 0x00FFFFFFL;
        ulTemp2 = CRC32Value(((int)ulCRC ^ *ucBuffer++) & 0xff);
        ulCRC = ulTemp1 ^ ulTemp2;
    }
    return ulCRC;
}
These are well defined; they are taken from a user manual. My C# versions of these functions are:
private ulong CRC32POLYNOMIAL = 0xEDB88320L;

private ulong CRC32Value(int i)
{
    int j;
    ulong ulCRC = (ulong)i;
    for (j = 8; j > 0; j--)
    {
        if (ulCRC % 2 == 1)
        {
            ulCRC = (ulCRC >> 1) ^ CRC32POLYNOMIAL;
        }
        else
        {
            ulCRC >>= 1;
        }
    }
    return ulCRC;
}

private ulong CalculateBlockCRC32(ulong ulCount, byte[] ucBuffer)
{
    ulong ulTemp1;
    ulong ulTemp2;
    ulong ulCRC = 0;
    int bufind = 0;
    while (ulCount-- != 0)
    {
        ulTemp1 = (ulCRC >> 8) & 0x00FFFFFFL;
        ulTemp2 = CRC32Value(((int)ulCRC ^ ucBuffer[bufind]) & 0xFF);
        ulCRC = ulTemp1 ^ ulTemp2;
        bufind++;
    }
    return ulCRC;
}
As I mentioned, there are discrepancies between the C version and the C# version. One possible source is my understanding of the C expression ulCRC & 1, which I believe will only be true for odd numbers.
I call the C# function like this:
string contents = "some data";
byte[] toBeHexed = Encoding.ASCII.GetBytes(contents);
ulong calculatedCRC = this.CalculateBlockCRC32((ulong)toBeHexed.Length, toBeHexed);
And the C function is called like this:
char *Buff="some data";
unsigned long iLen = strlen(Buff);
unsigned long CRC = CalculateBlockCRC32(iLen, (unsigned char*) Buff);
I believe that I am calling the functions with the same data in each language; is that correct? If anyone could shed some light on this, I would be very grateful.
As @Adriano Repetti has already pointed out, you should use the UInt32 type in place of ulong (ulong is the 64-bit unsigned UInt64, whereas in VC++ unsigned long is only a 32-bit unsigned type).
private UInt32 CRC32POLYNOMIAL = 0xEDB88320;

private UInt32 CRC32Value(int i)
{
    int j;
    UInt32 ulCRC = (UInt32)i;
    for (j = 8; j > 0; j--)
    {
        if (ulCRC % 2 == 1)
        {
            ulCRC = (ulCRC >> 1) ^ CRC32POLYNOMIAL;
        }
        else
        {
            ulCRC >>= 1;
        }
    }
    return ulCRC;
}

private UInt32 CalculateBlockCRC32(UInt32 ulCount, byte[] ucBuffer)
{
    UInt32 ulTemp1;
    UInt32 ulTemp2;
    UInt32 ulCRC = 0;
    int bufind = 0;
    while (ulCount-- != 0)
    {
        ulTemp1 = (ulCRC >> 8) & 0x00FFFFFF;
        ulTemp2 = CRC32Value(((int)ulCRC ^ ucBuffer[bufind]) & 0xFF);
        ulCRC = ulTemp1 ^ ulTemp2;
        bufind++;
    }
    return ulCRC;
}

string contents = "12";
byte[] toBeHexed = Encoding.ASCII.GetBytes(contents);
UInt32 calculatedCRC = CalculateBlockCRC32((UInt32)toBeHexed.Length, toBeHexed);
Usually in C# it doesn't matter whether you use the C# type name (recommended by Microsoft) or the ECMA type name. But in cases like this, with bit-level manipulation, it can greatly clarify the intent and prevent mistakes.
In C it is always a good idea to use the typedefs from stdint.h. They do the same job as the ECMA types in C#: they clarify the intent and also guarantee the length and signedness of the types used (C compilers may use different lengths for the same types, because the standard doesn't specify exact sizes):
#include <stdint.h>

#define CRC32_POLYNOMIAL ((uint32_t)0xEDB88320)

uint32_t CRC32Value(uint32_t i)
{
    uint32_t j;
    uint32_t ulCRC;
    ulCRC = i;
    for (j = 8; j > 0; j--)
    {
        if (ulCRC & 1)
            ulCRC = (ulCRC >> 1) ^ CRC32_POLYNOMIAL;
        else
            ulCRC >>= 1;
    }
    return ulCRC;
}

uint32_t CalculateBlockCRC32(
    size_t ulCount,
    uint8_t *ucBuffer)
{
    uint32_t ulTemp1;
    uint32_t ulTemp2;
    uint32_t ulCRC = 0;
    while (ulCount-- != 0)
    {
        ulTemp1 = (ulCRC >> 8) & ((uint32_t)0x00FFFFFF);
        ulTemp2 = CRC32Value((ulCRC ^ *ucBuffer++) & 0xff);
        ulCRC = ulTemp1 ^ ulTemp2;
    }
    return ulCRC;
}

char *Buff = "12";
size_t iLen = strlen(Buff);
uint32_t CRC = CalculateBlockCRC32(iLen, (uint8_t *) Buff);
printf("%u", CRC);

Porting CRC16 Code in C to C# .NET

So I have this C code that I need to port to C#:
C Code:
uint16 crc16_calc(volatile uint8* bytes, uint32 length)
{
    uint32 i;
    uint32 j;
    uint16 crc = 0xFFFF;
    uint16 word;
    for (i = 0; i < length/2; i++)
    {
        word = ((uint16*)bytes)[i];

        // upper byte
        j = (uint8)((word ^ crc) >> 8);
        crc = (crc << 8) ^ crc16_table[j];

        // lower byte
        j = (uint8)((word ^ (crc >> 8)) & 0x00FF);
        crc = (crc << 8) ^ crc16_table[j];
    }
    return crc;
}
Ported C# Code:
public ushort CalculateChecksum(byte[] bytes)
{
    uint j = 0;
    ushort crc = 0xFFFF;
    ushort word;
    for (uint i = 0; i < bytes.Length / 2; i++)
    {
        word = bytes[i];

        // Upper byte
        j = (byte)((word ^ crc) >> 8);
        crc = (ushort)((crc << 8) ^ crc16_table[j]);

        // Lower byte
        j = (byte)((word ^ (crc >> 8)) & 0x00FF);
        crc = (ushort)((crc << 8) ^ crc16_table[j]);
    }
    return crc;
}
This C algorithm calculates the CRC16 of the supplied bytes using a lookup table, crc16_table[j].
However, the ported C# code does not produce the same results as the C code. Am I doing something wrong?
word = ((uint16*)bytes)[i];
reads two bytes from bytes into a uint16, whereas
word = bytes[i];
just reads a single byte.
Assuming you're running on a little-endian machine, your C# code could change to
word = bytes[i++];
word += (ushort)(bytes[i] << 8);
Or, probably better, as suggested by MerickOWA:
word = (ushort)BitConverter.ToInt16(bytes, (int)i++);
Note that you could avoid the odd-looking extra increment of i by changing your loop:
for (int i = 0; i < bytes.Length; i += 2)
{
    word = (ushort)BitConverter.ToInt16(bytes, i);
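Putting those pieces together, a corrected version of the ported method might look like the sketch below; it assumes a little-endian machine, an even-length buffer (as the original loop does), and that crc16_table is the same 256-entry ushort lookup table used by the C code:
```
public ushort CalculateChecksum(byte[] bytes)
{
    ushort crc = 0xFFFF;
    // Read two bytes per iteration, mirroring ((uint16*)bytes)[i] in the C code.
    for (int i = 0; i + 1 < bytes.Length; i += 2)
    {
        ushort word = BitConverter.ToUInt16(bytes, i);   // little-endian 16-bit read

        // Upper byte
        uint j = (byte)((word ^ crc) >> 8);
        crc = (ushort)((crc << 8) ^ crc16_table[j]);

        // Lower byte
        j = (byte)((word ^ (crc >> 8)) & 0x00FF);
        crc = (ushort)((crc << 8) ^ crc16_table[j]);
    }
    return crc;
}
```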

C# to F# CRC16: ^ and ^^^ work differently. How does ^ work?

I found this C# code for CRC16, but I need it in F#:
using System;

public class Crc16 {
    const ushort polynomial = 0xA001;
    ushort[] table = new ushort[256];

    public ushort ComputeChecksum(byte[] bytes) {
        ushort crc = 0;
        for (int i = 0; i < bytes.Length; ++i) {
            byte index = (byte)(crc ^ bytes[i]);
            crc = (ushort)((crc >> 8) ^ table[index]);
        }
        return crc;
    }

    public byte[] ComputeChecksumBytes(byte[] bytes) {
        ushort crc = ComputeChecksum(bytes);
        return BitConverter.GetBytes(crc);
    }

    public Crc16() {
        ushort value;
        ushort temp;
        for (ushort i = 0; i < table.Length; ++i) {
            value = 0;
            temp = i;
            for (byte j = 0; j < 8; ++j) {
                if (((value ^ temp) & 0x0001) != 0) {
                    value = (ushort)((value >> 1) ^ polynomial);
                } else {
                    value >>= 1;
                }
            }
            table[i] = value;
        }
    }
}
Here is where I started:
let ComputeChecksum(bytes : byte array) =
    let mutable crc = 0us
    for i = 0 to bytes.Length do
        let index = (crc ^^^ bytes.[i]) // ? uint16 and byte
So I think the C# version is taking the first or second byte here. How does C#'s '^' work here, and how can I translate this line of C# code to F#?
This computes the same result as your C# code.
type Crc16() =
    let polynomial = 0xA001us
    let table = Array.init 256 (fun i ->
        ((0us, uint16 i), [0y..7y])
        ||> Seq.fold (fun (value, temp) j ->
            let newValue =
                match (value ^^^ temp) &&& 0x0001us with
                | 0us -> value >>> 1
                | _ -> ((value >>> 1) ^^^ polynomial)
            newValue, temp >>> 1)
        |> fst)

    member __.ComputeChecksum(bytes:byte[]) =
        (0us, bytes) ||> Seq.fold (fun crc byt ->
            let index = byte (crc ^^^ (uint16 byt))
            (crc >>> 8) ^^^ table.[int index])
C# ^ and F# ^^^ are both the XOR operator. They should work the same. Is that what you're asking?
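As a side note on the ^ question: in C#, applying ^ to a ushort and a byte promotes both operands to int, which is why the C# code casts the results back down. A tiny illustrative C# snippet (the values are arbitrary):
```
ushort crc = 0x1234;
byte b = 0xAB;

// ^ promotes both operands to int, so the results must be cast back
// to the narrower types, mirroring the casts in ComputeChecksum above.
byte index = (byte)(crc ^ b);
ushort next = (ushort)((crc >> 8) ^ index);
```
F#'s ^^^ performs no such implicit promotion, which is why the F# answer converts the byte to uint16 explicitly before xoring.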

How would I convert this crypto from C# to C

This is the C# code I use:
public void Decrypt(byte[] @in, byte[] @out, int size)
{
    lock (this)
    {
        for (ushort i = 0; i < size; i++)
        {
            if (_server)
            {
                @out[i] = (byte)(@in[i] ^ 0xAB);
                @out[i] = (byte)((@out[i] << 4) | (@out[i] >> 4));
                @out[i] = (byte)(ConquerKeys.Key2[_inCounter >> 8] ^ @out[i]);
                @out[i] = (byte)(ConquerKeys.Key1[_inCounter & 0xFF] ^ @out[i]);
            }
            else
            {
                @out[i] = (byte)(ConquerKeys.Key1[_inCounter & 0xFF] ^ @in[i]);
                @out[i] = (byte)(ConquerKeys.Key2[_inCounter >> 8] ^ @out[i]);
                @out[i] = (byte)((@out[i] << 4) | (@out[i] >> 4));
                @out[i] = (byte)(@out[i] ^ 0xAB);
            }
            _inCounter = (ushort)(_inCounter + 1);
        }
    }
}
and this is how I converted it to work in C.
char* decrypt(char* in, int size, int server)
{
    char out[size];
    memset(out, 0, size);
    for (int i = 0; i < size; i++)
    {
        if (server == 1)
        {
            out[i] = in[i] ^ 0xAB;
            out[i] = out[i] << 4 | out[i] >> 4;
            out[i] = Key2[incounter >> 8] ^ out[i];
            out[i] = Key1[incounter & 0xFF] ^ in[i];
        }
        else if (server == 0)
        {
            out[i] = Key1[incounter & 0xFF] ^ in[i];
            out[i] = Key2[incounter >> 8] ^ out[i];
            out[i] = out[i] << 4 | out[i] >> 4;
            out[i] = out[i] ^ 0xAB;
        }
        incounter++;
    }
    return out;
}
However, for some reason the C one does not work.
Link for the full C# file
Link for the full C file
Link for the C implementation
There was a translation error.
The C# line:
@out[i] = (byte)(ConquerKeys.Key1[_inCounter & 0xFF] ^ @out[i]);
Became:
out[i] = Key1[incounter & 0xFF] ^ in[i];
The value on the right of the xor (^) is from the wrong array.
Additionally, you are returning a stack-allocated variable, which will cause all sorts of problems.
Change:
char out[size];
memset(out, 0, size);
to:
char *out = (char*)calloc(size, sizeof(char));
The most glaring error I see is that you are returning a pointer to a stack-allocated array, which is going to get stomped by the next function call after decrypt() returns. You need to malloc() that buffer or pass in a pointer to a writable buffer.
You are returning a reference to a local variable, which is illegal. Either let the caller pass in an array or use malloc() to create an array inside the function.
I also suggest turning char into unsigned char, since it is more portable. If your platform assumes char is the same as signed char, the arithmetic (bit shifts, etc.) will not work right.
So just specify unsigned char explicitly (use a typedef or include <stdint.h> if unsigned char seems too long-winded for you).
