Checksum CRC-16/MCRF4XX in C#

Does anyone know how to calculate CRC-16/MCRF4XX in C#?
I tried this code:
private static ushort Calc(byte[] x)
{
ushort wCRC = 0;
for (int i = 0; i < x.Length; i++)
{
wCRC ^= (ushort)(x[i] << 8);
for (int j = 0; j < 8; j++)
{
if ((wCRC & 0x8000) != 0)
wCRC = (ushort)((wCRC << 1) ^ 0x1021);
else
wCRC <<= 1;
}
}
return wCRC;
}
but the result doesn't match.
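(Note: CRC-16/MCRF4XX is normally described as the reflected, LSB-first variant of the CCITT polynomial with an initial value of 0xFFFF and no final XOR, so a right-shifting loop with the reversed polynomial 0x8408 should be a closer match than the left-shifting code above. Below is a minimal sketch assuming that standard parameter set; the commonly quoted check value for the ASCII string "123456789" is 0x6F91.)
// Sketch of a reflected (LSB-first) CRC-16, assuming the usual CRC-16/MCRF4XX
// parameters: init 0xFFFF, reversed polynomial 0x8408, no output reflection or final XOR.
private static ushort CalcMcrf4xx(byte[] data)
{
    ushort crc = 0xFFFF;                           // initial value
    for (int i = 0; i < data.Length; i++)
    {
        crc ^= data[i];                            // XOR each byte into the low 8 bits
        for (int j = 0; j < 8; j++)
        {
            if ((crc & 0x0001) != 0)
                crc = (ushort)((crc >> 1) ^ 0x8408);   // 0x8408 is 0x1021 bit-reversed
            else
                crc >>= 1;
        }
    }
    return crc;
}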

Related

add wavelet filters to discrete Haar wavelet transform (DWT)

I need help with 1-level 5/3 discrete wavelet transform (DWT) source code in C#.
I am using this project, and its forward wavelet transform methods are shown here:
FWT(double[] data)
{
int h = data.Length >> 1;
for (int i = 0; i < h; i++)
{
int k = (i << 1);
temp[i] = data[k] * s0 + data[k + 1] * s1;
temp[i + h] = data[k] * w0 + data[k + 1] * w1;
}
}
FWT(double[,] data)
{
for (int k = 0; k < 1; k++)
{
for (int i = 0; i < rows / (k+1); i++)
{
for (int j = 0; j < row.Length / (k+1); j++)
row[j] = data[i, j];
FWT(row);
for (int j = 0; j < row.Length / (k+1); j++)
data[i, j] = row[j];
}
for (int j = 0; j < cols / (k+1); j++)
{
for (int i = 0; i < col.Length / (k+1); i++)
col[i] = data[i, j];
FWT(col);
for (int i = 0; i < col.Length / (k+1); i++)
data[i, j] = col[i];
}
}
}
w0 = 0.5; w1 = -0.5; s0 = 0.5; s1 = 0.5;
I have searched the papers on this topic, but I don't understand the algorithm of the 5/3 or 9/7 wavelet filters or how to change this code.
Any help would be much appreciated.
You may find the implementation of the JPEG 2000 decoder in pdf.js useful.
The core of the 5/3 code is implemented as:
function reversibleTransformFilter(x, offset, length) {
var len = length >> 1;
offset = offset | 0;
var j, n;
for (j = offset, n = len + 1; n--; j += 2) {
x[j] -= (x[j - 1] + x[j + 1] + 2) >> 2;
}
for (j = offset + 1, n = len; n--; j += 2) {
x[j] += (x[j - 1] + x[j + 1]) >> 1;
}
};
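Note that pdf.js is a decoder, so the filter above performs the inverse lifting steps. For the forward transform the same two steps run with opposite signs and in the opposite order: first a predict step on the odd samples, then an update step on the even samples. Here is a rough C# sketch of that forward 5/3 step on a single row or column, assuming integer samples and simple symmetric extension at the borders (this is only the lifting core, not a drop-in replacement for the FWT methods above):
// Sketch of a 1-level forward 5/3 (reversible) lifting transform on one row/column.
// After it runs, odd indices hold high-pass (detail) values and even indices hold
// low-pass values; de-interleave them afterwards if you need the halves-separated
// layout the Haar FWT above produces. Assumes data.Length is even and >= 2.
static void Forward53(int[] data)
{
    int n = data.Length;
    // Predict: d[i] = x[i] - floor((x[i-1] + x[i+1]) / 2) for odd i
    for (int i = 1; i < n; i += 2)
    {
        int left = data[i - 1];
        int right = (i + 1 < n) ? data[i + 1] : data[i - 1]; // symmetric extension
        data[i] -= (left + right) >> 1;
    }
    // Update: s[i] = x[i] + floor((d[i-1] + d[i+1] + 2) / 4) for even i
    for (int i = 0; i < n; i += 2)
    {
        int left = (i - 1 >= 0) ? data[i - 1] : data[i + 1];  // symmetric extension
        int right = (i + 1 < n) ? data[i + 1] : data[i - 1];
        data[i] += (left + right + 2) >> 2;
    }
}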

Compare string using bitwise shift operation

I am new to C# and I can't seem to find the logical error in this program. I am learning the bitwise shift operators, as I am new to them, and I need help tracing a fault in my code. The program encodes an input string and then decodes the encoded string. After that I compare the two strings to see if they are equal. They look equal to me, but I keep getting false when I compare them. Here is my code:
class Program
{
static char[] transcode = new char[64];
private static void prep()
{
for (int i = 0; i < transcode.Length; i++)
{
transcode[i] = (char)((int)'A' + i);
if (i > 25 && i <= 51)
{
transcode[i] = (char)((int)transcode[i] + 6);
}
else if (i > 51)
{
transcode[i] = (char)((int)transcode[i] - 0x4b);
}
}
transcode[transcode.Length - 3] = '+';
transcode[transcode.Length - 2] = '/';
transcode[transcode.Length - 1] = '=';
}
static void Main(string[] args)
{
prep();
string test_string = "a";
if (Convert.ToBoolean(String.Compare(test_string, decode(encode(test_string)))))
{
Console.WriteLine("Test succeeded");
}
else
{
Console.WriteLine("Test failed");
}
}
private static string encode(string input)
{
int l = input.Length;
int cb = (l / 3 + (Convert.ToBoolean(l % 3) ? 1 : 0)) * 4;// (0 +(1))*4 =4
char[] output = new char[cb];
for (int i = 0; i < cb; i++)
{
output[i] = '=';
}
int c = 0;
int reflex = 0;
const int s = 0x3f;
for (int j = 0; j < l; j++)
{
reflex <<= 8;
reflex &= 0x00ffff00;
reflex += input[j];
int x = ((j % 3) + 1) * 2;
int mask = s << x;
while (mask >= s)
{
int pivot = (reflex & mask) >> x;
output[c++] = transcode[pivot];
char alpha = transcode[pivot];
int invert = ~mask;
reflex &= invert;
mask >>= 6;
x -= 6; //-4
}
}
switch (l % 3)
{
case 1:
reflex <<= 4; //16
output[c++] = transcode[reflex];
char at16 = transcode[16];
// Console.WriteLine("Character at 16 is: " + at16);
break;
case 2:
reflex <<= 2;
output[c++] = transcode[reflex];
break;
}
return new string(output);//final value is: YQ== (Encoded String.)
}
private static string decode(string input)//input is YQ== which has a length of 4
{
int l = input.Length;
int cb = (l / 4 + ((Convert.ToBoolean(l % 4)) ? 1 : 0)) * 3 + 1; // (1 + (0))*4
char[] output = new char[cb]; //4 in length
int c = 0;
int bits = 0;
int reflex = 0;
for (int j = 0; j < l; j++)
{
reflex <<= 6;
bits += 6;
bool fTerminate = ('=' == input[j]);
if (!fTerminate)
{
reflex += indexOf(input[j]);
while (bits >= 8)
{
int mask = 0x000000ff << (bits % 8);
output[c++] = (char)((reflex & mask) >> (bits % 8)); //convert issue cannot implicitly convert to proper data type.so will have to explicitly convert.
int invert = ~mask;
reflex &= invert;
bits -= 8;
}
}
else
{
break;
}
}
return new string(output);
}
private static int indexOf(char ch)
{
int index;
for (index = 0; index < transcode.Length; index++)
if (ch == transcode[index])
break;
return index;
}
}
Read the docs for String.Compare, then read the docs for Convert.ToBoolean. Pay particular attention to the value returned by String.Compare when two strings are equal, then compare that with how the value gets converted to a boolean by ToBoolean.
String.Compare is designed for sorting strings. It returns 0 when two strings are equal, and ToBoolean converts that 0 to false. So when your strings are equal, your if evaluates to false, not true.
A simple change would be:
if (String.Compare(test_string, decode(encode(test_string)))==0)
{
Console.WriteLine("Test succeeded");
}
else
{
Console.WriteLine("Test failed");
}
@Tom's comment about the trailing nulls also applies, but it seems that String.Compare just ignores them.
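On those trailing nulls: the decode buffer is sized one byte larger than needed and the unused slots stay '\0', so decoding "YQ==" gives "a" followed by padding NUL characters. If you also want the round trip to compare equal with an ordinary ==, one option (just a sketch) is to trim them before comparing:
// Trim the padding NUL characters left over from the oversized decode buffer
string decoded = decode(encode(test_string)).TrimEnd('\0');
if (decoded == test_string)
    Console.WriteLine("Test succeeded");
else
    Console.WriteLine("Test failed");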

How to generate a CRC-16 from C#

I am trying to generate a CRC-16 using C#. The hardware I am using over RS232 requires the input string to be hex. For a test, I need 8000 to produce 0xC061; however, the C# method that generates the CRC-16 must be able to handle any given hex string.
I have tried using Nito.KitchenSink.CRC.
I have also tried the code below, which generates 8009 when 8000 is input:
public string CalcCRC16(string strInput)
{
ushort crc = 0x0000;
byte[] data = GetBytesFromHexString(strInput);
for (int i = 0; i < data.Length; i++)
{
crc ^= (ushort)(data[i] << 8);
for (int j = 0; j < 8; j++)
{
if ((crc & 0x8000) > 0)
crc = (ushort)((crc << 1) ^ 0x8005);
else
crc <<= 1;
}
}
return crc.ToString("X4");
}
public Byte[] GetBytesFromHexString(string strInput)
{
Byte[] bytArOutput = new Byte[] { };
if (!string.IsNullOrEmpty(strInput) && strInput.Length % 2 == 0)
{
SoapHexBinary hexBinary = null;
try
{
hexBinary = SoapHexBinary.Parse(strInput);
if (hexBinary != null)
{
bytArOutput = hexBinary.Value;
}
}
catch (Exception ex)
{
MessageBox.Show(ex.Message);
}
}
return bytArOutput;
}
Here we go; note that this is a specific flavor of CRC-16 - it is confusing to say just "CRC-16". This borrows some implementation specifics from http://www.sanity-free.com/ - note I have made it static rather than instance-based.
using System;
static class Program
{
static void Main()
{
string input = "8000";
var bytes = HexToBytes(input);
string hex = Crc16.ComputeChecksum(bytes).ToString("x2");
Console.WriteLine(hex); //c061
}
static byte[] HexToBytes(string input)
{
byte[] result = new byte[input.Length / 2];
for(int i = 0; i < result.Length; i++)
{
result[i] = Convert.ToByte(input.Substring(2 * i, 2), 16);
}
return result;
}
public static class Crc16
{
const ushort polynomial = 0xA001;
static readonly ushort[] table = new ushort[256];
public static ushort ComputeChecksum(byte[] bytes)
{
ushort crc = 0;
for (int i = 0; i < bytes.Length; ++i)
{
byte index = (byte)(crc ^ bytes[i]);
crc = (ushort)((crc >> 8) ^ table[index]);
}
return crc;
}
static Crc16()
{
ushort value;
ushort temp;
for (ushort i = 0; i < table.Length; ++i)
{
value = 0;
temp = i;
for (byte j = 0; j < 8; ++j)
{
if (((value ^ temp) & 0x0001) != 0)
{
value = (ushort)((value >> 1) ^ polynomial);
}
else
{
value >>= 1;
}
temp >>= 1;
}
table[i] = value;
}
}
}
}
In addition, if you want CRC16-CCITT:
private ushort Crc16Ccitt(byte[] bytes)
{
const ushort poly = 4129;
ushort[] table = new ushort[256];
ushort initialValue = 0xffff;
ushort temp, a;
ushort crc = initialValue;
for (int i = 0; i < table.Length; ++i)
{
temp = 0;
a = (ushort)(i << 8);
for (int j = 0; j < 8; ++j)
{
if (((temp ^ a) & 0x8000) != 0)
temp = (ushort)((temp << 1) ^ poly);
else
temp <<= 1;
a <<= 1;
}
table[i] = temp;
}
for (int i = 0; i < bytes.Length; ++i)
{
crc = (ushort)((crc << 8) ^ table[((crc >> 8) ^ (0xff & bytes[i]))]);
}
return crc;
}
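Note that this version rebuilds the 256-entry table on every call; if you compute many checksums, build the table once, as the static class above does. A quick usage sketch, assuming you call it from within the same class (the input bytes here are just an example):
// Hypothetical example input; prints the 16-bit CRC as four hex digits
byte[] payload = { 0x80, 0x00 };
ushort crcCcitt = Crc16Ccitt(payload);
Console.WriteLine(crcCcitt.ToString("X4"));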

CRC_CCITT Kermit 16 in C#

I'm currently working on hardware that requires the CRC-CCITT Kermit 16 protocol with the polynomial x^16 + x^12 + x^5 + 1. However, with the code I've found online, both on this site and on the web in general, I don't seem to get my desired result. I found this website (http://www.lammertbies.nl/comm/info/crc-calculation.html), which actually gives me the exact match I want, but its code is written in C++. Can anyone help me with this?
I look forward to hearing from you.
Kind regards
Michael
If you need CRC-CCITT 16 Kermit, you'll need the following code (from my site):
var crc16Kermit = new Crc16( Crc16Mode.CcittKermit );
var checksum = crc16Kermit.ComputeChecksumBytes( 0x01, 0x23, 0x45 );
// checksum = 0x2e, 0x46
here's the source for the above code
using System;
public enum Crc16Mode : ushort { Standard = 0xA001, CcittKermit = 0x8408 }
public class Crc16 {
static ushort[] table = new ushort[256];
public ushort ComputeChecksum( params byte[] bytes ) {
ushort crc = 0;
for(int i = 0; i < bytes.Length; ++i) {
byte index = (byte)(crc ^ bytes[i]);
crc = (ushort)((crc >> 8) ^ table[index]);
}
return crc;
}
public byte[] ComputeChecksumBytes( params byte[] bytes ) {
ushort crc = ComputeChecksum( bytes );
return BitConverter.GetBytes( crc );
}
public Crc16( Crc16Mode mode ) {
ushort polynomial = (ushort)mode;
ushort value;
ushort temp;
for(ushort i = 0; i < table.Length; ++i) {
value = 0;
temp = i;
for(byte j = 0; j < 8; ++j) {
if(((value ^ temp) & 0x0001) != 0) {
value = (ushort)((value >> 1) ^ polynomial);
}else {
value >>= 1;
}
temp >>= 1;
}
table[i] = value;
}
}
}
link for the above code: http://sanity-free.org/147/standard_crc16_and_crc16_kermit_implementation_in_csharp.html
Seems you want CRC16 CCITT. Try this:
using System;
public enum InitialCrcValue { Zeros, NonZero1 = 0xffff, NonZero2 = 0x1D0F }
public class Crc16Ccitt {
const ushort poly = 4129;
ushort[] table = new ushort[256];
ushort initialValue = 0;
public ushort ComputeChecksum(byte[] bytes) {
ushort crc = this.initialValue;
for(int i = 0; i < bytes.Length; ++i) {
crc = (ushort)((crc << 8) ^ table[((crc >> 8) ^ (0xff & bytes[i]))]);
}
return crc;
}
public byte[] ComputeChecksumBytes(byte[] bytes) {
ushort crc = ComputeChecksum(bytes);
return BitConverter.GetBytes(crc);
}
public Crc16Ccitt(InitialCrcValue initialValue) {
this.initialValue = (ushort)initialValue;
ushort temp, a;
for(int i = 0; i < table.Length; ++i) {
temp = 0;
a = (ushort)(i << 8);
for(int j = 0; j < 8; ++j) {
if(((temp ^ a) & 0x8000) != 0) {
temp = (ushort)((temp << 1) ^ poly);
} else {
temp <<= 1;
}
a <<= 1;
}
table[i] = temp;
}
}
}
Source:
http://sanity-free.org/133/crc_16_ccitt_in_csharp.html
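A short usage sketch, assuming you want the common 0xFFFF starting value (the enum's NonZero1):
// Hypothetical example: CCITT CRC-16 with the 0xFFFF initial value
var crc16 = new Crc16Ccitt(InitialCrcValue.NonZero1);
ushort checksum = crc16.ComputeChecksum(new byte[] { 0x01, 0x23, 0x45 });
Console.WriteLine(checksum.ToString("X4"));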

Bitfields in C#

So, bitfields. Specifically, large bitfields. I understand how to manipulate individual values in a bitfield, but how would I go about doing this on a large set, such as say:
uint[] bitfield = new uint[4] { 0x0080000, 0x00FA3020, 0x00C8000, 0x0FF00D0 };
The specific problem I'm having is doing left and right shifts that carry through across the whole array. So for instance, if I did a >> 4 on the above array, I'd end up with:
uint[4] { 0x0008000, 0x000FA302, 0x000C800, 0x00FF00D };
Now, an (overly) simplistic algorithm here might look something like this (written on the fly):
int shift = 4;
for (int i = 0; i < shift; i++) { // repeat a 1-bit shift 'shift' times
for (int j = bitfield.GetUpperBound(0); j > 0; j--) {
bitfield[j] = bitfield[j] >> 1;
bitfield[j] = bitfield[j] + ((bitfield[j-1] & 1) << (sizeof(uint)*8 - 1)); // carry the low bit of the next word into the MSB
}
bitfield[0] = bitfield[0] >> 1;
}
Is there anything built in that might ease working with this sort of data?
What makes you think that BitArray uses bools internally? It uses Boolean values to represent the bits in terms of the API, but under the hood I believe it uses an int[].
I'm not sure if it's the best way to do it, but this could work (constraining shifts to the range 0-31):
public static void ShiftLeft(uint[] bitfield, int shift) {
if(shift < 0 || shift > 31) {
// handle error here
return;
}
int len = bitfield.Length;
int i = len - 1;
uint prev = 0;
while(i >= 0) {
uint tmp = bitfield[i];
bitfield[i] = bitfield[i] << shift;
if(i < len - 1) {
bitfield[i] |= (uint)(prev & (1 >> shift) - 1 ) >> (32 - shift);
}
prev = tmp;
i--;
}
}
public static void ShiftRight(uint[] bitfield, int shift) {
if(shift < 0 || shift > 31) {
// handle error here
return;
}
int len = bitfield.Length;
int i = 0;
uint prev = 0;
while(i < len) {
uint tmp = bitfield[i];
bitfield[i] = bitfield[i] >> shift;
if(i > 0) {
bitfield[i] |= (uint)(prev & (1 << shift) - 1 ) << (32 - shift);
}
prev = tmp;
i++;
}
}
PS: With this change, you should be able to handle shifts greater than 31 bits. It could be refactored to look a little less ugly, but in my tests it works, and it doesn't seem too bad performance-wise (unless there's actually something built in to handle large bit sets, which could be the case).
public static void ShiftLeft(uint[] bitfield, int shift) {
if(shift < 0) {
// error
return;
}
int intsShift = shift >> 5;
if(intsShift > 0) {
if(intsShift > bitfield.Length) {
// error
return;
}
for(int j=0;j < bitfield.Length;j++) {
if(j + intsShift < bitfield.Length) {
bitfield[j] = bitfield[j+intsShift];
} else {
bitfield[j] = 0;
}
}
BitSetUtils.ShiftLeft(bitfield,shift - intsShift * 32);
return;
}
int len = bitfield.Length;
int i = len - 1;
uint prev = 0;
while(i >= 0) {
uint tmp = bitfield[i];
bitfield[i] = bitfield[i] << shift;
if(i < len - 1) {
bitfield[i] |= (uint)(prev & (1 >> shift) - 1 ) >> (32 - shift);
}
prev = tmp;
i--;
}
}
public static void ShiftRight(uint[] bitfield, int shift) {
if(shift < 0) {
// error
return;
}
int intsShift = shift >> 5;
if(intsShift > 0) {
if(intsShift > bitfield.Length) {
// error
return;
}
for(int j=bitfield.Length-1;j >= 0;j--) {
if(j >= intsShift) {
bitfield[j] = bitfield[j-intsShift];
} else {
bitfield[j] = 0;
}
}
BitSetUtils.ShiftRight(bitfield,shift - intsShift * 32);
return;
}
int len = bitfield.Length;
int i = 0;
uint prev = 0;
while(i < len) {
uint tmp = bitfield[i];
bitfield[i] = bitfield[i] >> shift;
if(i > 0) {
bitfield[i] |= (uint)(prev & (1 << shift) - 1 ) << (32 - shift);
}
prev = tmp;
i++;
}
}
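As a quick sanity check against the example in the question (a sketch, assuming these methods live in the BitSetUtils class that the recursive calls reference):
// The low nibble of every word is zero here, so only the in-word shift matters
uint[] bitfield = new uint[4] { 0x0080000, 0x00FA3020, 0x00C8000, 0x0FF00D0 };
BitSetUtils.ShiftRight(bitfield, 4);
// bitfield is now { 0x0008000, 0x000FA302, 0x000C800, 0x00FF00D }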
Using extension methods, you could do this:
public static class BitArrayExtensions
{
public static void DownShift(this BitArray bitArray, int places)
{
for (var i = 0; i < bitArray.Length; i++)
{
bitArray[i] = i + places < bitArray.Length && bitArray[i + places];
}
}
public static void UpShift(this BitArray bitArray, int places)
{
for (var i = bitArray.Length - 1; i >= 0; i--)
{
bitArray[i] = i - places >= 0 && bitArray[i - places];
}
}
}
Unfortunately, I couldn't come up with a way to overload the shift operators. (Mainly because BitArray is sealed.)
If you intend to manipulate ints or uints, you could create extension methods for inserting bits into / extracting bits from the BitArray. (BitArray has a constructor that takes an array of ints, but that only takes you that far.)
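For example, here is a rough sketch of what such helpers might look like (the names are made up; BitArray indexes bits starting at 0, and this packs each uint least-significant bit first):
using System.Collections;
public static class BitArrayUInt32Extensions
{
    // Writes the 32 bits of 'value' into bitArray starting at 'offset', LSB first.
    public static void SetUInt32(this BitArray bitArray, int offset, uint value)
    {
        for (int bit = 0; bit < 32; bit++)
        {
            bitArray[offset + bit] = ((value >> bit) & 1) != 0;
        }
    }
    // Reads 32 bits back out of bitArray starting at 'offset', LSB first.
    public static uint GetUInt32(this BitArray bitArray, int offset)
    {
        uint value = 0;
        for (int bit = 0; bit < 32; bit++)
        {
            if (bitArray[offset + bit]) value |= 1u << bit;
        }
        return value;
    }
}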
This doesn't specifically cover shifting, but it could be useful for working with large sets. It's in C, but I think it could be easily adapted to C#:
Is there a practical limit to the size of bit masks?
