I am trying to translate Java code that uses the logical right shift (>>>, see Difference between >>> and >>) to C#.
The Java code is
return hash >>> 24 ^ hash & 0xFFFFFF;
C# marks >>> as a syntax error.
How can I fix that?
Update 1
People recommend using >> in C#, but it didn't solve the problem.
System.out.println("hash 1 !!! = " + (-986417464>>>24));
is 197
but
Console.WriteLine("hash 1 !!! = " + (-986417464 >> 24));
is -59
Thank you!
Java needed to introduce >>> because its only unsigned type is char, and arithmetic on char is done in int.
C#, on the other hand, has unsigned types, which perform the right shift without sign extension:
uint h = (uint)hash;
return h >> 24 ^ h & 0xFFFFFF;
In C# you can just use >>:
If the left-hand operand is of type uint or ulong, the right-shift operator performs a logical shift: the high-order empty bit positions are always set to zero.
From the docs.
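The same trick carries over to C, where there is no >>> operator either; a minimal sketch (the function name is mine):

```c
#include <stdint.h>

/* Java's >>> has no direct C operator, and right-shifting a negative
 * signed value in C is implementation-defined. As in the C# answer, the
 * portable fix is to perform the shift on an unsigned value, so the
 * vacated high bits are filled with zeros. */
uint32_t logical_shift_right(int32_t value, int n) {
    return (uint32_t)value >> n;
}
```

With the value from the question, `logical_shift_right(-986417464, 24)` yields 197, matching the Java `>>>` output above.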
Related
I have this block of C code that I cannot for the life of me understand. I need to calculate the CRC-16 for a byte array I pass to the method, and it should give me the MSB (most significant byte) and the LSB (least significant byte). I was also given a C-written app to test some functionality, and that app also gives me a log of what is sent and received via the COM port.
What is weird is that I entered the hex string that I found in the log into this online calculator, but it gives me a different result.
I took a stab at translating the method to C#, but I don't understand certain aspects:
What is pucPTR doing there (it's not being used anywhere else)?
What do the 2 lines of code mean, under the first for?
Why in the second for is the short "i" <= 7? Shouldn't it be <= 8?
Does the last line in the if statement mean that usCRC is in fact ushort 0x8005?
Here is the block of code:
unsigned short CalculateCRC(unsigned char* a_szBufuer, short a_sBufferLen)
{
    unsigned short usCRC = 0;
    for (short j = 0; j < a_sBufferLen; j++)
    {
        unsigned char* pucPtr = (unsigned char*)&usCRC;
        *(pucPtr + 1) = *(pucPtr + 1) ^ *a_szBufuer++;
        for (short i = 0; i <= 7; i++)
        {
            if (usCRC & ((short)0x8000))
            {
                usCRC = usCRC << 1;
                usCRC = usCRC ^ ((unsigned short)0x8005);
            }
            else
                usCRC = usCRC << 1;
        }
    }
    return (usCRC);
}
This is the hex string that I convert to byte array and send to the method:
02 00 04 a0 00 01 01 03
This is the result that should be given from the CRC calculus:
06 35
The document I have been given says that this is a CRC16 IBM (msb, lsb) of the entire data.
Can anyone please help? I've been stuck on it for a while now.
Any code guru out there capable of translating that C method to C#? Apparently I'm not capable of such sourcery.
First of all, please note that in C, the ^ operator means bitwise XOR.
What is pucPTR doing there (it's not being used anywhere else)?
What do the 2 lines of code mean, under the first for?
Causing bugs, by the looks of it. It is only used to grab one of the two bytes of the FCS, but the code is written in an endianness-dependent way.
Endianness is very important when dealing with checksum algorithms, since they were originally designed for hardware shift registers, which require MSB first, aka big endian. In addition, CRC often means data communication, and data communication means possibly different endianness between the sender, the protocol and the receiver.
I would guess that this code was written for little-endian machines only and the intent is to XOR with the MS byte. The code points to the first byte, then uses +1 pointer arithmetic to get to the second byte. Corrected code should be something like:
uint8_t puc = (unsigned int)usCRC >> 8;
puc ^= *a_szBufuer;
usCRC = (usCRC & 0xFF) | ((unsigned int)puc << 8);
a_szBufuer++;
The casts to unsigned int are there to portably prevent mishaps with implicit integer promotion.
Why in the second for is the short "i" <= 7? Shouldn't it be <= 8?
I think it is correct, but more readably it could have been written as i < 8.
Does the last line in the if statement mean that usCRC is in fact ushort 0x8005?
No, it means to XOR your FCS with the polynomial 0x8005. See this.
The document I have been given says that this is a CRC16 IBM
Yeah it is sometimes called that. Though from what I recall, "CRC16 IBM" also involves some bit inversion of the final result(?). I'd double check that.
Overall, be careful with this code. Whoever wrote it didn't have much of a clue about endianess, integer signedness and implicit type promotions. It is amateur-level code. You should be able to find safer, portable professional versions of the same CRC algorithm on the net.
Very good reading about the topic is A Painless Guide To CRC.
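For reference, an endianness-independent C version of the routine (keeping the original a_szBufuer spelling) could look like the sketch below; it XORs each input byte into the high byte of the CRC, which is what the pointer trick does on a little-endian machine:

```c
#include <stddef.h>
#include <stdint.h>

/* Endianness-independent sketch of CalculateCRC: XOR each input byte
 * into the high byte of the 16-bit CRC, then clock 8 bits through the
 * 0x8005 polynomial, MSB first. No final XOR or reflection is applied,
 * so results may differ from online "CRC16 IBM" calculators. */
uint16_t CalculateCRC(const uint8_t *a_szBufuer, size_t a_sBufferLen) {
    uint16_t usCRC = 0;
    for (size_t j = 0; j < a_sBufferLen; j++) {
        usCRC ^= (uint16_t)(a_szBufuer[j] << 8);
        for (int i = 0; i < 8; i++) {
            if (usCRC & 0x8000u)
                usCRC = (uint16_t)((usCRC << 1) ^ 0x8005u);
            else
                usCRC = (uint16_t)(usCRC << 1);
        }
    }
    return usCRC;
}
```

Whether this matches the "06 35" from the document depends on which CRC-16 variant (reflection, final XOR) the protocol actually uses, as discussed above.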
What is pucPTR doing there (it's not being used anywhere else)?
pucPtr is used to reinterpret, rather uglily, an unsigned short as an array of 2 unsigned char. Depending on the endianness of the platform, pucPtr will point to the first byte of the unsigned short and pucPtr + 1 to the second byte (or vice versa). You have to know whether this algorithm was designed for little or big endian.
Code equivalent (and portable, if the code was developed for big endian):
unsigned char rawCrc[2];
rawCrc[0] = (unsigned char)(usCRC & 0x00FF);
rawCrc[1] = (unsigned char)((usCRC >> 8) & 0x00FF);
rawCrc[1] = rawCrc[1] ^ *a_szBufuer++;
usCRC = (unsigned short)rawCrc[0]
| (unsigned short)((unsigned int)rawCrc[1] << 8);
For little endian, you have to swap rawCrc[0] and rawCrc[1].
What do the 2 lines of code mean, under the first for?
The first line does the ugly transformation described in 1.
The second line reads the value pointed to by a_szBufuer and then increments the pointer. It XORs that byte with the second (or first, depending on endianness) byte of the CRC (note that *(pucPtr + 1) is equivalent to pucPtr[1]) and stores the result back into that same byte of the CRC.
*(pucPtr + 1) = *(pucPtr + 1) ^ *a_szBufuer++;
is equivalent to
pucPtr[1] = pucPtr[1] ^ *a_szBufuer++;
Why in the second for is the short "i" <= 7? Shouldn't it be <= 8?
You have to do 8 iterations, from 0 to 7. You can change the condition to i = 0; i < 8 or to i = 1; i <= 8.
Does the last line in the if statement mean that usCRC is in fact ushort 0x8005?
No, it doesn't. It means that usCRC is now equal to usCRC XOR 0x8005. ^ is the bitwise XOR (exclusive-or) operation. Example:
0b1100110
^0b1001011
----------
0b0101101
I'm struggling to convert an apparently easy piece of code from Python to C#, shown below:
def computeIV(self, lba):
    iv = ""
    lba &= 0xffffffff
    for _ in xrange(4):
        if (lba & 1):
            lba = 0x80000061 ^ (lba >> 1)
        else:
            lba = lba >> 1
        iv += struct.pack("<L", lba)
    return iv
I'm used to C# logic and I really can't understand the bitmasking and byte packing here...
You can make use of the BitArray class in C#, which manages a compact array of bit values, represented as Booleans, where true indicates that the bit is on (1) and false that it is off (0).
It offers And, Or, Not, Xor and Set operations.
For Shift operation, a possible solution can be found here:
BitArray - Shift bits or Shifting a BitArray
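Since the question is really about the bit logic, here is a sketch of computeIV in C (function and parameter names are mine): the IV is 16 bytes, built from four LFSR-style steps on the 32-bit lba, each result packed least-significant byte first, like struct.pack("<L", ...):

```c
#include <stdint.h>

/* Sketch of the Python computeIV: four LFSR-style steps on the 32-bit
 * lba; each step's result is appended to the IV least significant byte
 * first (the "<L" little-endian packing). */
void compute_iv(uint32_t lba, uint8_t iv[16]) {
    for (int i = 0; i < 4; i++) {
        if (lba & 1)
            lba = 0x80000061u ^ (lba >> 1);
        else
            lba = lba >> 1;
        iv[4 * i + 0] = (uint8_t)(lba & 0xFF);
        iv[4 * i + 1] = (uint8_t)((lba >> 8) & 0xFF);
        iv[4 * i + 2] = (uint8_t)((lba >> 16) & 0xFF);
        iv[4 * i + 3] = (uint8_t)((lba >> 24) & 0xFF);
    }
}
```

The same byte-by-byte packing translates directly to C# with a byte[16] and shifts, without needing BitArray.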
Can someone explain what the << is doing in this function:
return (b & (1 << pos)) != 0;
And is there an equivalent to this in T-SQL?
It's a bitwise left shift.
Shift is not mentioned on the Bitwise Operators (Transact-SQL) page, so I would say it is not available in T-SQL. However, a left shift by one bit in base 2 is equivalent to multiplying by 2, so you can perform a similar operation without an actual shift operator.
<< in C# means "shift number left". You can simulate it by multiplying by a corresponding power of two:
b & POWER(2, pos) <> 0
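A sketch in C of both forms, showing why the POWER(2, pos) workaround is equivalent (the helper names are mine):

```c
#include <stdint.h>

/* The function from the question: mask with 1 << pos. */
int bit_is_set(uint32_t b, int pos) {
    return (b & (1u << pos)) != 0;
}

/* The shift-free variant behind the T-SQL workaround: build
 * POWER(2, pos) by repeated multiplication instead of shifting. */
int bit_is_set_via_power(uint32_t b, int pos) {
    uint32_t power_of_two = 1;
    for (int i = 0; i < pos; i++)
        power_of_two *= 2;
    return (b & power_of_two) != 0;
}
```

For example, with b = 10 (binary 1010), bit 1 is set and bit 2 is not, and both helpers agree.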
I have never used C# before and I'm trying to translate a function to C. All was going well until I reached this weird line. Can someone help?
out Int128 remainder;
remainder._lo |= 1; ???
assuming in C you have an Int128 struct of the same nature... in C it would be
remainder._lo |= 1;
which just says do a bitwise OR with 1
Some C compilers provide a 128-bit integer type (e.g. GCC's __int128) you could use, in which case you'd end up just doing remainder |= 1;
This implies that
remainder._lo
is an integer of some type, and the |= operator is bitwise or.
So this is equivalent to
remainder._lo = remainder._lo | 1;
That might be legal C depending on your context, but that should give you the key to it.
It's the equivalent of
remainder._lo = remainder._lo | 1;
where | is the bitwise OR operator; the |= compound assignment should be supported in C as-is.
Int128 is presumably a structure with _hi and _lo members to store the high and low 64 bits of the 128-bit integer. This line is just doing a bit-wise or of the low 64 bits with 1, effectively switching on the least significant bit.
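Putting the pieces together, a minimal C sketch (assuming the _hi/_lo layout described above):

```c
#include <stdint.h>

/* A 128-bit value stored as two 64-bit halves, mirroring the C# type. */
typedef struct {
    uint64_t _hi;
    uint64_t _lo;
} Int128;

/* The line in question: set the least significant bit of the value. */
void set_lsb(Int128 *remainder) {
    remainder->_lo |= 1;
}
```

After set_lsb, an even low word such as 42 becomes 43; an odd one is left unchanged.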
This is actually fairly tricky to Google.
How do you SET (bitwise or) the top two bits of a 32 bit int?
I am getting compiler warnings from everything I try.
Try this:
integerVariable |= 3 << 30;
It may be clearer to use (1 << 31) | (1 << 30) instead of (3 << 30), or you could add a comment about the behavior. In any case, the compiler should be able to fold the expression to a single constant, equal to int.MinValue >> 1 (i.e. int.MinValue / 2).
If it's a uint:
uintVar |= 3u << 30;
integerVariable |= unchecked((int)0xC0000000);
Use 0xC0000000u for an unsigned integer variable. For a signed int, the unchecked cast is needed because the literal 0xC0000000 has type uint.
Showing the entire 32-bit integer in hex notation is clearer to me than the bit shifts in Mehrdad's answer. They probably compile to the same thing, though, so use whichever looks clearer to you.
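Both answers compute the same mask; a quick C sketch confirms that 3 << 30 and 0xC0000000 are the same 32-bit pattern:

```c
#include <stdint.h>

/* Set bits 31 and 30 of a 32-bit value; (3u << 30) == 0xC0000000u. */
uint32_t set_top_two_bits(uint32_t v) {
    return v | (3u << 30);
}
```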