Convert Python array/byteshift/struct to C#

I'm struggling to convert an apparently simple piece of code from Python to C#:
def computeIV(self, lba):
    iv = ""
    lba &= 0xffffffff
    for _ in xrange(4):
        if (lba & 1):
            lba = 0x80000061 ^ (lba >> 1)
        else:
            lba = lba >> 1
        iv += struct.pack("<L", lba)
    return iv
I'm used to C# logic, but I really can't make sense of the bit masking and byte packing here...

You can make use of the BitArray class in C#, which manages a compact array of bit values represented as Booleans, where true indicates that the bit is on (1) and false indicates that the bit is off (0).
It offers AND, OR, NOT, SET and XOR functions.
For the shift operation, a possible solution can be found here:
BitArray - Shift bits or Shifting a BitArray
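For reference while porting: the asker's snippet is Python 2 (xrange, str-typed struct.pack). Below is a minimal Python 3 sketch of the same routine, useful as a reference implementation to check a C# port against (the function name and the bytes return type are my choices, not from the original):

```python
import struct

def compute_iv(lba):
    """Python 3 port of the original computeIV: four LFSR-style steps,
    each intermediate 32-bit state appended little-endian (16 bytes)."""
    iv = b""
    lba &= 0xFFFFFFFF
    for _ in range(4):
        if lba & 1:
            # odd state: shift right and XOR in the feedback constant
            lba = 0x80000061 ^ (lba >> 1)
        else:
            lba >>= 1
        iv += struct.pack("<L", lba)  # pack state as little-endian uint32
    return iv
```

In C# the same thing needs no BitArray: plain uint arithmetic plus BitConverter.GetBytes produces identical bytes, since &, ^ and >> on uint already behave like these Python operations on a 32-bit value.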

Related

Set last 4 bits to 0

I want to set the last 4 bits of my byte list to 0. I've tried this line of code but it does not work:
myData = 1111.1111
myData should be = 1111.0000
(myData & 0x0F) >> 4
Assuming that by "last 4 bits" you mean the 4 least significant bits, here is a code example for you:
var myData = 0xFF;
var result = myData & ~0xF;
So what you actually want here is not just to set the 4 least significant bits to 0, but to preserve the rest of the data. To achieve this, build a "pass-through" mask: the one's complement of the mask covering the unwanted bits, i.e. ~0xF (note that 0xF = 2^4 - 1, where 4 is the number of LSBs to clear). Applying it as myData & ~0xF gives the desired result.
N.B. The one's complement approach is better than a magic number you pre-compute yourself (such as a hard-coded 0xF0), because the compiler will generate the correct number of set most significant bits for whatever type you apply it to.
An even safer approach would be to compute the one's complement of the variable itself, giving the effective code stated below:
var myData = 0xFF;
var result = ~(~myData | 0xF);
That's it!
To preserve the 4 high bits and zero the 4 low bits
myData & 0xF0
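Both forms can be sanity-checked quickly; a small sketch (Python used here just to verify the bit arithmetic, which is identical in C#):

```python
myData = 0b11111111  # 0xFF

# Clear the 4 least significant bits via the one's-complement mask:
assert myData & ~0xF == 0b11110000

# The "double complement" variant gives the same result
# (& 0xFF mirrors byte-sized semantics):
assert ~(~myData | 0xF) & 0xFF == 0b11110000

# And the hard-coded mask from the second answer:
assert myData & 0xF0 == 0b11110000
```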

CRC-16 0x8005 polynomial, from C to C#. SOS

I have this block of C code that I cannot for the life of me understand. I need to calculate the CRC-16 for a certain byte array I send to the method, and it should give me the MSB (most significant byte) and the LSB (least significant byte). I was also given a C-written app to test some functionality, and that app also gives me a log of what is sent and what is received via the COM port.
What is weird is that I entered the hex string I found in the log into an online CRC calculator, but it gives me a different result.
I took a stab at translating the method to C#, but I don't understand certain aspects:
What is pucPtr doing there (it's not being used anywhere else)?
What do the 2 lines of code under the first for mean?
Why does the second for loop use i <= 7; shouldn't it be <= 8?
Does the last line in the if statement mean that usCRC is in fact the ushort 0x8005?
Here is the block of code:
unsigned short CalculateCRC(unsigned char* a_szBufuer, short a_sBufferLen)
{
    unsigned short usCRC = 0;
    for (short j = 0; j < a_sBufferLen; j++)
    {
        unsigned char* pucPtr = (unsigned char*)&usCRC;
        *(pucPtr + 1) = *(pucPtr + 1) ^ *a_szBufuer++;
        for (short i = 0; i <= 7; i++)
        {
            if (usCRC & ((short)0x8000))
            {
                usCRC = usCRC << 1;
                usCRC = usCRC ^ ((ushort)0x8005);
            }
            else
                usCRC = usCRC << 1;
        }
    }
    return (usCRC);
}
This is the hex string that I convert to byte array and send to the method:
02 00 04 a0 00 01 01 03
This is the result that should be given from the CRC calculus:
06 35
The document I have been given says that this is a CRC16 IBM (msb, lsb) of the entire data.
Can anyone please help? I've been stuck on it for a while now.
Any code guru out there capable of translating that C method to C#? Apparently I'm not capable of such sorcery.
First of all, please note that in C, the ^ operator means bitwise XOR.
What is pucPtr doing there (it's not being used anywhere else)?
What do the 2 lines of code under the first for mean?
Causing bugs, by the looks of it. It is only used to grab one of the two bytes of the FCS, but the code is written in an endianness-dependent way.
Endianness is very important when dealing with checksum algorithms, since they were originally designed for hardware shift registers, which require MSB first, aka big endian. In addition, CRC often means data communication, and data communication means possibly different endianness between the sender, the protocol and the receiver.
I would guess that this code was written for little endian machines only, and the intent is to XOR with the most significant byte. The code points at the first byte, then uses +1 pointer arithmetic to get to the second byte. Corrected code should be something like:
uint8_t puc = (unsigned int)usCRC >> 8;
puc ^= *a_szBufuer;
usCRC = (usCRC & 0xFF) | ((unsigned int)puc << 8);
a_szBufuer++;
The casts to unsigned int are there to portably prevent mishaps with implicit integer promotion.
Why does the second for loop use i <= 7; shouldn't it be <= 8?
I think it is correct, but it could more readably have been written as i < 8.
Does the last line in the if statement mean that usCRC is in fact the ushort 0x8005?
No, it means to XOR your FCS with the polynomial 0x8005.
The document I have been given says that this is a CRC16 IBM
Yeah, it is sometimes called that. Though from what I recall, "CRC16 IBM" also involves some bit inversion of the final result(?). I'd double-check that.
Overall, be careful with this code. Whoever wrote it didn't have much of a clue about endianness, integer signedness or implicit type promotions. It is amateur-level code. You should be able to find safer, portable, professional versions of the same CRC algorithm on the net.
Very good reading on the topic is A Painless Guide to CRC Error Detection Algorithms.
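One such portable version is straightforward to write. Below is a sketch of what the C code is trying to compute (bit-by-bit, MSB-first, polynomial 0x8005, initial value 0, no reflection, no final XOR), in Python so the logic can be checked byte by byte before porting to C#:

```python
def crc16_8005(data: bytes) -> int:
    """Bitwise CRC-16, polynomial 0x8005, MSB-first, init 0,
    no reflection and no final XOR (the C code's apparent intent)."""
    crc = 0
    for byte in data:
        crc ^= byte << 8              # XOR into the high byte, portably
        for _ in range(8):            # 8 iterations, one per bit
            if crc & 0x8000:          # top bit set: shift, then XOR poly
                crc = ((crc << 1) ^ 0x8005) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

result = crc16_8005(bytes.fromhex("020004a000010103"))
```

Note this reproduces what the C code computes on a little-endian machine; whether that matches the document's expected 06 35 is a separate question, since "CRC16 IBM" variants may differ in reflection, final inversion, or which bytes are covered, as cautioned above.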
What is pucPtr doing there (it's not being used anywhere else)?
pucPtr is used to transform, rather uglily, an unsigned short into an array of 2 unsigned char. Depending on the endianness of the platform, pucPtr will point to the first byte of the unsigned short and pucPtr + 1 to the second byte (or vice versa). You have to know whether this algorithm is designed for little or big endian.
Equivalent (and portable, if the code was developed for big endian) code:
unsigned char rawCrc[2];
rawCrc[0] = (unsigned char)(usCRC & 0x00FF);
rawCrc[1] = (unsigned char)((usCRC >> 8) & 0x00FF);
rawCrc[1] = rawCrc[1] ^ *a_szBufuer++;
usCRC = (unsigned short)rawCrc[0]
| (unsigned short)((unsigned int)rawCrc[1] << 8);
For little endian, you have to swap rawCrc[0] and rawCrc[1].
What do the 2 lines of code under the first for mean?
The first line does the ugly transformation described in 1.
The second line retrieves the value pointed to by a_szBufuer and increments the pointer. It XORs that byte with the second (or first, depending on endianness) byte of the CRC (note that *(pucPtr + 1) is equivalent to pucPtr[1]) and stores the result back into that same byte of the CRC.
*(pucPtr + 1) = *(pucPtr + 1) ^ *a_szBufuer++;
is equivalent to
pucPtr[1] = pucPtr[1] ^ *a_szBufuer++;
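To see what those two lines accomplish without any pointer aliasing, the high-byte update can be done with shifts and masks alone; a small sketch (the values are arbitrary examples, not from the question):

```python
usCRC = 0x1234       # current CRC state (arbitrary example value)
next_byte = 0xA0     # next input byte from the buffer

# XOR the input byte into the high byte of the CRC, portably:
high = ((usCRC >> 8) & 0xFF) ^ next_byte   # 0x12 ^ 0xA0 = 0xB2
usCRC = (high << 8) | (usCRC & 0xFF)       # reassemble: 0xB234
```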
Why does the second for loop use i <= 7; shouldn't it be <= 8?
You have to do 8 iterations, from 0 to 7. You can change the condition to i = 0; i < 8 or i = 1; i <= 8.
Does the last line in the if statement mean that usCRC is in fact the ushort 0x8005?
No, it doesn't. It means that usCRC is now equal to usCRC XOR 0x8005. ^ is the bitwise XOR (exclusive-or) operation. Example:
 0b1100110
^0b1001011
----------
 0b0101101

What is code for logical right shift in C#?

I am trying to translate Java code that uses the logical right shift (>>>) (see Difference between >>> and >>) to C#.
The Java code is:
return hash >>> 24 ^ hash & 0xFFFFFF;
C# flags >>> as a syntax error.
How can I fix that?
Update 1
People recommend using >> in C#, but it didn't solve the problem.
System.out.println("hash 1 !!! = " + (-986417464>>>24));
is 197
but
Console.WriteLine("hash 1 !!! = " + (-986417464 >> 24));
is -59
Thank you!
Java needed to introduce >>> because its only unsigned type is char, and even char operations are carried out as signed int.
C#, on the other hand, has unsigned types, which perform right shift without sign extension:
uint h = (uint)hash;
return h >> 24 ^ h & 0xFFFFFF;
For C# you can just use >>:
If the left-hand operand is of type uint or ulong, the right-shift operator performs a logical shift: the high-order empty bit positions are always set to zero.
From the docs. (As of C# 11 there is also a built-in unsigned right-shift operator, >>>.)
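The difference is easy to reproduce with the asker's value. A quick check of both behaviors, sketched in Python, where masking with 0xFFFFFFFF plays the role of the uint cast in C#:

```python
h = -986417464

# Arithmetic shift: the sign bit is copied in, as with C#'s int >>.
assert h >> 24 == -59

# Logical shift: reinterpret as unsigned 32-bit first, then shift,
# which is what Java's >>> and C#'s uint >> do.
def urshift32(n, s):
    return (n & 0xFFFFFFFF) >> s

assert urshift32(h, 24) == 197
```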

What does << in C# mean and is there a SQL equivalent?

Can someone explain what the << is doing in this function:
return (b & (1 << pos)) != 0;
And is there an equivalent to this in T-SQL?
It's a bitwise shift.
Shifts are not mentioned on the Bitwise Operators (Transact-SQL) page, so I would say they are not available in T-SQL. However, a left shift by pos in binary is equivalent to multiplying by 2 to the power pos, so you can use that to perform a similar operation without actually using a bitwise shift.
<< in C# means "shift the number left". You can simulate it by multiplying by the corresponding power of two:
b & POWER(2, pos) <> 0
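The rewrite rests on the identity 1 << pos == 2^pos; a quick sanity check of the bit test (Python standing in for the T-SQL expression b & POWER(2, pos) <> 0, with an arbitrary test value):

```python
b = 0b101010  # arbitrary test value

for pos in range(8):
    # a left shift of 1 is exactly a power of two...
    assert (1 << pos) == 2 ** pos
    # ...so b & POWER(2, pos) <> 0 tests the same bit as b & (1 << pos) != 0
    assert ((b & (1 << pos)) != 0) == ((b & 2 ** pos) != 0)
```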

Binary Shift Differences between VB.NET and C#

I just found an interesting problem between translating some data:
VB.NET: CByte(4) << 8 Returns 4
But C#: (byte)4 << 8 Returns 1024
Namely, why does VB.NET: (CByte(4) << 8).GetType() return type {Name = "Byte" FullName = "System.Byte"}
Yet C#: ((byte)4 << 8).GetType() returns type {Name = "Int32" FullName = "System.Int32"}
Is there a reason why these two treat the binary shift differently? Following from that, is there any way to make the C# bit shift behave like VB.NET's (to make VB.NET behave like C# you just do CInt(_____) << 8)?
According to http://msdn.microsoft.com/en-us/library/a1sway8w.aspx, byte does not have << defined on it in C# (only int, uint, long and ulong). This means that the compiler applies an implicit conversion to a type it can use, so it converts the byte to int before doing the bit shift.
http://msdn.microsoft.com/en-us/library/7haw1dex.aspx says that VB defines the operation on Byte. To prevent overflow it applies a mask to your shift count to bring it within an appropriate range, so in this case it is actually shifting by nothing at all.
As to why C# doesn't define shifting on bytes I can't tell you.
To actually make it behave the same for other data types, you just need to mask your shift count with 7 for bytes or 15 for shorts (see the second link for details).
To apply the same in C#, you would use
static byte LeftShiftVBStyle(byte value, int count)
{
return (byte)(value << (count & 7));
}
As for why VB took that approach... just a different language, different rules (to be fair, it is a natural extension of the way C# handles shifting of int (count & 31) and long (count & 63)).
Chris already nailed it: VB.NET has defined shift operators for the Byte and Short types, C# does not. The C# spec is very similar to C and is also a good match for the MSIL definitions for OpCodes.Shl, Shr and Shr_Un, which only accept int32, int64 and intptr operands. Accordingly, any byte- or short-sized operands are first converted to int32 with their implicit conversion.
That's a limitation the VB.NET compiler has to work with; it needs to generate extra code to make the byte- and short-specific versions of the operators work. The byte operator is implemented like this:
Dim result As Byte = CByte(leftOperand << (rightOperand And 7))
and the short operator:
Dim result As Short = CShort(leftOperand << (rightOperand And 15))
What C# does corresponds, in VB terms, to:
Dim result As Integer = CInt(leftOperand) << CInt(rightOperand)
Or CLng() if required. Implicit in C# code is that the programmer always has to cast the result back to the desired result type; there are a lot of SO questions from programmers who don't find that very intuitive. VB.NET has another feature that makes the automatic casting more survivable: it has overflow checking enabled by default, although that is not applicable to shifts.
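The difference between the two compilers can be modeled directly. A sketch with Python ints standing in for the CLR types, using the count masks described above (7 for VB's Byte operator, 31 for C#'s int shift):

```python
def vb_byte_shift(value, count):
    # VB.NET: Byte << masks the count with 7 and the result stays a Byte
    return (value << (count & 7)) & 0xFF

def cs_byte_shift(value, count):
    # C#: byte is promoted to int, whose shift masks the count with 31
    # (sign handling omitted; the inputs here are non-negative bytes)
    return (value << (count & 31)) & 0xFFFFFFFF

assert vb_byte_shift(4, 8) == 4      # 8 & 7 == 0: no shift happens at all
assert cs_byte_shift(4, 8) == 1024   # the full 8-bit shift on the int
```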
