What does "int &= 0xFF" in a checksum do? - c#

I implemented this checksum algorithm I found, and it works fine but I can't figure out what this "&= 0xFF" line is actually doing.
I looked up the bitwise & operator, and Wikipedia says it performs a logical AND on each pair of corresponding bits in A and B. I also read that 0xFF is equivalent to 255, which should mean that all of its bits are 1. If every bit of the mask is 1, wouldn't A & 0xFF just be the identity of A? So A & 0xFF produces A, right?
So then I thought, wait a minute, checksum in the code below is a 32-bit int, but 0xFF is only 8 bits. Does that mean that the result of checksum &= 0xFF is that the upper 24 bits end up as zeros and only the remaining 8 bits are kept? In which case, checksum is truncated to 8 bits. Is that what's going on here?
private int CalculateChecksum(byte[] dataToCalculate)
{
    int checksum = 0;
    for (int i = 0; i < dataToCalculate.Length; i++)
    {
        checksum += dataToCalculate[i];
    }
    // What does this line actually do?
    checksum &= 0xff;
    return checksum;
}
Also, if the result is getting truncated to 8 bits, is that because 32 bits is pointless in a checksum? Is it possible to have a situation where a 32-bit checksum catches corrupt data when an 8-bit checksum doesn't?

It is masking off the higher bytes, leaving only the lower byte.
checksum &= 0xFF;
Is syntactically short for:
checksum = checksum & 0xFF;
Since the & is performed on int operands, the 0xFF literal is widened to a 32-bit int:
checksum = checksum & 0x000000FF;
Which masks off the upper 3 bytes and returns the lower byte as an integer (not a byte).
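For example, with a hypothetical running total:
int checksum = 0x12345678;   // hypothetical value after summing
checksum &= 0xFF;            // 0x12345678 & 0x000000FF == 0x78
// checksum is now 120 (0x78); the upper three bytes are masked away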
To answer your other question: Since a 32-bit checksum is much wider than an 8-bit checksum, it can catch errors that an 8-bit checksum would not, but both sides need to use the same checksum calculations for that to work.

Seems like you have a good understanding of the situation.
Does that mean that the result of checksum &= 0xFF is that 24 bits end up as zeros and only the remaining 8 bits are kept?
Yes.
Is it possible to have a situation where a 32 bit checksum catches corrupt data when 8 bit checksum doesn't?
Yes.

This is performing a simple checksum on the bytes (8-bit values) by adding them and ignoring any overflow into higher-order bits. The final &= 0xFF, as you suspected, just truncates the value to the 8 least significant bits of the 32-bit value (in C#, int is always 32 bits), resulting in a value between 0 and 255.
Truncating to 8 bits and throwing away the higher-order bits is simply how this checksum algorithm is defined. Historically, this sort of check value was used to provide some confidence that a block of bytes had been transferred correctly over a simple serial interface.
To answer your last question: yes, a 32-bit check value will be able to detect errors that would not be detected with an 8-bit check value.

Yes, the checksum is truncated to 8 bits by the &= 0xFF: the lowest 8 bits are kept and all higher bits are set to 0.
Narrowing the checksum to 8 bits does decrease its reliability. Just think of two 32-bit checksums that are different but whose lowest 8 bits are equal: truncated to 8 bits they compare equal, while at 32 bits they do not.
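For example (hypothetical data, just to illustrate the collision):
byte[] a = { 0x01, 0x02 };              // sum = 3
byte[] b = { 0x01, 0x02, 0xFF, 0x01 };  // sum = 259 (0x103)
// 32-bit checksums: 3 and 259 differ, so the difference is caught.
// 8-bit checksums: 3 and (0x103 & 0xFF) == 3 are equal, so it is not.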

Related

Does BitConverter handle little-endianness incorrectly?

I'm currently writing something in C#/.NET that involves sending unsigned 16-bit integers in a network packet. The ordering of the bytes needs to be big endian.
At the bit level, my understanding of 'big endian' is that the most significant bit goes at the end, and in reverse for little endian.
And at the byte level, my understanding is the same -- if I'm converting a 16 bit integer to the two 8 bit integers that comprise it, and the architecture is little endian, then I would expect the most significant byte to go at the beginning.
However, BitConverter appears to put the byte with the smallest value at the end of the array, as opposed to the byte with the least-significant value, e.g.
ushort number = 4;
var bytes = BitConverter.GetBytes(number);
Debug.Assert(bytes[BitConverter.IsLittleEndian ? 0 : 1] == 0);
For clarity, if my understanding is correct, then on a little-endian machine I would expect the above to return 0x00, 0x04, and on a big-endian machine 0x04, 0x00. However, on my little-endian Windows x86 workstation running .NET 5, it returns 0x04, 0x00.
It's even documented that they've considered the endianness. From: https://learn.microsoft.com/en-us/dotnet/api/system.bitconverter.getbytes?view=net-5.0
The order of bytes in the array returned by the GetBytes method depends on whether the computer architecture is little-endian or big-endian.
Am I being daft or does this seem like the wrong behaviour?
I am indeed being daft. As @mjwills pointed out, and Microsoft's documentation explains (https://learn.microsoft.com/en-us/dotnet/api/system.bitconverter.islittleendian?view=net-5.0#remarks):
"Big-endian" means the most significant byte is on the left end of a word. "Little-endian" means the most significant byte is on the right end of a word.
Wikipedia has a slightly better explanation:
A big-endian system stores the most significant byte of a word at the smallest memory address and the least significant byte at the largest. A little-endian system, in contrast, stores the least-significant byte at the smallest address.
So, if you imagine the memory addresses, converting a 16-bit integer with a value of 4 becomes:
Address         0x00   0x01
Little-endian   0x04   0x00
Big-endian      0x00   0x04
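Incidentally, if you need big-endian bytes on the wire regardless of the machine's architecture, one option in .NET 5 is BinaryPrimitives (a sketch; the value is hypothetical):
using System.Buffers.Binary;

ushort number = 4;
var bytes = new byte[2];
BinaryPrimitives.WriteUInt16BigEndian(bytes, number);
// bytes is { 0x00, 0x04 } on any architecture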
Hopefully this'll help anyone equally daft in future!

adding together the LSB and MSB to obtain a value

I am reading data back from an imaging camera system. The camera detects age, gender, etc., and one of the values that comes back is a confidence value. It is 2 bytes, presented as an LSB and an MSB. I have tried converting these to integers and adding them together, but I don't get the value I expected.
Is this the correct way to get a value from the LSB and MSB? I have not used this before.
Thanks
Your value is going to be:
Value = LSB + (MSB << 8);
Explanation:
A byte can only store 256 different values (0 to 255), whereas the value here (for this example) is 16 bits.
The MSB is the left-hand^ 8 bits of the 16, and as such needs to be shifted into that upper position before the two values are added.
I would suggest looking up the shifting operators.
^ based on endianness (Intel/Motorola)
Assuming that MSB and LSB are most/least significant byte (rather than bit or any other expansion of that acronym), the value can be obtained by MSB * 256 + LSB.
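In C#, that multiply-and-add is equivalent to the shift-and-add shown above (the byte values here are hypothetical):
byte lsb = 0x34;
byte msb = 0x12;
int value = lsb | (msb << 8); // 0x1234 == 4660
// Same result as msb * 256 + lsb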

Convert 11 bit hex value to a signed 32 bit int

I am given an 11-bit signed hex value that must be stored in an Int32 data type. When I cast the hex value to an Int32, the 11-bit value is obviously smaller than the Int32, so it zero-fills the higher-order bits.
Basically, I need to be able to store 11-bit signed values in an Int32 or Int16 from a given 11-bit hex value.
For example.
string hex = "7FF";
If I parse this to an Int32 using int.Parse(hex, System.Globalization.NumberStyles.HexNumber);
I get 2047 when it should be -1 (according to the 11-bit binary 111 1111 1111).
How can I accomplish this in c#?
It's actually very simple, just two shifts. Shifting right keeps the sign, so that's useful. In order to use it, the sign bit of the 11-bit value has to be aligned with the sign bit of the int:
x <<= -11;
Then do the right shift:
x >>= -11;
That's all.
The -11, which may seem odd, is just a shorter way to write 32 - 11. That's not in general the same thing, but shift counts are masked by 31 (ints) or 63 (longs), so in this case you can use that shortcut.
string hex = "0x7FF";
var i = (Convert.ToInt32(hex, 16) << 21) >> 21;
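Wrapped in a helper for reuse (a sketch; the method name is mine):
private static int SignExtend11(int raw)
{
    // Move bit 10 (the 11-bit sign bit) up to bit 31, then
    // arithmetic-shift back down, which replicates the sign bit.
    return (raw << 21) >> 21;
}
// SignExtend11(0x7FF) == -1
// SignExtend11(0x3FF) == 1023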

Is it possible to deal with individual bits in C#? Trying to implement a SHA256 generator

Just doing this for fun. I was reading the pseudo-code on Wikipedia, and it says when pre-processing to append the bit '1' to the message, then append enough '0' bits so that the resulting message length modulo 512 is 448, and then append the length of the message in bits as a 64-bit big-endian integer.
Okay. I'm not sure how to append just a '1' bit, but I figure it could be possible to just append 128 (1000 0000). That wouldn't work, though, in the off chance that appending the single '1' bit alone already brought the message length modulo 512 to 448, leaving no room for those extra 0s. In that case I'm not sure how to append just a 1, because I'd have to deal with at least whole bytes. Is it possible in C#?
Also, is there a built-in way to append a big-endian integer? I believe my system is little-endian by default.
It's defined in such a way that you only need to deal with whole bytes, provided the message itself is a whole number of bytes. If the message length (mod 64) is 56 bytes, then append one byte of 0b10000000, followed by 63 zero bytes, followed by the length. Otherwise, append one byte of 0b10000000, followed by 0 to 62 zero bytes, followed by the length.
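A byte-level sketch of that rule (assuming the message is a whole number of bytes; the method name is mine):
static byte[] PadMessage(byte[] message)
{
    ulong bitLength = (ulong)message.Length * 8;
    // Zero bytes needed so that (length + 1 + zeros) % 64 == 56.
    int zeros = (55 - message.Length % 64 + 64) % 64;
    var padded = new byte[message.Length + 1 + zeros + 8];
    System.Array.Copy(message, padded, message.Length);
    padded[message.Length] = 0x80; // the '1' bit plus seven '0' bits
    // Append the bit length as a 64-bit big-endian integer.
    for (int i = 0; i < 8; i++)
        padded[padded.Length - 1 - i] = (byte)(bitLength >> (8 * i));
    return padded;
}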
You might check out the BitArray class in System.Collections. One of the ctor overloads takes an array of bytes, etc.

What does this mean in C# while converting to an unsigned Int32?

(uint)Convert.ToInt32(elements[0]) << 24;
The << is the left shift operator.
Since the number is stored in binary, it will shift all the bits the specified number of places to the left.
If we have
2 << 1
This will take the number 2 in binary (00000010) and shift it to the left one bit. This gives you 4 (00000100).
Overflows
Note that once bits are shifted past the very left, they are discarded. So assume we are working with an 8-bit integer (I know the C# uint in your example is 32 bits; I don't want to type out a 32-bit number, so just assume we are on 8 bits):
255 << 1
will return 254 (11111110).
Use
Being very careful of the overflows mentioned before, bit shifting is a very fast way to multiply or divide by powers of 2. In a highly optimised environment (such as games) this is a very useful way to perform arithmetic quickly.
However, in your example it is taking only the rightmost 8 bits of the number and making them the leftmost 8 bits (multiplying by 16,777,216, i.e. 2^24). Why you would want to do this, I can only guess.
I guess you are referring to Shift operators.
As Mongus Pong said, shifts are usually used to multiply and divide very fast. (And can cause weird problems due to overflow).
I'm going to go out on a limb and trying to guess what your code is doing.
If elements[0] is a byte element (that is to say, it contains only 8 bits), then this code will result in a straightforward multiplication by 2^24. Otherwise, it will drop the 24 high-order bits and multiply what remains by 2^24.
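This kind of shift is typically one step in packing four bytes into one 32-bit value, most significant byte first (the data here is hypothetical):
byte[] elements = { 0x12, 0x34, 0x56, 0x78 };
uint packed = ((uint)elements[0] << 24)
            | ((uint)elements[1] << 16)
            | ((uint)elements[2] << 8)
            | (uint)elements[3];
// packed == 0x12345678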
