I am given an 11-bit signed hex value that must be stored in an int32 data type. When I cast the hex value to an int32, the 11-bit value is obviously smaller than the int32, so it zero-fills the higher-order bits.
Basically I need to be able to store 11-bit signed values in an int32 or int16 from a given 11-bit hex value.
For example.
string hex = "7FF";
If I parse this to an Int32 using int.Parse(hex, System.Globalization.NumberStyles.HexNumber),
I get 2047 when it should be -1 (according to the 11-bit binary 111 1111 1111).
How can I accomplish this in C#?
It's actually very simple: just two shifts. An arithmetic right shift on a signed int keeps the sign, so that's useful. To use it, the sign bit of the 11-bit value first has to be aligned with the sign bit of the int:
x <<= -11;
Then do the right shift:
x >>= -11;
That's all.
The -11, which may seem odd, is just a shorter way to write 32 - 11. That's not in general the same thing, but shift counts are masked with 31 (for ints) or 63 (for longs), so -11 becomes -11 & 31 = 21 = 32 - 11, and the shortcut works here.
string hex = "0x7FF";
var i = (Convert.ToInt32(hex, 16) << 21) >> 21;
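As a more general variant, here is a minimal sketch of a sign-extension helper for an arbitrary bit width; the SignExtend name and its parameters are illustrative, not part of the original answer:

static int SignExtend(int value, int bits)
{
    int shift = 32 - bits;            // 21 when bits == 11
    return (value << shift) >> shift; // arithmetic right shift copies the sign bit back down
}

// SignExtend(Convert.ToInt32("0x7FF", 16), 11) == -1
// SignExtend(Convert.ToInt32("0x3FF", 16), 11) == 1023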
I implemented this checksum algorithm I found, and it works fine but I can't figure out what this "&= 0xFF" line is actually doing.
I looked up the bitwise & operator, and Wikipedia says it's an AND of each bit in A with the corresponding bit in B. I also read that 0xFF is equivalent to 255 -- which should mean that all of the bits are 1. If you take any number & 0xFF, wouldn't that be the identity of the number? So A & 0xFF produces A, right?
So then I thought, wait a minute, checksum in the code below is a 32 bit Int, but 0xFF is 8bit. Does that mean that the result of checksum &= 0xFF is that 24 bits end up as zeros and only the remaining 8 bits are kept? In which case, checksum is truncated to 8 bits. Is that what's going on here?
private int CalculateChecksum(byte[] dataToCalculate)
{
    int checksum = 0;
    for (int i = 0; i < dataToCalculate.Length; i++)
    {
        checksum += dataToCalculate[i];
    }

    // What does this line actually do?
    checksum &= 0xff;

    return checksum;
}
Also, if the result is getting truncated to 8 bits, is that because 32 bits is pointless in a checksum? Is it possible to have a situation where a 32-bit checksum catches corrupt data that an 8-bit checksum doesn't?
It is masking off the higher bytes, leaving only the lower byte.
checksum &= 0xFF;
Is syntactically short for:
checksum = checksum & 0xFF;
Since the & is an integer operation, the 0xFF literal gets widened to an int:
checksum = checksum & 0x000000FF;
Which masks off the upper 3 bytes and returns the lower byte as an integer (not a byte).
To answer your other question: Since a 32-bit checksum is much wider than an 8-bit checksum, it can catch errors that an 8-bit checksum would not, but both sides need to use the same checksum calculations for that to work.
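For instance, a minimal sketch (the 0x1A3 value is made up for illustration) of the running sum spilling past one byte and the mask keeping only the low byte:

int checksum = 0x1A3;        // 419: the sum of the bytes has spilled past 8 bits
checksum &= 0xFF;            // 0x1A3 & 0x000000FF == 0xA3
Console.WriteLine(checksum); // 163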
Seems like you have a good understanding of the situation.
Does that mean that the result of checksum &= 0xFF is that 24 bits end up as zeros and only the remaining 8 bits are kept?
Yes.
Is it possible to have a situation where a 32 bit checksum catches corrupt data when 8 bit checksum doesn't?
Yes.
This is performing a simple checksum on the bytes (8-bit values) by adding them and ignoring any overflow out into higher-order bits. The final &= 0xFF, as you suspected, just truncates the value to the 8 least significant bits of the 32-bit int, resulting in an unsigned value between 0 and 255.
The truncation to 8 bits and throwing away the higher order bits is simply the algorithm defined for this checksum implementation. Historically this sort of check value was used to provide some confidence that a block of bytes had been transferred over a simple serial interface correctly.
To answer your last question: yes, a 32-bit check value will be able to detect errors that would not be detected with an 8-bit check value.
Yes, the checksum is truncated to 8 bits by the &= 0xFF: the lowest 8 bits are kept and all higher bits are set to 0.
Narrowing the checksum to 8 bits does decrease the reliability. Just think of two 32-bit checksums that are different but whose lowest 8 bits are equal: after truncating to 8 bits the two are equal, while as 32-bit values they are not.
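A minimal sketch of such a collision, reusing the CalculateChecksum method from the question (the byte arrays are made-up examples):

byte[] a = { 10, 20, 30 };   // sums to 60
byte[] b = { 100, 216 };     // sums to 316, and 316 & 0xFF == 60
// The full 32-bit sums differ (60 vs 316), so an unmasked checksum would catch
// the difference; after &= 0xFF both collapse to the same 8-bit value.
Console.WriteLine(CalculateChecksum(a)); // 60
Console.WriteLine(CalculateChecksum(b)); // 60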
I am trying to convert hex data to signed int/decimal and can't figure out what I'm doing wrong.
I need FE to turn into -2.
I'm using Convert.ToInt32(fields[10], 16) but am getting 254 instead of -2.
Any assistance would be greatly appreciated.
int is 32 bits wide, so 0xFE is REALLY being interpreted as 0x000000FE for the purposes of Convert.ToInt32(string, int), which is equal to 254 in the space of int.
Since you want to work with the signed-byte range of values, use Convert.ToSByte(string, int) instead (byte is unsigned in C#, so you need the sbyte type).
Convert.ToSByte("FE",16)
Interpret the value as a signed byte:
sbyte value = Convert.ToSByte("FE", 16); //-2
Well the bounds of Int32 are -2 147 483 648 to 2 147 483 647. So FE matches 254.
If you want values of 128 and above to wrap around to negative, the most elegant solution is probably to use a signed byte (sbyte):
csharp> Convert.ToSByte("FE",16);
-2
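If you have to stay in int, a minimal sketch of doing the same wrap-around by hand (this is an alternative to the sbyte answers above; the 128 threshold applies to single-byte values only):

int raw = Convert.ToInt32("FE", 16);        // 254
int value = raw >= 128 ? raw - 256 : raw;   // 254 - 256 == -2
Console.WriteLine(value);                   // -2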
I'm reading CLR via C# by Jeffrey Richter and on page 115 there is an example of an overflow resulting from arithmetic operations on primitives. Can someone please explain?
Byte b = 100;
b = (Byte) (b+200); // b now contains 44 (or 2C in Hex).
I understand that there should be an overflow, since byte is an unsigned 8-bit value, but why does its value equal 44?
100+200 is 300; 300 is (in bits):
1 0010 1100
Of this, only the last 8 bits are kept, so:
0010 1100
which is: 44
The binary representation of 300 is 100101100. That's nine bits, one more than the byte type has room for. Therefore the higher bit is discarded, causing the result to be 00101100. When you translate this value to decimal you get 44.
A byte can represent integers in the range 0 to 255 (inclusive). When you pass a value greater than 255, like 300, the value 300 - 256 = 44 is stored. This happens because a byte consists of 8 bits and each bit can be either 0 or 1, so a byte can represent 2^8 = 256 integers, starting from 0.
In other words, you divide your number by 256 and keep the remainder; only that remainder can be represented by a byte.
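A minimal sketch checking that discarding the ninth bit and taking the remainder modulo 256 agree:

byte b = 100;
b = (byte)(b + 200);            // the cast throws away everything above bit 7
Console.WriteLine(b);           // 44
Console.WriteLine(300 % 256);   // 44: same as taking the remainder
Console.WriteLine(300 & 0xFF);  // 44: same as masking with 0xFF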
I've had a good search, spent a few hours of wasted time and I can't do a simple bit shift in reverse :(
Dim result = VALUE >> 8 And &HFF
I have existing code that reads VALUE (a UInt16) from a file and does the bit shift on it. What I am trying to do is the reverse, so the value can be saved and then read back using the existing code above.
I've read up on bit shifting and read a great Code Project article, but it may as well be in Latin.
UInt16 tt = 12123; //10111101011011
int aa = tt >> 8 & 0xFF; //101111 = 47
Those 8 bits have disappeared; you can never get them back.
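A minimal sketch of why the reverse can only be approximate, reusing the 12123 value from above:

UInt16 tt = 12123;              // 10 1111 0101 1011
int kept = (tt >> 8) & 0xFF;    // 47 (0010 1111): the high byte of tt
int restored = kept << 8;       // 12032 (0010 1111 0000 0000), not 12123
// The low byte of tt (0101 1011 == 91) was shifted out and cannot be
// recovered from 'kept' alone.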
If you have the value 54, in binary 110110
If you shift 54 >> 2, it moves the bits to the right:
00110110
00011011 (shift once)
00001101 (shift twice)
You end up with 13. If you shift 13 to the left, 13 << 2:
00001101
00011010 (shift once)
00110100 (shift twice)
You will end up with 52, not the original 54, because the two low bits that were shifted out are gone.
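A minimal sketch of that round trip in C# (the 54/13/52 values are the ones from the walkthrough above):

int original = 54;            // 0011 0110
int shifted = original >> 2;  // 0000 1101 == 13; the two low bits (10) are lost
int back = shifted << 2;      // 0011 0100 == 52, not 54
Console.WriteLine(back);      // 52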
(uint)Convert.ToInt32(elements[0]) << 24;
The << is the left shift operator.
Numbers are stored in binary, so it shifts all of the number's bits the specified amount to the left.
If we have
2 << 1
This will take the number 2 in binary (00000010) and shift it to the left by one bit. This gives you 4 (00000100).
Overflows
Note that once a bit is shifted off the left end, it is discarded. So assume you are working with an 8-bit integer (I know the C# uint in your example is 32 bits; I don't want to type out a 32-bit value, so just assume we are on 8 bits):
255 << 1
will return 254 (11111110).
Use
Being very careful of the overflows mentioned before, bit shifting is a very fast way to multiply or divide by powers of 2. In a highly optimised environment (such as games) it is a useful way to perform that arithmetic cheaply.
However, in your example it takes only the rightmost 8 bits of the number and makes them the leftmost 8 bits (multiplying by 16,777,216, i.e. 2^24). Why you would want to do this, I can only guess.
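A minimal sketch of both effects; the input strings "5" and "300" are made-up stand-ins for whatever elements[0] holds:

uint a = (uint)Convert.ToInt32("5") << 24;    // 0x05000000, i.e. 5 * 16,777,216
uint b = (uint)Convert.ToInt32("300") << 24;  // 300 is 0x12C; only the 0x2C survives, giving 0x2C000000
Console.WriteLine(a);                         // 83886080
Console.WriteLine(b);                         // 738197504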
I guess you are referring to the shift operators.
As Mongus Pong said, shifts are usually used to multiply and divide very fast (and they can cause weird problems due to overflow).
I'm going to go out on a limb and try to guess what your code is doing.
If elements[0] is a byte element (that is to say, it contains only 8 bits), then this code will result in a straightforward multiplication by 2^24. Otherwise, it will drop the 24 high-order bits and multiply what is left by 2^24.