Reading 4 bits without losing information - C#

I have come across a problem I cannot seem to solve.
I have a file type, "ASDF", and its header contains the information I need in order to read the file. The problem is that one of the "fields" is only 4 bits long.
So, let's say it's laid out like this:
From bit 0 to bit 8: the index of the current node (I've read this already)
From bit 8 to bit 16: the index of the next node (read this as well)
From bit 16 to bit 20: the length of the content (string, etc.)
So my problem is that if I try to read the "length" with a byte reader I will lose 4 bits of information, or end up "4 bits off". Is there any way to read only 4 bits?

You should read this byte as you read the others, then apply a bitmask of 0x0F.
For example:
byte result = (byte)(byteRead & 0x0F);
This will preserve the lower four bits in the result.
If the needed bits are the high four, mask them out and shift them down instead:
byte result = (byte)((byteRead & 0xF0) >> 4);
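Putting it together for the layout in the question, a minimal sketch (the field names, the sample bytes, and the assumption that the 4-bit length sits in the low nibble of the third header byte are all illustrative):
using System;
using System.IO;

class HeaderDemo
{
    static void Main()
    {
        // Hypothetical 3-byte header: current node 0x12, next node 0x34,
        // and the 4-bit length (5) packed into the low nibble of the third byte.
        byte[] raw = { 0x12, 0x34, 0xA5 };

        using var reader = new BinaryReader(new MemoryStream(raw));

        byte currentNode = reader.ReadByte();        // bits 0-7
        byte nextNode    = reader.ReadByte();        // bits 8-15
        byte packed      = reader.ReadByte();        // contains bits 16-23
        byte length      = (byte)(packed & 0x0F);    // keep only bits 16-19

        Console.WriteLine($"current={currentNode}, next={nextNode}, length={length}");
    }
}
Whether the length occupies the low or the high nibble of that byte depends on the file format, so the 0x0F mask may need to become 0xF0 with a shift instead.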

Related

Set last 4 bits to 0

I want to set the last 4 bits of my byte list to 0. For example:
myData = 1111 1111
myData should become 1111 0000
I've tried this line of code, but it does not work:
(myData & 0x0F) >> 4
Assuming that by "last 4 bits" you mean the 4 least significant bits, here is a code example:
var myData = 0xFF;
var result = myData & ~0xF;
So, basically, what you want here is not so much to set the 4 least significant bits to 0 as to preserve the rest of the data. To achieve this you build a "pass-through" mask that matches that criterion: the one's complement of the mask covering the unwanted bits, i.e. the one's complement of 0xF (note that 0xF = 2^4 - 1, where 4 is the number of least significant bits to clear). Thus ~0xF is the desired mask; you just apply it to the number: myData & ~0xF.
N.B. The one's-complement approach is better than a magic number you pre-compute yourself (such as a hard-coded 0xF0), as the compiler will generate the correct number of most significant bits for whatever type you apply this approach to.
An even safer approach is to take the one's complement of the variable itself, giving the code below:
var myData = 0xFF;
var result = ~(~myData | 0xF);
That's it!
To preserve the 4 high bits and zero the 4 low bits:
myData & 0xF0
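For completeness, a small runnable snippet showing that all three forms give the same result (the values are purely illustrative):
using System;

class MaskDemo
{
    static void Main()
    {
        int myData = 0xFF;                    // 1111 1111

        int cleared1 = myData & ~0xF;         // one's-complement mask
        int cleared2 = ~(~myData | 0xF);      // mask derived from the value itself
        int cleared3 = myData & 0xF0;         // hard-coded high-nibble mask

        Console.WriteLine($"{cleared1:X2} {cleared2:X2} {cleared3:X2}");  // F0 F0 F0
    }
}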

What does "int &= 0xFF" in a checksum do?

I implemented this checksum algorithm I found, and it works fine but I can't figure out what this "&= 0xFF" line is actually doing.
I looked up the bitwise & operator, and Wikipedia says it performs a logical AND of each bit in A with the corresponding bit in B. I also read that 0xFF is equivalent to 255, which should mean that all of its bits are 1. If you take any number & 0xFF, wouldn't that just give you back the number? So A & 0xFF produces A, right?
So then I thought, wait a minute: checksum in the code below is a 32-bit int, but 0xFF is only 8 bits. Does that mean that the result of checksum &= 0xFF is that 24 bits end up as zeros and only the remaining 8 bits are kept? In which case, checksum is truncated to 8 bits. Is that what's going on here?
private int CalculateChecksum(byte[] dataToCalculate)
{
    int checksum = 0;
    for (int i = 0; i < dataToCalculate.Length; i++)
    {
        checksum += dataToCalculate[i];
    }
    // What does this line actually do?
    checksum &= 0xff;
    return checksum;
}
Also, if the result is getting truncated to 8 bits, is that because 32 bits is pointless in a checksum? Is it possible to have a situation where a 32 bit checksum catches corrupt data when 8 bit checksum doesn't?
It is masking off the higher bytes, leaving only the lower byte.
checksum &= 0xFF;
is syntactic shorthand for:
checksum = checksum & 0xFF;
Since the operation is performed on ints, the 0xFF is widened to an int:
checksum = checksum & 0x000000FF;
This masks off the upper 3 bytes and returns the lower byte as an integer (not a byte).
To answer your other question: Since a 32-bit checksum is much wider than an 8-bit checksum, it can catch errors that an 8-bit checksum would not, but both sides need to use the same checksum calculations for that to work.
Seems like you have a good understanding of the situation.
Does that mean that the result of checksum &= 0xFF is that 24 bits end up as zeros and only the remaining 8 bits are kept?
Yes.
Is it possible to have a situation where a 32 bit checksum catches corrupt data when 8 bit checksum doesn't?
Yes.
This performs a simple checksum on the bytes (8-bit values) by adding them and ignoring any overflow into higher-order bits. The final &= 0xFF, as you suspected, just truncates the value to the 8 least significant bits of the 32-bit int (in C#, int is always 32 bits), resulting in a value between 0 and 255.
The truncation to 8 bits, throwing away the higher-order bits, is simply how this checksum implementation is defined. Historically this sort of check value was used to provide some confidence that a block of bytes had been transferred correctly over a simple serial interface.
To answer your last question: yes, a 32-bit check value will be able to detect errors that would not be detected with an 8-bit check value.
Yes, the checksum is truncated to 8 bits by the
&= 0xFF. The lowest 8 bits are kept and all higher bits are set to 0.
Narrowing the checksum to 8 bits does decrease its reliability. Just think of two 32-bit checksums that differ but whose lowest 8 bits are equal: after truncation to 8 bits they compare equal, whereas the full 32-bit values do not.
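As a hedged illustration of that last point (the payloads are arbitrary), two byte arrays whose sums differ by a multiple of 256 collide once the sum is truncated to 8 bits, but not before:
using System;

class ChecksumDemo
{
    static int Sum(byte[] data)
    {
        int checksum = 0;
        foreach (byte b in data) checksum += b;
        return checksum;
    }

    static void Main()
    {
        byte[] a = { 10, 20, 30 };            // sum = 60
        byte[] b = { 100, 200, 16 };          // sum = 316 = 60 + 256

        Console.WriteLine((Sum(a) & 0xFF) == (Sum(b) & 0xFF));  // True  - 8-bit checksums collide
        Console.WriteLine(Sum(a) == Sum(b));                    // False - 32-bit sums still differ
    }
}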

C# 4 bit data type [closed]

Does C# have a 4-bit data type? I want to make a program whose variables waste the minimum amount of memory, because the program will consume a lot of it.
For example: I need to store a value that I know will go from 0 to 10, and a 4-bit variable can go from 0 to 15, which is perfect. But the closest thing I found was the 8-bit (1 byte) Byte data type.
I have the idea of creating a C++ DLL with a custom data type, something like a nibble. But if that's the solution to my problem, I don't know where to start or what I have to do.
Limitations: Creating a Byte and splitting it in two is NOT an option.
No, there is no such thing as a four-bit data type in C#.
Incidentally, four bits will only store a number from 0 to 15, so it doesn't sound like it is fit for purpose if you are storing values from 0 to 127. To compute the maximum value a variable with N bits can hold, use the formula (2^N) - 1: for 4 bits, 2^4 - 1 = 16 - 1 = 15.
If you need to use a data type that is less than 8 bits in order to save space, you will need to use a packed binary format and special code to access it.
You could for example store two four-bit values in a byte using an AND mask plus a bit shift, e.g.
byte source = 0xAD;
var hiNybble = (source & 0xF0) >> 4; //Left hand nybble = A
var loNybble = (source & 0x0F); //Right hand nybble = D
Or using integer division and modulus, which works well too but maybe isn't quite as readable:
var hiNybble = source / 16;
var loNybble = source % 16;
And of course you can use an extension method.
// Note: extension methods must be declared in a non-generic static class.
static byte GetLowNybble(this byte input)
{
    return (byte)(input % 16);   // cast needed: % on a byte yields an int
}
static byte GetHighNybble(this byte input)
{
    return (byte)(input / 16);
}
var hiNybble = source.GetHighNybble();
var loNybble = source.GetLowNybble();
Storing it is easier:
var source = hiNybble * 16 + loNybble;
Updating just one nybble is harder:
source = (source & 0xF0) | loNybble;          // update only the low four bits
source = (source & 0x0F) | (hiNybble << 4);   // update only the high four bits
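To give an idea of what "packing and unpacking yourself" can look like in practice, here is a minimal sketch of a nibble array that stores two 4-bit values per byte. The class name and API are invented for illustration; this is one possible layout, not the only one:
using System;

class NibbleArray
{
    private readonly byte[] _data;

    public NibbleArray(int length) => _data = new byte[(length + 1) / 2];

    public byte this[int index]
    {
        get => (byte)(index % 2 == 0
            ? _data[index / 2] & 0x0F                              // even index: low nibble
            : (_data[index / 2] >> 4) & 0x0F);                     // odd index: high nibble
        set
        {
            if (value > 0x0F) throw new ArgumentOutOfRangeException(nameof(value));
            int i = index / 2;
            _data[i] = index % 2 == 0
                ? (byte)((_data[i] & 0xF0) | value)                // replace low nibble
                : (byte)((_data[i] & 0x0F) | (value << 4));        // replace high nibble
        }
    }
}
Usage would look something like: var a = new NibbleArray(4); a[0] = 10; a[3] = 7; — four values in two bytes of backing storage.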
A 4-bit data type (AKA a nibble) only goes from 0 to 15. It requires 7 bits to go from 0 to 127. You need a byte, essentially.
No, C# does not have a 4-bit numeric data type. If you wish to pack 2 4-bit values in a single 8-bit byte, you will need to write the packing and unpacking code yourself.
No; even a bool takes up 8 bits (one byte) of storage.
You can use the >> and << operators to store and read two 4-bit values in one byte.
https://msdn.microsoft.com/en-us/library/a1sway8w.aspx
https://msdn.microsoft.com/en-us/library/xt18et0d.aspx
Depending on how many of your nibbles you need to handle and how much of an issue performance is over memory usage, you might want to have a look at the BitArray and BitVector32 classes. For passing values around, you'd still need bigger types though.
Yet another option could also be StructLayout fiddling, ... beware of dragons though.
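If BitVector32 sounds interesting, a brief sketch of carving a 32-bit vector into 4-bit sections (the section names are made up):
using System;
using System.Collections.Specialized;

class BitVectorDemo
{
    static void Main()
    {
        // Two consecutive 4-bit sections; maxValue 15 means each section is 4 bits wide.
        BitVector32.Section low  = BitVector32.CreateSection(15);
        BitVector32.Section high = BitVector32.CreateSection(15, low);

        var vector = new BitVector32(0);
        vector[low]  = 10;
        vector[high] = 3;

        Console.WriteLine($"{vector[low]} {vector[high]} raw=0x{vector.Data:X}");  // 10 3 raw=0x3A
    }
}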

Adding together the LSB and MSB to obtain a value

I am reading data back from an imaging camera system. The camera detects age, gender, etc., and one of the values that comes back is a confidence value. It is 2 bytes long and is given as an LSB and an MSB. I have tried converting these to integers and adding them together, but I don't get the expected value.
Is this the correct way to get a value from an LSB and MSB? I have not used this before.
Thanks
Your value is going to be:
Value = LSB + (MSB << 8);
Explanation:
A byte can only store values from 0 to 255, whereas the combined result (for this example) is 16 bits wide.
The MSB forms the left-hand^ half of those 16 bits, so it needs to be shifted 8 bits to the left before the two values are added.
I would suggest looking up the shifting operators.
^ depending on endianness (Intel/Motorola)
Assuming that MSB and LSB are most/least significant byte (rather than bit or any other expansion of that acronym), the value can be obtained by MSB * 256 + LSB.
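A minimal sketch of both forms, with arbitrary sample bytes:
using System;

class LsbMsbDemo
{
    static void Main()
    {
        byte lsb = 0x34;   // least significant byte
        byte msb = 0x12;   // most significant byte

        int viaShift = lsb + (msb << 8);   // 0x1234 = 4660
        int viaMul   = msb * 256 + lsb;    // same value

        Console.WriteLine($"{viaShift} {viaMul}");
    }
}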

What does this mean in C# when converting to an unsigned int 32?

(uint)Convert.ToInt32(elements[0]) << 24;
The << is the left shift operator.
Given that the number is a binary number, it will shift all the bits the specified amount to the left.
If we have
2 << 1
This will take the number 2 in binary (00000010) and shift it to the left one bit. This gives you 4 (00000100).
Overflows
Note that once bits are shifted past the leftmost position, they are discarded. So, assuming you are working with an 8-bit integer (I know the C# uint in your example is 32 bits, but I don't want to type out 32-bit values, so just assume we are on 8 bits),
255 << 1
will return 254 (11111110).
Use
Being very careful of the overflows mentioned before, bit shifting is a very fast way to multiply or divide by 2. In a highly optimised environment (such as games) this is a very useful way to perform arithmetic very fast.
However, in your example, it takes only the rightmost 8 bits of the number and makes them the leftmost 8 bits (multiplying by 2^24 = 16,777,216). Why you would want to do this, I can only guess.
I guess you are referring to the shift operators.
As Mongus Pong said, shifts are usually used to multiply and divide very fast. (And they can cause weird problems due to overflow.)
I'm going to go out on a limb and try to guess what your code is doing.
If elements[0] is a byte-sized element (that is to say, it contains only 8 bits), then this code results in a straightforward multiplication by 2^24. Otherwise, it drops the 24 high-order bits and multiplies by 2^24.
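For context, shifts like the one in the question are typically used to assemble a 32-bit value from individual byte-sized fields. A hedged sketch (the elements array and its values are made up):
using System;

class ShiftDemo
{
    static void Main()
    {
        // Hypothetical byte-sized fields stored as strings, e.g. parsed from a text record.
        string[] elements = { "18", "52", "86", "120" };   // 0x12, 0x34, 0x56, 0x78

        uint value = (uint)Convert.ToInt32(elements[0]) << 24    // highest byte
                   | (uint)Convert.ToInt32(elements[1]) << 16
                   | (uint)Convert.ToInt32(elements[2]) << 8
                   | (uint)Convert.ToInt32(elements[3]);         // lowest byte

        Console.WriteLine($"0x{value:X8}");   // 0x12345678
    }
}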
