I have a single byte which contains two values. Here's the documentation:
The authority byte is split into two fields. The three least significant bits carry the user’s authority level (0-5). The five most
significant bits carry an override reject threshold. If these bits are
set to zero, the system reject threshold is used to determine whether
a score for this user is considered an accept or reject. If they are
not zero, then the value of these bits multiplied by ten will be the
threshold score for this user.
Authority Byte:
7 6 5 4 3 ......... 2 1 0
Reject Threshold .. Authority
I don't have any experience working with bits in C#.
Can someone please help me convert a byte and extract the two values described above?
I've tried the following code:
BitArray BA = new BitArray(mybyte);
But the length comes back as 29, and I would have expected 8, one for each bit in the byte.
-- Thanks for everyone's quick help. Got it working now! Awesome internet.
Instead of BitArray, you can more easily use the built-in bitwise AND and right-shift operators, as follows:
byte authorityByte = ...
int authorityLevel = authorityByte & 7;
int rejectThreshold = authorityByte >> 3;
To get the single byte back, you can use the bitwise OR and left-shift operators:
int authorityLevel = ...
int rejectThreshold = ...
Debug.Assert(authorityLevel >= 0 && authorityLevel <= 7);
Debug.Assert(rejectThreshold >= 0 && rejectThreshold <= 31);
byte authorityByte = (byte)((rejectThreshold << 3) | authorityLevel);
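For example, packing a hypothetical authority level of 5 with a reject threshold field of 12 and unpacking it again (values chosen arbitrarily for illustration):
byte packed = (byte)((12 << 3) | 5); // 0x65 == 01100101
int level = packed & 7;              // 5  (bits 2..0)
int threshold = packed >> 3;         // 12 (bits 7..3), i.e. a threshold score of 120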
Your use of the BitArray is incorrect. This:
BitArray BA = new BitArray(mybyte);
..will be implicitly converted to an int. When that happens, you're triggering this constructor:
BitArray(int length);
..therefore, it's creating it with a specific length: the value of your byte, which was evidently 29.
Looking at MSDN (http://msdn.microsoft.com/en-us/library/x1xda43a.aspx), you want this:
BitArray BA = new BitArray(new byte[] { myByte });
Length will then be 8 (as expected).
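If you do want to stay with BitArray, you could then read the two fields out bit by bit (a sketch; BitArray(byte[]) maps index 0 to the least significant bit of the byte):
BitArray BA = new BitArray(new byte[] { myByte });
int authority = 0, threshold = 0;
for (int i = 0; i < 3; i++)
    if (BA[i]) authority |= 1 << i;       // bits 0..2
for (int i = 3; i < 8; i++)
    if (BA[i]) threshold |= 1 << (i - 3); // bits 3..7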
To get the value of the five most significant bits of a byte as an integer, shift the byte to the right by 3 (i.e. by 8-5), and set the three upper bits to zero using a bitwise AND operation, like this:
byte orig = ...
int rejThreshold = (orig >> 3) & 0x1F;
>> is the "shift right" operator. It moves bits 7..3 into positions 4..0, dropping the three lower bits.
0x1F is 00011111 in binary: the upper three bits are zero and the lower five bits are one. AND-ing with this number zeroes out the three upper bits.
This technique can be generalized to get other bit patterns and other integral data types. You shift the bits that you want into the least-significant position, and apply a mask that "cuts out" the number of bits that you want. In some cases, shifting would not be necessary (e.g. when you get the least significant group of bits). In other cases, such as above, the masking would not be necessary, because you get the most significant group of bits in an unsigned type (if the type is signed, ANDing would be required).
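A sketch of that generalization (the helper name is mine, not from the question; it assumes 0 < count < 32):
// Extract 'count' bits starting at bit 'start' (0 = least significant).
static int ExtractBits(uint value, int start, int count)
{
    return (int)((value >> start) & ((1u << count) - 1));
}
// e.g. ExtractBits(orig, 3, 5) yields the reject threshold field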
You're using the wrong constructor (probably).
The one that you're using is probably BitArray(int length), while you need BitArray(byte[] bytes):
var bitArray = new BitArray(new[] { myByte });
Related
I would like to convert a number to a BitArray, with the resulting BitArray only being as big as it needs to be.
For instance:
BitArray tooBig = new BitArray(new int[] { 9 });
results in a BitArray with a length of 32 bits; however, for the value 9 only 4 bits are required. How can I create BitArrays which are only as long as they need to be? So in this example, 4 bits. Or for the number 260, I'd expect the BitArray to be 9 bits long.
You can figure out the bits first and then create the array: check whether the least significant bit is 1 or 0, then right-shift until the number is 0. Note this will not work for negative numbers, where the 32nd bit would be 1 to indicate the sign.
public BitArray ToShortestBitArray(int x)
{
    var bits = new List<bool>();
    while (x > 0)
    {
        bits.Add((x & 1) == 1); // record the least significant bit
        x >>= 1;                // and shift it away
    }
    return new BitArray(bits.ToArray());
}
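For example, ToShortestBitArray(9) yields a BitArray of length 4 (true, false, false, true, least significant bit first), and ToShortestBitArray(260) one of length 9.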
Assuming you are working exclusively with unsigned integers, the number of bits you need is equal to the base-2 logarithm of (number + 1), rounded up.
Counting the bits is probably the easiest solution.
In JavaScript for example...
// count bits needed to store a positive number
const bits = (x, b = 0) => x > 0 ? bits(x >> 1, b + 1) : b;
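Since the rest of this page is C#, a minimal equivalent sketch there (for non-negative x):
// count bits needed to store a positive number
static int BitsNeeded(int x) => x > 0 ? BitsNeeded(x >> 1) + 1 : 0;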
I've been programming for many years, but have never needed to use bitwise operations much, or to work with data at the bit or even byte level, until now. So, please forgive my lack of knowledge.
I'm having to process streaming message frame data that I'm getting via socket communication. The message frames are a series of bytes encoded big-endian, which I read into a byte array called byteArray. Take the following 2 bytes for example:
0x03 0x20
The data I need is represented in the first 14 bits - meaning I need to convert the first 14 bits into an int value. (The last 2 bits represent 2 other bool values). I have coded the following to accomplish this:
if (BitConverter.IsLittleEndian)
{
    Array.Reverse(byteArray);
}
BitArray bitArray = GetBitArrayFromRange(new BitArray(byteArray), 0, 14);
int dataValue = GetIntFromBitArray(bitArray);
The dataValue variable ends up with the correct result which is: 800
The two functions I'm calling are here:
private static BitArray GetBitArrayFromRange(BitArray bitArray, int startIndex, int length)
{
    var newBitArray = new BitArray(length);
    for (int i = startIndex; i < length; i++)
    {
        newBitArray[i] = bitArray.Get(i);
    }
    return newBitArray;
}
private static int GetIntFromBitArray(BitArray bitArray)
{
    int[] array = new int[1];
    bitArray.CopyTo(array, 0);
    return array[0];
}
Since I have a lack of experience in this area, my question is: Does this code look correct/reasonable? Or, is there a more preferred/conventional way of accomplishing what I need?
Thanks!
"The dataValue variable ends up with the correct result which is: 800"
Shouldn't that correct result be actually 200?
1) 00000011 00100000 : is the integer 0x0320 (so now drop the trailing two bits, 00)
2) xx000000 11001000 : is the extracted 14 bits (the two missing xx bits count as zero)
3) 00000000 11001000 : is the expected final result of the 14-bit extraction = 200
At present it looks like you have an empty (zero-filled) 16 bits into which you put the 14 bits, but you are putting them in exactly the same positions they came from (at the left-hand side instead of the right-hand side):
Original bits : 00000011 00100000
Slots 16 bit : XXXXXXXX XXXXXXXX
Instead of this : XX000000 11001000 //correct way
You have done : 00000011 001000XX //wrong way
Your right-hand-side XX bits are zero, so your result is 00000011 00100000, which gives 800; but that's wrong, because it's not the true value of the specific 14 bits you extracted.
"Is there a more preferred/conventional way of accomplishing what I
need?"
I guess bit-shifting is the conventional way...
Solution (pseudo-code) :
var myShort = 0x0320; // short means 2 bytes
var answer = (myShort >> 2); //bitshift to right-hand side by 2 places
By nudging everything 2 places to the right, the two now-empty places at the far left become the XX bits (automatically zero until you change them), and you have also just removed the 2 right-side bits you wanted to ignore, leaving you with the correct 14-bit value.
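Putting that together in C# for the two example bytes (a sketch; the variable names are mine):
byte[] byteArray = { 0x03, 0x20 };            // big-endian on the wire
int raw = (byteArray[0] << 8) | byteArray[1]; // 0x0320
int dataValue = raw >> 2;                     // top 14 bits -> 200
bool flag1 = (raw & 0x02) != 0;               // bit 1
bool flag0 = (raw & 0x01) != 0;               // bit 0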
PS:
Regarding your code... I've not had a chance to test it all, but the loop below seems more appropriate for your GetBitArrayFromRange function (it copies length bits, reading from startIndex + i rather than from i):
for (int i = 0; i < length; i++)
{
    newBitArray[i] = bitArray.Get(startIndex + i);
}
I am making an application in C# and I have hex numbers such as 0x0FF8, 0xFFFA, etc.
Here I want only the 12 bits from right to left. Suppose I have the number 0x0FF8.
So I just want to operate on FF8 (12 bits), and this is a signed number.
As a 12-bit two's complement value it is decimal -8. In my application I first have to find whether the number is negative or not, and after that its value.
I can't work out how to do this efficiently in C#, and it has to be very fast.
The number representation is such that 0x0FF8 = -8; please see http://www.swarthmore.edu/NatSci/echeeve1/Ref/BinaryMath/NumSys.html
erm,
To get only the rightmost twelve bits you could do:
var right12 = 0x0FFF & yourNumber;
To find out whether it is negative or positive, do:
var positive = yourNumber >= 0;
var absoluteValue = Math.Abs(yourNumber); // assuming yourNumber is Int32
var low12 = 0xFFF & absoluteValue;
This does a bitwise AND against a bit mask for the twelve bits you want to keep.
To check if a signed integer value is negative, you have to check its leftmost bit. If the bit is set, the value is negative.
However, you only have 12 bits, while an int has 32 bits. So when you put the 12 bits in an int, the other 20 bits are reset to zero (i.e. not set). The leftmost bit (#31) is therefore not set, and the int value is not seen as a negative one.
You have to check bit #11 and set the 20 other bits if the 11th is set:
int Value = 0x0FF8;
// Check bit #11
if ((Value & 0x0800) != 0)
{
    // Set the 20 other bits to make the int value a negative one
    // (the unchecked cast is needed because 0xFFFFF000 is a uint literal)
    Value |= unchecked((int)0xFFFFF000);
}
You can also do the same thing by using short instead of int. A short only has 16 bits, so:
short Value = 0x0FF8;
// Check bit #11
if ((Value & 0x0800) != 0)
{
    // Set the 4 other bits to make the short value a negative one
    Value |= unchecked((short)0xF000);
}
The int version is probably the best one to use, to avoid casts elsewhere in the code.
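The test-and-set above can also be generalized into a helper that sign-extends any bit width by shifting the field up to the top of the int and back down; the arithmetic right shift replicates the sign bit. A sketch (the helper name is mine):
// Sign-extend the low 'bits' bits of 'value' (assumes 0 < bits <= 32).
static int SignExtend(int value, int bits)
{
    int shift = 32 - bits;
    return (value << shift) >> shift;
}
// SignExtend(0x0FF8, 12) == -8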
I'm using the MD5 algorithm to hash the key for an on-disk hash table (I know it's questionable whether this is the best algorithm to use for this, but I'm going with it for now. The problem is generalizable to any algorithm that produces a byte array). My problem is this:
The size of the hash code determines the number of combinations (buckets) in the hash table. Since MD5 is 128 bit, there are a huge number of combinations (~ 3.4e38) which is way too big for my purpose. So what I want to do is pick off the first n bits of the byte array that MD5 produces, and convert those into a long (or ulong) value. Since MD5 produces a byte array, it would be easy to do if I wanted an integral number of bytes, but this leads to too big a jump in the number of combinations. I'm finding the single bit version to be a lot trickier.
Goal:
n = 10 // I.e. I want 2^10 combinations
long pos = someFcn(byte[] key, n)
where key is the value being hashed, and n is the number of bits of the MD5 result I want to use. pos, then, will be an integer from 0 to 1023 (in the case of n = 10). If n = 11, the code will be from 0 to 2^11-1 = 2047, etc. It has to be somewhat fast/efficient.
Doesn't seem that hard but it's eluding me. Any help would be much appreciated. Thanks.
First, convert the first four bytes into an integer with BitConverter.ToInt32. It reads four bytes no matter what, but this probably won't make it measurably slower, since you're working with 32-bit registers for the rest of the calculations anyway; special-casing, like "if n < 16 then only use the first two bytes", would just make it more complicated.
Then, given that integer, take the lowest N bits. If you really want a specific number of bits [a power of two number of buckets] not known at compile time, ~((-1)<<N) is a nice trick to get 2^N-1.
Or you could simply use ToUInt32 instead and modulo a prime number [it might be slightly better to convert to UInt64 instead, then you've got fully half the bits to start with, in this case]
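A sketch combining those suggestions (MD5.Create, ComputeHash, and BitConverter.ToInt32 are the standard .NET APIs; the wrapper name is mine, and it assumes 0 < n < 32):
using System;
using System.Security.Cryptography;

static long BucketFor(byte[] key, int n)
{
    using (var md5 = MD5.Create())
    {
        byte[] hash = md5.ComputeHash(key);
        int first = BitConverter.ToInt32(hash, 0);
        return first & ~((-1) << n); // keep the lowest n bits
    }
}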
To obtain the first 10 bits, for example:
int result = ((int)key[0] << 2) | (((int)key[1] >> 6) & 0x03);
If you have an array like this,
unsigned char data[2000];
then you can just scrape off the first n bits into an integer like so:
typedef unsigned long long int MyInt;
MyInt scrape(size_t n, unsigned char * data)
{
    MyInt result = 0;
    size_t b;
    for (b = 0; b < n / 8; ++b)
    {
        result <<= 8;            /* make room for the next whole byte */
        result += data[b];
    }
    const size_t remaining_bits = n % 8;
    if (remaining_bits != 0)     /* avoid reading past the last needed byte */
    {
        result <<= remaining_bits;
        result += (data[b] >> (8 - remaining_bits));
    }
    return result;
}
I'm assuming that CHAR_BIT == 8; feel free to generalize the code if you like. Also, the size of the array times 8 must be at least n.
I have a control that has a byte array in it.
Every now and then there are two bytes that tell me some info about the number of future items in the array.
So as an example I could have:
...
...
Item [4] = 7
Item [5] = 0
...
...
The value of this is clearly 7.
But what about this?
...
...
Item [4] = 0
Item [5] = 7
...
...
Any idea on what that equates to (as an normal int)?
I went to binary and thought it might be 11100000000, which equals 1792. But I don't know if that is how it really works (i.e. does it use all 8 bits of each byte).
Is there any way to know this without testing?
Note: I am using C# 3.0 and visual studio 2008
BitConverter can easily convert the two bytes in a two-byte integer value:
// assumes byte[] Item = someObject.GetBytes():
short num = BitConverter.ToInt16(Item, 4); // makes a short
// out of Item[4] and Item[5]
A two-byte number has a low and a high byte. The high byte is worth 256 times as much as the low byte:
value = 256 * high + low;
So, for high=0 and low=7, the value is 7. But for high=7 and low=0, the value becomes 1792.
This of course assumes that the number is a simple 16-bit integer. If it's anything fancier, the above won't be enough. Then you need more knowledge about how the number is encoded, in order to decode it.
The order in which the high and low bytes appear is determined by the endianness of the byte stream. In big-endian, you will see high before low (at a lower address), in little-endian it's the other way around.
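A quick demonstration with the question's second pair of bytes (a sketch; which byte is "high" depends on the stream's endianness):
byte b4 = 0, b5 = 7;              // Item[4] and Item[5] from the question
int bigEndian = 256 * b4 + b5;    // b4 is the high byte -> 7
int littleEndian = 256 * b5 + b4; // b5 is the high byte -> 1792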
You say "this value is clearly 7", but it depends entirely on the encoding. If we assume full-width bytes, then in little-endian, yes; 7, 0 is 7. But in big endian it isn't.
For little-endian, what you want is (writing data for the byte array, since byte is a reserved word):
int value = data[i] | (data[i + 1] << 8);
and for big-endian:
int value = (data[i] << 8) | data[i + 1];
But other encoding schemes are available; for example, some schemes use 7-bit arithmetic, with the 8th bit as a continuation bit. Other schemes (such as UTF-8) signal the length of the sequence in the first byte, so the first byte has only limited room for data bits, and each continuation byte carries six data bits behind a fixed two-bit prefix.
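For instance, a decoder for the 7-bit-plus-continuation-bit scheme mentioned above (a little-endian base-128 varint, as used by e.g. protocol buffers; a sketch in C#):
// Decode a varint: 7 data bits per byte, least significant group first;
// a set high bit means "another byte follows".
static int ReadVarint(byte[] data, ref int pos)
{
    int result = 0, shift = 0;
    byte b;
    do
    {
        b = data[pos++];
        result |= (b & 0x7F) << shift;
        shift += 7;
    } while ((b & 0x80) != 0);
    return result;
}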
If you simply want to put those two bytes next to each other in binary format, and see what that big number is in decimal, then you need to use this code:
ushort num;
if (BitConverter.IsLittleEndian)
{
    // swap the two bytes so Item[4] is treated as the high byte
    byte[] tempByteArray = new byte[2] { Item[5], Item[4] };
    num = BitConverter.ToUInt16(tempByteArray, 0);
}
else
{
    num = BitConverter.ToUInt16(Item, 4);
}
If you use short num = BitConverter.ToInt16(Item, 4); as seen in the accepted answer, you are assuming that the most significant bit of those two bytes is a sign bit (1 = negative, 0 = positive). That answer also assumes you are using a big-endian system.
If those bytes are the "parts" of an integer it works like that. But beware, that the order of bytes is platform specific and that it also depends on the length of the integer (16 bit=2 bytes, 32 bit=4bytes, ...)
In case that item[5] is the MSB
ushort result = BitConverter.ToUInt16(new byte[2] { Item[5], Item[4] }, 0);
int result = 256 * Item[5] + Item[4];