Convert 2 bytes to a number - c#

I have a control that has a byte array in it.
Every now and then there are two bytes that tell me some info about number of future items in the array.
So as an example I could have:
...
...
Item [4] = 7
Item [5] = 0
...
...
The value of this is clearly 7.
But what about this?
...
...
Item [4] = 0
Item [5] = 7
...
...
Any idea what that equates to (as a normal int)?
I converted it to binary and thought it might be 11100000000, which equals 1792. But I don't know if that is how it really works (i.e., does it use all 8 bits of each byte?).
Is there any way to know this without testing?
Note: I am using C# 3.0 and Visual Studio 2008

BitConverter can easily convert the two bytes into a two-byte integer value:
// assumes byte[] Item = someObject.GetBytes():
short num = BitConverter.ToInt16(Item, 4); // makes a short
// out of Item[4] and Item[5]

A two-byte number has a low and a high byte. The high byte is worth 256 times as much as the low byte:
value = 256 * high + low;
So, for high=0 and low=7, the value is 7. But for high=7 and low=0, the value becomes 1792.
This of course assumes that the number is a simple 16-bit integer. If it's anything fancier, the above won't be enough. Then you need more knowledge about how the number is encoded, in order to decode it.
The order in which the high and low bytes appear is determined by the endianness of the byte stream. In big-endian, the high byte comes before the low byte (at the lower address); in little-endian it's the other way around.
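For illustration, a minimal runnable sketch of that formula (the array contents are just the example values from the question; the class and variable names are made up):
using System;

class HighLowDemo
{
    static void Main()
    {
        // Hypothetical data standing in for the control's byte array.
        byte[] item = { 0, 0, 0, 0, 7, 0 };

        byte low = item[4];            // low byte
        byte high = item[5];           // high byte
        int value = 256 * high + low;  // value = 256 * high + low

        Console.WriteLine(value);      // 7 here; 1792 if the two bytes were swapped
    }
}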

You say "this value is clearly 7", but it depends entirely on the encoding. If we assume full-width bytes, then in little-endian, yes; 7, 0 is 7. But in big-endian it isn't.
For little-endian, what you want is
int value = buffer[i] | (buffer[i + 1] << 8);
and for big-endian:
int value = (buffer[i] << 8) | buffer[i + 1];
(where buffer is the byte array and i is the offset of the first of the two bytes).
But other encoding schemes are available; for example, some schemes use 7-bit arithmetic, with the 8th bit as a continuation flag. Others, such as UTF-8, encode the length of the sequence in the first byte (so the first byte has only limited room for data bits) and mark each continuation byte with a fixed two-bit prefix, leaving 6 data bits per continuation byte.
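As an illustration of the 7-bit-plus-continuation-bit idea mentioned above (a varint-style scheme; this is not necessarily what the control in the question uses), a minimal decoding sketch:
using System;

class VarIntDemo
{
    // Decodes a little-endian base-128 varint: each byte contributes 7 data
    // bits, and the 8th (most significant) bit signals that more bytes follow.
    static int DecodeVarInt(byte[] data, int offset)
    {
        int result = 0;
        int shift = 0;
        while (true)
        {
            byte b = data[offset++];
            result |= (b & 0x7F) << shift;   // take the 7 data bits
            if ((b & 0x80) == 0) break;      // continuation bit clear: done
            shift += 7;
        }
        return result;
    }

    static void Main()
    {
        // 1792 = binary 1110000000 0 -> low 7 bits are 0, next bits are 14,
        // so it encodes as 0x80, 0x0E in this scheme.
        Console.WriteLine(DecodeVarInt(new byte[] { 0x80, 0x0E }, 0)); // 1792
    }
}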

If you simply want to put those two bytes next to each other in binary form (Item[4] as the high byte) and see what that number is in decimal, you can use code like this:
ushort num;
if (BitConverter.IsLittleEndian)
{
    // Swap the bytes so Item[4] ends up as the high byte.
    num = BitConverter.ToUInt16(new byte[2] { Item[5], Item[4] }, 0);
}
else
{
    num = BitConverter.ToUInt16(Item, 4);
}
If you use short num = BitConverter.ToInt16(Item, 4); as seen in the accepted answer, you are treating the most significant bit of those two bytes as a sign bit (1 = negative, 0 = positive), and you are assuming that the byte order in the array matches your machine's endianness. See this for more info on the sign bit.

If those bytes are the "parts" of an integer, it works like that. But beware that the byte order is platform-specific, and that it also depends on the length of the integer (16 bit = 2 bytes, 32 bit = 4 bytes, ...).

In case Item[5] is the most significant byte (MSB):
ushort result = BitConverter.ToUInt16(new byte[2] { Item[4], Item[5] }, 0); // on a little-endian machine
or simply:
int result = 256 * Item[5] + Item[4];

Related

c# convert 2 bytes into int value

First, I have read many posts and tried BitConverter methods for the conversion, but I haven't got the desired result.
From a 2 byte array of:
byte[] dateArray = new byte[] { 0x07 , 0xE4 };
I need to get an integer with the value 2020, i.e. the decimal value of 0x7E4.
The following method does not return the desired value:
int i1 = BitConverter.ToInt16(dateArray, 0);
Endianness tells you how numbers are stored on your computer. There are two possibilities: little endian and big endian.
Big endian means the most significant byte is stored first, i.e. 2020 becomes 0x07, 0xE4.
Little endian means the least significant byte is stored first, i.e. 2020 becomes 0xE4, 0x07.
Most computers are little endian, hence the other way round from what a human would expect. With BitConverter.IsLittleEndian, you can check which type of endianness your computer has. Your code would become:
byte[] dateArray = new byte[] { 0x07 , 0xE4 };
if(BitConverter.IsLittleEndian)
{
Array.Reverse(dateArray);
}
int i1 = BitConverter.ToInt16(dateArray, 0);
Alternatively, without BitConverter:
int i1 = (dateArray[0] << 8) | dateArray[1];

What is the conventional way to convert a range of bits to an integer value in c# .net?

I've been programming for many years, but have never needed to use bitwise operations too much or really deal with data too much on a bit or even byte level, until now. So, please forgive my lack of knowledge.
I'm having to process streaming message frame data that I'm getting via socket communication. The message frames are a series of hex bytes encoded Big Endian which I read into a byte array called byteArray. Take the following 2 bytes for example:
0x03 0x20
The data I need is represented in the first 14 bits - meaning I need to convert the first 14 bits into an int value. (The last 2 bits represent 2 other bool values). I have coded the following to accomplish this:
if (BitConverter.IsLittleEndian)
{
Array.Reverse(byteArray);
}
BitArray bitArray = GetBitArrayFromRange(new BitArray(byteArray), 0, 14);
int dataValue = GetIntFromBitArray(bitArray);
The dataValue variable ends up with the correct result which is: 800
The two functions I'm calling are here:
private static BitArray GetBitArrayFromRange(BitArray bitArray, int startIndex, int length)
{
var newBitArray = new BitArray(length);
for (int i = startIndex; i < length; i++)
{
newBitArray[i] = bitArray.Get(i);
}
return newBitArray;
}
private static int GetIntFromBitArray(BitArray bitArray)
{
int[] array = new int[1];
bitArray.CopyTo(array, 0);
return array[0];
}
Since I have a lack of experience in this area, my question is: Does this code look correct/reasonable? Or, is there a more preferred/conventional way of accomplishing what I need?
Thanks!
"The dataValue variable ends up with the correct result which is: 800"
Shouldn't that correct result be actually 200?
1) 00000011 00100000 : the two bytes as the integer 0x0320 (now drop the trailing two bits, 00...)
2) xx000000 11001000 : the extracted 14 bits, shifted into the low positions (the two missing bits, xx, count as zero)
3) 00000000 11001000 : the expected final result of the 14-bit extraction = 200
At present it looks like you have an empty (zero-filled) 16 bits into which you put the 14 bits, but you are putting them in the same positions they started in (anchored at the left-hand side instead of the right-hand side).
Original bits : 00000011 00100000
Slots 16 bit : XXXXXXXX XXXXXXXX
Instead of this : XX000000 11001000 //correct way
You have done : 00000011 001000XX //wrong way
Your right-hand side XX are zero, so your result is 00000011 00100000, which gives 800. But that's wrong, because it's not the true value of the specific 14 bits you extracted.
"Is there a more preferred/conventional way of accomplishing what I need?"
I guess bit-shifting is the conventional way...
Solution (pseudo-code):
var myShort = 0x0320; // a short is 2 bytes
var answer = (myShort >> 2); // shift right by 2 places, dropping the two flag bits
By nudging everything 2 places towards the right, the two now-empty places at the far left become the XX (automatically zero until you change them), and you have also just removed the 2 right-hand bits you wanted to ignore, leaving you with the correct 14-bit value.
PS:
Regarding your code... I've not had a chance to test it all, but the logic below seems more appropriate for your GetBitArrayFromRange function:
for (int i = 0; i < length; i++)
{
newBitArray[i] = bitArray.Get(startIndex + i);
}
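Putting the shift approach together, a minimal sketch (assuming, as in the question, two big-endian frame bytes whose top 14 bits are the value and whose bottom 2 bits are the two bools; which bool maps to which bit is an assumption here):
using System;

class FrameFieldDemo
{
    static void Main()
    {
        byte[] byteArray = { 0x03, 0x20 };   // big-endian frame bytes from the question

        // Combine the two bytes into one 16-bit value: 0x0320 = 800.
        int raw = (byteArray[0] << 8) | byteArray[1];

        int dataValue = raw >> 2;             // top 14 bits -> 200
        bool flag1 = (raw & 0x2) != 0;        // bit 1 (assumed mapping)
        bool flag2 = (raw & 0x1) != 0;        // bit 0 (assumed mapping)

        Console.WriteLine(dataValue);         // 200
        Console.WriteLine($"{flag1} {flag2}");
    }
}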

Logic Behind Type-Casting [Explicitly] in c#

Basically, explicit type casting means there is a possible loss of information.
Example:
short s = 256;
byte b = (byte) s;
Console.WriteLine(b);
// output : 0
or
short s = 257;
byte b = (byte) s;
Console.WriteLine(b);
// output : 1
or
short s = 1024;
byte b = (byte)s;
Console.WriteLine(b);
Console.ReadKey();
// output : 0
What is the reasoning behind this output?
A short is a 2-byte number; a byte is 1 byte!
When you cast from two bytes to one, you lose the first (most significant) 8 bits: 1024 as a short is "0000 0100" "0000 0000".
After the cast only the last byte remains (binary: "0000 0000") = 0.
The reasoning behind your output is simple:
Every number is represented as bits, and every 8 bits make 1 byte.
A byte holds numbers from 0 to 255.
If you convert a bigger type to a smaller one, you lose bits, not just precision.
In your case you lose every bit above the lowest 8: the result is whatever value the last 8 bits hold, and if they are all zero you get 0.
P.S. Use the Windows calculator in programmer mode, or any number-to-binary converter, and this will become clearer.
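To see the bit loss directly, here is a small sketch showing that the cast keeps only the low 8 bits, i.e. it is equivalent to masking with 0xFF:
using System;

class CastDemo
{
    static void Main()
    {
        short[] values = { 256, 257, 1024 };
        foreach (short s in values)
        {
            byte b = (byte)s;   // keeps only the low 8 bits
            Console.WriteLine($"{s} -> {b} (same as {s & 0xFF})"); // 256 -> 0, 257 -> 1, 1024 -> 0
        }
    }
}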

How to work with the bits in a byte

I have a single byte which contains two values. Here's the documentation:
The authority byte is split into two fields. The three least significant bits carry the user’s authority level (0-5). The five most
significant bits carry an override reject threshold. If these bits are
set to zero, the system reject threshold is used to determine whether
a score for this user is considered an accept or reject. If they are
not zero, then the value of these bits multiplied by ten will be the
threshold score for this user.
Authority Byte:
7 6 5 4 3 ......... 2 1 0
Reject Threshold .. Authority
I don't have any experience of working with bits in C#.
Can someone please help me convert a Byte and get the values as mentioned above?
I've tried the following code:
BitArray BA = new BitArray(mybyte);
But the length comes back as 29, and I would have expected 8, one for each bit in the byte.
-- Thanks for everyone's quick help. Got it working now! Awesome internet.
Instead of BitArray, you can more easily use the built-in bitwise AND and right-shift operator as follows:
byte authorityByte = ...
int authorityLevel = authorityByte & 7;
int rejectThreshold = authorityByte >> 3;
To get the single byte back, you can use the bitwise OR and left-shift operator:
int authorityLevel = ...
int rejectThreshold = ...
Debug.Assert(authorityLevel >= 0 && authorityLevel <= 7);
Debug.Assert(rejectThreshold >= 0 && rejectThreshold <= 31);
byte authorityByte = (byte)((rejectThreshold << 3) | authorityLevel);
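For example, with a hypothetical authority byte of 0xAB (binary 1010 1011), the low three bits give authority level 3 and the high five bits give 21, i.e. a reject threshold score of 210:
using System;

class AuthorityByteDemo
{
    static void Main()
    {
        byte authorityByte = 0xAB;                    // hypothetical value: 1010 1011

        int authorityLevel = authorityByte & 7;       // low 3 bits  -> 3
        int rejectThreshold = authorityByte >> 3;     // high 5 bits -> 21 (threshold score 210)

        Console.WriteLine($"{authorityLevel} {rejectThreshold}");

        // Packing the fields again reproduces the original byte.
        byte repacked = (byte)((rejectThreshold << 3) | authorityLevel);
        Console.WriteLine(repacked == authorityByte); // True
    }
}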
Your use of the BitArray is incorrect. This:
BitArray BA = new BitArray(mybyte);
...the byte will be implicitly converted to an int. When that happens, you're triggering this constructor:
BitArray(int length);
...therefore, it's creating a BitArray of that specific length (29, the value of your byte).
Looking at MSDN (http://msdn.microsoft.com/en-us/library/x1xda43a.aspx) you want this:
BitArray BA = new BitArray(new byte[] { myByte });
Length will then be 8 (as expected).
To get the value of the five most significant bits in a byte as an integer, shift the byte to the right by 3 (i.e. by 8 - 5), and set the three upper bits to zero using a bitwise AND operation, like this:
byte orig = ...
int rejThreshold = (orig >> 3) & 0x1F;
>> is the "shift right" operator. It moves bits 7..3 into positions 4..0, dropping the three lower bits.
0x1F is the binary number 00011111, which has the upper three bits set to zero, and the lower five bits set to one. AND-ing with this number zeroes out three upper bits.
This technique can be generalized to get other bit patterns and other integral data types. You shift the bits that you want into the least-significant position, and apply a mask that "cuts out" the number of bits that you want. In some cases, shifting would not be necessary (e.g. when you get the least significant group of bits). In other cases, such as above, the masking would not be necessary, because you get the most significant group of bits in an unsigned type (if the type is signed, ANDing would be required).
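As a sketch of that generalization (ExtractBits is a hypothetical helper, not part of any library):
using System;

class BitFieldDemo
{
    // Extracts 'count' bits from 'value', starting at bit 'start'
    // (bit 0 is the least significant bit).
    static int ExtractBits(int value, int start, int count)
    {
        int mask = (1 << count) - 1;          // 'count' ones
        return (value >> start) & mask;
    }

    static void Main()
    {
        byte authorityByte = 0xAB;                             // 1010 1011
        Console.WriteLine(ExtractBits(authorityByte, 0, 3));   // 3  (authority level)
        Console.WriteLine(ExtractBits(authorityByte, 3, 5));   // 21 (reject threshold)
    }
}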
You're using the wrong constructor (probably).
The one that you're using is probably this one, while you need this one:
var bitArray = new BitArray(new [] { myByte } );

Converting a partial MD5 hash code into a long

I'm using the MD5 algorithm to hash the key for an on-disk hash table (I know it's questionable whether this is the best algorithm to use for this, but I'm going with it for now. The problem is generalizable to any algorithm that produces a byte array). My problem is this:
The size of the hash code determines the number of combinations (buckets) in the hash table. Since MD5 is 128 bit, there are a huge number of combinations (~ 3.4e38) which is way too big for my purpose. So what I want to do is pick off the first n bits of the byte array that MD5 produces, and convert those into a long (or ulong) value. Since MD5 produces a byte array, it would be easy to do if I wanted an integral number of bytes, but this leads to too big a jump in the number of combinations. I'm finding the single bit version to be a lot trickier.
Goal:
n = 10 // I.e. I want 2^10 combinations
long pos = someFcn(byte[] key, n)
where key is the value being hashed, and n is the number of bits of the MD5 result I want to use. Pos, then, will be an integer from 0 to 1023 (in the case of n = 10). If n = 11, the value will range from 0 to 2^11 - 1 = 2047, etc. It has to be somewhat fast/efficient.
Doesn't seem that hard but it's eluding me. Any help would be much appreciated. Thanks.
First, convert the first four bytes into an integer with BitConverter.ToInt32. It's getting four bytes no matter what, but this probably won't make it measurably slower, since you're working with 32-bit registers for the rest of the calculations anyway, and complex stuff like "if it's < 16 then do this with the first two bytes" would just make it more complicated.
Then, given that integer, take the lowest N bits. If you really want a specific number of bits [a power of two number of buckets] not known at compile time, ~((-1)<<N) is a nice trick to get 2^N-1.
Or you could simply use ToUInt32 instead and modulo a prime number [it might be slightly better to convert to UInt64 instead, then you've got fully half the bits to start with, in this case]
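A minimal sketch of the first approach, mapping a key to one of 2^n buckets (the method name BucketFor and the UTF-8 key encoding are just illustrative choices):
using System;
using System.Security.Cryptography;
using System.Text;

class BucketDemo
{
    // Maps a key to one of 2^n buckets using the first four bytes of its MD5 hash.
    static long BucketFor(string key, int n)
    {
        using (MD5 md5 = MD5.Create())
        {
            byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(key));
            int first32 = BitConverter.ToInt32(hash, 0);   // first four bytes of the hash
            int mask = ~((-1) << n);                       // lowest n bits: 2^n - 1
            return first32 & mask;
        }
    }

    static void Main()
    {
        Console.WriteLine(BucketFor("some key", 10));      // a value in 0..1023
    }
}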
To obtain the first 10 bits, for example:
int result = ((int)key[0] << 2) | (((int)key[1] >> 6) & 0x03);
If you have an array like this,
unsigned char data[2000];
then you can just scrape off the first n bits into an integer like so:
typedef unsigned long long int MyInt;
MyInt scrape(size_t n, unsigned char * data)
{
MyInt result = 0;
size_t b;
for (b = 0; b < n / 8; ++b)
{
result <<= 8;
result += data[b];
}
const size_t remaining_bits = n % 8;
if (remaining_bits > 0)
{
result <<= remaining_bits;
result += (data[b] >> (8 - remaining_bits));
}
return result;
}
I'm assuming that CHAR_BIT == 8; feel free to generalize the code if you like. Also, the size of the array times 8 must be at least n.
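For completeness, a rough C# equivalent of the same idea (a sketch; Scrape is just an illustrative name):
using System;

class ScrapeDemo
{
    // Returns the first n bits of 'data' (most significant bits first) as a ulong.
    static ulong Scrape(int n, byte[] data)
    {
        ulong result = 0;
        int fullBytes = n / 8;
        for (int b = 0; b < fullBytes; ++b)
        {
            result = (result << 8) | data[b];
        }
        int remainingBits = n % 8;
        if (remainingBits > 0)
        {
            result = (result << remainingBits) | (ulong)(data[fullBytes] >> (8 - remainingBits));
        }
        return result;
    }

    static void Main()
    {
        byte[] hash = { 0xAB, 0xCD };         // hypothetical hash bytes
        Console.WriteLine(Scrape(10, hash));  // top 10 bits of 0xABCD -> 687
    }
}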
