I want to convert an integer to a byte. If the integer is bigger than the byte range {0, 255}, it should wrap around and count from the start again. I mean, if the integer is 260 then the function should return 5, or if it is 1120 then it should return 96, and so on.
You can use:
byte myByte = (byte)(myInt & 0xFF);
However, note that 260 will give 4, not 5 (e.g. 255 -> 255, 256 -> 0, 257 -> 1, 258 -> 2, 259 -> 3, 260 -> 4).
If you really want 260 to give 5, then you are probably looking for the remainder after dividing by 255. That can be calculated using:
byte myByte = (byte)(myInt % 255);
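For what it's worth, a quick sketch comparing the two expressions on the question's own values:

int a = 260, b = 1120;
Console.WriteLine(a & 0xFF); // 4
Console.WriteLine(a % 255);  // 5
Console.WriteLine(b & 0xFF); // 96  -- matches the 1120 -> 96 example
Console.WriteLine(b % 255);  // 100

Note that the two examples in the question are actually inconsistent with each other: 260 -> 5 implies % 255, while 1120 -> 96 implies & 0xFF.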
I would like to convert a number to a BitArray, with the resulting BitArray only being as big as it needs to be.
For instance:
BitArray tooBig = new BitArray(new int[] { 9 });
results in a BitArray with a length of 32 bits; however, for the value 9 only 4 bits are required. How can I create BitArrays that are only as long as they need to be? So in this example, 4 bits; for the number 260 I would expect the BitArray to be 9 bits long.
You can figure out all the bits first and then create the array: check whether the least significant bit is 1 or 0, then right-shift until the number is 0. Note this will not work for negative numbers, where the 32nd bit would be 1 to indicate the sign.
public BitArray ToShortestBitArray(int x)
{
    var bits = new List<bool>();
    while (x > 0)
    {
        bits.Add((x & 1) == 1); // record the least significant bit
        x >>= 1;                // shift the next bit into position 0
    }
    return new BitArray(bits.ToArray());
}
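A usage sketch (note that the method needs using System.Collections and System.Collections.Generic, and that it returns an empty BitArray for x == 0):

var four = ToShortestBitArray(9);   // length 4: true, false, false, true (LSB first)
var nine = ToShortestBitArray(260); // length 9, as the question expects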
Assuming you are working with exclusively unsigned integers, the number of bits you need is equal to the base 2 logarithm of the (number+1), rounded up.
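In C# that formula could look like the sketch below; it assumes .NET Core 3.0 or later for System.Numerics.BitOperations, and BitsNeeded is my own name:

using System.Numerics;

// For n > 0, floor(log2(n)) + 1 equals ceil(log2(n + 1)).
static int BitsNeeded(uint n) => n == 0 ? 0 : BitOperations.Log2(n) + 1;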
Counting the bits is probably the easiest solution.
In JavaScript for example...
// count bits needed to store a positive number
const bits = (x, b = 0) => x > 0 ? bits(x >> 1, b + 1) : b;
The best way to describe my misunderstanding is with the code itself:
var emptyByteArray = new byte[2];
var specificByteArray = new byte[] {150, 105}; //0x96 = 150, 0x69 = 105
var bitArray1 = new BitArray(specificByteArray);
bitArray1.CopyTo(emptyByteArray, 0); //[0]: 150, [1]:105
var hexString = "9669";
var intValueForHex = Convert.ToInt32(hexString, 16); //16 indicates to convert from hex
var bitArray2 = new BitArray(new[] {intValueForHex}) {Length = 16}; //Length=16 truncates the BitArray
bitArray2.CopyTo(emptyByteArray, 0); //[0]:105, [1]:150 (reversed, why??)
I've been reading that the BitArray iterates from the LSB to the MSB. What's the best way for me to initialize the BitArray from a hex string, then?
I think you are thinking about it wrong. Why are you even using a BitArray? Endianness is a byte-related convention; BitArray is just an array of bits. Since it is least-significant bit first, the correct way to store a 32-bit number in a bit array is with bit 0 at index 0 and bit 31 at index 31. This isn't just my personal bias towards little-endianness (bit 0 should be in byte 0, not byte 3, for goodness' sake), it's because BitArray stores bit 0 of a byte at index 0 in the array. It also stores bit 0 of a 32-bit integer in bit 0 of the array, no matter the endianness of the platform you are on.
For example, instead of your integer 9669, let's look at 1234. No matter what platform you are on, that 16-bit number has the following bit representation. We write a hex number with the most significant hex digit 1 on the left and the least significant hex digit 4 on the right, so bit 0 is on the right (a human convention):
1 2 3 4
0001 0010 0011 0100
No matter how an architecture orders the bytes, bit 0 of a 16-bit number always means the least-significant bit (the right-most here) and bit 15 means the most-significant bit (the left-most here). Due to this, your bit array will always be like this, with bit 0 on the left because that's the way I read an array (with index 0 being bit 0 and index 15 being bit 15):
---4--- ---3--- ---2--- ---1---
0 0 1 0 1 1 0 0 0 1 0 0 1 0 0 0
What you are doing is trying to impose the byte order you want onto an array of bits, where it doesn't belong. If you want to reverse the bytes, then you'll get the following in the bit array, which makes a lot less sense and means you'll have to reverse the bytes again when you get the integer back out:
---2--- ---1--- ---4--- ---3---
0 1 0 0 1 0 0 0 0 0 1 0 1 1 0 0
I don't think this makes any kind of sense for storing an integer. If you want to store the big-endian representation of a 32-bit number in the BitArray, then what you are really storing is a byte array that just happens to be the big-endian representation of a 32-bit number. Convert to a byte array first, reverse it to big-endian if necessary, and then put it in the BitArray:
int number = 0x1234;
byte[] bytes = BitConverter.GetBytes(number); // platform byte order
if (BitConverter.IsLittleEndian)
{
    bytes = bytes.Reverse().ToArray(); // force big-endian (needs System.Linq)
}
BitArray ba = new BitArray(bytes);
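To get the integer back out you reverse the same steps; a sketch under the same assumptions:

byte[] outBytes = new byte[4];
ba.CopyTo(outBytes, 0); // BitArray back to bytes, index 0 first
if (BitConverter.IsLittleEndian)
{
    outBytes = outBytes.Reverse().ToArray(); // undo the big-endian swap
}
int roundTripped = BitConverter.ToInt32(outBytes, 0); // 0x1234 again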
I have a single byte which contains two values. Here's the documentation:
The authority byte is split into two fields. The three least significant bits carry the user's authority level (0-5). The five most significant bits carry an override reject threshold. If these bits are set to zero, the system reject threshold is used to determine whether a score for this user is considered an accept or reject. If they are not zero, then the value of these bits multiplied by ten will be the threshold score for this user.
Authority Byte:
7 6 5 4 3 ......... 2 1 0
Reject Threshold .. Authority
I don't have any experience working with bits in C#.
Can someone please help me convert a Byte and get the values as mentioned above?
I've tried the following code:
BitArray BA = new BitArray(mybyte);
But the length comes back as 29 and I would have expected 8, being each bit in the byte.
Instead of BitArray, you can more easily use the built-in bitwise AND and right-shift operator as follows:
byte authorityByte = ...
int authorityLevel = authorityByte & 7;   // keep only the three lowest bits
int rejectThreshold = authorityByte >> 3; // drop them, leaving the top five
To get the single byte back, you can use the bitwise OR and left-shift operator:
int authorityLevel = ...
int rejectThreshold = ...
Debug.Assert(authorityLevel >= 0 && authorityLevel <= 7);
Debug.Assert(rejectThreshold >= 0 && rejectThreshold <= 31);
byte authorityByte = (byte)((rejectThreshold << 3) | authorityLevel);
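As a worked example (the byte value is arbitrary, chosen only for illustration):

byte authorityByte = 0xAB;                 // binary 10101 011
int authorityLevel = authorityByte & 7;    // 0b011   = 3
int rejectThreshold = authorityByte >> 3;  // 0b10101 = 21
int thresholdScore = rejectThreshold * 10; // 210, per the documentation quoted above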
Your use of the BitArray is incorrect. This:
BitArray BA = new BitArray(mybyte);
..here, the byte is implicitly converted to an int. When that happens, you're triggering this constructor:
BitArray(int length);
..therefore, it's creating a BitArray of that specific length, not one containing your byte's bits.
Looking at MSDN (http://msdn.microsoft.com/en-us/library/x1xda43a.aspx) you want this:
BitArray BA = new BitArray(new byte[] { myByte });
Length will then be 8 (as expected).
To get the value of the five most significant bits of a byte as an integer, shift the byte to the right by 3 (i.e. by 8-5), and clear the three upper bits with a bitwise AND, like this:
byte orig = ...
int rejThreshold = (orig >> 3) & 0x1F;
>> is the "shift right" operator. It moves bits 7..3 into positions 4..0, dropping the three lower bits.
0x1F is 00011111 in binary: the upper three bits are zero, and the lower five bits are one. AND-ing with this number zeroes out the three upper bits.
This technique can be generalized to get other bit patterns and other integral data types. You shift the bits that you want into the least-significant position, and apply a mask that "cuts out" the number of bits that you want. In some cases, shifting would not be necessary (e.g. when you get the least significant group of bits). In other cases, such as above, the masking would not be necessary, because you get the most significant group of bits in an unsigned type (if the type is signed, ANDing would be required).
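That generalization fits in a small helper; a sketch (ExtractBits is my own name, valid for 0 < count < 32):

// Extract `count` bits starting at bit position `start` (bit 0 = LSB).
static int ExtractBits(int value, int start, int count)
{
    return (value >> start) & ((1 << count) - 1); // shift down, then mask
}

// e.g. ExtractBits(orig, 3, 5) == (orig >> 3) & 0x1F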
You're using the wrong constructor (probably).
You're probably using the BitArray(int length) constructor, while you need the BitArray(byte[] bytes) one:
var bitArray = new BitArray(new [] { myByte } );
How can I validate that a value is an xx-byte integer (signed or unsigned)?
xx stands for 1, 2, 4 or 8.
Suppose I need to validate whether 65 (currently a string value) is a 1-byte integer or not.
How can I write a tiny function to validate it?
I don't know the exact meaning of "byte integer".
bool Is1Byte(string val)
{
    try
    {
        int num = int.Parse(val);
        return (num >= -128) && (num <= 127); // signed 1-byte range
    }
    catch (Exception)
    {
        return false; // not a number at all
    }
}
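To cover the 2-, 4- and 8-byte sizes as well, one way to generalize it is sketched below (FitsInSignedBytes is my own name; signed ranges only, and TryParse avoids the exception):

static bool FitsInSignedBytes(string val, int byteCount)
{
    long num;
    if (!long.TryParse(val, out num))
        return false; // not an integer, or it doesn't even fit in 8 bytes
    switch (byteCount)
    {
        case 1: return num >= sbyte.MinValue && num <= sbyte.MaxValue; // -128..127
        case 2: return num >= short.MinValue && num <= short.MaxValue;
        case 4: return num >= int.MinValue && num <= int.MaxValue;
        case 8: return true; // it already parsed as a 64-bit value
        default: throw new ArgumentOutOfRangeException("byteCount");
    }
}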
It sounds like what you need is something that will test whether a number fits within a 1-byte integer. A 1-byte integer can hold a number between 0 and 255 (if unsigned) or between -128 and 127 (if signed). So you just need something that tests whether the number falls within this range. Note that C#'s byte type is unsigned (sbyte is the signed counterpart), so for a byte you just need:
return (x >= 0 && x <= 255);
Why these values? Because a byte is eight bits of storage, which can hold 2^8 = 256 possible values.
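Alternatively, you can let the framework's own parser enforce the range; a one-line sketch for the unsigned case:

bool IsUnsigned1Byte(string val)
{
    byte unused;
    return byte.TryParse(val, out unused); // only succeeds for 0-255
}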
I have a control that has a byte array in it.
Every now and then there are two bytes that tell me some info about the number of future items in the array.
So as an example I could have:
...
...
Item [4] = 7
Item [5] = 0
...
...
The value of this is clearly 7.
But what about this?
...
...
Item [4] = 0
Item [5] = 7
...
...
Any idea what that equates to (as a normal int)?
I converted to binary and thought it might be 11100000000, which equals 1792. But I don't know if that is how it really works (i.e. whether it uses all 8 bits of each byte).
Is there any way to know this without testing?
Note: I am using C# 3.0 and Visual Studio 2008.
BitConverter can easily convert the two bytes into a two-byte integer value:

// assumes byte[] Item = someObject.GetBytes():
short num = BitConverter.ToInt16(Item, 4); // makes a short out of Item[4] and Item[5]
A two-byte number has a low and a high byte. The high byte is worth 256 times as much as the low byte:
value = 256 * high + low;
So, for high=0 and low=7, the value is 7. But for high=7 and low=0, the value becomes 1792.
This of course assumes that the number is a simple 16-bit integer. If it's anything fancier, the above won't be enough. Then you need more knowledge about how the number is encoded, in order to decode it.
The order in which the high and low bytes appear is determined by the endianness of the byte stream. In big-endian, you will see high before low (at a lower address), in little-endian it's the other way around.
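A quick sketch checking the formula with BitConverter (the array contents are made up to mirror the question's second example, and this assumes a little-endian machine, which most .NET platforms are):

byte[] item = { 0, 0, 0, 0, 0, 7 };        // item[4] = 0 (low), item[5] = 7 (high)
short num = BitConverter.ToInt16(item, 4); // 256 * 7 + 0 = 1792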
You say "this value is clearly 7", but it depends entirely on the encoding. If we assume full-width bytes, then in little-endian, yes; 7, 0 is 7. But in big endian it isn't.
For little-endian (renaming the array to data, since byte is a reserved word), what you want is

int value = data[i] | (data[i + 1] << 8);

and for big-endian:

int value = (data[i] << 8) | data[i + 1];
But other encoding schemes are available; for example, some schemes use 7-bit arithmetic, with the 8th bit as a continuation flag. Others, such as UTF-8, put the length information in the first byte (so the first byte has only limited room for data bits) and mark each remaining byte in the sequence as a continuation.
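For instance, a sketch of that 7-bit scheme (a little-endian base-128 "varint", as used by formats like protobuf; this is purely illustrative, not something the question's data requires):

// Decode an unsigned 7-bit-per-byte varint starting at `offset`.
static ulong ReadVarint(byte[] data, ref int offset)
{
    ulong result = 0;
    int shift = 0;
    byte b;
    do
    {
        b = data[offset++];
        result |= (ulong)(b & 0x7F) << shift; // low 7 bits carry payload
        shift += 7;
    } while ((b & 0x80) != 0);                // high bit set: more bytes follow
    return result;
}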
If you simply want to put those two bytes next to each other in binary format, and see what that big number is in decimal, then you need to use this code:
ushort num;
if (BitConverter.IsLittleEndian)
{
    // swap so that Item[4] ends up as the high byte
    byte[] tempByteArray = new byte[2] { Item[5], Item[4] };
    num = BitConverter.ToUInt16(tempByteArray, 0);
}
else
{
    num = BitConverter.ToUInt16(Item, 4);
}
If you use short num = BitConverter.ToInt16(Item, 4); as seen in the accepted answer, you are assuming that the top bit of those two bytes is the sign bit (1 = negative, 0 = positive). That answer also assumes you are on a big-endian system. See this for more info on the sign bit.
If those bytes are the "parts" of an integer, it works like that. But beware: the order of the bytes is platform-specific, and it also depends on the length of the integer (16 bit = 2 bytes, 32 bit = 4 bytes, ...).
In case Item[5] is the MSB:

// on a little-endian machine BitConverter treats the first array element as
// the low byte, so Item[4] must come first for Item[5] to be the MSB:
ushort result = BitConverter.ToUInt16(new byte[2] { Item[4], Item[5] }, 0);

or, independent of machine endianness:

int result = 256 * Item[5] + Item[4];