Read bit range from byte array - c#

I am looking for a method that will enable me to get a range of bits. For example if I have the binary data
0 1 0 1 1 0 1 1 1 1 0 1 0 1 1 1 (2 bytes)
I might need to get the data from bit 3 to bit 9. In other words, I would be interested in the bracketed bits:
0 1 0 [1 1 0 1 1 1 1] 0 1 0 1 1 1
So, in short, I would like to construct the method:
byte[] Read(byte[] data, int left, int right)
{
    // implementation
}
so that if I pass the data new byte[]{ 91, 215 }, 3, 9 I will get byte[]{ 122 } (note that bytes 91 and 215 = 0 1 0 1 1 0 1 1 1 1 0 1 0 1 1 1 and byte 122 = 1 1 1 1 0 1 0, the same binary data as in the example).
It would be nice if I could use the << operator on byte arrays such as doing something like:
byte[] myArray = new byte[] { 1, 2, 3 };
var shift = myArray << 2;
If you are interested to know why I need this functionality:
I am creating a project on a board and often need to read and write values to memory. The cdf, sfr, or ddf file (referred to as a chip definition file) contains information about a particular chip. That file may look like:
; Name Zone Address Bytesize Displaybase Bitrange
; ---- ---- ------- -------- ----------- --------
sfr = "SPI0_CONTROL" , "Memory", 0x40001000, 4, base=16
sfr = "SPI0_CONTROL.SPH" , "Memory", 0x40001000, 4, base=16, bitRange=25-25
sfr = "SPI0_CONTROL.SPO" , "Memory", 0x40001000, 4, base=16, bitRange=24-24
sfr = "SPI0_CONTROL.TXRXDFCOUNT" , "Memory", 0x40001000, 4, base=16, bitRange=8-23
sfr = "SPI0_CONTROL.INT_TXUR" , "Memory", 0x40001000, 4, base=16, bitRange=7-7
sfr = "SPI0_CONTROL.INT_RXOVF" , "Memory", 0x40001000, 4, base=16, bitRange=6-6
Since I am reading a lot of variables (sometimes 80 times per second) I would like an efficient algorithm. My approach would be: if the byte size is 8, create a long from those 8 bytes and use the << and >> operators to get what I need; if the byte size is 4, create an int and use the << and >> operators. But how would I do it if I need to read 16 bytes? I guess my question should have been: how do I implement the << and >> operators on custom struct types?
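One way to get shift operators on arbitrary-width data is to wrap the byte array in a struct that overloads << and >>. This is only a sketch; the BitBuffer type and its design are invented for illustration, and it delegates the actual shifting to BigInteger (which accepts little-endian bytes and already defines shift operators):

```csharp
using System;
using System.Numerics;

// Illustrative only: a struct wrapping a byte array with << and >> support.
public readonly struct BitBuffer
{
    private readonly BigInteger _value;
    private readonly int _byteCount;

    public BitBuffer(byte[] data)
    {
        // Append a zero byte so BigInteger treats the value as non-negative.
        byte[] unsigned = new byte[data.Length + 1];
        Array.Copy(data, unsigned, data.Length);
        _value = new BigInteger(unsigned); // little-endian input
        _byteCount = data.Length;
    }

    private BitBuffer(BigInteger value, int byteCount)
    {
        _value = value;
        _byteCount = byteCount;
    }

    public static BitBuffer operator <<(BitBuffer b, int shift)
        => new BitBuffer(b._value << shift, b._byteCount);

    public static BitBuffer operator >>(BitBuffer b, int shift)
        => new BitBuffer(b._value >> shift, b._byteCount);

    public byte[] ToBytes()
    {
        // Truncate back to the original width, dropping bits shifted out the top.
        byte[] result = new byte[_byteCount];
        byte[] raw = _value.ToByteArray(); // little-endian output
        Array.Copy(raw, result, Math.Min(raw.Length, _byteCount));
        return result;
    }
}
```

With this sketch, (new BitBuffer(new byte[] { 1, 2, 3 }) << 2).ToBytes() yields { 4, 8, 12 }.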

You need the BitArray class from System.Collections.
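For example, here is a minimal sketch of the question's Read method on top of BitArray (note that BitArray numbers bits from the least significant bit of each byte, which is not the left-to-right numbering used in the question):

```csharp
using System;
using System.Collections;

static class BitRange
{
    // Copies `length` bits, starting at bit `start`, into a new byte array.
    // BitArray indexes bits LSB-first within each byte.
    public static byte[] Read(byte[] data, int start, int length)
    {
        var bits = new BitArray(data);
        var range = new BitArray(length);
        for (int i = 0; i < length; i++)
            range[i] = bits[start + i];

        byte[] result = new byte[(length + 7) / 8];
        range.CopyTo(result, 0);
        return result;
    }
}
```

Because of the LSB-first numbering, Read(new byte[] { 91, 215 }, 3, 7) yields { 107 } rather than the { 122 } the question expects; reversing the bit order within each byte first would reconcile the two.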

Looks like you could use a "bit stream". There is an implementation of such a concept here. Take a look; perhaps it fits your needs.

The BigInteger class in .NET 4+ takes a Byte[] in its constructor and has left and right shift operators.
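A sketch of that approach. Note that BigInteger's byte[] constructor expects little-endian bytes and treats the high bit as a sign bit, so a trailing zero byte keeps the value non-negative; the shift also counts bits from the least significant end, unlike the question's left-to-right numbering:

```csharp
using System;
using System.Numerics;

class Demo
{
    static void Main()
    {
        // { 91, 215 } plus a trailing 0 so the value stays non-negative.
        var value = new BigInteger(new byte[] { 91, 215, 0 });

        // Drop the low 3 bits, then mask off 7 bits (bits 3..9 counted from the LSB).
        var extracted = (value >> 3) & ((BigInteger.One << 7) - 1);
        Console.WriteLine(extracted); // 107
    }
}
```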

It's been 10 years since this question was asked, and I still could not find a simple C# implementation that extracts a range of bits from a byte array using bitwise operations.
Here's how you can do it using simple bitwise operations:
public static class ByteExtensions
{
    public const int BitsPerByte = 8;

    /// <summary>
    /// Extracts a range of bits from a byte array into a new byte array.
    /// </summary>
    /// <param name="bytes">The byte array to extract the range from.</param>
    /// <param name="start">The 0-based start index of the bit.</param>
    /// <param name="length">The number of bits to extract.</param>
    /// <returns>A new <see cref="byte"/> array with the extracted bits.</returns>
    /// <exception cref="ArgumentOutOfRangeException">Thrown if <paramref name="start"/> or <paramref name="length"/> are out of range.</exception>
    public static byte[] GetBitRange(this byte[] bytes, int start, int length)
    {
        // Calculate the length of the input byte array in bits
        int maxLength = bytes.Length * BitsPerByte;
        int end = start + length;

        // Range validations
        if (start >= maxLength || start < 0)
        {
            throw new ArgumentOutOfRangeException(nameof(start), start, $"Start must be non-negative and less than {maxLength}");
        }
        if (length < 0)
        {
            throw new ArgumentOutOfRangeException(nameof(length), length, "Length must be non-negative");
        }
        if (end > maxLength)
        {
            throw new ArgumentOutOfRangeException(nameof(length), length, $"Range length must be less than or equal to {maxLength}");
        }

        // Calculate the length of the new byte array and allocate
        var (byteLength, remainderLength) = Math.DivRem(length, BitsPerByte);
        byte[] result = new byte[byteLength + (remainderLength == 0 ? 0 : 1)];

        // Iterate through each byte in the new byte array
        for (int i = 0; i < result.Length; i++)
        {
            // Compute each of the 8 bits of the ith byte.
            // Stop if start index >= end index (rest of the bits in the current byte will be 0 by default)
            for (int j = 0; j < BitsPerByte && start < end; j++)
            {
                var (byteIndex, bitIndex) = Math.DivRem(start++, BitsPerByte); // Note the increment (++) on start
                int currentBitIndex = j;
                result[i] |= (byte)(((bytes[byteIndex] >> bitIndex) & 1) << currentBitIndex);
            }
        }
        return result;
    }
}
Explanation
1. Allocate a new byte[] for the range
The GetBitRange(..) method above first (after validation) calculates the length of the new byte array (length in bytes) using the length parameter (length in bits) and allocates this array (result).
The outer loop iterates through each byte in result and the inner loop iterates through each of the 8 bits in the ith byte.
2. Extracting a bit from a byte
In the inner loop, the method calculates the index of the byte in the input array bytes which contains the bit indexed by start. It is the bitIndexth bit in the byteIndexth byte. To extract this bit, you perform the following operations:
int nextBit = (bytes[byteIndex] >> bitIndex) & 1;
Shift the bitIndexth bit in bytes[byteIndex] to the rightmost position so it is the least significant bit (LSB), then perform a bitwise AND with 1. (A bitwise AND with 1 keeps only the LSB and zeroes the rest of the bits.)
3. Setting a bit in a byte
Now, nextBit is the next bit I need to add to my output byte array (result). Since I'm currently working on the jth bit of the ith byte of result in my inner loop, I need to set that bit to nextBit. This is done as:
int currentBitIndex = j;
result[i] |= (byte) (nextBit << currentBitIndex);
Shift nextBit j times to the left (since I want to set the jth bit). Now to set the bit, I perform a bitwise OR between the shifted bit and result[i]. This sets the jth bit in result[i].
The steps 2 & 3 described above are implemented in the method as a single step:
result[i] |= (byte)(((bytes[byteIndex] >> bitIndex) & 1) << currentBitIndex);
Two important things to consider here:
Byte Endianness
Bit Numbering
Byte Endianness
The above implementation indexes into the input byte array in big endian order. So, as per the example in the question,
new byte[]{ 91 , 215 }.GetBitRange(3, 8)
does not return 122 but returns 235 instead. This is because the example in the question expects the answer in little-endian format.
To use little-endian format, a simple reversal of the output array does the job. Even better is to change byteIndex in the implementation:
byteIndex = bytes.Length - 1 - byteIndex;
Bit Numbering
Bit numbering determines if the least significant bit is the first bit in the byte (LSB 0) or the most significant bit is the first bit (MSB 0).
The implementation above assumes LSB0 bit numbering.
To use MSB0, change the bit indices as follows:
currentBitIndex = BitsPerByte - 1 - currentBitIndex;
bitIndex = BitsPerByte - 1 - bitIndex;
Extending for endianness and bit numbering
Here is the full method supporting both types of endianness as well as bit numbering:
public enum Endianness
{
    BigEndian,
    LittleEndian
}

public enum BitNumbering
{
    Lsb0,
    Msb0
}

public static class ByteExtensions
{
    public const int BitsPerByte = 8;

    public static byte[] GetBitRange(
        this byte[] bytes,
        int start,
        int length,
        Endianness endianness,
        BitNumbering bitNumbering)
    {
        // Calculate the length of the input byte array in bits
        int maxLength = bytes.Length * BitsPerByte;
        int end = start + length;

        // Range validations
        if (start >= maxLength || start < 0)
        {
            throw new ArgumentOutOfRangeException(nameof(start), start, $"Start must be non-negative and less than {maxLength}");
        }
        if (length < 0)
        {
            throw new ArgumentOutOfRangeException(nameof(length), length, "Length must be non-negative");
        }
        if (end > maxLength)
        {
            throw new ArgumentOutOfRangeException(nameof(length), length, $"Range length must be less than or equal to {maxLength}");
        }

        // Calculate the length of the new byte array and allocate
        var (byteLength, remainderLength) = Math.DivRem(length, BitsPerByte);
        byte[] result = new byte[byteLength + (remainderLength == 0 ? 0 : 1)];

        // Iterate through each byte in the new byte array
        for (int i = 0; i < result.Length; i++)
        {
            // Compute each of the 8 bits of the ith byte.
            // Stop if start index >= end index (rest of the bits in the current byte will be 0 by default)
            for (int j = 0; j < BitsPerByte && start < end; j++)
            {
                var (byteIndex, bitIndex) = Math.DivRem(start++, BitsPerByte); // Note the increment (++) on start
                int currentBitIndex = j;

                // Adjust for MSB 0
                if (bitNumbering is BitNumbering.Msb0)
                {
                    currentBitIndex = 7 - currentBitIndex;
                    bitIndex = 7 - bitIndex;
                }

                // Adjust for little-endian
                if (endianness is Endianness.LittleEndian)
                {
                    byteIndex = bytes.Length - 1 - byteIndex;
                }

                result[i] |= (byte)(((bytes[byteIndex] >> bitIndex) & 1) << currentBitIndex);
            }
        }
        return result;
    }
}

Related

Optimize the rearranging of bits

I have a core C# function that I am trying to speed up. Suggestions involving safe or unsafe code are equally welcome. Here is the method:
public byte[] Interleave(uint[] vector)
{
    var byteVector = new byte[BytesNeeded + 1]; // Extra byte needed when creating a BigInteger, for sign bit.
    foreach (var idx in PrecomputedIndices)
    {
        var bit = (byte)(((vector[idx.iFromUintVector] >> idx.iFromUintBit) & 1U) << idx.iToByteBit);
        byteVector[idx.iToByteVector] |= bit;
    }
    return byteVector;
}
PrecomputedIndices is an array of the following class:
class Indices
{
    public readonly int iFromUintVector;
    public readonly int iFromUintBit;
    public readonly int iToByteVector;
    public readonly int iToByteBit;

    public Indices(int fromUintVector, int fromUintBit, int toByteVector, int toByteBit)
    {
        iFromUintVector = fromUintVector;
        iFromUintBit = fromUintBit;
        iToByteVector = toByteVector;
        iToByteBit = toByteBit;
    }
}
The purpose of the Interleave method is to copy bits from an array of uints to an array of bytes. I have pre-computed the source and target array index and the source and target bit number and stored them in the Indices objects. No two adjacent bits in the source will be adjacent in the target, so that rules out certain optimizations.
To give you an idea of scale, the problem I am working on has about 4,200 dimensions, so "vector" has 4,200 elements. The values in vector range from zero to twelve, so I only need to use four bits to store their values in the byte array, thus I need 4,200 x 4 = 16,800 bits of data, or 2,100 bytes of output per vector. This method will be called millions of times. It consumes approximately a third of the time in the larger procedure I need to optimize.
UPDATE 1: Changing "Indices" to a struct and shrinking a few of the datatypes so that the object was just eight bytes (an int, a short, and two bytes) reduced the percentage of execution time from 35% to 30%.
These are the crucial parts of my revised implementation, with ideas drawn from the commenters:
Convert object to struct, shrink data types to smaller ints, and rearrange so that the object should fit into a 64-bit value, which is better for a 64-bit machine:
struct Indices
{
    /// <summary>
    /// Index into source vector of source uint to read.
    /// </summary>
    public readonly int iFromUintVector;

    /// <summary>
    /// Index into target vector of target byte to write.
    /// </summary>
    public readonly short iToByteVector;

    /// <summary>
    /// Index into source uint of source bit to read.
    /// </summary>
    public readonly byte iFromUintBit;

    /// <summary>
    /// Index into target byte of target bit to write.
    /// </summary>
    public readonly byte iToByteBit;

    public Indices(int fromUintVector, byte fromUintBit, short toByteVector, byte toByteBit)
    {
        iFromUintVector = fromUintVector;
        iFromUintBit = fromUintBit;
        iToByteVector = toByteVector;
        iToByteBit = toByteBit;
    }
}
Sort the PrecomputedIndices so that I write each target byte and bit in ascending order, which improves memory cache access:
Comparison<Indices> sortByTargetByteAndBit = (a, b) =>
{
    if (a.iToByteVector < b.iToByteVector) return -1;
    if (a.iToByteVector > b.iToByteVector) return 1;
    if (a.iToByteBit < b.iToByteBit) return -1;
    if (a.iToByteBit > b.iToByteBit) return 1;
    return 0;
};
Array.Sort(PrecomputedIndices, sortByTargetByteAndBit);
Unroll the loop so that a whole target byte is assembled at once, reducing the number of times I access the target array:
public byte[] Interleave(uint[] vector)
{
    var byteVector = new byte[BytesNeeded + 1]; // An extra byte is needed to hold the extra bits and a sign bit for the BigInteger.
    var extraBits = Bits - (BytesNeeded << 3);
    int iIndex = 0;
    var iByte = 0;
    for (; iByte < BytesNeeded; iByte++)
    {
        // Unroll the loop so we compute the bits for a whole byte at a time.
        uint bits = 0;
        var idx0 = PrecomputedIndices[iIndex];
        var idx1 = PrecomputedIndices[iIndex + 1];
        var idx2 = PrecomputedIndices[iIndex + 2];
        var idx3 = PrecomputedIndices[iIndex + 3];
        var idx4 = PrecomputedIndices[iIndex + 4];
        var idx5 = PrecomputedIndices[iIndex + 5];
        var idx6 = PrecomputedIndices[iIndex + 6];
        var idx7 = PrecomputedIndices[iIndex + 7];
        bits = ((vector[idx0.iFromUintVector] >> idx0.iFromUintBit) & 1U)
             | (((vector[idx1.iFromUintVector] >> idx1.iFromUintBit) & 1U) << 1)
             | (((vector[idx2.iFromUintVector] >> idx2.iFromUintBit) & 1U) << 2)
             | (((vector[idx3.iFromUintVector] >> idx3.iFromUintBit) & 1U) << 3)
             | (((vector[idx4.iFromUintVector] >> idx4.iFromUintBit) & 1U) << 4)
             | (((vector[idx5.iFromUintVector] >> idx5.iFromUintBit) & 1U) << 5)
             | (((vector[idx6.iFromUintVector] >> idx6.iFromUintBit) & 1U) << 6)
             | (((vector[idx7.iFromUintVector] >> idx7.iFromUintBit) & 1U) << 7);
        byteVector[iByte] = (byte)bits;
        iIndex += 8;
    }
    for (; iIndex < PrecomputedIndices.Length; iIndex++)
    {
        var idx = PrecomputedIndices[iIndex];
        var bit = (byte)(((vector[idx.iFromUintVector] >> idx.iFromUintBit) & 1U) << idx.iToByteBit);
        byteVector[idx.iToByteVector] |= bit;
    }
    return byteVector;
}
#1 cuts the function from taking up 35% of the execution time to 30% of the execution time (14% savings).
#2 did not speed the function up, but made #3 possible.
#3 cuts the function from 30% of exec time to 19.6%, another 33% in savings.
Total savings: 44%!!!

short to byte conversion

I am trying to convert a short into two bytes to store in a byte array. Here is the snippet that's been working well "so far":
if (type == "short")
{
    size = data.size;
    databuffer[index + 1] = (byte)(data.numeric_data >> 8);
    databuffer[index] = (byte)(data.numeric_data & 255);
    return size;
}
numeric_data is an int. It all worked well until I processed the value 284 (decimal). It turns out that 284 >> 8 is 1 instead of 4.
The main goal is to have:
byte[0] = 28
byte[1] = 4
Is this what you are looking for?
static void Main(string[] args)
{
    short data = 284;
    byte[] bytes = BitConverter.GetBytes(data);
    // bytes[0] = 28
    // bytes[1] = 1
}
Just for fun:
public static byte[] ToByteArray(short s)
{
    // return, if `short` can be cast to `byte` without overflow
    if (s <= byte.MaxValue)
        return new byte[] { (byte)s };

    List<byte> bytes = new List<byte>();
    byte b = 0;
    // determine delta through the number of digits
    short delta = (short)Math.Pow(10, s.ToString().Length - 3);
    // a byte can be at most 3 digits long
    for (int i = 0; i < 3; i++)
    {
        // take the first 3 (or 2, or 1) digits from the high-order digit
        short temp = (short)(s / delta);
        if (temp > byte.MaxValue) // if it's still too big
            delta *= 10;
        else // the byte is found, break the loop
        {
            b = (byte)temp;
            break;
        }
    }
    // add the found byte
    bytes.Add(b);
    // recursively search in the rest of the number
    bytes.AddRange(ToByteArray((short)(s % delta)));
    return bytes.ToArray();
}
This recursive method does what the OP needs for any positive short value.
Why would 284 >> 8 be 4?
Why would 284 be split into two bytes equal to 28 and 4?
The binary representation of 284 is 0000 0001 0001 1100. As you can see, there are two bytes (eight bits) which are 0000 0001 (256 in decimal) and 0001 1100 (28 in decimal).
284 >> 8 is 1 (0000 0001) and it is correct.
284 should be split into two bytes equal to 256 and 28.
Your conversion is correct!
If you insist:
short val = 284;
byte a = (byte)(val / 10);
byte b = (byte)(val % 10);
Disclaimer:
This does not make much sense, but it is what you want. I assume you want values from 0 to 99. The logical thing to do would be to use 100 as the denominator and not 10. But then again, I have no idea what you want to do.
Drop the nonsense conversion you are using and go for System.BitConverter.ToInt16
//to bytes
var buffer = System.BitConverter.GetBytes((short)284); //your short value
//from bytes
var value = System.BitConverter.ToInt16(buffer, 0);

Get the sum of powers of 2 for a given number + c#

I have a table with different codes. And their Ids are powers of 2 (2^0, 2^1, 2^2, 2^3, ...).
Based on different conditions my application will assign a value to the "Status" variable.
For example:
Status = 272 (which is 2^8 + 2^4)
Status = 21 (which is 2^4 + 2^2 + 2^0)
If Status = 21 then my method (C#) should tell me that 21 is the sum of 16 + 4 + 1.
You can test each bit in the input value to see whether it is set:
int value = 21;
for (int i = 0; i < 32; i++)
{
int mask = 1 << i;
if ((value & mask) != 0)
{
Console.WriteLine(mask);
}
}
Output:
1
4
16
for (uint currentPow = 1; currentPow != 0; currentPow <<= 1)
{
    if ((currentPow & QStatus) != 0)
        Console.WriteLine(currentPow); // or save or print some other way
}
for QStatus == 21 it will give
1
4
16
Explanation:
A power of 2 has exactly one 1 in its binary representation. We take that one to be the rightmost (least significant) bit and iteratively push it leftwards (towards more significant bits) until the number overflows and becomes 0. Each time, we check that currentPow & QStatus is not 0.
This can probably be done much cleaner with an enum with the [Flags] attribute set.
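For instance, with a hypothetical [Flags] enum (the names here are invented for illustration):

```csharp
using System;

[Flags]
enum Status
{
    None = 0,
    Ready = 1,   // 2^0
    Busy = 4,    // 2^2
    Error = 16   // 2^4
}

class Demo
{
    static void Main()
    {
        // 21 = 16 + 4 + 1, so all three flags are reported.
        var status = (Status)21;
        Console.WriteLine(status); // Ready, Busy, Error
    }
}
```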
This is basically binary (because binary is also base 2). You can bitshift values around!
uint i = 87;
uint mask;
for (int j = 0; j < 8 * sizeof(uint); j++)
{
    mask = 1u << j;
    if ((i & mask) != 0)
    {
        // 2^j is a factor
    }
}
You can use bitwise operators for this (assuming that you have few enough codes that the values stay in an integer variable).
a & (a - 1) gives you back a after unsetting the last set bit. You can use that to get the value of the corresponding flag, like:
while (QStatus != 0) {
    uint nxtStatus = QStatus & (QStatus - 1);
    processFlag(QStatus ^ nxtStatus);
    QStatus = nxtStatus;
}
processFlag will be called with the set values in increasing order (e.g. 1, 4, 16 if QStatus is originally 21).

Converting a int to a BCD byte array

I want to convert an int to a byte[2] array using BCD.
The int in question will come from DateTime representing the Year and must be converted to two bytes.
Is there any pre-made function that does this or can you give me a simple way of doing this?
example:
int year = 2010
would output:
byte[2]{0x20, 0x10};
static byte[] Year2Bcd(int year) {
    if (year < 0 || year > 9999) throw new ArgumentException();
    int bcd = 0;
    for (int digit = 0; digit < 4; ++digit) {
        int nibble = year % 10;
        bcd |= nibble << (digit * 4);
        year /= 10;
    }
    return new byte[] { (byte)((bcd >> 8) & 0xff), (byte)(bcd & 0xff) };
}
Beware that you asked for a big-endian result, that's a bit unusual.
Use this method.
public static byte[] ToBcd(int value) {
    if (value < 0 || value > 99999999)
        throw new ArgumentOutOfRangeException("value");
    byte[] ret = new byte[4];
    for (int i = 0; i < 4; i++) {
        ret[i] = (byte)(value % 10);
        value /= 10;
        ret[i] |= (byte)((value % 10) << 4);
        value /= 10;
    }
    return ret;
}
This is essentially how it works.
If the value is less than 0 or greater than 99999999, the value won't fit in four bytes. More formally, if the value is less than 0 or is 10^(n*2) or greater, where n is the number of bytes, the value won't fit in n bytes.
For each byte:
Set that byte to the remainder of the value divided by 10. (This will place the last digit in the low nibble [half-byte] of the current byte.)
Divide the value by 10.
Add 16 times the remainder of the value divided by 10 to the byte. (This will place the now-last digit in the high nibble of the current byte.)
Divide the value by 10.
(One optimization is to set every byte to 0 beforehand -- which is implicitly done by .NET when it allocates a new array -- and to stop iterating when the value reaches 0. This latter optimization is not done in the code above, for simplicity. Also, if available, some compilers or assemblers offer a divide/remainder routine that allows retrieving the quotient and remainder in one division step, an optimization which is not usually necessary though.)
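As a sketch of those two optimizations combined, here is the same packing using Math.DivRem (which returns the quotient and remainder from one division) plus the early exit once the value reaches 0:

```csharp
using System;

class BcdDemo
{
    // Same packing as above: two decimal digits per byte, low digit in the
    // low nibble, stopping early once the value is exhausted.
    public static byte[] ToBcd(int value)
    {
        if (value < 0 || value > 99999999)
            throw new ArgumentOutOfRangeException(nameof(value));
        byte[] ret = new byte[4]; // zero-initialized by .NET
        for (int i = 0; i < 4 && value != 0; i++)
        {
            value = Math.DivRem(value, 10, out int low);  // low-nibble digit
            value = Math.DivRem(value, 10, out int high); // high-nibble digit
            ret[i] = (byte)((high << 4) | low);
        }
        return ret;
    }

    static void Main()
    {
        Console.WriteLine(BitConverter.ToString(ToBcd(2010))); // 10-20-00-00
    }
}
```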
Here's a terrible brute-force version. I'm sure there's a better way than this, but it ought to work anyway.
int digitOne = year / 1000;
int digitTwo = (year - digitOne * 1000) / 100;
int digitThree = (year - digitOne * 1000 - digitTwo * 100) / 10;
int digitFour = year - digitOne * 1000 - digitTwo * 100 - digitThree * 10;
byte[] bcdYear = new byte[] { (byte)(digitOne << 4 | digitTwo), (byte)(digitThree << 4 | digitFour) };
The sad part about it is that fast binary to BCD conversions are built into the x86 microprocessor architecture, if you could get at them!
Here is a slightly cleaner version than Jeffrey's:
static byte[] IntToBCD(int input)
{
    if (input > 9999 || input < 0)
        throw new ArgumentOutOfRangeException("input");
    int thousands = input / 1000;
    int hundreds = (input -= thousands * 1000) / 100;
    int tens = (input -= hundreds * 100) / 10;
    int ones = (input -= tens * 10);
    byte[] bcd = new byte[] {
        (byte)(thousands << 4 | hundreds),
        (byte)(tens << 4 | ones)
    };
    return bcd;
}
Maybe a simple parse function containing this loop:
int i = 0;
while (id > 0)
{
    int twodigits = id % 100; // need 2 digits per byte
    arr[i] = (byte)(twodigits % 10 + twodigits / 10 * 16); // first digit in the low 4 bits, second digit shifted left by 4 bits
    id /= 100;
    i++;
}
A more general solution:
private IEnumerable<Byte> GetBytes(Decimal value)
{
    Byte currentByte = 0;
    Boolean odd = true;
    while (value > 0)
    {
        if (odd)
            currentByte = 0;
        Decimal rest = value % 10;
        value = (value - rest) / 10;
        currentByte |= (Byte)(odd ? (Byte)rest : (Byte)((Byte)rest << 4));
        if (!odd)
            yield return currentByte;
        odd = !odd;
    }
    if (!odd)
        yield return currentByte;
}
The same version as Peter O.'s, but in VB.NET:
Public Shared Function ToBcd(ByVal pValue As Integer) As Byte()
    If pValue < 0 OrElse pValue > 99999999 Then Throw New ArgumentOutOfRangeException("value")
    Dim ret As Byte() = New Byte(3) {} 'All bytes are init with 0's
    For i As Integer = 0 To 3
        ret(i) = CByte(pValue Mod 10)
        pValue = Math.Floor(pValue / 10.0)
        ret(i) = ret(i) Or CByte((pValue Mod 10) << 4)
        pValue = Math.Floor(pValue / 10.0)
        If pValue = 0 Then Exit For
    Next
    Return ret
End Function
The trick here is to be aware that simply using pValue /= 10 will round the value: if, for instance, the argument is "16", the first part of the byte will be correct, but the result of the division will be 2 (as 1.6 will be rounded up). Therefore I use the Math.Floor method.
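For comparison, the C# versions above can simply use value /= 10, because C#'s / operator on integers truncates toward zero rather than rounding:

```csharp
using System;

class Demo
{
    static void Main()
    {
        int value = 16;
        // C# integer division truncates, so no Math.Floor is needed.
        Console.WriteLine(value / 10); // 1 -- truncated, not rounded to 2
    }
}
```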
I made a generic routine posted at IntToByteArray that you could use like:
var yearInBytes = ConvertBigIntToBcd(2010, 2);
static byte[] IntToBCD(int input) {
    byte[] bcd = new byte[] {
        (byte)(input >> 8),
        (byte)(input & 0x00FF)
    };
    return bcd;
}

Why do I get the following output when inverting bits in a byte?

Assumption:
Converting a byte[] from little endian to big endian means inverting the order of the bits in each byte of the byte[].
Assuming this is correct, I tried the following to understand this:
byte[] data = new byte[] { 1, 2, 3, 4, 5, 15, 24 };
byte[] inverted = ToBig(data);
var little = new BitArray(data);
var big = new BitArray(inverted);

int i = 1;
foreach (bool b in little)
{
    Console.Write(b ? "1" : "0");
    if (i == 8)
    {
        i = 0;
        Console.Write(" ");
    }
    i++;
}
Console.WriteLine();

i = 1;
foreach (bool b in big)
{
    Console.Write(b ? "1" : "0");
    if (i == 8)
    {
        i = 0;
        Console.Write(" ");
    }
    i++;
}
Console.WriteLine();

Console.WriteLine(BitConverter.ToString(data));
Console.WriteLine(BitConverter.ToString(ToBig(data)));

foreach (byte b in data)
{
    Console.Write("{0} ", b);
}
Console.WriteLine();

foreach (byte b in inverted)
{
    Console.Write("{0} ", b);
}
The convert method:
private static byte[] ToBig(byte[] data)
{
    byte[] inverted = new byte[data.Length];
    for (int i = 0; i < data.Length; i++)
    {
        var bits = new BitArray(new byte[] { data[i] });
        var invertedBits = new BitArray(bits.Count);
        int x = 0;
        for (int p = bits.Count - 1; p >= 0; p--)
        {
            invertedBits[x] = bits[p];
            x++;
        }
        invertedBits.CopyTo(inverted, i);
    }
    return inverted;
}
The output of this little application is different from what I expected:
00000001 00000010 00000011 00000100 00000101 00001111 00011000
00000001 00000010 00000011 00000100 00000101 00001111 00011000
80-40-C0-20-A0-F0-18
01-02-03-04-05-0F-18
1 2 3 4 5 15 24
1 2 3 4 5 15 24
For some reason the data remains the same, unless printed using BitConverter.
What am I not understanding?
Update
New code produces the following output:
10000000 01000000 11000000 00100000 10100000 11110000 00011000
00000001 00000010 00000011 00000100 00000101 00001111 00011000
01-02-03-04-05-0F-18
80-40-C0-20-A0-F0-18
1 2 3 4 5 15 24
128 64 192 32 160 240 24
But as I have been told now, my method is incorrect anyway because I should invert the bytes and not the bits?
This hardware developer I'm working with told me to invert the bits because he cannot read the data.
Context where I'm using this
The application that will use this does not really work with numbers.
I'm supposed to save a stream of bits to file where
1 = white and 0 = black.
They represent pixels of a bitmap 256x64.
byte 0 to byte 31 represents the first row of pixels
byte 32 to byte 63 the second row of pixels.
I have code that outputs these bits... but the developer is telling
me they are in the wrong order... He says the bytes are fine but the bits are not.
So I'm left confused :p
No. Endianness refers to the order of bytes, not bits. Big endian systems store the most-significant byte first and little-endian systems store the least-significant first. The bits within a byte remain in the same order.
Your ToBig() function is returning the original data rather than the bit-swapped data, it seems.
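To make the byte-order point concrete, here is a small sketch using BinaryPrimitives (available since .NET Core 2.1) writing the same 16-bit value in both byte orders; only the byte order changes, never the bits inside each byte:

```csharp
using System;
using System.Buffers.Binary;

class Demo
{
    static void Main()
    {
        var buffer = new byte[2];

        // Big endian: most significant byte first.
        BinaryPrimitives.WriteUInt16BigEndian(buffer, 0x011C);
        Console.WriteLine(BitConverter.ToString(buffer)); // 01-1C

        // Little endian: least significant byte first.
        BinaryPrimitives.WriteUInt16LittleEndian(buffer, 0x011C);
        Console.WriteLine(BitConverter.ToString(buffer)); // 1C-01
    }
}
```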
Your method may be correct at this point. There are different meanings of endianness, and it depends on the hardware.
Typically, it's used for converting between computing platforms. Most CPU vendors (now) use the same bit ordering, but different byte ordering, for different chipsets. This means, that, if you are passing a 2-byte int from one system to another, you leave the bits alone, but swap bytes 1 and 2, ie:
int somenumber -> byte[2]: somenumber[high],somenumber[low] ->
byte[2]: somenumber[low],somenumber[high] -> int newNumber
However, this isn't always true. Some hardware still uses inverted BIT ordering, so what you have may be correct. You'll need to either trust your hardware dev. or look into it further.
I recommend reading up on this on Wikipedia - always a great source of info:
http://en.wikipedia.org/wiki/Endianness
Your ToBig method has a bug.
At the end:
invertedBits.CopyTo(data, i);
}
return data;
You need to change that to:
byte[] newData = new byte[data.Length];
invertedBits.CopyTo(newData, i);
}
return newData;
You're resetting your input data, so you're receiving both arrays inverted. The problem is that arrays are reference types, so you can modify the original data.
As greyfade already said, endianness is not about bit ordering.
The reason that your code doesn't do what you expect, is that the ToBig method changes the array that you send to it. That means that after calling the method the array is inverted, and data and inverted are just two references pointing to the same array.
Here's a corrected version of the method.
private static byte[] ToBig(byte[] data) {
    byte[] result = new byte[data.Length];
    for (int i = 0; i < data.Length; i++) {
        var bits = new BitArray(new byte[] { data[i] });
        var invertedBits = new BitArray(bits.Count);
        int x = 0;
        for (int p = bits.Count - 1; p >= 0; p--) {
            invertedBits[x] = bits[p];
            x++;
        }
        invertedBits.CopyTo(result, i);
    }
    return result;
}
Edit:
Here's a method that changes endianness for a byte array:
static byte[] ConvertEndianness(byte[] data, int wordSize) {
    if (data.Length % wordSize != 0)
        throw new ArgumentException("The data length does not divide into an even number of words.");
    byte[] result = new byte[data.Length];
    int offset = wordSize - 1;
    for (int i = 0; i < data.Length; i++) {
        result[i + offset] = data[i];
        offset -= 2;
        if (offset < -wordSize) {
            offset += wordSize * 2;
        }
    }
    return result;
}
Example:
byte[] data = { 1,2,3,4,5,6 };
byte[] inverted = ConvertEndianness(data, 2);
Console.WriteLine(BitConverter.ToString(inverted));
Output:
02-01-04-03-06-05
The second parameter is the word size. As endianness is the ordering of bytes in a word, you have to specify how large the words are.
Edit 2:
Here is a more efficient method for reversing the bits:
static byte[] ReverseBits(byte[] data) {
    byte[] result = new byte[data.Length];
    for (int i = 0; i < data.Length; i++) {
        int b = data[i];
        int r = 0;
        for (int j = 0; j < 8; j++) {
            r <<= 1;
            r |= b & 1;
            b >>= 1;
        }
        result[i] = (byte)r;
    }
    return result;
}
One big problem I see is ToBig changes the contents of the data[] array that is passed to it.
You're calling ToBig on an array named data, then assigning the result to inverted, but since you didn't create a new array inside ToBig, you modified both arrays, then you proceed to treat the arrays data and inverted as different when in reality they are not.
