Jagged array versus one big array? - c#

I'm not sure how best to ask this question, but I have two ways (so far) to build a lookup array.
Option 1 is:
bool[][][] myJaggedArray;
myJaggedArray = new bool[120][][];
for (int i = 0; i < 120; ++i)
{
    if ((i & 0x88) == 0)
    {
        // only 64 of the 120 slots are valid 0x88 squares
        myJaggedArray[i] = new bool[120][];
        for (int j = 0; j < 120; ++j)
        {
            if ((j & 0x88) == 0)
            {
                // only 64 will be set
                myJaggedArray[i][j] = new bool[60];
            }
        }
    }
}
Option 2 is:
bool[] myArray;
// length = (120 | (120 << 7) | (60 << 14)) = 998520
myArray = new bool[120 | (120 << 7) | (60 << 14)];
Both ways work nicely, but is there another (better) way of doing a fast lookup, and which one would you take if speed/performance is what matters?
This would be used in a chessboard implementation (0x88), and the lookup is mostly:
[from][to][dataX] for option 1
[(from | (to << 7) | (dataX << 14))] for option 2

I would suggest using one large array, because of the advantages of having one large memory block, but I would also encourage writing a special accessor to that array.
class MyCustomDataStore
{
    bool[] array;
    int sizex, sizey, sizez;

    MyCustomDataStore(int x, int y, int z)
    {
        array = new bool[x * y * z];
        this.sizex = x;
        this.sizey = y;
        this.sizez = z;
    }

    bool get(int px, int py, int pz)
    {
        // change the order in whatever way you iterate;
        // for an [x][y][z] layout the strides are sizey*sizez and sizez
        return array[px * sizey * sizez + py * sizez + pz];
    }
}
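As a worked check of the index arithmetic with the question's dimensions (x = 120, y = 120, z = 60): get(from, to, dataX) reads array[from * 120 * 60 + to * 60 + dataX], so from = 2, to = 3, dataX = 4 lands at 2 * 7200 + 3 * 60 + 4 = 14584.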

I just updated dariusz's solution to use an array of longs for z-size <= 64.
Edit 2: updated to the '<<' version, size fixed to 128x128x64.
class MyCustomDataStore
{
    long[] array;

    public MyCustomDataStore()
    {
        // 128 x 128 positions, one 64-bit word per position
        array = new long[128 | (128 << 7)];
    }

    public bool get(int px, int py, int pz)
    {
        // 1L so the shift works for pz up to 63; != 0 means the bit is set
        return (array[px | (py << 7)] & (1L << pz)) != 0;
    }

    public void set(int px, int py, int pz, bool val)
    {
        long mask = 1L << pz;
        int index = px | (py << 7);
        if (val)
        {
            array[index] |= mask;
        }
        else
        {
            array[index] &= ~mask;
        }
    }
}
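Usage mirrors the bool version, except the third coordinate must stay below 64 because it selects a single bit inside a long; a quick sketch (the 0x34/0x12/5 values are just example inputs):

var store = new MyCustomDataStore();   // fixed 128 x 128 x 64
store.set(0x34, 0x12, 5, true);        // e.g. from = 0x34, to = 0x12, dataX = 5
bool hit = store.get(0x34, 0x12, 5);   // true
store.set(0x34, 0x12, 5, false);       // clears the bit again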
Edit: performance test: 100 runs of a full 128x128x64 fill and read.
long: 9885 ms, 132,096 B
bool: 9740 ms, 1,065,088 B

Related

C# signed fixed point to floating point conversion

I have a temperature sensor returning 2 bytes.
The temperature format is a sign bit followed by 15 magnitude bits weighted from 2^6 down to 2^-8 (see the example below).
What is the best way in C# to convert these 2 bytes to a float?
My solution is the following, but I don't like the powers of 2 and the for loop:
static void Main(string[] args)
{
    byte[] sensorData = new byte[] { 0b11000010, 0b10000001 }; // (-1) * (2^(6) + 2^(1) + 2^(-1) + 2^(-8)) = -66.50390625
    Console.WriteLine(ByteArrayToTemp(sensorData));
}

static double ByteArrayToTemp(byte[] data)
{
    // Convert byte array to short to be able to shift it
    if (BitConverter.IsLittleEndian)
        Array.Reverse(data);
    Int16 dataInt16 = BitConverter.ToInt16(data, 0);
    double temp = 0;
    for (int i = 0; i < 15; i++)
    {
        // We take the LSB of the data and multiply it by the corresponding power of two (from -8 to 6),
        // then we shift the data for the next loop
        temp += (dataInt16 & 0x01) * Math.Pow(2, -8 + i);
        dataInt16 >>= 1;
    }
    if ((dataInt16 & 0x01) == 1) temp *= -1; // Sign bit
    return temp;
}
This might be slightly more efficient, but I can't see it making much difference:
static double ByteArrayToTemp(byte[] data)
{
    if (BitConverter.IsLittleEndian)
        Array.Reverse(data);
    ushort bits = BitConverter.ToUInt16(data, 0);
    double scale = 1 << 6;
    double result = 0;
    for (int i = 0, bit = 1 << 14; i < 15; ++i, bit >>= 1, scale /= 2)
    {
        if ((bits & bit) != 0)
            result += scale;
    }
    if ((bits & 0x8000) != 0)
        result = -result;
    return result;
}
You're not going to be able to avoid a loop if you calculate this bit by bit.
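That said, if the format really is sign-magnitude with 8 fractional bits, as the example comment suggests, the loop collapses into a mask and one division; a minimal sketch under that assumption (the method name is mine):

static double ByteArrayToTempNoLoop(byte[] data)
{
    // Assumes data[0] is the high byte, as in the question's example.
    ushort raw = (ushort)((data[0] << 8) | data[1]);
    double magnitude = (raw & 0x7FFF) / 256.0; // 15 magnitude bits, 8 of them fractional
    return (raw & 0x8000) != 0 ? -magnitude : magnitude;
}

For the sample input { 0b11000010, 0b10000001 } this also yields -66.50390625.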

Method that returns uint16_t array properly in C++ [duplicate]

This question already has answers here: How to return an array from a function? (5 answers)
Closed 3 years ago.
I have C# code that returns a uint array, but I want to do the same in C++. I looked at other posts, but they use a uint pointer array, which my array is not. Does anyone know how to return a uint16_t array properly?
This is the C# code, which works fine:
public static UInt16[] GetIntArrayFromByteArray(byte[] byteArray)
{
    if ((byteArray.Length % 2) == 1)
        Array.Resize(ref byteArray, byteArray.Length + 1);
    UInt16[] intArray = new UInt16[byteArray.Length / 2];
    for (int i = 0; i < byteArray.Length; i += 2)
        intArray[i / 2] = (UInt16)((byteArray[i] << 8) | byteArray[i + 1]);
    return intArray;
}
This is the C++ code, which produces a syntax error:
uint16_t[] GetIntArrayFromByteArray(byte[] byteArray)
{
    //if ((byteArray.Length % 2) == 1)
    //    Array.Resize(ref byteArray, byteArray.Length + 1);
    uint16_t[] intArray = new uint16_t[10];
    for (int i = 0; i < 10; i += 2)
        intArray[i / 2] = (uint16_t)((byteArray[i] << 8) | byteArray[i + 1]);
    return intArray;
}
Do not use Type[] ever. Use std::vector:
std::vector<uint16_t> GetIntArrayFromByteArray(std::vector<byte> byteArray)
{
    // If the number of bytes is not even, put a zero at the end
    if ((byteArray.size() % 2) == 1)
        byteArray.push_back(0);

    std::vector<uint16_t> intArray;
    for (int i = 0; i < byteArray.size(); i += 2)
        intArray.push_back((uint16_t)((byteArray[i] << 8) | byteArray[i + 1]));
    return intArray;
}
You can also use std::array<Type, Size> if the array would be fixed size.
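For illustration, a fixed-size std::array version might look like this (a sketch; the 8-byte/4-value sizes are just example dimensions, not something from the question):

#include <array>
#include <cstdint>

using byte = unsigned char;

// Example: exactly 8 input bytes always produce exactly 4 uint16_t values.
std::array<uint16_t, 4> GetIntArrayFromBytes(const std::array<byte, 8>& byteArray)
{
    std::array<uint16_t, 4> intArray{};
    for (std::size_t i = 0; i < byteArray.size(); i += 2)
        intArray[i / 2] = static_cast<uint16_t>((byteArray[i] << 8) | byteArray[i + 1]);
    return intArray;
}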
More optimal version (thanks to #Aconcagua) (demo)
Here is the full code with a more optimized version that doesn't copy or alter the input. This is better if you have long input arrays. It's possible to write it shorter, but I wanted to keep it verbose and beginner-friendly.
#include <cstdint>
#include <iostream>
#include <vector>

using byte = unsigned char;

std::vector<uint16_t> GetIntArrayFromByteArray(const std::vector<byte>& byteArray)
{
    const int inputSize = byteArray.size();
    const bool inputIsOddCount = inputSize % 2 != 0;
    const int finalSize = (int)(inputSize / 2.0 + 0.5);
    // Ignore the last odd item in the loop and handle it later
    const int loopLength = inputIsOddCount ? inputSize - 1 : inputSize;

    std::vector<uint16_t> intArray;
    // Reserve space for all items
    intArray.reserve(finalSize);
    for (int i = 0; i < loopLength; i += 2)
    {
        intArray.push_back((uint16_t)((byteArray[i] << 8) | byteArray[i + 1]));
    }
    // If the input was odd-count, we still have one byte to add, along with a zero
    if (inputIsOddCount)
    {
        // The zero in this expression is redundant but illustrative
        intArray.push_back((uint16_t)((byteArray[inputSize - 1] << 8) | 0));
    }
    return intArray;
}

int main() {
    const std::vector<byte> numbers{ 2, 0, 0, 0, 1, 0, 0, 1 };
    const std::vector<uint16_t> result(GetIntArrayFromByteArray(numbers));
    for (uint16_t num : result) {
        std::cout << num << "\n";
    }
    return 0;
}

Optimize bit reader for ReadInt on datastream

Could anyone help me optimize this piece of code? It's currently a big bottleneck, as it gets called very often. Even a 25% speed improvement would be significant.
public int ReadInt(int length)
{
    if (Position + length > Length)
        throw new BitBufferException("Not enough bits remaining.");

    int result = 0;
    while (length > 0)
    {
        int off = Position & 7;
        int count = 8 - off;
        if (count > length)
            count = length;
        int mask = (1 << count) - 1;
        int bits = (Data[Position >> 3] >> off);
        result |= (bits & mask) << (length - count);
        length -= count;
        Position += count;
    }
    return result;
}
Best answer goes to the fastest solution. Benchmarks were done with dotTrace. Currently this block of code takes up about 15% of the total CPU time. Lowest number wins best answer.
EDIT: Sample usage:
public class Auth : Packet
{
    int Field0;
    int ProtocolHash;
    int Field1;

    public override void Parse(buffer)
    {
        Field0 = buffer.ReadInt(9);
        ProtocolHash = buffer.ReadInt(32);
        Field1 = buffer.ReadInt(8);
    }
}
The size of Data is variable, but in most cases it is 512 bytes.
How about using pointers and an unsafe context? You didn't say anything about your input data, method context, etc., so I tried to deduce all of these myself.
public class BitTest
{
    private int[] _data;

    public BitTest(int[] data)
    {
        Length = data.Length * 4 * 8;
        // +2, because we use byte* and long* later
        // and don't want to read outside the array memory
        _data = new int[data.Length + 2];
        Array.Copy(data, _data, data.Length);
    }

    public int Position { get; private set; }
    public int Length { get; private set; }
And the ReadInt method. I hope the comments shed some light on the solution:
    public unsafe int ReadInt(int length)
    {
        if (Position + length > Length)
            throw new ArgumentException("Not enough bits remaining.");
        // method returns int, so getting more than 32 bits is pointless
        if (length > 4 * 8)
            throw new ArgumentException();

        int bytePosition = Position / 8;
        int bitPosition = Position % 8;
        Position += length;

        // get int* on the array to start with
        fixed (int* array = _data)
        {
            // change pointer to byte*
            byte* bt = (byte*)array;
            // skip already read bytes and change pointer type to long*
            long* ptr = (long*)(bt + bytePosition);
            // read value from current pointer position
            long value = *ptr;
            // take only necessary bits
            value &= (1L << (length + bitPosition)) - 1;
            value >>= bitPosition;
            // cast value to int before returning
            return (int)value;
        }
    }
}
I didn't test the method, but I would bet it's much faster than your approach.
My simple test code:
var data = new[] { 1 | (1 << 8 + 1) | (1 << 16 + 2) | (1 << 24 + 3) };
var test = new BitTest(data);
var bytes = Enumerable.Range(0, 4)
    .Select(x => test.ReadInt(8))
    .ToArray();
bytes contains { 1, 2, 4, 8 }, as expected.
I don't know if this gives you a significant improvement, but it should give you some numbers.
Instead of creating new int variables inside the loop (which takes time), reserve those variables before entering the loop.
public int ReadInt(int length)
{
    if (Position + length > Length)
        throw new BitBufferException("Not enough bits remaining.");

    int result = 0;
    int off = 0;
    int count = 0;
    int mask = 0;
    int bits = 0;
    while (length > 0)
    {
        off = Position & 7;
        count = 8 - off;
        if (count > length)
            count = length;
        mask = (1 << count) - 1;
        bits = (Data[Position >> 3] >> off);
        result |= (bits & mask) << (length - count);
        length -= count;
        Position += count;
    }
    return result;
}
I hope this increases your performance, even if only a bit.

Split 8 bit byte

Say I have an array of bytes that is 16 long, with each full byte representing my data, and an array that is 8 long, with each 4 bits (so 2 per byte) representing my data.
If I wanted to loop over these and get the values, what would be the easiest way of doing so?
My poor attempt is something like this, but it doesn't appear to work as I expect:
for (int i = 0; i < bigByteArray.Length; i++)
{
    byte BigByteInfo = bigByteArray[i];
    byte SmallByteInfo;
    if (i % 2 == 0)
    {
        SmallByteInfo = (byte)(smallByteArray[i / 2] % 16);
    }
    else
    {
        SmallByteInfo = (byte)(smallByteArray[i / 2] / 16);
    }
    // Use of data here.
}
You can use this class as a helper class:
using System.Collections;
using System.Collections.Generic;

public class FourBitsArrayEnumerator : IEnumerable<byte>
{
    readonly byte[] array;

    public FourBitsArrayEnumerator(byte[] array)
    {
        this.array = array;
    }

    public IEnumerator<byte> GetEnumerator()
    {
        foreach (byte i in array)
        {
            yield return (byte)(i & 15);        // low nibble
            yield return (byte)((i >> 4) & 15); // high nibble
        }
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}
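Usage might then look like this (a small sketch, assuming the corrected class above):

byte[] smallByteArray = { 0x21, 0x43, 0x65, 0x87 };

// Prints 1, 2, 3, 4, 5, 6, 7, 8 (low nibble of each byte first).
foreach (byte value in new FourBitsArrayEnumerator(smallByteArray))
{
    Console.WriteLine(value);
}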
If I understand right (bigByteArray is 16 long, smallByteArray is 8 long, packed):
for (int i = 0; i < bigByteArray.Length; i++)
{
    bigByteArray[i] = (byte)((smallByteArray[i / 2] >> (i % 2 == 0 ? 4 : 0)) & 0xF);
}

Converting a range into a bit array

I'm writing a time-critical piece of code in C# that requires me to convert two unsigned integers that define an inclusive range into a bit field. Ex:
uint x1 = 3;
uint x2 = 9;
//defines the range [3-9]
//                              98 7654 3
// must be converted to: 0000 0011 1111 1000
It may help to visualize the bits in reverse order
The maximum value for this range is a parameter given at run-time which we'll call max_val. Therefore, the bit field variable ought to be defined as a UInt32 array with size equal to max_val/32:
UInt32 MAX_DIV_32 = max_val / 32;
UInt32[] bitArray = new UInt32[MAX_DIV_32];
Given a range defined by the variables x1 and x2, what is the fastest way to perform this conversion?
Try this. Calculate the range of array items that must be filled with all ones and do this by iterating over this range. Finally set the items at both borders.
Int32 startIndex = (Int32)(x1 >> 5);
Int32 endIndex = (Int32)(x2 >> 5);

bitArray[startIndex] = UInt32.MaxValue << (Int32)(x1 & 31);
for (Int32 i = startIndex + 1; i <= endIndex; i++)
{
    bitArray[i] = UInt32.MaxValue;
}
bitArray[endIndex] &= UInt32.MaxValue >> (31 - (Int32)(x2 & 31));
Maybe the code is not 100% correct, but the idea should work.
I just tested it and found three bugs: the calculation at the start index required a mod 32, the 32 at the end index must be 31, and the last line needs a logical AND instead of an assignment to handle the case of the start and end index being the same. It should be quite fast.
I just benchmarked it with x1 and x2 distributed equally over the array.
Intel Core 2 Duo E8400 3.0 GHz, MS VirtualPC with Server 2003 R2 on a Windows XP host.

Array length [bits]          320         160         64
Performance [executions/s]   33 million  43 million  54 million

One more optimization: x % 32 == x & 31, but I am unable to measure a performance gain. Because of only 10,000,000 iterations in my test, the fluctuations are quite high, and I am running in VirtualPC, making the situation even more unpredictable.
My solution for setting a whole range of bits in a BitArray to true or false:
public static BitArray SetRange(BitArray bitArray, Int32 offset, Int32 length, Boolean value)
{
    Int32[] ints = new Int32[(bitArray.Count >> 5) + 1];
    bitArray.CopyTo(ints, 0);

    var firstInt = offset >> 5;
    var lastInt = (offset + length) >> 5;

    Int32 mask = 0;
    if (value)
    {
        // set first and last int
        mask = (-1 << (offset & 31));
        if (lastInt != firstInt)
            ints[lastInt] |= ~(-1 << ((offset + length) & 31));
        else
            mask &= ~(-1 << ((offset + length) & 31));
        ints[firstInt] |= mask;

        // set all ints in between
        for (Int32 i = firstInt + 1; i < lastInt; i++)
            ints[i] = -1;
    }
    else
    {
        // set first and last int
        mask = ~(-1 << (offset & 31));
        if (lastInt != firstInt)
            ints[lastInt] &= -1 << ((offset + length) & 31);
        else
            mask |= -1 << ((offset + length) & 31);
        ints[firstInt] &= mask;

        // set all ints in between
        for (Int32 i = firstInt + 1; i < lastInt; i++)
            ints[i] = 0;
    }

    return new BitArray(ints) { Length = bitArray.Length };
}
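A quick usage sketch for the question's example (offset 3 and length 7 cover the inclusive range [3-9]; requires using System.Collections):

var bits = new BitArray(16);        // all false
bits = SetRange(bits, 3, 7, true);  // sets bits 3..9
// reading from bit 15 down to bit 0, bits is now 0000 0011 1111 1000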
You could try:
UInt32 x1 = 3;
UInt32 x2 = 9;

UInt32 newInteger = (UInt32)(Math.Pow(2, x2 + 1) - 1) & ~(UInt32)(Math.Pow(2, x1) - 1);
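The same expression with shifts instead of Math.Pow (a sketch; it assumes x2 < 31 so the left shift doesn't overflow):

UInt32 x1 = 3;
UInt32 x2 = 9;

// (1 << (x2 + 1)) - 1 has bits 0..x2 set; clearing the bits below x1 leaves bits x1..x2.
UInt32 newInteger = ((1u << (int)(x2 + 1)) - 1) & ~((1u << (int)x1) - 1);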
Is there a reason not to use the System.Collections.BitArray class instead of a UInt32[]? Otherwise, I'd try something like this:
int minIndex = (int)x1 / 32;
int maxIndex = (int)x2 / 32;

// first handle the all-zero regions and the all-one region (if any)
for (int i = 0; i < minIndex; i++) {
    bitArray[i] = 0;
}
for (int i = minIndex + 1; i < maxIndex; i++) {
    bitArray[i] = UInt32.MaxValue; // set to all 1s
}
for (int i = maxIndex + 1; i < MAX_DIV_32; i++) {
    bitArray[i] = 0;
}

// now handle the tricky parts
uint maxBits = (2u << ((int)x2 - 32 * maxIndex)) - 1;    // set to 1s up to max
uint minBits = ~((1u << ((int)x1 - 32 * minIndex)) - 1); // set to 1s after min

if (minIndex == maxIndex) {
    bitArray[minIndex] = maxBits & minBits;
}
else {
    bitArray[minIndex] = minBits;
    bitArray[maxIndex] = maxBits;
}
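As a worked check with the question's values x1 = 3, x2 = 9 (so minIndex = maxIndex = 0): maxBits = (2 << 9) - 1 = 1023 sets bits 0-9, minBits = ~((1 << 3) - 1) sets bits 3-31, and their AND sets exactly bits 3-9, i.e. 0000 0011 1111 1000, matching the expected bit field.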
I was bored enough to try doing it with a char array and using Convert.ToUInt32(string, int) to convert to a uint from base 2.
uint Range(int l, int h)
{
    // note: this treats h as an exclusive upper bound (bits l..h-1 are set);
    // pass h + 1 if the upper bound should be inclusive
    char[] buffer = new char[h];
    for (int i = 0; i < buffer.Length; i++)
    {
        buffer[i] = i < h - l ? '1' : '0';
    }
    return Convert.ToUInt32(new string(buffer), 2);
}
A simple benchmark shows that my method is about 5% faster than Angrey Jim's (even if you replace the second Pow with a bit shift).
It is probably easiest to convert it to produce a uint array if the upper bound is too big to fit into a single uint. It's a little cryptic, but I believe it works.
uint[] Range(int l, int h)
{
    char[] buffer = new char[h];
    for (int i = 0; i < buffer.Length; i++)
    {
        buffer[i] = i < h - l ? '1' : '0';
    }

    int bitsInUInt = sizeof(uint) * 8;
    int numNeededUInts = (int)Math.Ceiling((decimal)buffer.Length /
                                           (decimal)bitsInUInt);
    uint[] uints = new uint[numNeededUInts];

    for (int j = uints.Length - 1, s = buffer.Length - bitsInUInt;
         j >= 0 && s >= 0;
         j--, s -= bitsInUInt)
    {
        uints[j] = Convert.ToUInt32(new string(buffer, s, bitsInUInt), 2);
    }

    int remainder = buffer.Length % bitsInUInt;
    if (remainder > 0)
    {
        uints[0] = Convert.ToUInt32(new string(buffer, 0, remainder), 2);
    }
    return uints;
}
Try this:
uint x1 = 3;
uint x2 = 9;

int cbToShift = (int)(x2 - x1) + 1; // 7 bits for the inclusive range [3-9]
uint nResult = ((1u << cbToShift) - 1) << (int)x1;
/*
(1 << 7) - 1 gives you 127 = 1111111, then you shift it 3 bits left
*/
