De/Interleave array fast in C#

I am looking for the fastest way to de/interleave a buffer.
To be more specific, I am dealing with audio data, so I am trying to optimize the time I spend on splitting/combining channels and FFT buffers.
Currently I am using a for loop with two index variables per array, so only addition operations, but the managed array bounds checks will never compete with a C pointer method.
I like the Buffer.BlockCopy and Array.Copy methods, which cut a lot of time when I process channels, but there is no way for an array to have a custom indexer.
I was trying to find a way to make an array mask, where it would be a fake array with a custom indexer, but that proved to be twice as slow when used in my FFT operation. I guess there are a lot of optimization tricks the compiler can pull when accessing an array directly that it cannot apply when going through a class indexer.
I do not want an unsafe solution, although from the looks of it, that might be the only way to optimize this type of operation.
Thanks.
Here is the type of thing I'm doing right now:
private float[][] DeInterleave(float[] buffer, int channels)
{
    float[][] tempbuf = new float[channels][];
    int length = buffer.Length / channels;
    for (int c = 0; c < channels; c++)
    {
        tempbuf[c] = new float[length];
        for (int i = 0, offset = c; i < tempbuf[c].Length; i++, offset += channels)
            tempbuf[c][i] = buffer[offset];
    }
    return tempbuf;
}

I ran some tests and here is the code I tested:
delegate(float[] inout)
{   // My Original Code
    float[][] tempbuf = new float[2][];
    int length = inout.Length / 2;
    for (int c = 0; c < 2; c++)
    {
        tempbuf[c] = new float[length];
        for (int i = 0, offset = c; i < tempbuf[c].Length; i++, offset += 2)
            tempbuf[c][i] = inout[offset];
    }
}
delegate(float[] inout)
{   // jerryjvl's recommendation: loop unrolling
    float[][] tempbuf = new float[2][];
    int length = inout.Length / 2;
    for (int c = 0; c < 2; c++)
        tempbuf[c] = new float[length];
    for (int ix = 0, i = 0; ix < length; ix++)
    {
        tempbuf[0][ix] = inout[i++];
        tempbuf[1][ix] = inout[i++];
    }
}
delegate(float[] inout)
{   // Unsafe Code
    unsafe
    {
        float[][] tempbuf = new float[2][];
        int length = inout.Length / 2;
        fixed (float* buffer = inout)
            for (int c = 0; c < 2; c++)
            {
                tempbuf[c] = new float[length];
                float* offset = buffer + c;
                fixed (float* buffer2 = tempbuf[c])
                {
                    float* p = buffer2;
                    for (int i = 0; i < length; i++, offset += 2)
                        *p++ = *offset;
                }
            }
    }
}
delegate(float[] inout)
{   // Modifying my original code to see if the compiler is not as smart as I think it is.
    float[][] tempbuf = new float[2][];
    int length = inout.Length / 2;
    for (int c = 0; c < 2; c++)
    {
        float[] buf = tempbuf[c] = new float[length];
        for (int i = 0, offset = c; i < buf.Length; i++, offset += 2)
            buf[i] = inout[offset];
    }
}
And the results (buffer size = 2^17, 200 timed iterations per test):
Average for test #1: 0.001286 seconds +/- 0.000026
Average for test #2: 0.001193 seconds +/- 0.000025
Average for test #3: 0.000686 seconds +/- 0.000009
Average for test #4: 0.000847 seconds +/- 0.000008
Average for test #1: 0.001210 seconds +/- 0.000012
Average for test #2: 0.001048 seconds +/- 0.000012
Average for test #3: 0.000690 seconds +/- 0.000009
Average for test #4: 0.000883 seconds +/- 0.000011
Average for test #1: 0.001209 seconds +/- 0.000015
Average for test #2: 0.001060 seconds +/- 0.000013
Average for test #3: 0.000695 seconds +/- 0.000010
Average for test #4: 0.000861 seconds +/- 0.000009
I got similar results on every test. Obviously the unsafe code is the fastest, but I was surprised to see that the CLR couldn't figure out that it can drop the index checks when dealing with a jagged array. Maybe someone can think of more ways to optimize my tests.
Edit:
I tried loop unrolling with the unsafe code and it didn't have an effect.
I also tried optimizing the loop unrolling method:
delegate(float[] inout)
{
    float[][] tempbuf = new float[2][];
    int length = inout.Length / 2;
    float[] tempbuf0 = tempbuf[0] = new float[length];
    float[] tempbuf1 = tempbuf[1] = new float[length];
    for (int ix = 0, i = 0; ix < length; ix++)
    {
        tempbuf0[ix] = inout[i++];
        tempbuf1[ix] = inout[i++];
    }
}
The results are hit-or-miss compared to test #4, within about a 1% difference. Test #4 is my best way to go so far.
As I told jerryjvl, the problem is getting the JIT to skip the index check on the input buffer, since adding a second check (&& offset < inout.Length) will slow it down...
Edit 2:
I ran the earlier tests inside the IDE, so here are the results from running outside it:
2^17 items, repeated 200 times
******************************************
Average for test #1: 0.000533 seconds +/- 0.000017
Average for test #2: 0.000527 seconds +/- 0.000016
Average for test #3: 0.000407 seconds +/- 0.000008
Average for test #4: 0.000374 seconds +/- 0.000008
Average for test #5: 0.000424 seconds +/- 0.000009
2^17 items, repeated 200 times
******************************************
Average for test #1: 0.000547 seconds +/- 0.000016
Average for test #2: 0.000732 seconds +/- 0.000020
Average for test #3: 0.000423 seconds +/- 0.000009
Average for test #4: 0.000360 seconds +/- 0.000008
Average for test #5: 0.000406 seconds +/- 0.000008
2^18 items, repeated 200 times
******************************************
Average for test #1: 0.001295 seconds +/- 0.000036
Average for test #2: 0.001283 seconds +/- 0.000020
Average for test #3: 0.001085 seconds +/- 0.000027
Average for test #4: 0.001035 seconds +/- 0.000025
Average for test #5: 0.001130 seconds +/- 0.000025
2^18 items, repeated 200 times
******************************************
Average for test #1: 0.001234 seconds +/- 0.000026
Average for test #2: 0.001319 seconds +/- 0.000023
Average for test #3: 0.001309 seconds +/- 0.000025
Average for test #4: 0.001191 seconds +/- 0.000026
Average for test #5: 0.001196 seconds +/- 0.000022
Test#1 = My Original Code
Test#2 = Optimized safe loop unrolling
Test#3 = Unsafe code - loop unrolling
Test#4 = Unsafe code
Test#5 = My Optimized Code
Looks like loop unrolling is not favorable. My optimized code is still my best way to go, and with only a 10% difference compared to the unsafe code. If only I could tell the compiler that (i < buf.Length) implies (offset < inout.Length), it would drop the check on (inout[offset]) and I would basically get the unsafe performance.
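(A later thought: on runtimes that have Span&lt;T&gt;, the same loop can be written against spans. This is an untested sketch; whether the JIT actually drops the check on the source access depends on the runtime version.)

private static float[][] DeInterleaveSpan(ReadOnlySpan<float> buffer, int channels)
{
    int length = buffer.Length / channels;
    float[][] tempbuf = new float[channels][];
    for (int c = 0; c < channels; c++)
    {
        float[] buf = tempbuf[c] = new float[length];
        // (i < buf.Length) drives the loop, so the writes to buf are check-free;
        // the read buffer[offset] may or may not keep its bounds check
        for (int i = 0, offset = c; i < buf.Length; i++, offset += channels)
            buf[i] = buffer[offset];
    }
    return tempbuf;
}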

As there's no built-in function to do that, using array indexes is the fastest approach available. Indexers and solutions like that only make things worse by introducing method calls and preventing the JIT optimizer from eliminating bounds checks.
Anyway, I think your current method is the fastest non-unsafe solution you could be using. If performance really matters to you (and it usually does in signal-processing applications), you can do the whole thing in unsafe C# (which is fast enough, probably comparable with C) and wrap it in a method that you call from your safe code.
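For example, the wrapping pattern might look like this (a sketch with assumed names, not code from the question):

// Safe public surface; all the pointer work stays in the private core.
public static float[][] DeInterleave(float[] buffer, int channels)
{
    int length = buffer.Length / channels;
    float[][] result = new float[channels][];
    for (int c = 0; c < channels; c++)
        result[c] = new float[length];
    DeInterleaveCore(buffer, result, channels, length);
    return result;
}

private static unsafe void DeInterleaveCore(float[] src, float[][] dst, int channels, int length)
{
    fixed (float* pSrc = src)
        for (int c = 0; c < channels; c++)
            fixed (float* pDst = dst[c])
            {
                float* s = pSrc + c;
                for (int i = 0; i < length; i++, s += channels)
                    pDst[i] = *s;
            }
}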

It's not going to get you a major performance boost (I roughly measured 20% on my machine), but you could consider some loop unrolling for common cases. If most of the time you have a relatively limited number of channels:
static private float[][] Alternative(float[] buffer, int channels)
{
    float[][] result = new float[channels][];
    int length = buffer.Length / channels;
    for (int c = 0; c < channels; c++)
        result[c] = new float[length];

    int i = 0;
    if (channels == 8)
    {
        for (int ix = 0; ix < length; ix++)
        {
            result[0][ix] = buffer[i++];
            result[1][ix] = buffer[i++];
            result[2][ix] = buffer[i++];
            result[3][ix] = buffer[i++];
            result[4][ix] = buffer[i++];
            result[5][ix] = buffer[i++];
            result[6][ix] = buffer[i++];
            result[7][ix] = buffer[i++];
        }
    }
    else
        for (int ix = 0; ix < length; ix++)
            for (int ch = 0; ch < channels; ch++)
                result[ch][ix] = buffer[i++];

    return result;
}
As long as you leave the general fallback variant there it'll handle any number of channels, but you'll get a speed boost if it is one of the unrolled variants.

Maybe some unrolling in your own best answer:
delegate(float[] inout)
{
    unsafe
    {
        float[][] tempbuf = new float[2][];
        int length = inout.Length / 2;
        fixed (float* buffer = inout)
        {
            float* pbuffer = buffer;
            tempbuf[0] = new float[length];
            tempbuf[1] = new float[length];
            fixed (float* buffer0 = tempbuf[0])
            fixed (float* buffer1 = tempbuf[1])
            {
                float* pbuffer0 = buffer0;
                float* pbuffer1 = buffer1;
                for (int i = 0; i < length; i++)
                {
                    *pbuffer0++ = *pbuffer++;
                    *pbuffer1++ = *pbuffer++;
                }
            }
        }
    }
}
This might get a little more performance still.

I think a lot of readers would question why you don't want an unsafe solution for something like audio processing. It's the type of thing that begs for hot-blooded optimization, and I would personally be unhappy knowing it's being forced through a VM.

Related

Time to get an item from an array of BitArrays

I have a problem that I don't understand with this code:
ilProbekUcz = valuesUcz.Count; // valuesUcz is the List<float[]>
for (int i = 0; i < ilWezlowDanych; i++)
    nodesValueArrayUcz[i] = new BitArray(ilProbekUcz);
Stopwatch sw = new Stopwatch();
sw.Start();
for (int i = 0; i < ilProbekUcz; i++)
{
    int index = 0;
    linia = (float[])valuesUcz[i]; // removing this line does not solve the problem
    for (int a = 0; a < ileRazem; a++)
        for (int b = 0; b < ileRazem; b++)
            if (a != b)
            {
                bool value = linia[a] >= linia[b];
                nodesValueArrayUcz[index][i] = value;
                nodesValueArrayUcz[ilWezlowDanychP2 + index][i] = !value;
                index++;
            }
}
sw.Stop();
When I increase the size of valuesUcz 2x, the execution time is 4x longer.
When I increase the size of valuesUcz 4x, the execution time is 8x longer,
etc...
(ileRazem and ilWezlowDanych stay the same)
I understand that increasing ilProbekUcz increases the size of the BitArrays, but I tested that many times and it is not the problem - time should grow linearly - as in this code:
ilProbekUcz = valuesUcz.Count; // valuesUcz is the List<float[]>
for (int i = 0; i < ilWezlowDanych; i++)
    nodesValueArrayUcz[i] = new BitArray(ilProbekUcz);
BitArray test1 = nodesValueArrayUcz[10];
BitArray test2 = nodesValueArrayUcz[20];
Stopwatch sw = new Stopwatch();
sw.Start();
for (int i = 0; i < ilProbekUcz; i++)
{
    int index = 0;
    linia = (float[])valuesUcz[i]; // removing this line does not solve the problem
    for (int a = 0; a < ileRazem; a++)
        for (int b = 0; b < ileRazem; b++)
            if (a != b)
            {
                bool value = linia[a] >= linia[b];
                test1[i] = value;
                test2[i] = !value;
                index++;
            }
}
Here time grows linearly, so the problem is fetching a BitArray out of the array...
Is there any method to do this faster? (I want the time to grow linearly.)
You have to understand that when measuring time there are many factors that make the numbers inaccurate. The biggest factor when you have huge arrays, as in your example, is cache misses. The same code, rewritten with the cache in mind, can often be 2-5 or more times faster. Very roughly, this is how a cache works: the cache is memory inside the CPU. It is far faster than RAM, so when you fetch a variable from memory you want that variable to be stored in the cache and not only in RAM. If it is stored in the cache we say we have a hit, otherwise a miss. Sometimes, not so often, a program is so big that it stores variables on the hard drive; in that case you take a huge latency hit on every fetch. An example of how a cache works:
Let's say we have an array of 10 elements in memory (RAM). When you get the first element, testArray[0], because testArray[0] is not in the cache the CPU brings in this value along with a number of adjacent elements of the array (say 3; the number depends on the CPU), e.g. it stores testArray[0], testArray[1], testArray[2], testArray[3] in the cache.
Now when we get testArray[1] it is in the cache, so we have a hit. The same goes for testArray[2] and testArray[3]. testArray[4] isn't in the cache, so the CPU fetches it along with another 3: testArray[5], testArray[6], testArray[7],
and so on...
Cache misses are very costly. You might expect an array of double the size to be traversed in double the time, but this is not true: bigger arrays mean more misses,
and the time may increase 2 or 3 or 4 or more times over what you expect. This is normal, and it is what is happening in your example: from 100 million elements (the first array) you go to 400 million (the second one), and the misses are not merely doubled but far more than that, as you saw. A very cool trick has to do with the order in which you access an array: in your example, ba1[j][i] = (j % 2) == 0; is way worse than ba1[i][j] = (j % 2) == 0;, and the same goes for ba2. You can test it - just swap i and j. It has to do with the way the 2D array is stored in memory, so in the second case you have far more hits than in the first one.

How to make this small C# program which calculates prime numbers using the Sieve of Eratosthenes as efficient and economical as possible?

I've made a small C# program which calculates prime numbers using the Sieve of Eratosthenes.
long n = 100000;
bool[] p = new bool[n + 1];
for (long i = 2; i <= n; i++)
{
    p[i] = true;
}
for (long i = 2; i <= X; i++)
{
    for (long j = Y; j <= Z; j++)
    {
        p[i * j] = false;
    }
}
for (long i = 0; i <= n; i++)
{
    if (p[i])
    {
        Console.Write(" " + i);
    }
}
Console.ReadKey(true);
My question is: which X, Y and Z should I choose to make my program as efficient and economical as possible?
Of course we can just take:
X = n
Y = 2
Z = n
But then the program won't be very efficient.
It seems we can take:
X = Math.Sqrt(n)
Y = i
Z = n/i
And apparently the first 100 primes that the program gives are all correct.
There are several optimisations that can be applied without making the program overly complicated:
- you can start the crossing out at j = i (effectively i * i instead of 2 * i), since all lower multiples of i have already been crossed out
- you can save some work by leaving all even numbers out of the array (remembering to produce the prime 2 out of thin air when needed); hence array cell k represents the odd integer 2 * k + 1
- you can make things faster by turning repeated multiplication (i * j) into iterated addition (k += i); instead of looping over j in the inner loop you loop (k = i * i; k <= N; k += i)
- in some cases it can be advantageous to initialise the array with 0 (false) and set cells to 1 (true) for composites; its meaning is thus 'is_composite' instead of 'is_prime'
Harvesting all the low-hanging fruit, the loops thus become (in C++, but C# should be sort of similar):
uint32_t max_factor_bit = uint32_t(sqrt(double(n))) >> 1;
uint32_t max_bit = n >> 1;

for (uint32_t i = 3 >> 1; i <= max_factor_bit; ++i)
{
    if (composite[i]) continue;

    uint32_t n = (i << 1) + 1;
    uint32_t k = (n * n) >> 1;

    for ( ; k <= max_bit; k += n)
    {
        composite[k] = true;
    }
}
Regarding the computation of max_factor there are some caveats where the compiler can bite you, for larger values of n. There's a topic for that on Code Review.
A further, easy optimisation is to represent the bitmap as an array of bytes, with each byte standing for eight odd integers. For setting bit k in byte array a you would do a[k / CHAR_BIT] |= (1 << (k % CHAR_BIT)) where CHAR_BIT is the number of bits in a byte. However, such bit trickery is normally wrapped into an inline function to keep the code clean. E.g. in C++ I tell the compiler how to generate such functions using a template like this:
template<typename word_t>
inline void set_bit (word_t *p, uint32_t index)
{
    enum { BITS_PER_WORD = sizeof(word_t) * CHAR_BIT };

    // we can trust the compiler to use masking and shifting instead of division;
    // we cannot do that ourselves without having the log2, which cannot easily
    // be computed as a constexpr
    p[index / BITS_PER_WORD] |= word_t(1) << (index % BITS_PER_WORD);
}
This allows me to say set_bit(a, k) for any type of array - byte, integer, whatever - without having to write special code or use invocations; it's basically a type-safe equivalent to the old C-style macros. I'm not certain whether something similar is possible in C#. There is, however, the C# type BitArray where all that stuff is already done for you under the hood.
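For instance, a minimal non-generic C# counterpart over a uint[] bitmap might look like this (a sketch; BitArray wraps the same idea for you):

// Bit-set helpers over a uint[] bitmap.
// index >> 5 and index & 31 are the shift/mask forms of / 32 and % 32.
static void SetBit(uint[] bits, int index)
{
    bits[index >> 5] |= 1u << (index & 31);
}

static bool TestBit(uint[] bits, int index)
{
    return (bits[index >> 5] & (1u << (index & 31))) != 0;
}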
On pastebin there's a small demo .cpp for the segmented Sieve of Eratosthenes, where two further optimisations are applied: presieving by small integers, and sieving in small, cache friendly blocks so that the full range of 32-bit integers can be sieved in 2 seconds flat. This could give you some inspiration...
When doing the Sieve of Eratosthenes, memory savings easily translate to speed gains because the algorithm is memory-intensive and it tends to stride all over the memory instead of accessing it locally. That's why space savings due to compact representation (only odd integers, packed bits - i.e. BitArray) and localisation of access (by sieving in small blocks instead of the whole array in one go) can speed up the code by one or more orders of magnitude, without making the code significantly more complicated.
It is possible to go far beyond the easy optimisations mentioned here, but that tends to make the code increasingly complicated. One word that often occurs in this context is the 'wheel', which can save a further 50% of memory space. The wiki has an explanation of wheels here, and in a sense the odds-only sieve is already using a 'modulo 2 wheel'. Conversely, a wheel is the extension of the odds-only idea to dropping further small primes from the array, like 3 and 5 in the famous 'mod 30' wheel with modulus 2 * 3 * 5. That wheel effectively stuffs 30 integers into one 8-bit byte.
Here's a runnable rendition of the above code in C#:
static uint max_factor32 (double n)
{
    double r = System.Math.Sqrt(n);
    if (r < uint.MaxValue)
    {
        uint r32 = (uint)r;
        return r32 - ((ulong)r32 * r32 > n ? 1u : 0u);
    }
    return uint.MaxValue;
}

static void sieve32 (System.Collections.BitArray odd_composites)
{
    uint max_bit = (uint)odd_composites.Length - 1;
    uint max_factor_bit = max_factor32((max_bit << 1) + 1) >> 1;

    for (uint i = 3 >> 1; i <= max_factor_bit; ++i)
    {
        if (odd_composites[(int)i]) continue;

        uint p = (i << 1) + 1;  // the prime represented by bit i
        uint k = (p * p) >> 1;  // starting point for striding through the array

        for ( ; k <= max_bit; k += p)
        {
            odd_composites[(int)k] = true;
        }
    }
}

static int Main (string[] args)
{
    int n = 100000000;
    System.Console.WriteLine("Hello, Eratosthenes! Sieving up to {0}...", n);

    System.Collections.BitArray odd_composites = new System.Collections.BitArray(n >> 1);
    sieve32(odd_composites);

    uint cnt = 1;
    ulong sum = 2;
    for (int i = 1; i < odd_composites.Length; ++i)
    {
        if (odd_composites[i]) continue;
        uint prime = ((uint)i << 1) + 1;
        cnt += 1;
        sum += prime;
    }
    System.Console.WriteLine("\n{0} primes, sum {1}", cnt, sum);
    return 0;
}
This does 10^8 in about a second, but for higher values of n it gets slow. If you want to go faster, you have to sieve in small, cache-sized blocks.
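To give the flavor, here's a rough, untested skeleton of such a blocked (segmented) sieve; the block size and the bookkeeping would need tuning, and it simply counts the primes up to n:

// Sieve [2, n] in cache-sized blocks: collect the small primes up to
// sqrt(n) once, then mark each block independently so the working set
// stays inside the CPU cache.
static ulong CountPrimesSegmented(int n)
{
    int limit = (int)System.Math.Sqrt(n);
    var small = new System.Collections.Generic.List<int>();
    bool[] composite = new bool[limit + 1];
    for (int i = 2; i <= limit; i++)
    {
        if (composite[i]) continue;
        small.Add(i);
        for (long k = (long)i * i; k <= limit; k += i)
            composite[k] = true;
    }

    const int blockSize = 1 << 16; // roughly cache-sized
    bool[] block = new bool[blockSize];
    ulong count = 0;
    for (long lo = 2; lo <= n; lo += blockSize)
    {
        long hi = System.Math.Min(lo + blockSize - 1, n);
        System.Array.Clear(block, 0, blockSize);
        foreach (int p in small)
        {
            // first multiple of p inside [lo, hi], but no earlier than p*p
            long start = System.Math.Max((long)p * p, ((lo + p - 1) / p) * p);
            for (long k = start; k <= hi; k += p)
                block[k - lo] = true;
        }
        for (long k = lo; k <= hi; k++)
            if (!block[k - lo]) count++;
    }
    return count;
}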

Loop starting from 0 is faster than loop starting from 1?

Look at these loops (Version 3, starting from 3, follows the same pattern):
const int arrayLength = ...
Version 0
public void RunTestFrom0()
{
    int sum = 0;
    for (int i = 0; i < arrayLength; i++)
        for (int j = 0; j < arrayLength; j++)
            for (int k = 0; k < arrayLength; k++)
                for (int l = 0; l < arrayLength; l++)
                    for (int m = 0; m < arrayLength; m++)
                    {
                        sum += myArray[i][j][k][l][m];
                    }
}
Version 1
public void RunTestFrom1()
{
    int sum = 0;
    for (int i = 1; i < arrayLength; i++)
        for (int j = 1; j < arrayLength; j++)
            for (int k = 1; k < arrayLength; k++)
                for (int l = 1; l < arrayLength; l++)
                    for (int m = 1; m < arrayLength; m++)
                    {
                        sum += myArray[i][j][k][l][m];
                    }
}
Version 2
public void RunTestFrom2()
{
    int sum = 0;
    for (int i = 2; i < arrayLength; i++)
        for (int j = 2; j < arrayLength; j++)
            for (int k = 2; k < arrayLength; k++)
                for (int l = 2; l < arrayLength; l++)
                    for (int m = 2; m < arrayLength; m++)
                    {
                        sum += myArray[i][j][k][l][m];
                    }
}
Results for arrayLength=50 are (average from multiple sampling compiled X64):
Version 0: 0.998s (Standard error of the mean 0.001s) total loops: 312500000
Version 1: 1.449s (Standard error of the mean 0.000s) total loops: 282475249
Version 2: 0.774s (Standard error of the mean 0.006s) total loops: 254803968
Version 3: 1.183s (Standard error of the mean 0.001s) total loops: 229345007
if we make arrayLength=45 then
Version 0: 0.495s (Standard error of the mean 0.003s) total loops: 184528125
Version 1: 0.527s (Standard error of the mean 0.001s) total loops: 164916224
Version 2: 0.752s (Standard error of the mean 0.001s) total loops: 147008443
Version 3: 0.356s (Standard error of the mean 0.000s) total loops: 130691232
Why:
is the loop starting from 0 faster than the loop starting from 1, even though it runs more iterations?
does the loop starting from 2 behave so strangely?
Update:
I ran each test 10 times (that's where the standard error of the mean comes from).
I also switched the order of the version tests a couple of times; no big difference.
Each dimension of myArray has length arrayLength. I initialized it at the beginning, and that time is excluded. Every value is 1, so sum gives the total loop count.
The compiled version is in Release mode, and I ran it from outside VS (with VS closed).
Update 2:
Now I discard myArray completely, do sum++ instead, and add a GC.Collect() call:
public void RunTestConstStartConstEnd()
{
    int sum = 0;
    for (int i = constStart; i < constEnd; i++)
        for (int j = constStart; j < constEnd; j++)
            for (int k = constStart; k < constEnd; k++)
                for (int l = constStart; l < constEnd; l++)
                    for (int m = constStart; m < constEnd; m++)
                    {
                        sum++;
                    }
}
Update
This appears to me to be a result of an unsuccessful attempt at optimization by the jitter, not the compiler. In short, if the jitter can determine the lower bound is a constant it will do something different which turns out to actually be slower. The basis for my conclusions takes some proving, so bear with me. Or go read something else if you're not interested!
I concluded this after testing four different ways to set the lower bound of the loop:
Hard coded in each level, as in colinfang's question
Use a local variable, assigned dynamically through a command line argument
Use a local variable but assign it a constant value
Use a local variable and assign it a constant value, but first pass the value through a goofy sausage-grinding identity function. This confuses the jitter enough to prevent it from applying its constant-value "optimization".
The compiled intermediate language for all four versions of the looping section is almost identical. The only difference is that in version 1 the lower bound is loaded with the command ldc.i4.#, where # is 0, 1, 2, or 3. That stands for load constant. (See ldc.i4 opcode). In all other versions, the lower bound is loaded with ldloc. This is true even in case 3, where the compiler could infer that lowerBound is really a constant.
The resulting performance is not constant. Version 1 (explicit constant) is slower than version 2 (run-time argument) along similar lines as found by the OP. What is very interesting is that version 3 is also slower, with comparable times to version 1. So even though the IL treats the lower bound as a variable, the jitter appears to have figured out that the value never changes, and substitutes a constant as in version 1, with the corresponding performance reduction. In version 4 the jitter can't infer what I know -- that Confuser is actually an identity function -- and so it leaves the variable as a variable. The resulting performance is the same as the command line argument version (2).
My theory on the cause of the performance difference: The jitter is aware and makes use of the fine details of actual processor architecture. When it decides to use a constant other than 0, it has to actually go fetch that literal value from some storage which is not in the L2 cache. When it is fetching a frequently used local variable it instead reads its value from the L2 cache, which is insanely fast. Normally it doesn't make sense to be taking up room in the precious cache with something as dumb as a known literal integer value. In this case we care more about read time than storage, though, so it has an undesired impact on performance.
Here is the full code for the version 2 (command line arg):
class Program
{
    static void Main(string[] args)
    {
        List<double> testResults = new List<double>();
        Stopwatch sw = new Stopwatch();
        int upperBound = int.Parse(args[0]) + 1;
        int tests = int.Parse(args[1]);
        int lowerBound = int.Parse(args[2]); // THIS LINE CHANGES
        int sum = 0;

        for (int iTest = 0; iTest < tests; iTest++)
        {
            sum = 0;
            GC.Collect();
            sw.Reset();
            sw.Start();
            for (int lvl1 = lowerBound; lvl1 < upperBound; lvl1++)
                for (int lvl2 = lowerBound; lvl2 < upperBound; lvl2++)
                    for (int lvl3 = lowerBound; lvl3 < upperBound; lvl3++)
                        for (int lvl4 = lowerBound; lvl4 < upperBound; lvl4++)
                            for (int lvl5 = lowerBound; lvl5 < upperBound; lvl5++)
                                sum++;
            sw.Stop();
            testResults.Add(sw.Elapsed.TotalMilliseconds);
        }

        double avg = testResults.Average();
        double stdev = testResults.StdDev();
        string fmt = "{0,13} {1,13} {2,13} {3,13}";
        string bar = new string('-', 13);
        Console.WriteLine();
        Console.WriteLine(fmt, "Iterations", "Average (ms)", "Std Dev (ms)", "Per It. (ns)");
        Console.WriteLine(fmt, bar, bar, bar, bar);
        Console.WriteLine(fmt, sum, avg.ToString("F3"), stdev.ToString("F3"),
            ((avg * 1000000) / (double)sum).ToString("F3"));
    }
}

public static class Ext
{
    public static double StdDev(this IEnumerable<double> vals)
    {
        double result = 0;
        int cnt = vals.Count();
        if (cnt > 1)
        {
            double avg = vals.Average();
            double sum = vals.Sum(d => Math.Pow(d - avg, 2));
            result = Math.Sqrt(sum / (cnt - 1));
        }
        return result;
    }
}
For version 1: same as above except remove lowerBound declaration and replace all lowerBound instances with literal 0, 1, 2, or 3 (compiled and executed separately).
For version 3: same as above except replace lowerBound declaration with
int lowerBound = 0; // or 1, 2, or 3
For version 4: same as above except replace lowerBound declaration with
int lowerBound = Ext.Confuser<int>(0); // or 1, 2, or 3
Where Confuser is:
public static T Confuser<T>(T d)
{
    decimal d1 = (decimal)Convert.ChangeType(d, typeof(decimal));
    List<decimal> L = new List<decimal>() { d1, d1 };
    decimal d2 = L.Average();
    if (d1 - d2 < 0.1m)
    {
        return (T)Convert.ChangeType(d2, typeof(T));
    }
    else
    {
        // This will never actually happen :)
        return (T)Convert.ChangeType(0, typeof(T));
    }
}
Results (50 iterations of each test, in 5 batches of 10):
1: Lower bound hard-coded in all loops:
Program Iterations Average (ms) Std Dev (ms) Per It. (ns)
-------- ------------- ------------- ------------- -------------
Looper0 345025251 267.813 1.776 0.776
Looper1 312500000 344.596 0.597 1.103
Looper2 282475249 311.951 0.803 1.104
Looper3 254803968 282.710 2.042 1.109
2: Lower bound supplied at command line:
Program Iterations Average (ms) Std Dev (ms) Per It. (ns)
-------- ------------- ------------- ------------- -------------
Looper 345025251 269.317 0.853 0.781
Looper 312500000 244.946 1.434 0.784
Looper 282475249 222.029 0.919 0.786
Looper 254803968 201.238 1.158 0.790
3: Lower bound hard-coded but copied to local variable:
Program Iterations Average (ms) Std Dev (ms) Per It. (ns)
-------- ------------- ------------- ------------- -------------
LooperX0 345025251 267.496 1.055 0.775
LooperX1 312500000 345.614 1.633 1.106
LooperX2 282475249 311.868 0.441 1.104
LooperX3 254803968 281.983 0.681 1.107
4: Lower bound hard-coded but ground through Confuser:
Program Iterations Average (ms) Std Dev (ms) Per It. (ns)
-------- ------------- ------------- ------------- -------------
LooperZ0 345025251 266.203 0.489 0.772
LooperZ1 312500000 241.689 0.571 0.774
LooperZ2 282475249 219.533 1.205 0.777
LooperZ3 254803968 198.308 0.416 0.778
That is an enormous array. For all practical purposes you are testing how long it takes your operating system to fetch the value of each element from memory, not how long it takes to compare whether j, k, etc. are less than arrayLength, to increment the counters, and to increment your sum. The latency to fetch those values has little to do with the runtime or jitter per se and a lot to do with whatever else happens to be running on your system as a whole and the current compression and organization of the heap.
In addition, because your array is taking up so much room and being accessed frequently it's quite possible that garbage collection is running during some of your test iterations, which would completely inflate the apparent CPU time.
Try doing your test without the array lookup -- just add 1 (sum++) and then see what happens. To be even more thorough, call GC.Collect() just before each test to avoid a collection during the loop.
I think Version 0 is faster because the compiler generates a special code without range checking in that case. See http://msdn.microsoft.com/library/ms973858.aspx (section Range Check Elimination)
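For illustration, the pattern that article describes looks like this; whether the check is actually dropped for the non-zero start depends on the exact JIT version:

static int SumFrom0(int[] arr)
{
    int sum = 0;
    // the canonical shape: the JIT can prove 0 <= i < arr.Length
    // and eliminate the per-element bounds check
    for (int i = 0; i < arr.Length; i++)
        sum += arr[i];
    return sum;
}

static int SumFrom1(int[] arr)
{
    int sum = 0;
    // starting at 1 historically defeated the pattern match,
    // so each arr[i] may keep its bounds check
    for (int i = 1; i < arr.Length; i++)
        sum += arr[i];
    return sum;
}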
Just an idea:
Maybe there are bit-shifting optimizations in the loops, so it takes longer when beginning at an odd count.
I also don't know whether your processor might be a factor in the different results.
Off the top of my head, perhaps there's a compiler optimization for 0 -> length. Check your build settings (release vs debug).
Beyond that, unless it's just an issue of your computer doing other work that is influencing the benchmarks, I'm not sure. Perhaps you should alter your benchmark to run each test multiple times and average the results.

c# fixed arrays - which structure is fastest to read from?

I have some large arrays of 2D data elements. A and B aren't equally sized dimensions.
A) is between 5 and 20
B) is between 1000 and 100000
The initialization time is no problem, as these are only going to be lookup tables for a realtime application, so performance when indexing elements from known values of A and B is crucial. The data stored is currently a single byte value.
I was thinking around these solutions:
byte[A][B] datalist1a;
or
byte[B][A] datalist2a;
or
byte[A,B] datalist1b;
or
byte[B,A] datalist2b;
or perhaps losing the multidimensionality, as I know the fixed sizes, and just multiplying the two values before looking it up:
byte[A*Bmax + B] datalist3;
or
byte[B*Amax + A] datalist4;
What I need to know is what datatype/array structure to use for the most efficient lookup in C# given this setup.
Edit 1
the first two solutions were supposed to be multidimensional, not multi arrays.
Edit 2
All data in the smallest dimension is read at each lookup, but the large one is only used for indexing once at a time.
So it's something like: grab all A's from sample B.
I'd bet on the jagged arrays, unless Amax or Bmax is a power of 2.
I'd say so because a jagged array needs two indexed accesses, which are very fast. The other forms imply a multiplication, either implicit or explicit; unless that multiplication is a simple shift, it could be a bit heavier than a couple of indexed accesses.
EDIT: Here is the small program used for the test:
class Program
{
    private static int A = 10;
    private static int B = 100;
    private static byte[] _linear;
    private static byte[,] _square;
    private static byte[][] _jagged;

    unsafe static void Main(string[] args)
    {
        //init arrays
        _linear = new byte[A * B];
        _square = new byte[A, B];
        _jagged = new byte[A][];
        for (int i = 0; i < A; i++)
            _jagged[i] = new byte[B];

        //set-up the params
        var sw = new Stopwatch();
        byte b;
        const int N = 100000;

        //one-dim array (buffer)
        sw.Restart();
        for (int i = 0; i < N; i++)
        {
            for (int r = 0; r < A; r++)
            {
                for (int c = 0; c < B; c++)
                {
                    b = _linear[r * B + c];
                }
            }
        }
        sw.Stop();
        Console.WriteLine("linear={0}", sw.ElapsedMilliseconds);

        //two-dim array
        sw.Restart();
        for (int i = 0; i < N; i++)
        {
            for (int r = 0; r < A; r++)
            {
                for (int c = 0; c < B; c++)
                {
                    b = _square[r, c];
                }
            }
        }
        sw.Stop();
        Console.WriteLine("square={0}", sw.ElapsedMilliseconds);

        //jagged array
        sw.Restart();
        for (int i = 0; i < N; i++)
        {
            for (int r = 0; r < A; r++)
            {
                for (int c = 0; c < B; c++)
                {
                    b = _jagged[r][c];
                }
            }
        }
        sw.Stop();
        Console.WriteLine("jagged={0}", sw.ElapsedMilliseconds);

        //one-dim array with unsafe access (in an unsafe context)
        sw.Restart();
        for (int i = 0; i < N; i++)
        {
            for (int r = 0; r < A; r++)
            {
                fixed (byte* offset = &_linear[r * B])
                {
                    for (int c = 0; c < B; c++)
                    {
                        b = *(byte*)(offset + c);
                    }
                }
            }
        }
        sw.Stop();
        Console.WriteLine("unsafe={0}", sw.ElapsedMilliseconds);

        Console.Write("Press any key...");
        Console.ReadKey();
        Console.WriteLine();
    }
}
Multidimensional ([,]) arrays are nearly always the slowest, unless under a heavy random access scenario. In theory they shouldn't be, but it's one of the CLR oddities.
Jagged arrays ([][]) are nearly always faster than multidimensional arrays; even under random access scenarios. These have a memory overhead.
Singledimensional ([]) and algebraic arrays ([y * stride + x]) are the fastest for random access in safe code.
Unsafe code is, normally, fastest in all cases (provided you don't pin it repeatedly).
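On the pinning point: hoisting the fixed statement out of the hot loop is the usual fix. A sketch against the unsafe variant of the test program above (inside the same unsafe context):

// pin _linear once, outside the repetition loop, instead of
// re-entering a fixed block for every row
byte value;
fixed (byte* pLinear = _linear)
{
    for (int i = 0; i < N; i++)
        for (int r = 0; r < A; r++)
        {
            byte* row = pLinear + r * B;
            for (int c = 0; c < B; c++)
                value = row[c];
        }
}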
The only useful answer to "which X is faster" (for all X) is: you have to do performance tests that reflect your requirements.
And remember to consider, in general*:
Maintenance of the program. If this is not a quick one off, a slightly slower but maintainable program is a better option in most cases.
Micro benchmarks can be deceptive. For instance a tight loop just reading from a collection might be optimised away in ways not possible when real work is being done.
Additionally consider that you need to look at the complete program to decide where to optimise. Speeding up a loop by 1% might be useful for that loop, but if it is only 1% of the complete runtime then it is not making much differences.
* But all rules have exceptions.
On most modern computers, arithmetic operations are far, far faster than memory lookups.
If you fetch a memory address that isn't in a cache or where the out of order execution pulls from the wrong place you are looking at 10-100 clocks, a pipelined multiply is 1 clock.
The other issue is cache locality.
byte[B*Amax + A] datalist4; seems like the best bet if you are accessing with A's varying sequentially.
When datalist4[b*Amax + a] is accessed, the computer will usually start pulling in datalist4[b*Amax + a + 64/sizeof(dataListType)], ... +128 ..., etc., or, if it detects a reverse iteration, datalist4[b*Amax + a - 64/sizeof(dataListType)]
Hope that helps!
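To make the suggested layout concrete, a small sketch (Amax and Bmax stand in for the fixed dimension sizes, a and b for example lookup indices):

const int Amax = 20, Bmax = 100000;   // assumed dimension sizes
byte[] datalist4 = new byte[Bmax * Amax];
int a = 3, b = 12345;                 // example lookup indices

// single element lookup
byte v = datalist4[b * Amax + a];

// "grab all A's from sample b" is one contiguous read
byte[] sample = new byte[Amax];
Buffer.BlockCopy(datalist4, b * Amax, sample, 0, Amax);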
Maybe the best way for you would be to use a HashMap.
Dictionary?

Is there a good radix sort implementation for floats in C#?

I have a data structure with a field of the float type. A collection of these structures needs to be sorted by the value of the float. Is there a radix sort implementation for this?
If there isn't, is there a fast way to access the exponent, the sign and the mantissa?
Because if you sort the floats first on mantissa, then on exponent, and last on sign, you sort floats in O(n).
Update:
I was quite interested in this topic, so I sat down and implemented it (using this very fast and memory-conservative implementation). I also read this one (thanks celion) and found out that you don't even have to split the floats into mantissa and exponent to sort them. You just have to take the bits one-to-one and perform an int sort. You only have to take care of the negative values, which have to be put, inversely ordered, in front of the positive ones at the end of the algorithm (I did that in one step with the last iteration of the algorithm to save some CPU time).
So here's my float radix sort:
public static float[] RadixSort(this float[] array)
{
    // temporary array and the array of converted floats to ints
    int[] t = new int[array.Length];
    int[] a = new int[array.Length];
    for (int i = 0; i < array.Length; i++)
        a[i] = BitConverter.ToInt32(BitConverter.GetBytes(array[i]), 0);

    // set the group length to 1, 2, 4, 8 or 16
    // and see which one is quicker
    int groupLength = 4;
    int bitLength = 32;

    // counting and prefix arrays
    // (dimension is 2^r, the number of possible values of a r-bit number)
    int[] count = new int[1 << groupLength];
    int[] pref = new int[1 << groupLength];
    int groups = bitLength / groupLength;
    int mask = (1 << groupLength) - 1;
    int negatives = 0, positives = 0;

    for (int c = 0, shift = 0; c < groups; c++, shift += groupLength)
    {
        // reset count array
        for (int j = 0; j < count.Length; j++)
            count[j] = 0;

        // counting elements of the c-th group
        for (int i = 0; i < a.Length; i++)
        {
            count[(a[i] >> shift) & mask]++;

            // additionally count all negative
            // values in first round
            if (c == 0 && a[i] < 0)
                negatives++;
        }
        if (c == 0) positives = a.Length - negatives;

        // calculating prefixes
        pref[0] = 0;
        for (int i = 1; i < count.Length; i++)
            pref[i] = pref[i - 1] + count[i - 1];

        // from a[] to t[] elements ordered by c-th group
        for (int i = 0; i < a.Length; i++)
        {
            // Get the right index to sort the number in
            int index = pref[(a[i] >> shift) & mask]++;

            if (c == groups - 1)
            {
                // We're in the last (most significant) group, if the
                // number is negative, order them inversely in front
                // of the array, pushing positive ones back.
                if (a[i] < 0)
                    index = positives - (index - negatives) - 1;
                else
                    index += negatives;
            }
            t[index] = a[i];
        }

        // a[] = t[] and start again until the last group
        t.CopyTo(a, 0);
    }

    // Convert back the ints to the float array
    float[] ret = new float[a.Length];
    for (int i = 0; i < a.Length; i++)
        ret[i] = BitConverter.ToSingle(BitConverter.GetBytes(a[i]), 0);

    return ret;
}
It is slightly slower than an int radix sort because of the array copying at the beginning and end of the function, where the floats are bitwise copied to ints and back. The whole function nevertheless is again O(n). In any case, it is much faster than sorting 3 times in a row as you proposed. I don't see much room for optimization anymore, but if anyone does: feel free to tell me.
To sort descending change this line at the very end:
ret[i] = BitConverter.ToSingle(BitConverter.GetBytes(a[i]), 0);
to this:
ret[a.Length - i - 1] = BitConverter.ToSingle(BitConverter.GetBytes(a[i]), 0);
Measuring:
I set up a short test containing all the special cases of floats (NaN, +/-Inf, min/max value, 0) as well as random numbers. It sorts in exactly the same order as Linq or Array.Sort sorts floats:
NaN -> -Inf -> Min -> Negative Nums -> 0 -> Positive Nums -> Max -> +Inf
So I ran a test with a huge array of 10M numbers:
float[] test = new float[10000000];
Random rnd = new Random();
for (int i = 0; i < test.Length; i++)
{
    byte[] buffer = new byte[4];
    rnd.NextBytes(buffer);
    float rndfloat = BitConverter.ToSingle(buffer, 0);
    switch (i)
    {
        case 0: test[i] = float.MaxValue; break;
        case 1: test[i] = float.MinValue; break;
        case 2: test[i] = float.NaN; break;
        case 3: test[i] = float.NegativeInfinity; break;
        case 4: test[i] = float.PositiveInfinity; break;
        case 5: test[i] = 0f; break;
        default: test[i] = rndfloat; break;
    }
}
And stopped the time of the different sorting algorithms:
Stopwatch sw = new Stopwatch();
sw.Start();
float[] sorted1 = test.RadixSort();
sw.Stop();
Console.WriteLine(string.Format("RadixSort: {0}", sw.Elapsed));
sw.Reset();
sw.Start();
float[] sorted2 = test.OrderBy(x => x).ToArray();
sw.Stop();
Console.WriteLine(string.Format("Linq OrderBy: {0}", sw.Elapsed));
sw.Reset();
sw.Start();
Array.Sort(test);
float[] sorted3 = test;
sw.Stop();
Console.WriteLine(string.Format("Array.Sort: {0}", sw.Elapsed));
And the output was (update: now ran with release build, not debug):
RadixSort: 00:00:03.9902332
Linq OrderBy: 00:00:17.4983272
Array.Sort: 00:00:03.1536785
roughly more than four times as fast as Linq. That is not bad. But it is still not as fast as Array.Sort, though not that much worse either. I was really surprised by this one, though: I expected it to be slightly slower than Linq on very small arrays. So I ran a test with just 20 elements:
RadixSort: 00:00:00.0012944
Linq OrderBy: 00:00:00.0072271
Array.Sort: 00:00:00.0002979
and even at this size my radix sort is quicker than Linq, but way slower than Array.Sort. :)
Update 2:
I made some more measurements and found out some interesting things: longer group length constants mean fewer iterations but more memory usage. If you use a group length of 16 bits (only 2 iterations), you have a huge memory overhead when sorting small arrays, but you can beat Array.Sort on arrays larger than about 100k elements, even if not by much. The chart axes are both logarithmic:
[chart image; source: daubmeier.de]
There's a nice explanation of how to perform radix sort on floats here:
http://www.codercorner.com/RadixSortRevisited.htm
If all your values are positive, you can get away with using the binary representation; the link explains how to handle negative values.
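For reference, the transform that article describes can be sketched like this: map the float bits to uint keys whose unsigned order matches the float order, radix sort the keys, then map back:

static uint FloatToSortableKey(float f)
{
    uint bits = (uint)BitConverter.ToInt32(BitConverter.GetBytes(f), 0);
    // negatives: flip all bits (reverses their order);
    // positives: flip only the sign bit (moves them above the negatives)
    return (bits & 0x80000000u) != 0 ? ~bits : bits | 0x80000000u;
}

static float SortableKeyToFloat(uint key)
{
    uint bits = (key & 0x80000000u) != 0 ? key & 0x7FFFFFFFu : ~key;
    return BitConverter.ToSingle(BitConverter.GetBytes(bits), 0);
}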
By doing some fancy casting, and swapping arrays instead of copying, this version is 2x faster than Philip Daubmeier's original for 10M numbers with groupLength set to 8. It is 3x faster than Array.Sort for that array size.
static public void RadixSortFloat(this float[] array, int arrayLen = -1)
{
    // some use cases have an array that is longer than the filled part we want to sort
    if (arrayLen < 0) arrayLen = array.Length;

    // view the original float array as ints (requires System.Runtime.InteropServices)
    Span<float> asFloat = array;
    Span<int> a = MemoryMarshal.Cast<float, int>(asFloat);

    // create a temp array
    Span<int> t = new Span<int>(new int[arrayLen]);

    // set the group length to 1, 2, 4, 8 or 16 and see which one is quicker
    int groupLength = 8;
    int bitLength = 32;

    // counting and prefix arrays
    // (dimension is 2^r, the number of possible values of a r-bit number)
    var dim = 1 << groupLength;
    int groups = bitLength / groupLength;
    if (groups % 2 != 0) throw new Exception("groups must be even so data is in original array at end");
    var count = new int[dim];
    var pref = new int[dim];
    int mask = dim - 1;
    int negatives = 0, positives = 0;

    // counting elements of the 1st group, including negative/positive
    for (int i = 0; i < arrayLen; i++)
    {
        if (a[i] < 0) negatives++;
        count[(a[i] >> 0) & mask]++;
    }
    positives = arrayLen - negatives;

    int c;
    int shift;
    for (c = 0, shift = 0; c < groups - 1; c++, shift += groupLength)
    {
        CalcPrefixes();
        var nextShift = shift + groupLength;

        for (var i = 0; i < arrayLen; i++)
        {
            var ai = a[i];
            // Get the right index to sort the number in
            int index = pref[(ai >> shift) & mask]++;
            count[(ai >> nextShift) & mask]++;
            t[index] = ai;
        }

        // swap the arrays and start again until the last group
        var temp = a;
        a = t;
        t = temp;
    }

    // last round
    CalcPrefixes();
    for (var i = 0; i < arrayLen; i++)
    {
        var ai = a[i];
        // Get the right index to sort the number in
        int index = pref[(ai >> shift) & mask]++;
        // We're in the last (most significant) group; if the
        // number is negative, order them inversely in front
        // of the array, pushing positive ones back.
        if (ai < 0) index = positives - (index - negatives) - 1; else index += negatives;
        t[index] = ai;
    }

    void CalcPrefixes()
    {
        pref[0] = 0;
        for (int i = 1; i < dim; i++)
        {
            pref[i] = pref[i - 1] + count[i - 1];
            count[i - 1] = 0;
        }
    }
}
You can use an unsafe block to memcpy, or alias a float* to a uint*, to extract the bits.
I think your best bet, if the values aren't too close together and there is a reasonable precision requirement, is to just use the actual float digits before and after the decimal point to do the sorting.
For example, you can just use the first 4 decimals (be they 0 or not) to do the sorting.
