How to do this the fastest way? Image arrays - C#

I have a loop that is too slow in C#. I want to know if there is a faster way to process these arrays. I'm currently working in .NET 2.0, and I'm not opposed to upgrading this project. This is part of a theoretical image processing concept involving gray levels.
Pixel count: PixCnt = 21144402
g_len = 4625
list_1d - one-dimensional array of the image, with an upper bound of the pixel count above.
pg - gray level intensity holder.
This function creates an index of those values, hence pgidx.
int[] pgidx = new int[PixCnt];
sw = new Stopwatch();
sw.Start();
for (i = 0; i < PixCnt; i++)
{
    j = 0;
    pgidx[i] = 0;
    // linear search through pg for this pixel's gray level
    while (j < g_len && list_1d[i] != pg[j]) j++;
    if (j < g_len && list_1d[i] == pg[j])
        pgidx[i] = j;
}
sw.Stop();
Debug.WriteLine("PixCnt Loop took " + sw.ElapsedMilliseconds + " ms");

I think using a dictionary to store what's in the pg array will speed it up. g_len is 4625 elements, so the inner while loop will average around 2312 iterations. Replacing that with a single hashed lookup in a dictionary should be much faster. Since the outer loop executes 21 million times, speeding up the body of that loop should reap big rewards. I'm guessing the code below will be somewhere between 100 and 1000 times faster.
var pgDict = new Dictionary<int,int>(g_len);
for (int i = 0; i < g_len; i++) pgDict.Add(pg[i], i);
int[] pgidx = new int[PixCnt];
int value = 0;
for (int i = 0; i < PixCnt; i++) {
if (pgDict.TryGetValue(list_1d[i], out value)) pgidx[i] = value;
}
Note that setting pgidx[i] to zero when a match isn't found is not necessary, because all elements of the array are already initialized to zero when the array is created.
If there is the possibility for a value in pg to appear more than once, you would want to check first to see if that key has already been added, and skip adding it to the dictionary if it has. That would mimic your current behavior of finding the first match. To do that replace the line where the dictionary is built with this:
for (int i = 0; i < g_len; i++) if (!pgDict.ContainsKey(pg[i])) pgDict.Add(pg[i], i);

If the range of the pixel values in pg allows it (say 16 bpp = 65536 entries), you can create an auxiliary array that maps every possible gray level to its index in pg. Filling this array takes a single pass over pg (after initializing it to all zeroes).
Then convert list_1d to pgidx with straight table lookups.
If the table is too big (bigger than the image), then do as #hatchet answered.
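For reference, here is a minimal sketch of that lookup-table approach, assuming 16-bit gray values (the 65536 table size and the backwards fill are illustrative choices, not from the answers above):
// Table mapping every possible gray value to its index in pg.
// Unlisted values stay 0, which matches the original "no match" behaviour.
int[] lut = new int[65536];                 // 16 bpp => 65536 possible values
for (int j = g_len - 1; j >= 0; j--)
    lut[pg[j]] = j;                         // walking backwards keeps the first occurrence

int[] pgidx = new int[PixCnt];
for (int i = 0; i < PixCnt; i++)
    pgidx[i] = lut[list_1d[i]];             // one indexed read per pixel, no inner search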

What sorting method is being applied here, and what is the algorithmic complexity of the method?

I came across the code below for sorting an array.
I applied it to a very long array, and it finished in under a second, maybe 20 milliseconds or less.
I have been reading about algorithm complexity and Big O notation and would like to know:
Which sorting method (of the existing ones) is implemented in this code?
What is the complexity of the algorithm used here?
If you were to improve the algorithm/code below, what would you alter?
using System;
using System.Text;
//This program sorts an array
public class SortArray
{
static void Main(String []args)
{
// declaring and initializing the array
//int[] arr = new int[] {3,1,4,5,7,2,6,1, 9,11, 7, 2,5,8,4};
int[] arr = new int[] {489,491,493,495,497,529,531,533,535,369,507,509,511,513,515,203,205,207,209,211,213,107,109,111,113,115,117,11913,415,417,419,421,423,425,427,15,17,19,21,4,517,519,521,523,525,527,4,39,441,443,445,447,449,451,453,455,457,459,461,537,539,541,543,545,547,1,3,5,7,9,11,13,463,465,467,23,399,401,403,405,407,409,411,499,501,503,505,333,335,337,339,341,343,345,347,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,9,171,173,175,177,179,181,183,185,187,269,271,273,275,277,279,281,283,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,133,135,137,139,141,143,145,285,287,289,291,121,123,125,127,129,131,297,299,373,375,377,379,381,383,385,387,389,97,99,101,103,105,147,149,151,153,155,157,159,161,163,165,167,16,391,393,395,397,399,401,403,189,191,193,195,197,199,201,247,249,251,253,255,257,259,261,263,265,267,343,345,347,349,501,503,505,333,335,337,339,341,417,419,421,423,425,561,563,565,567,569,571,573,587,589,591,593,595,597,599,427,429,431,433,301,303,305,307,309,311,313,315,317,319,321,323,325,327,329,331,371,359,361,363,365,367,369,507,509,511,513,515,351,353,355,57,517,519,521,523,525,527,413,415,405,407,409,411,499,435,437,469,471,473,475,477,479,481,483,485,487,545,547,549,551,553,555,575,577,579,581,583,585,557,559,489,491,493,495,497,529,531,533,535,537,539,541,543,215,217,219,221,223,225,227,229,231,233,235,237,239,241,243,245,293,295};
int temp;
// traverse 0 to array length
for (int i = 0; i < arr.Length ; i++)
{
// traverse i+1 to array length
//for (int j = i + 1; j < arr.Length; j++)
for (int j = i+1; j < arr.Length; j++)
{
// compare array element with
// all next element
if (arr[i] > arr[j]) {
///Console.WriteLine(i+"i before"+arr[i]);
temp = arr[i];
arr[i] = arr[j];
arr[j] = temp;
//Console.WriteLine("i After"+arr[i]);
}
}
}
// print all element of array
foreach(int value in arr)
{
Console.Write(value + " ");
}
}
}
This is selection sort. Its time complexity is O(𝑛²). It has nested loops over i and j, and you can see these produce every possible pair of indices in the range {0,...,𝑛-1}, where 𝑛 is arr.Length. The number of pairs is a triangular number, and is equal to:
𝑛(𝑛-1)/2
...which is O(𝑛²)
If we stick to selection sort, we can still find some improvements.
We can see that the role of the outer loop is to store in arr[i] the value that belongs there in the final sorted array, and never touch that entry again. It does so by searching for the minimum value in the right part of the array that starts at index i.
Now during that search, which takes place in the inner loop, it keeps swapping smaller values into arr[i]. This may happen several times, as even smaller values may turn up as j walks to the right. That is a waste of operations, as we would prefer to perform only one swap. And this is possible: instead of swapping immediately, delay that operation and keep track of where the minimum value is located (initially at i, but this may become some index j). Only when the inner loop completes, perform the swap.
There is a less important improvement: i does not have to reach arr.Length - 1, because at that point the inner loop has no iterations left. So the ending condition of the outer loop can exclude that final iteration.
Here is how that looks:
for (int i = 0, last = arr.Length - 1; i < last; i++)
{
int k = i; // index that has the least value in the range arr[i..n-1] so far
for (int j = i+1; j < arr.Length; j++)
{
if (arr[k] > arr[j]) {
k = j; // don't swap yet -- just track where the minimum is located
}
}
if (k > i) { // now perform the swap
int temp = arr[i];
arr[i] = arr[k];
arr[k] = temp;
}
}
A further improvement is to use the inner loop to locate not only the minimum value but also the maximum value, and to move the found maximum to the right end of the array. This way both ends of the array get sorted gradually, and the unsorted middle shrinks twice as fast. Still, the number of comparisons remains the same, and so does the average number of swaps, so the gain is only in iteration overhead.
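As a rough sketch of that double-ended variant (my own illustration, not code from the answer), each pass finds both the minimum and the maximum of the unsorted middle section and moves them to the two ends:
for (int lo = 0, hi = arr.Length - 1; lo < hi; lo++, hi--)
{
    int minIdx = lo, maxIdx = lo;
    for (int j = lo + 1; j <= hi; j++)
    {
        if (arr[j] < arr[minIdx]) minIdx = j;
        if (arr[j] > arr[maxIdx]) maxIdx = j;
    }
    // move the minimum to the left end
    int temp = arr[lo]; arr[lo] = arr[minIdx]; arr[minIdx] = temp;
    // if the maximum was sitting at lo, the first swap just moved it to minIdx
    if (maxIdx == lo) maxIdx = minIdx;
    // move the maximum to the right end
    temp = arr[hi]; arr[hi] = arr[maxIdx]; arr[maxIdx] = temp;
}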
This is "Bubble sort" with O(n^2). You can use "Mergesort" or "Quicksort" to improve your algorithm to O(n*log(n)). If you always know the minimum and maximum of your numbers, you can use "Digit sort" or "Radix Sort" with O(n)
See: Sorting algorithms in C#

2D Array vs Array of Arrays in tight loop performance C#

I had a look and couldn't see anything quite answering my question.
I'm not exactly the best at creating accurate 'real life' tests, so I'm not sure if that's the problem here. Basically, I want to create a few simple neural networks for something to the effect of Gridworld. Performance of these neural networks will be critical, and I want to avoid the hidden layer becoming a bottleneck as much as possible.
I would rather use more memory and be faster, so I opted for arrays instead of lists (lists carry an extra bounds check on top of arrays). The arrays aren't always full, but because the if statement (checking whether the element is null) behaves the same way until the end, it can be branch-predicted and there is no performance drop from that at all.
My question comes from how I store the data for the network to process. I figured that because a 2D array stores all of its data together, it would be more cache friendly and would run faster. But my mock-up test shows that an array of arrays performs much better in this scenario.
Some Code:
private void RunArrayOfArrayTest(float[][] testArray, Data[] data)
{
for (int i = 0; i < testArray.Length; i++) {
for (int j = 0; j < testArray[i].Length; j++) {
var inputTotal = data[i].bias;
for (int k = 0; k < data[i].weights.Length; k++) {
inputTotal += testArray[i][k];
}
}
}
}
private void Run2DArrayTest(float[,] testArray, Data[] data, int maxI, int maxJ)
{
for (int i = 0; i < maxI; i++) {
for (int j = 0; j < maxJ; j++) {
var inputTotal = data[i].bias;
for (int k = 0; k < maxJ; k++) {
inputTotal += testArray[i, k];
}
}
}
}
These are the two functions that are timed. Each 'creature' has its own network (the first for loop), each network has hidden nodes (the second for loop), and I need to find the sum of the weights for each input (the third loop). In my test I stripped it down so it's not exactly what I do in my actual code, but the same number of loops happen (the data variable would have its own 2D array, but I didn't want to possibly skew the results). From this I was trying to get a feel for which one is faster, and to my surprise the array of arrays was.
Code to start the tests:
// Array of Array test
Stopwatch timer = Stopwatch.StartNew();
RunArrayOfArrayTest(arrayOfArrays, dataArrays);
timer.Stop();
Console.WriteLine("Array of Arrays finished in: " + timer.ElapsedTicks);
// 2D Array test
timer = Stopwatch.StartNew();
Run2DArrayTest(array2D, dataArrays, NumberOfNetworks, NumberOfInputNeurons);
timer.Stop();
Console.WriteLine("2D Array finished in: " + timer.ElapsedTicks);
Just wanted to show how I was testing it. The results from this in release mode give me values like:
Array of Arrays finished in: 8972
2D Array finished in: 16376
Can someone explain to me what I'm doing wrong? Why is an array of arrays so much faster in this situation? Isn't a 2D array stored all together, meaning it would be more cache friendly?
Note I really do need this to be fast, as it needs to sum up hundreds of thousands to millions of numbers per frame, and as I said I don't want this to be a problem. I know this can be multithreaded quite easily in the future, because each network is completely separate and even each node is completely separate.
Last question, I suppose: would something like this be possible to run on the GPU instead? I figure a GPU would not struggle with much larger numbers of networks and much larger numbers of input/hidden neurons.
In the CLR, there are two different types of array:
Vectors, which are zero-based, single-dimensional arrays
Arrays, which can have non-zero bases and multiple dimensions
Your "array of arrays" is a "vector of vectors" in CLR terms.
Vectors are significantly faster than arrays, basically. It's possible that arrays could be optimized further in later CLR versions, but I doubt they'll get the same amount of love as vectors, as they're used relatively rarely. There's not a lot you can do to make CLR arrays faster. As you say, they'll be more cache friendly, but they carry this CLR penalty.
You can improve your array-of-arrays code already, however, by only performing the first indexing operation once per row:
private void RunArrayOfArrayTest(float[][] testArray, Data[] data)
{
for (int i = 0; i < testArray.Length; i++) {
// These don't change in the loop below, so extract them
var row = testArray[i];
var inputTotal = data[i].bias;
var weightLength = data[i].weights.Length;
for (int j = 0; j < row.Length; j++) {
for (int k = 0; k < weightLength; k++) {
inputTotal += row[k];
}
}
}
}
If you want to get the cache friendliness and still use a vector, you could have a single float[] and perform the indexing yourself... but I'd probably start off with the array-of-arrays approach.
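As a rough sketch of that last suggestion (the rowLength parameter is mine, standing in for the width of each row), the idea is to keep one contiguous float[] and compute the [i, k] offset yourself:
private void RunFlatArrayTest(float[] testArray, Data[] data, int rowLength)
{
    for (int i = 0; i < data.Length; i++) {
        int rowStart = i * rowLength;              // manual [i, k] -> [i * rowLength + k]
        var inputTotal = data[i].bias;
        var weightLength = data[i].weights.Length;
        for (int j = 0; j < rowLength; j++) {
            for (int k = 0; k < weightLength; k++) {
                inputTotal += testArray[rowStart + k];
            }
        }
    }
}
This keeps single-dimensional (vector) indexing, so you avoid the CLR multi-dimensional array penalty while still getting one cache-friendly block of memory.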

How does `foreach` iterate through a 2D array?

I was curious as to how a foreach loop in C# iterates over a multidimensional array. In the following code, the second nested for loop was originally a foreach, which would give the incorrect position of the pitches placed in the loop. I know it's kind of difficult to intuit what it does, but it's basically this: pitches are put into a multidimensional array (here, numVoices is 2 and exLength is 10), so you end up with a 2x10 grid of pitches; each row of pitches is then played at the same time by the MIDI output device. When I used a foreach to put the pitch names into a string, so that I could display which pitches were in which place inside the grid, the foreach would display them in the "wrong" order (i.e., [0,3] in the pitch grid was not what was printed in the string). Using a nested for, this problem disappeared. I tried to recreate this with a smaller example of a 2D array of ints (code below), but it gives the "right" answer this time. Why?
//put pitches into grid
//numVoices = 2, exLength = 10 (10 notes long, 2 voices)
for (int i = 0; i < numVoices; i++ )
{
for(int j = 0; j < exLength; j++)
{
//here we generate random pitches in different octaves
//depending on the voice (voice 2 is in octave
//below voice 1, etc)
randnum = (random.Next(100 - (i * 13), 112 - (i * 13)));
melodyGrid[j, i] = (Pitch)randnum;
}
}
for (int i = 0; i < numVoices; i++)
{
for (int j = 0; j < exLength; j++)
{
//this down here makes it more readable for
//humans
//e.g. "FSharp5" becomes "F#5"
noteNames += String.Format("{0, -6}", melodyGrid[j,i].ToString().Replace("Sharp", "#").Replace("Flat", "b"));
}
noteNames += "\r\n"; //lower voices are just separated by newlines
}
Console.WriteLine(noteNames);
The following code works "correctly," however:
int[,] nums = { {1, 2, 3},
{4, 5, 6},
{7, 8 ,9} };
foreach (int i in nums)
{
Console.Write("{0} ", i);
}
Is it possible I was just making a semantic mistake? Or do foreach loops iterate through arrays in differing manners?
I was curious as to how a foreach loop in C# iterates over a multidimensional array.
As always for questions like this, the ultimate authority is the C# language specification. In this case, section 8.8.4:
The order in which foreach traverses the elements of an array, is as follows: For single-dimensional arrays elements are traversed in increasing index order, starting with index 0 and ending with index Length – 1. For multi-dimensional arrays, elements are traversed such that the indices of the rightmost dimension are increased first, then the next left dimension, and so on to the left.
Now, compare that with how you're iterating with your for statements:
for (int i = 0; i < numVoices; i++ )
{
for(int j = 0; j < exLength; j++)
{
...
melodyGrid[j, i] = (Pitch)randnum;
In other words, you're incrementing the leftmost dimension first... so yes, this will give a different result from foreach. If you want to use foreach but get the same iteration order, you'll need to switch the indexes for voices and length. Alternatively, if you want to keep the same order of indexes, just use the for loop and be happy with it.
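For the nums example in the question, here is a quick sketch (mine, not part of the answer) of the order foreach visits a [,] array, written as explicit for loops:
// foreach (int i in nums) visits the elements in this order:
for (int row = 0; row < nums.GetLength(0); row++)        // leftmost dimension
{
    for (int col = 0; col < nums.GetLength(1); col++)     // rightmost dimension varies fastest
    {
        Console.Write("{0} ", nums[row, col]);
    }
}
// Both this and the foreach print: 1 2 3 4 5 6 7 8 9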

Binary search slower, what am I doing wrong?

EDIT: it looks like this is normal behavior, so can anyone just recommend a faster way to do these numerous intersections?
So my problem is this: I have 8000 lists (strings in each list). For each list (ranging in size from 50 to 400), I'm comparing it to every other list and performing a calculation based on the intersection count. So I'll do
list1(intersect)list1= number
list1(intersect)list2= number
list1(intersect)list888= number
And I do this for every list. Previously, I had HashList, and my code was essentially this (well, I was actually searching through properties of an object, so I had to modify the code a bit, but it's basically this):
I have my two versions below, but if anyone knows anything faster, please let me know!
Loop through AllLists, getting each list, starting with list1, and then do this:
foreach (List list in AllLists)
{
if (list1_length < list_length) // just a check so I'm looping through the smaller list
{
foreach (string word in list1)
{
if (block.generator_list.Contains(word))
{
//simple integer count
}
}
}
// a little more code, but the same, but looping through the other list if it's smaller/bigger
Then I made the lists into regular lists and applied Sort(), which changed my code to:
foreach (List list in AllLists)
{
if (list1_length < list_length) // just a check so I'm looping through the smaller list
{
for (int i = 0; i < list1_length; i++)
{
var test = list.BinarySearch(list1[i]);
if (test > -1)
{
//simple integer count
}
}
}
The first version takes about 6 seconds; the other one takes more than 20 (I just stopped it there, because otherwise it would take more than a minute!), and this is for a smallish subset of the data.
I'm sure there's a drastic mistake somewhere, but I can't find it.
Well I have tried three distinct methods for achieving this (assuming I understood the problem correctly). Please note I have used HashSet<int> in order to more easily generate random input.
setting up:
List<HashSet<int>> allSets = new List<HashSet<int>>();
Random rand = new Random();
for(int i = 0; i < 8000; ++i) {
HashSet<int> ints = new HashSet<int>();
for(int j = 0; j < rand.Next(50, 400); ++j) {
ints.Add(rand.Next(0, 1000));
}
allSets.Add(ints);
}
the three methods I checked (code is what runs in the inner loop):
the loop:
note that you are getting duplicated results in your code (intersecting set A with set B and later intersecting set B with set A). It won't affect your performance, thanks to the list-length check you are doing, but iterating this way is clearer:
for(int i = 0; i < allSets.Count; ++i) {
for(int j = i + 1; j < allSets.Count; ++j) {
}
}
first method:
used IEnumerable.Intersect() to get the intersection with the other list and checked IEnumerable.Count() to get the size of the intersection.
var intersect = allSets[i].Intersect(allSets[j]);
count = intersect.Count();
this was the slowest one averaging 177s
second method:
cloned the smaller set of the two sets I was intersecting, then used ISet.IntersectWith() and checked the resulting sets Count.
HashSet<int> intersect;
HashSet<int> intersectWith;
if(allSets[i].Count < allSets[j].Count) {
intersect = new HashSet<int>(allSets[i]);
intersectWith = allSets[j];
} else {
intersect = new HashSet<int>(allSets[j]);
intersectWith = allSets[i];
}
intersect.IntersectWith(intersectWith);
count = intersect.Count;
this one was slightly faster, averaging 154s
third method:
did something very similar to what you did: iterated over the shorter set and checked ISet.Contains on the longer set.
HashSet<int> loopingSet, containsSet;
int count;
for(int i = 0; i < allSets.Count; ++i) {
for(int j = i + 1; j < allSets.Count; ++j) {
count = 0;
if(allSets[i].Count < allSets[j].Count) {
loopingSet = allSets[i];
containsSet = allSets[j];
} else {
loopingSet = allSets[j];
containsSet = allSets[i];
}
foreach(int k in loopingSet) {
if(containsSet.Contains(k)) {
++count;
}
}
}
}
this method was by far the fastest (as expected), averaging 66s
conclusion
the method you're using is the fastest of these three. I certainly can't think of a faster single threaded way to do this. Perhaps there is a better concurrent solution.
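If you do want to try a concurrent version, here is a rough sketch built on the same setup (assuming .NET 4's Parallel.For is available; the counts array is my own addition, and writing to it is safe because each (i, j) cell is written by exactly one iteration):
int n = allSets.Count;
int[,] counts = new int[n, n];
System.Threading.Tasks.Parallel.For(0, n, i => {
    for (int j = i + 1; j < n; ++j) {
        // iterate over the smaller set, probe the larger one
        HashSet<int> loopingSet = allSets[i].Count < allSets[j].Count ? allSets[i] : allSets[j];
        HashSet<int> containsSet = loopingSet == allSets[i] ? allSets[j] : allSets[i];
        int count = 0;
        foreach (int k in loopingSet) {
            if (containsSet.Contains(k)) ++count;
        }
        counts[i, j] = count;    // each (i, j) pair is handled by exactly one iteration
    }
});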
I've found that one of the most important considerations in iterating/searching any kind of collection is to choose the collection type very carefully. Iterating through a normal collection for your purposes will not be optimal. Try using something like:
System.Collections.Generic.HashSet<T>
Using the Contains() method while iterating over the shorter of the two lists (as you mentioned you're already doing) should give close to O(1) performance per lookup, the same as key lookups in the generic Dictionary type.

Fastest algorithm to create an array of known size

I need to create an array of boolean values, which could be on the scale of 100,000s or even millions of entries. It also needs to be super-fast, so every millisecond per iteration counts.
At the time of beginning the loop, I will already know how many entries there are going to be in the array. The question is, will it be faster to create a bool array up front and fill in the values by index (which is random access - could be slow?), or should I create a List<bool>, keep adding entries to the list, and at the end return .ToArray()?
In other words:
Option 1
var array = new bool[size];
for (var n=0; n<size; n++)
array[n] = GetValue(n);
return array;
Option 2
var list = new List<bool>();
for (var n=0; n<size; n++)
list.Add(GetValue(n));
return list.ToArray();
Or maybe there's a 3rd way that's even faster?
Use a System.Collections.BitArray and don't worry about speed.
What you are suggesting above will only waste memory. A BitArray is optimized for both speed and size, and will pack your bool values nicely (8 per byte, as the gods intended :).
Reply to the comments below: if you use a BitArray, everything will be zero at first. Set only those bits for which you have GetValue == true.
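A minimal sketch of that pattern, reusing the GetValue and size names from the question (my arrangement, not the answerer's code):
var bits = new System.Collections.BitArray(size);  // every bit starts out false
for (var n = 0; n < size; n++)
{
    if (GetValue(n))                               // only set the bits that need to be true
        bits[n] = true;
}
return bits;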
The following code seems to show (at least to me) that of the methods discussed on this page, simple assignment into a bool[] in a loop is the quickest.
It also seems to show that unless GetValue(n) is computationally trivial, the overhead of allocating the storage is not the part of the process I would be hoping to optimise.
Hope this helps in some way.
edit: added the results from the run (on my machine)
-- 187ms BitArray
-- 171ms List<bool>().ToArray
-- 168ms bool[] set only if true
-- 130ms bool[] always set
--11460ms bool[] always set with 'complex' GetValue()
using System;
using System.Collections;
using System.Collections.Generic;
using System.Diagnostics;
class Program
{
static void Main(string[] args)
{
BitArray bitArray = new BitArray(10000000);
bool[] boolArray = new bool[10000000];
Stopwatch sw1 = new Stopwatch();
sw1.Start();
for (int i = 0; i < 10000000; i++)
{
bitArray[i] = GetMod2(i);
}
Console.WriteLine(sw1.ElapsedMilliseconds);
sw1.Restart();
var list = new List<bool>();
for (int i = 0; i < 10000000; i++)
list.Add(GetMod2(i));
var boolArray2 = list.ToArray();
Console.WriteLine(sw1.ElapsedMilliseconds);
sw1.Restart();
for (int i = 0; i < 10000000; i++)
{
bool nextVal = GetMod2(i);
if (nextVal)
bitArray[i] = true;
}
Console.WriteLine(sw1.ElapsedMilliseconds);
sw1.Restart();
for (int i = 0; i < 10000000; i++)
{
boolArray[i] = GetMod2(i);
}
Console.WriteLine(sw1.ElapsedMilliseconds);
sw1.Restart();
for (int i = 0; i < 10000000; i++)
{
boolArray[i] = GetRand(i);
}
Console.WriteLine(sw1.ElapsedMilliseconds);
Console.ReadLine();
}
static bool GetMod2(int i)
{
return (i % 2) == 1;
}
// the 'complex' GetValue used in the last test: constructing a new Random on every call
static bool GetRand(int i)
{
return new Random().Next(2) == 1;
}
}
Go with the first. The only reason it might be "slow" is if it keeps paging data from outside the processor cache.
The list will have exactly the same problem, except it will also need to perform several memory allocations and copies.
Now here's a funny old thing. Inspired by #paul, I ran these benchmark tests myself, on 10,000,000 booleans. The results (in milliseconds) are very surprising, given the discussion in the comments to this question:
BitArray: 517
BitArray + CopyTo(array): 536
List + ToArray(): 455
bool array: 483
And what a turn-up for the books! Despite the fact that the List<bool> is inserting a new record every time, while the bool[] and BitArray are initialized to false on every record and I only updated them where the value should be true, the List<bool> comes out on top, consistently, even including the .ToArray() call.
Yet another case where practical application is better than textbook knowledge, it seems... :)
