How does `foreach` iterate through a 2D array? - c#
I was curious as to how a foreach loop in C# iterates over a multidimensional array. In the following code, the second nested for loop was originally a foreach, which would give the incorrect position of the pitches placed in the loop.

I know it's kind of difficult to intuit what it does, but it's basically this: pitches are put into a multidimensional array (here, numVoices is 2 and exLength is 10), so that you will have a 2x10 array of pitches; each of these rows of pitches is then played at the same time by the MIDI output device.

When I used a foreach to put the pitches' names into a string so that I could display which pitches were in which place inside the grid, the foreach would display them in the "wrong" order (i.e., [0,3] in the pitch grid was not what was printed in the string). Using a nested for, this problem disappeared. I tried to recreate this with a smaller example of a 2D array of ints (code below), but it gives the "right" answer this time. Why?
//put pitches into grid
//numVoices = 2, exLength = 10 (10 notes long, 2 voices)
for (int i = 0; i < numVoices; i++)
{
    for (int j = 0; j < exLength; j++)
    {
        //here we generate random pitches in different octaves
        //depending on the voice (voice 2 is an octave
        //below voice 1, etc.)
        randnum = random.Next(100 - (i * 13), 112 - (i * 13));
        melodyGrid[j, i] = (Pitch)randnum;
    }
}

for (int i = 0; i < numVoices; i++)
{
    for (int j = 0; j < exLength; j++)
    {
        //this down here makes it more readable for humans,
        //e.g. "FSharp5" becomes "F#5"
        noteNames += String.Format("{0, -6}", melodyGrid[j, i].ToString().Replace("Sharp", "#").Replace("Flat", "b"));
    }
    noteNames += "\r\n"; //lower voices are just separated by newlines
}
Console.WriteLine(noteNames);
The following code works "correctly," however:
int[,] nums = { {1, 2, 3},
{4, 5, 6},
{7, 8 ,9} };
foreach (int i in nums)
{
Console.Write("{0} ", i);
}
Is it possible I was just making a semantic mistake? Or do foreach loops iterate through arrays in different orders?
I was curious as to how a foreach loop in C# iterates over a multidimensional array.
As always for questions like this, the ultimate authority is the C# language specification. In this case, section 8.8.4:
The order in which foreach traverses the elements of an array, is as follows: For single-dimensional arrays elements are traversed in increasing index order, starting with index 0 and ending with index Length – 1. For multi-dimensional arrays, elements are traversed such that the indices of the rightmost dimension are increased first, then the next left dimension, and so on to the left.
Now, compare that with how you're iterating with your for statements:
for (int i = 0; i < numVoices; i++)
{
    for (int j = 0; j < exLength; j++)
    {
        ...
        melodyGrid[j, i] = (Pitch)randnum;
In other words, you're incrementing the leftmost dimension first... so yes, this will give a different result from foreach. If you want to use foreach but get the same iteration order, you'll need to switch the indexes for voices and length. Alternatively, if you want to keep the same order of indexes, just use the for loop and be happy with it.
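To see the difference concretely, here is a minimal sketch (the class and method names are illustrative, not from the question) that collects the elements of a small 2D array in both traversal orders:

```csharp
using System;
using System.Text;

static class ForeachOrderDemo
{
    // Collect elements in the order foreach visits them:
    // the rightmost index varies fastest, per the spec.
    public static string ForeachOrder(int[,] grid)
    {
        var sb = new StringBuilder();
        foreach (int value in grid)
            sb.Append(value).Append(' ');
        return sb.ToString().TrimEnd();
    }

    // Collect elements the way the question's loops index the grid:
    // the leftmost index varies fastest.
    public static string ColumnFirstOrder(int[,] grid)
    {
        var sb = new StringBuilder();
        for (int j = 0; j < grid.GetLength(1); j++)
            for (int i = 0; i < grid.GetLength(0); i++)
                sb.Append(grid[i, j]).Append(' ');
        return sb.ToString().TrimEnd();
    }
}
```

For a 2x3 grid { { 1, 2, 3 }, { 4, 5, 6 } }, ForeachOrder yields "1 2 3 4 5 6" while ColumnFirstOrder yields "1 4 2 5 3 6" — the same elements, visited in different orders.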
Related
What sorting method is being applied here, and what is its algorithmic complexity?
I came across the code below for sorting an array. I applied it to a very long array and it was able to sort it in under a second, maybe 20 milliseconds or less. I have been reading about algorithmic complexity and Big O notation, and would like to know:

Which sorting method (of the existing ones) is implemented in this code?
What is the complexity of the algorithm used here?
If you were to improve the algorithm/code below, what would you alter?

using System;
using System.Text;

//This program sorts an array
public class SortArray
{
    static void Main(String[] args)
    {
        // declaring and initializing the array
        //int[] arr = new int[] {3,1,4,5,7,2,6,1,9,11,7,2,5,8,4};
        int[] arr = new int[] {
            489,491,493,495,497,529,531,533,535,369,507,509,511,513,515,203,205,207,209,211,213,107,109,111,113,115,117,11913,415,417,419,421,423,425,427,15,17,19,21,4,517,519,521,523,525,527,4,39,441,443,445,447,449,451,453,455,457,459,461,537,539,541,543,545,547,
            1,3,5,7,9,11,13,463,465,467,23,399,401,403,405,407,409,411,499,501,503,505,333,335,337,339,341,343,345,347,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,9,171,173,175,177,179,181,183,185,187,269,271,273,275,277,279,281,283,
            25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,133,135,137,139,141,143,145,285,287,289,291,121,123,125,127,129,131,297,299,373,375,377,379,381,383,385,387,389,97,99,101,103,105,147,149,151,153,155,157,159,161,163,165,167,16,391,393,395,397,399,401,403,
            189,191,193,195,197,199,201,247,249,251,253,255,257,259,261,263,265,267,343,345,347,349,501,503,505,333,335,337,339,341,417,419,421,423,425,561,563,565,567,569,571,573,587,589,591,593,595,597,599,427,429,431,433,
            301,303,305,307,309,311,313,315,317,319,321,323,325,327,329,331,371,359,361,363,365,367,369,507,509,511,513,515,351,353,355,57,517,519,521,523,525,527,413,415,405,407,409,411,499,435,437,469,471,473,475,477,479,481,483,485,487,
            545,547,549,551,553,555,575,577,579,581,583,585,557,559,489,491,493,495,497,529,531,533,535,537,539,541,543,215,217,219,221,223,225,227,229,231,233,235,237,239,241,243,245,293,295};
        int temp;
        // traverse 0 to array length
        for (int i = 0; i < arr.Length; i++)
        {
            // traverse i+1 to array length
            for (int j = i + 1; j < arr.Length; j++)
            {
                // compare array element with all next elements
                if (arr[i] > arr[j])
                {
                    ///Console.WriteLine(i+"i before"+arr[i]);
                    temp = arr[i];
                    arr[i] = arr[j];
                    arr[j] = temp;
                    //Console.WriteLine("i After"+arr[i]);
                }
            }
        }
        // print all elements of the array
        foreach (int value in arr)
        {
            Console.Write(value + " ");
        }
    }
}
This is selection sort. Its time complexity is O(𝑛²). It has nested loops over i and j, and you can see these produce every possible pair of indices in the range {0,...,𝑛-1}, where 𝑛 is arr.Length. The number of pairs is a triangular number, equal to:

𝑛(𝑛-1)/2

...which is O(𝑛²).

If we stick to selection sort, we can still find some improvements. We can see that the role of the outer loop is to store in arr[i] the value that belongs there in the final sorted array, and never touch that entry again. It does so by searching for the minimum value in the part of the array that starts at index 𝑖. During that search, which takes place in the inner loop, it keeps swapping lesser values into arr[i]. This may happen several times, as it might find ever lesser values as j walks to the right. That is a waste of operations: we would prefer to perform only one swap. And this is possible: instead of swapping immediately, delay the operation. Keep track of where the minimum value is located (initially at i, but this may become some j index), and only when the inner loop completes, perform the swap.

There is a less important improvement: i does not have to reach arr.Length - 1, as then there are no iterations of the inner loop. So the ending condition of the outer loop can exclude that iteration from happening. Here is how that looks:

for (int i = 0, last = arr.Length - 1; i < last; i++)
{
    int k = i; // index that has the least value in the range arr[i..n-1] so far
    for (int j = i + 1; j < arr.Length; j++)
    {
        if (arr[k] > arr[j])
        {
            k = j; // don't swap yet -- just track where the minimum is located
        }
    }
    if (k > i)
    {
        // now perform the swap
        int temp = arr[i];
        arr[i] = arr[k];
        arr[k] = temp;
    }
}

A further improvement can be to use the inner loop to locate not only the minimum value but also the maximum, and to move the found maximum to the right end of the array. This way both ends of the array get sorted gradually, and the inner loop shortens twice as fast. Still, the number of comparisons remains the same, and so does the average number of swaps, so the gain is only in the iteration overhead.
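A minimal sketch of that double-ended variant (the class and method names are mine, not from the answer): each pass of the inner loop finds both the minimum and the maximum of the unsorted middle section, then moves them to the left and right ends respectively.

```csharp
using System;

static class DoubleEndedSelectionSort
{
    public static void Sort(int[] arr)
    {
        int lo = 0, hi = arr.Length - 1;
        while (lo < hi)
        {
            // one pass finds both extremes of arr[lo..hi]
            int minIdx = lo, maxIdx = lo;
            for (int j = lo + 1; j <= hi; j++)
            {
                if (arr[j] < arr[minIdx]) minIdx = j;
                if (arr[j] > arr[maxIdx]) maxIdx = j;
            }
            Swap(arr, lo, minIdx);
            // if the maximum was sitting at lo, the swap above just moved it
            if (maxIdx == lo) maxIdx = minIdx;
            Swap(arr, hi, maxIdx);
            lo++;
            hi--;
        }
    }

    static void Swap(int[] arr, int a, int b)
    {
        int temp = arr[a];
        arr[a] = arr[b];
        arr[b] = temp;
    }
}
```

The `maxIdx == lo` check is the classic pitfall of this variant: without it, a maximum sitting at the left end gets displaced by the minimum swap and is then lost.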
This is "Bubble sort" with O(n²). You can use "Merge sort" or "Quicksort" to improve your algorithm to O(n·log(n)). If you always know the minimum and maximum of your numbers, you can use "Digit sort" or "Radix sort" with O(n). See: Sorting algorithms in C#
Can I convert a list with objects into a two dimensional array
I don't know if this is possible in C#, but it would be great if it was. I have a list of objects, which is very easy to convert into an array with the code below:

object[] myArray = wrd.ToArray();

It works fine, but how do I convert the list of objects into a two-dimensional array, where the elements from index 0 to 20 in the list will be in the first row of the two-dimensional array, the elements from index 21 to 41 will be in the second row, the elements from index 42 to 62 will be in the third row, etc.? In other words, there should be 21 elements in each row, and there are 21 rows and 21 columns. For instance, in the labyrinth image, the character 'B' should be reached at index [1,0]. But again, I don't know if it's possible; hopefully some very skilled person can help me out.
You could just loop over the array and put the items into a 2D one.

// Pseudo code...
x = 0
y = 0
i = 0
loop while i < originalArray.Length
    2dArray[x, y] = originalArray[i]
    i++
    y++
    if y >= 2dArrayRowLength
        x++
        y = 0
end loop

But really, why bother? 2D arrays in C# don't actually change anything about how the data is stored; they are just syntactic sugar for accessing a single-dimensional array in a slightly different way. Just calculate your x and y offsets on the fly as you access your single-dimensional array. Something a bit like this:

int xLength = 21;
int yLength = 21;
object[] data = new object[xLength * yLength];
for (int x = 0; x < xLength; x++)
{
    for (int y = 0; y < yLength; y++)
    {
        var theValue = data[(yLength * x) + y];
        Console.Write(theValue);
    }
    Console.WriteLine();
}
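As a sketch of the first approach (the helper name is mine), assuming the list length is exactly rows * cols, the copy loop can be condensed with division and modulo:

```csharp
using System;
using System.Collections.Generic;

static class ListTo2D
{
    // Copy a flat list into a [rows, cols] array, row by row.
    // Assumes list.Count == rows * cols.
    public static T[,] ToGrid<T>(List<T> list, int rows, int cols)
    {
        var grid = new T[rows, cols];
        for (int i = 0; i < list.Count; i++)
        {
            grid[i / cols, i % cols] = list[i]; // row = i / cols, column = i % cols
        }
        return grid;
    }
}
```

With rows = cols = 21, list index 21 lands at grid[1, 0], matching the layout the question describes.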
How to do this the fastest way? Image arrays
I have a loop that is too slow in C#. I want to know if there is a faster way to process these arrays. I'm currently working in .NET 2.0, but I'm not opposed to upgrading this project. This is part of a theoretical image-processing concept involving gray levels.

Pixel count (PixCnt = 21144402), g_len = 4625
list_1d - one-dimensional array of an image, with an upper bound of the above pixel count.
pg - gray-level intensity holder. This function creates an index of those values, hence pgidx.

int[] pgidx = new int[PixCnt];
sw = new Stopwatch();
sw.Start();
for (i = 0; i < PixCnt; i++)
{
    j = 0;
    pgidx[i] = 0;
    while (j < g_len && list_1d[i] != pg[j])
        j++;
    if (j < g_len && list_1d[i] == pg[j])
        pgidx[i] = j;
}
sw.Stop();
Debug.WriteLine("PixCnt loop took " + sw.ElapsedMilliseconds + " ms");
I think using a dictionary to store what's in the pg array will speed it up. g_len is 4625 elements, so the inner while loop will average around 2312 iterations. Replacing that with a single hashed lookup in a dictionary should be faster, and since the outer loop executes 21 million times, speeding up the body of that loop should reap big rewards. I'm guessing the code below will be 100 to 1000 times faster:

var pgDict = new Dictionary<int, int>(g_len);
for (int i = 0; i < g_len; i++)
    pgDict.Add(pg[i], i);

int[] pgidx = new int[PixCnt];
int value = 0;
for (int i = 0; i < PixCnt; i++)
{
    if (pgDict.TryGetValue(list_1d[i], out value))
        pgidx[i] = value;
}

Note that setting pgidx[i] to zero when a match isn't found is not necessary, because all elements of the array are already initialized to zero when the array is created.

If there is the possibility for a value in pg to appear more than once, you would want to check first whether that key has already been added, and skip adding it to the dictionary if it has. That mimics your current behavior of finding the first match. To do that, replace the loop where the dictionary is built with this:

for (int i = 0; i < g_len; i++)
    if (!pgDict.ContainsKey(pg[i]))
        pgDict.Add(pg[i], i);
If the range of the pixel values in pg allows it (say 16 bpp = 65536 entries), you can create an auxiliary array that maps every possible gray level to its index value in pg. Filling this array is done with a single pass over pg (after initializing it to all zeroes). Then convert list_1d to pgidx with straight table lookups. If the table is too big (bigger than the image), then do as @hatchet answered.
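A sketch of that lookup-table idea (the names are illustrative), assuming 16 bpp pixel values, so 65536 possible levels. Note that if pg contains duplicate values, this keeps the last occurrence rather than the first, and unmatched pixels map to 0, like the original loop:

```csharp
using System;

static class GrayLevelLookup
{
    public static int[] BuildIndex(int[] image, int[] pg)
    {
        // map every possible gray level to its index in pg:
        // one pass over pg, table starts out all zeroes
        var table = new int[65536];
        for (int i = 0; i < pg.Length; i++)
            table[pg[i]] = i;

        // convert the image with straight table lookups, O(1) per pixel
        var pgidx = new int[image.Length];
        for (int i = 0; i < image.Length; i++)
            pgidx[i] = table[image[i]];
        return pgidx;
    }
}
```

This trades 256 KB of memory for the elimination of both the inner loop and the hashing cost of a dictionary.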
Getting all the possible distances between points
I have created a program that spawns a number of points (the number is given by the user). So the program spawns N points (see the image for an example; it has 3 points in that case). What I need is to get all the possible distances between those villages (in the example: distances AB, AC, and BC). The points are stored in a single List<Villages> (storing the x-coordinate and y-coordinate). I know that I need the Pythagorean theorem; I just cannot get the foreach loop right.
I would think you would want a regular nested for loop rather than a foreach. Something like this should work:

for (int i = 0; i < villageList.Count; ++i)
{
    for (int j = i + 1; j < villageList.Count; ++j)
    {
        distanceFunc(villageList[i], villageList[j]);
    }
}

Where distanceFunc is whatever implementation of a distance function you want to use, and villageList is your List of villages. The reason you would use for loops is that you need the inner loop to start one element past the outer loop (i + 1), and foreach loops don't let you easily access the index you're currently at (they give you the element itself, but not its position in the list).
You need two for loops:

var villages = new List<Villages>() { ... };

for (int i = 0; i < villages.Count - 1; i++)
    for (int j = i + 1; j < villages.Count; j++)
        Console.WriteLine(getDistance(villages[i], villages[j]));

Where getDistance you should write yourself. It should return the distance between the two specified Villages.
How about something like this pseudocode:

villages = [a, b, c, ...]
for i = 0 to len(villages) - 2:
    for j = i + 1 to len(villages) - 1:
        print(villages[i], villages[j], dist(villages[i], villages[j]))
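Putting the pieces together, here is a self-contained sketch with a hypothetical Village type (the real List<Villages> in the question presumably exposes its own coordinate properties):

```csharp
using System;
using System.Collections.Generic;

// hypothetical stand-in for the question's Villages type
record Village(string Name, double X, double Y);

static class PairwiseDistances
{
    // Euclidean distance via the Pythagorean theorem
    public static double Distance(Village a, Village b)
    {
        double dx = a.X - b.X, dy = a.Y - b.Y;
        return Math.Sqrt(dx * dx + dy * dy);
    }

    // every unordered pair (AB, AC, BC, ...) exactly once
    public static List<(Village, Village, double)> AllPairs(List<Village> villages)
    {
        var result = new List<(Village, Village, double)>();
        for (int i = 0; i < villages.Count - 1; i++)
            for (int j = i + 1; j < villages.Count; j++)
                result.Add((villages[i], villages[j], Distance(villages[i], villages[j])));
        return result;
    }
}
```

For N villages this yields N(N-1)/2 pairs, which is exactly the AB, AC, BC set from the question when N is 3.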
Delete Duplicate from an Array
I need to delete duplicate entries from an array, but I can't use any new data structures, and the same array should return only distinct elements. For example, if my array is 1,3,3,3,5,55,67,1 then the result should be 1,3,5,55,67. I believe I have solved the problem, but I need your opinion on whether it is a good algorithm or if I need to change something.

public void DeleteDuplicate(int[] array)
{
    int l = 0;
    int newPosition = array.Length - 1;
    for (int i = 0; i < array.Length; i++)
    {
        for (int j = i + 1; j < array.Length - l; j++)
        {
            if (array[i] == array[j])
            {
                int temp = array[j];
                array[j] = array[newPosition];
                array[newPosition] = temp;
                newPosition--;
                l++;
            }
        }
    }
    Array.Resize(ref array, array.Length - l);
}
Your code is buggy. Try running it against {1, 1, 1} - I think it will return {1, 1}.
In general, the question is whether you have to maintain the relative ordering of the elements — for example, whether it is acceptable to return {1, 2} for the input {2, 1, 2, 1}. If it is allowed, then the fastest solution is:

1. Sort the input array.
2. Run through it once, comparing a[i] with a[i+1] and removing duplicates in O(N).

The total complexity is O(N·logN), which is better than the O(N²) you proposed. If you must preserve the initial ordering, then I am not ready to propose a solution.
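A sketch of that sort-then-compact approach (the names are mine; note the ordering of the input is not preserved, and the compacted array is returned rather than resized through a ref parameter):

```csharp
using System;

static class DeduplicateSorted
{
    // Sort first, then compact in place: O(N log N) overall.
    public static int[] DeleteDuplicates(int[] array)
    {
        if (array.Length == 0) return array;

        Array.Sort(array);                     // duplicates become adjacent
        int write = 1;                         // next free slot
        for (int read = 1; read < array.Length; read++)
        {
            // compare with the last value we kept
            if (array[read] != array[write - 1])
                array[write++] = array[read];
        }
        Array.Resize(ref array, write);
        return array;
    }
}
```

This also handles the {1, 1, 1} case that breaks the original code, at the cost of losing the input order.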
Last time I checked, C# allowed you to sort two arrays together, using one for the comparisons and doing the swaps on both (Array.Sort(keys, items)). Here's what you can do:

1. Create an array indices that stores the numbers 0 to N-1.
2. Sort array and indices, using array as the key.
3. Find the duplicates and set the indices of the duplicates to N+1.
4. Sort array and indices, using indices as the key.
5. Resize the array to the correct size, and presto.

This removes duplicates and preserves ordering.
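A sketch of those steps (the class and method names are mine), using the Array.Sort(keys, items) overload that sorts two arrays in lockstep; N+1 serves as the "duplicate" marker since valid indices only go up to N-1. Because Array.Sort is not stable, each run of equal values keeps the smallest original index, so the first occurrence survives:

```csharp
using System;

static class OrderPreservingDedup
{
    public static int[] DeleteDuplicates(int[] array)
    {
        int n = array.Length;

        // 1. indices stores the original positions 0..N-1
        var indices = new int[n];
        for (int i = 0; i < n; i++) indices[i] = i;

        // 2. sort both arrays, using the values as the key
        Array.Sort(array, indices);

        // 3. mark duplicates: keep the smallest original index in each
        //    run of equal values, push the others past N
        int duplicates = 0;
        for (int i = 1; i < n; i++)
        {
            if (array[i] == array[i - 1])
            {
                indices[i] = Math.Min(indices[i], indices[i - 1]);
                indices[i - 1] = n + 1;  // mark as duplicate
                duplicates++;
            }
        }

        // 4. sort both arrays back, using the indices as the key;
        //    marked entries sink to the end
        Array.Sort(indices, array);

        // 5. resize to the number of distinct values
        Array.Resize(ref array, n - duplicates);
        return array;
    }
}
```

The cost is two O(N log N) sorts plus one linear pass, and no extra data structure beyond the second int array.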