Accessing Dictionary and Multi-Dimensional Array is Slow - C#

I found that accessing a Dictionary and a multi-dimensional array can be slow -- quite a puzzle, since access time for both a dictionary and an array is O(1).
This is my code
public struct StateSpace
{
    public double q;
    public double v;
    public double a;
}

public class AccessTest
{
    public Dictionary<int, Dictionary<double, StateSpace>> ModeStateSpace;
    public double[,] eigenVectors;

    public void AccessJob(int n, double times)
    {
        var sumDisplacement = new double[6];
        for (int i = 0; i < n; i++)
        {
            var modeDisplacement = ModeStateSpace[i][times].q; // takes 5.81 sec
            for (int j = 0; j < 6; j++)
            {
                var eigenVector = eigenVectors[i, j]; // takes 5.69 sec
                sumDisplacement[j] += eigenVector * modeDisplacement; // takes 1.06 sec
            }
        }
    }
}
Notice the interesting part? The arithmetic takes ~1 sec, but the dictionary and multi-dimensional array accesses take ~5 sec each! Any idea why this is the case?
The value of n is not important here, and neither is the absolute magnitude of the time needed for each operation. What matters is the ratio between the arithmetic operation and the lookup times.
Edit: I'm using Ants Profiler to do the profiling.
Note: I just simplified my actual code into something like this; the snippet above has not been tested yet, but I'm fairly confident it captures the gist of the problem.

It is a known fact that jagged arrays are faster than multidimensional arrays in .NET (at least on Windows):
Why are multi-dimensional arrays in .NET slower than normal arrays?
What are the differences between a multidimensional array and an array of arrays in C#?
So maybe try a jagged array instead.
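For illustration only (a rough, untested sketch based on the question's own types, not measured): the double[,] field could be swapped for a jagged double[][], with the row reference hoisted out of the inner loop so only vector accesses remain:

public class AccessTestJagged
{
    public Dictionary<int, Dictionary<double, StateSpace>> ModeStateSpace;
    public double[][] eigenVectors; // jagged: one double[6] row per mode

    public void AccessJob(int n, double times)
    {
        var sumDisplacement = new double[6];
        for (int i = 0; i < n; i++)
        {
            var modeDisplacement = ModeStateSpace[i][times].q;
            var row = eigenVectors[i]; // hoist the row lookup out of the inner loop
            for (int j = 0; j < row.Length; j++)
            {
                sumDisplacement[j] += row[j] * modeDisplacement;
            }
        }
    }
}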


2D Array vs Array of Arrays in tight loop performance C#

I had a look and couldn't see anything quite answering my question.
I'm not exactly the best at creating accurate 'real life' tests, so I'm not sure if that's the problem here. Basically, I want to create a few simple neural networks for something to the effect of Gridworld. Performance of these neural networks will be critical, and I don't want the hidden layer to be a bottleneck if I can avoid it.
I would rather use more memory and be faster, so I opted for arrays instead of lists (lists add an extra layer of bounds checking over arrays). The arrays aren't always full, but because the if statement (checking whether the element is null) evaluates the same way until the end, it can be branch-predicted and there is no performance drop from that at all.
My question comes from how I store the data for the network to process. I figured that because a 2D array stores all its data together it would be more cache-friendly and would run faster. But in my mock-up test an array of arrays performs much better in this scenario.
Some Code:
private void RunArrayOfArrayTest(float[][] testArray, Data[] data)
{
    for (int i = 0; i < testArray.Length; i++) {
        for (int j = 0; j < testArray[i].Length; j++) {
            var inputTotal = data[i].bias;
            for (int k = 0; k < data[i].weights.Length; k++) {
                inputTotal += testArray[i][k];
            }
        }
    }
}

private void Run2DArrayTest(float[,] testArray, Data[] data, int maxI, int maxJ)
{
    for (int i = 0; i < maxI; i++) {
        for (int j = 0; j < maxJ; j++) {
            var inputTotal = data[i].bias;
            for (int k = 0; k < maxJ; k++) {
                inputTotal += testArray[i, k];
            }
        }
    }
}
These are the two functions that are timed. Each 'creature' has its own network (the first for loop), each network has hidden nodes (the second for loop), and I need to find the sum of the weights for each input (the third loop). In my test I stripped it down so that it's not really what I am doing in my actual code, but the same number of loops happen (the Data variable would have its own 2D array, but I didn't want to possibly skew the results). From this I was trying to get a feel for which one is faster, and to my surprise the array of arrays was.
Code to start the tests:
// Array of Array test
Stopwatch timer = Stopwatch.StartNew();
RunArrayOfArrayTest(arrayOfArrays, dataArrays);
timer.Stop();
Console.WriteLine("Array of Arrays finished in: " + timer.ElapsedTicks);
// 2D Array test
timer = Stopwatch.StartNew();
Run2DArrayTest(array2D, dataArrays, NumberOfNetworks, NumberOfInputNeurons);
timer.Stop();
Console.WriteLine("2D Array finished in: " + timer.ElapsedTicks);
Just wanted to show how I was testing it. The results from this in release mode give me values like:
Array of Arrays finished in: 8972
2D Array finished in: 16376
Can someone explain to me what I'm doing wrong? Why is an array of arrays faster in this situation by so much? Isn't a 2D array all stored together, meaning it would be more cache-friendly?
Note: I really do need this to be fast, as it needs to sum up hundreds of thousands to millions of numbers per frame, and like I said I don't want this to be a problem. I know this can be multithreaded quite easily in the future, because each network is completely separate and even each node is completely separate.
Last question, I suppose: would something like this be possible to run on the GPU instead? I figure a GPU would not struggle with much larger numbers of networks and much larger numbers of input/hidden neurons.
In the CLR, there are two different types of array:
Vectors, which are zero-based, single-dimensional arrays
Arrays, which can have non-zero bases and multiple dimensions
Your "array of arrays" is a "vector of vectors" in CLR terms.
Vectors are significantly faster than arrays, basically. It's possible that arrays could be optimized further in later CLR versions, but I doubt they'll get the same amount of love as vectors, as they're used so relatively rarely. There's not a lot you can do to make CLR arrays faster. As you say, they'll be more cache-friendly, but they carry this CLR penalty.
You can improve your array-of-arrays code already, however, by only performing the first indexing operation once per row:
private void RunArrayOfArrayTest(float[][] testArray, Data[] data)
{
    for (int i = 0; i < testArray.Length; i++) {
        // These don't change in the loop below, so extract them
        var row = testArray[i];
        var inputTotal = data[i].bias;
        var weightLength = data[i].weights.Length;
        for (int j = 0; j < row.Length; j++) {
            for (int k = 0; k < weightLength; k++) {
                inputTotal += row[k];
            }
        }
    }
}
If you want to get the cache friendliness and still use a vector, you could have a single float[] and perform the indexing yourself... but I'd probably start off with the array-of-arrays approach.
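As a rough illustration of that last idea (a sketch only, mirroring the question's signatures, assuming row-major layout, and not benchmarked): the weights could live in one flat float[] indexed as row * width + column, which keeps the contiguous layout without the multidimensional-array overhead:

private void RunFlatArrayTest(float[] testArray, Data[] data, int maxI, int maxJ)
{
    for (int i = 0; i < maxI; i++)
    {
        int rowOffset = i * maxJ; // compute the start of the row once
        for (int j = 0; j < maxJ; j++)
        {
            var inputTotal = data[i].bias;
            for (int k = 0; k < maxJ; k++)
            {
                inputTotal += testArray[rowOffset + k]; // plain vector access
            }
        }
    }
}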

A workaround for a big multidimensional array (Jagged Array) C#?

I'm trying to initialize a three-dimensional array to load a voxel world.
The total size of the map should be 2048 x 1024 x 2048. I tried to initialize a jagged array of int, but it throws a memory exception. What is the size limit?
Size of my table: 2048 * 1024 * 2048 = 4'294'967'296
Does anyone know a way around this problem?
// System.OutOfMemoryException here!
int[][][] matrice = CreateJaggedArray<int[][][]>(2048, 1024, 2048);

// normal initialization also throws the exception
int[,,] matrice = new int[2048, 1024, 2048];
static T CreateJaggedArray<T>(params int[] lengths)
{
    return (T)InitializeJaggedArray(typeof(T).GetElementType(), 0, lengths);
}

static object InitializeJaggedArray(Type type, int index, int[] lengths)
{
    Array array = Array.CreateInstance(type, lengths[index]);
    Type elementType = type.GetElementType();
    if (elementType != null)
    {
        for (int i = 0; i < lengths[index]; i++)
        {
            array.SetValue(
                InitializeJaggedArray(elementType, index + 1, lengths), i);
        }
    }
    return array;
}
The maximum size of a single object in C# is 2GB. Since you are creating a multi-dimensional array rather than a jagged array (despite the name of your method) it is a single object that needs to contain all of those items, not several. If you actually used a jagged array then you wouldn't have a single item with all of that data (even though the total memory footprint would be a tad larger, not smaller, it's just spread out more).
Thank you so much to everyone who tried to help me understand and solve my problem.
I tried several solutions to be able to load a lot of data and store it in an array.
After two days, here are my tests and, finally, the solution that can store 4'294'967'296 entries in one array.
I'm adding my final solution, hoping it helps someone.
The goal
To recap the goal: initialize an integer array [2048 x 1024 x 2048] to store 4'294'967'296 entries.
Test 1: with the jagged array method (failure)
System.OutOfMemoryException thrown.
/* ******************** */
/* Jagged Array method  */
/* ******************** */

// allocate the first dimension
bigData = new int[2048][][];
for (int x = 0; x < 2048; x++)
{
    // allocate the second dimension
    bigData[x] = new int[1024][];
    for (int y = 0; y < 1024; y++)
    {
        // allocate the last dimension
        bigData[x][y] = new int[2048];
    }
}
Test 2: with the List method (failure)
System.OutOfMemoryException thrown (dividing the big array into several smaller arrays does not work, because List<> unfortunately allows a maximum of 2 GB of RAM allocation, just like a simple array).
/* ******************** */
/* List method          */
/* ******************** */

List<int[,,]> bigData = new List<int[,,]>(512);
for (int a = 0; a < 512; a++)
{
    bigData.Add(new int[256, 128, 256]);
}
Test 3: with MemoryMappedFile (Solution)
I finally found the solution!
Use the MemoryMappedFile class, which maps the contents of a file into virtual memory.
MemoryMappedFile MSDN
Use it with the custom class that I found on CodeProject here. The initialization is long, but it works well!
/* ************************ */
/* MemoryMappedFile method  */
/* ************************ */

string path = AppDomain.CurrentDomain.BaseDirectory;
var myList = new GenericMemoryMappedArray<int>(2048L * 1024L * 2048L, path);
using (myList)
{
    myList.AutoGrow = false;
    /*
    for (int a = 0; a < (2048L * 1024L * 2048L); a++)
    {
        myList[a] = a;
    }
    */
    myList[12456] = 8;
    myList[1939848234] = 1;
    // etc...
}
From the MSDN documentation on Arrays (emphasis added)
By default, the maximum size of an Array is 2 gigabytes (GB). In a
64-bit environment, you can avoid the size restriction by setting the
enabled attribute of the gcAllowVeryLargeObjects configuration element
to true in the run-time environment. However, the array will still be
limited to a total of 4 billion elements, and to a maximum index of
0X7FEFFFFF in any given dimension (0X7FFFFFC7 for byte arrays and
arrays of single-byte structures).
So despite the above answers, even if you set the flag to allow a larger object size, the array is still limited by the 32-bit limit on the number of elements.
EDIT: You'll likely have to redesign to eliminate the need for a multidimensional array as you're currently using it (as others have suggested, there are a few ways to do this, from actual jagged arrays to some other collection of dimensions). Given the scale of the number of elements, it may be best to use a design that dynamically allocates objects/memory as they are used, instead of arrays that have to pre-allocate everything (unless you don't mind using many gigabytes of memory). EDITx2: That is, perhaps you can define data structures that describe only the filled content, rather than defining every possible voxel in the world, even the "empty" ones. (I'm assuming the vast majority of voxels are "empty" rather than "filled".)
EDIT: Although it's not trivial, especially if most of the space is considered "empty", your best bet would be to introduce some sort of spatial tree that lets you efficiently query your world to see what objects are in a particular area, for example an octree (as Eric suggested) or an R-tree.
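Not a tree, but in the same spirit, here is a minimal, untested sketch of lazily allocated chunks (hypothetical names, requires C# 7+ for value-tuple keys): only chunks that are actually written to ever get allocated, and everything else reads back as empty:

class SparseVoxelWorld
{
    const int ChunkSize = 64;
    readonly Dictionary<(int, int, int), int[,,]> chunks = new Dictionary<(int, int, int), int[,,]>();

    public int this[int x, int y, int z]
    {
        get
        {
            // Unallocated chunks are treated as empty (0).
            return chunks.TryGetValue((x / ChunkSize, y / ChunkSize, z / ChunkSize), out var chunk)
                ? chunk[x % ChunkSize, y % ChunkSize, z % ChunkSize]
                : 0;
        }
        set
        {
            var key = (x / ChunkSize, y / ChunkSize, z / ChunkSize);
            if (!chunks.TryGetValue(key, out var chunk))
            {
                chunk = new int[ChunkSize, ChunkSize, ChunkSize];
                chunks[key] = chunk;
            }
            chunk[x % ChunkSize, y % ChunkSize, z % ChunkSize] = value;
        }
    }
}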
Creating this object as described, either as a standard array or as a jagged array, is going to destroy the locality of reference that allows your CPU to be performant. I recommend you use a structure like this instead:
class BigArray
{
    ArrayCell[,,] arrayCell = new ArrayCell[32, 16, 32];

    public int this[int i, int j, int k]
    {
        get { return arrayCell[i / 64, j / 64, k / 64][i % 64, j % 64, k % 64]; }
        set { arrayCell[i / 64, j / 64, k / 64][i % 64, j % 64, k % 64] = value; }
    }
}

class ArrayCell
{
    int[,,] cell = new int[64, 64, 64];

    public int this[int i, int j, int k]
    {
        get { return cell[i, j, k]; }
        set { cell[i, j, k] = value; }
    }
}
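Note that the outline above never allocates the inner ArrayCell instances, so as written every access would hit a null reference. One untested way to finish it (hypothetical, eager allocation inside a BigArray constructor) is sketched below; be aware that fully populating it is still the full 2048 x 1024 x 2048 int volume, roughly 16 GB, so lazy allocation or the memory-mapped approach above may still be needed in practice:

public BigArray()
{
    // Eagerly allocate every 64x64x64 cell (32 * 16 * 32 = 16,384 cells of ~1 MB each).
    for (int a = 0; a < 32; a++)
        for (int b = 0; b < 16; b++)
            for (int c = 0; c < 32; c++)
                arrayCell[a, b, c] = new ArrayCell();
}

// Usage then looks like ordinary 3D indexing:
// var world = new BigArray();
// world[100, 200, 300] = 42;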

Find intersection of two multi-dimensional Arrays in C# 4.0

Trying to find a solution to my ranking problem.
Basically, I have two multi-dimensional double[,] arrays, both containing rankings for certain scenarios, so [rank number, scenario number]. More than one scenario can have the same rank.
I want to generate a third multi-dimensional array, taking the intersections of the previous two multi-dimensional arrays to provide a joint ranking.
Does anyone have an idea how I can do this in C#?
Many thanks for any advice or help you can provide!
Edit:
Thank you for all the responses, sorry I should have included an example.
Here it is:
Array One:
[{0,4},{1,0},{1,2},{2,1},{3,5},{4,3}]
Array Two:
[{0,1},{0,4},{1,0},{1,2},{3,5},{4,3}]
Required Result:
[{0,4},{1,0},{1,2},{1,1},{2,5},{3,3}]
Here's some sample code that makes a bunch of assumptions but might be something like what you are looking for. I've added a few comments as well:
static double[,] Intersect(double[,] a1, double[,] a2)
{
    // Assumptions:
    // a1 and a2 are two-dimensional arrays of the same size
    // An element in the array matches if and only if its value is found in the same location in both arrays
    // result will contain not-a-number (NaN) for non-matches
    double[,] result = new double[a1.GetLength(0), a1.GetLength(1)];
    for (int i = 0; i < a1.GetLength(0); i++)
    {
        for (int j = 0; j < a1.GetLength(1); j++)
        {
            if (a1[i, j] == a2[i, j])
            {
                result[i, j] = a1[i, j];
            }
            else
            {
                result[i, j] = double.NaN;
            }
        }
    }
    return result;
}
For the most part, finding the intersection of multi-dimensional arrays will involve iterating over the elements in each of the dimensions of the arrays. If the indices of the array are not part of the match criteria (i.e. the second assumption in my code is dropped), you would have to walk each dimension in each array, which increases the run-time of the algorithm (in this case, from O(n^2) to O(n^4)).
If you care enough about run-time, I believe array matching is one of the typical examples of dynamic programming (DP) optimization, which you can read up on at your leisure.
I'm not sure how you wanted your results...you could probably return a flat collection of results that can be indexed by a pair, which would potentially save a lot of space if the expected result set is typically small. I went with a third fixed-sized array because it was the easiest thing to do.
Lastly, I'll mention that I don't see a keen C# way of doing this using IEnumerable, LINQ, or something like that. Someone more C# knowledgeable than I can chime in anytime now....
Given the additional information, I'd argue that you aren't actually working with multidimensional arrays, but instead are working with a collection of pairs. The pair is a pair of doubles. I think the following should work nicely:
public class Pair : IEquatable<Pair>
{
    public double Rank;
    public double Scenario;

    public bool Equals(Pair p)
    {
        return Rank == p.Rank && Scenario == p.Scenario;
    }

    public override int GetHashCode()
    {
        int hashRank = Rank.GetHashCode();
        int hashScenario = Scenario.GetHashCode();
        return hashRank ^ hashScenario;
    }
}
You can then use the Intersect operator on IEnumerable:
List<Pair> one = new List<Pair>();
List<Pair> two = new List<Pair>();
// ... populate the lists
List<Pair> result = one.Intersect(two).ToList();
Check out the following msdn article on Enumerable.Intersect() for more information:
http://msdn.microsoft.com/en-us/library/bb910215%28v=vs.90%29.aspx
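For completeness, here is a small untested helper (hypothetical, assuming each row of the question's double[,] holds {rank, scenario}) for getting from the original arrays to the Pair type above before calling Intersect:

static List<Pair> ToPairs(double[,] rankings)
{
    // Assumes rankings has shape [n, 2]: column 0 = rank, column 1 = scenario.
    var pairs = new List<Pair>();
    for (int i = 0; i < rankings.GetLength(0); i++)
    {
        pairs.Add(new Pair { Rank = rankings[i, 0], Scenario = rankings[i, 1] });
    }
    return pairs;
}

// var result = ToPairs(arrayOne).Intersect(ToPairs(arrayTwo)).ToList();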

Simple Dictionary Lookup is Slow in .Net Compared to Flat Array

I found that dictionary lookup can be very slow compared to flat array access. Any idea why? I'm using Ants Profiler for performance testing. Here's a sample function that reproduces the problem:
private static void NodeDisplace()
{
    var nodeDisplacement = new Dictionary<double, double[]>();
    var times = new List<double>();

    for (int i = 0; i < 6000; i++)
    {
        times.Add(i * 0.02);
    }

    foreach (var time in times)
    {
        nodeDisplacement.Add(time, new double[6]);
    }

    var five = 5;
    var six = 6;
    int modes = 10;
    var arrayList = new double[times.Count * 6];

    for (int i = 0; i < modes; i++)
    {
        int k = 0;
        foreach (var time in times)
        {
            for (int j = 0; j < 6; j++)
            {
                var simpelCompute = five * six;            // 0.027 sec
                nodeDisplacement[time][j] = simpelCompute; // 0.403 sec
                arrayList[6 * k + j] = simpelCompute;      // 0.0278 sec
            }
            k++;
        }
    }
}
Notice the relative magnitude of flat array access versus dictionary access? The flat array is about 20 times faster than dictionary access (0.403/0.0278), even after taking the array index manipulation (6*k+j) into account.
As weird as it sounds, dictionary lookup is taking a major portion of my time, and I have to optimize it.
Yes, I'm not surprised. The point of dictionaries is that they're used to look up arbitrary keys. Consider what has to happen for a single array dereference:
Check bounds
Multiply index by element size
Add index to pointer
Very, very fast. Now for a dictionary lookup (very rough; depends on implementation):
Potentially check key for nullity
Take hash code of key
Find the right slot for that hash code (probably a "mod prime" operation)
Probably dereference an array element to find the information for that slot
Compare hash codes
If the hash codes match, compare for equality (and potentially go on to the next hash code match)
If you've got "keys" which can very easily be used as array indexes instead (e.g. contiguous integers, or something which can easily be mapped to contiguous integers) then that will be very, very fast. That's not the primary use case for hash tables. They're good for situations which can't easily be mapped that way - for example looking up by string, or by arbitrary double value (rather than doubles which are evenly spaced, and can thus be mapped to integers easily).
I would say that your title is misleading - it's not that dictionary lookup is slow, it's that when arrays are a more suitable approach, they're ludicrously fast.
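For instance (a sketch only, assuming the time keys really are evenly spaced as in the sample code, i.e. i * 0.02, and reusing the question's variable names), the double keys can be converted to array indices and the dictionary dropped entirely:

const double TimeStep = 0.02;
var nodeDisplacement = new double[6000 * 6]; // flat storage instead of Dictionary<double, double[]>

// Writing: turn the time key into an index instead of hashing a double.
int timeIndex = (int)Math.Round(time / TimeStep);
nodeDisplacement[timeIndex * 6 + j] = simpelCompute;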
In addition to Jon's answer, I would like to add that your inner loop does not do very much; normally you do at least some more work in the inner loop, and then the relative performance loss of the dictionary is somewhat lower.
If you look at the code for Double.GetHashCode() in Reflector, you'll find that it executes 4 lines of code (assuming your double is not 0); just that is already more than the body of your inner loop. Dictionary<TKey, TValue>.Insert() (called by the set indexer) is even more code, almost a screenful.
The thing with a Dictionary compared to a flat array is that you don't waste too much memory when your keys are not dense (they are dense in your case), and that reads and writes are ~O(1), like arrays (but with a higher constant).
As a side note, you can use a multi-dimensional array instead of the 6*k+j trick.
Declare it this way:
var arrayList = new double[times.Count, 6];
and use it this way:
arrayList[k, j] = simpelCompute;
It won't be faster, but it is easier to read.

Differences between a multidimensional array "[,]" and an array of arrays "[][]" in C#?

What are the differences between multidimensional arrays double[,] and array of arrays double[][] in C#?
If there is a difference?
What is the best use for each one?
Arrays of arrays (jagged arrays) are faster than multi-dimensional arrays and can be used more effectively. Multi-dimensional arrays have nicer syntax.
If you write some simple code using jagged and multidimensional arrays and then inspect the compiled assembly with an IL disassembler, you will see that storage and retrieval from jagged (or single-dimensional) arrays are simple IL instructions, while the same operations for multidimensional arrays are method invocations, which are always slower.
Consider the following methods:
static void SetElementAt(int[][] array, int i, int j, int value)
{
    array[i][j] = value;
}

static void SetElementAt(int[,] array, int i, int j, int value)
{
    array[i, j] = value;
}
Their IL will be the following:
.method private hidebysig static void SetElementAt(int32[][] 'array',
int32 i,
int32 j,
int32 'value') cil managed
{
// Code size 7 (0x7)
.maxstack 8
IL_0000: ldarg.0
IL_0001: ldarg.1
IL_0002: ldelem.ref
IL_0003: ldarg.2
IL_0004: ldarg.3
IL_0005: stelem.i4
IL_0006: ret
} // end of method Program::SetElementAt
.method private hidebysig static void SetElementAt(int32[0...,0...] 'array',
int32 i,
int32 j,
int32 'value') cil managed
{
// Code size 10 (0xa)
.maxstack 8
IL_0000: ldarg.0
IL_0001: ldarg.1
IL_0002: ldarg.2
IL_0003: ldarg.3
IL_0004: call instance void int32[0...,0...]::Set(int32,
int32,
int32)
IL_0009: ret
} // end of method Program::SetElementAt
When using jagged arrays you can easily perform operations such as row swaps and row resizing. Maybe in some cases multidimensional arrays will be safer to use, but even Microsoft FxCop reports that jagged arrays should be used instead of multidimensional ones when you run it against your projects.
A multidimensional array creates a nice linear memory layout while a jagged array implies several extra levels of indirection.
Looking up the value jagged[3][6] in a jagged array declared as var jagged = new int[10][] (with each element set to a new int[5]) works like this:
Look up the element at index 3 (which is an array).
Look up the element at index 6 in that array (which is a value).
For each dimension in this case, there's an additional lookup (this is an expensive memory access pattern).
A multidimensional array is laid out linearly in memory, and the actual element offset is computed from the indexes (e.g. row index times row width plus column index). Given the array var mult = new int[10,30], the Length property of that multidimensional array returns the total number of elements, i.e. 10 * 30 = 300.
The Rank property of a jagged array is always 1, but a multidimensional array can have any rank. The GetLength method of any array can be used to get the length of each dimension. For the multidimensional array in this example, mult.GetLength(1) returns 30.
Indexing the multidimensional array is cheaper per element: e.g. for mult[1,7] the offset is 30 * 1 + 7 = 37, and the element at offset 37 from the start of the array is fetched. This is a better memory access pattern because only one memory block is involved: the array itself, starting at its base address.
A multidimensional array therefore allocates one contiguous memory block, while a jagged array does not have to be rectangular; e.g. jagged[1].Length does not have to equal jagged[2].Length, which would always hold for a multidimensional array.
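A small illustrative snippet (not from the original answer) makes the Length, Rank and GetLength differences concrete:

var jagged = new int[10][];   // Rank == 1, Length == 10 (rows only)
var mult = new int[10, 30];   // Rank == 2, Length == 300 (all elements)

Console.WriteLine(jagged.Rank);       // 1
Console.WriteLine(mult.Rank);         // 2
Console.WriteLine(mult.GetLength(0)); // 10
Console.WriteLine(mult.GetLength(1)); // 30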
Performance
Performance-wise, multidimensional arrays should be faster (a lot faster), but due to a really bad CLR implementation they are not.
23.084 16.634 15.215 15.489 14.407 13.691 14.695 14.398 14.551 14.252
25.782 27.484 25.711 20.844 19.607 20.349 25.861 26.214 19.677 20.171
5.050 5.085 6.412 5.225 5.100 5.751 6.650 5.222 6.770 5.305
The first row shows timings for jagged arrays, the second for multidimensional arrays, and the third, well, that's how it should be. The program is shown below; FYI, this was tested running Mono. (The Windows timings are vastly different, mostly due to CLR implementation variations.)
On Windows, the timings of the jagged arrays are greatly superior, about the same as my own interpretation of what multidimensional array lookup should be like; see Single(). Sadly the Windows JIT compiler is really stupid, and this unfortunately makes these performance discussions difficult; there are too many inconsistencies.
These are the timings I got on Windows; same deal here: the first row is jagged arrays, the second multidimensional, and the third my own implementation of multidimensional. Note how much slower this is on Windows compared to Mono.
8.438 2.004 8.439 4.362 4.936 4.533 4.751 4.776 4.635 5.864
7.414 13.196 11.940 11.832 11.675 11.811 11.812 12.964 11.885 11.751
11.355 10.788 10.527 10.541 10.745 10.723 10.651 10.930 10.639 10.595
Source code:
using System;
using System.Diagnostics;

static class ArrayPref
{
    const string Format = "{0,7:0.000} ";

    static void Main()
    {
        Jagged();
        Multi();
        Single();
    }

    static void Jagged()
    {
        const int dim = 100;
        for (var passes = 0; passes < 10; passes++)
        {
            var timer = new Stopwatch();
            timer.Start();
            var jagged = new int[dim][][];
            for (var i = 0; i < dim; i++)
            {
                jagged[i] = new int[dim][];
                for (var j = 0; j < dim; j++)
                {
                    jagged[i][j] = new int[dim];
                    for (var k = 0; k < dim; k++)
                    {
                        jagged[i][j][k] = i * j * k;
                    }
                }
            }
            timer.Stop();
            Console.Write(Format,
                (double)timer.ElapsedTicks / TimeSpan.TicksPerMillisecond);
        }
        Console.WriteLine();
    }

    static void Multi()
    {
        const int dim = 100;
        for (var passes = 0; passes < 10; passes++)
        {
            var timer = new Stopwatch();
            timer.Start();
            var multi = new int[dim, dim, dim];
            for (var i = 0; i < dim; i++)
            {
                for (var j = 0; j < dim; j++)
                {
                    for (var k = 0; k < dim; k++)
                    {
                        multi[i, j, k] = i * j * k;
                    }
                }
            }
            timer.Stop();
            Console.Write(Format,
                (double)timer.ElapsedTicks / TimeSpan.TicksPerMillisecond);
        }
        Console.WriteLine();
    }

    static void Single()
    {
        const int dim = 100;
        for (var passes = 0; passes < 10; passes++)
        {
            var timer = new Stopwatch();
            timer.Start();
            var single = new int[dim * dim * dim];
            for (var i = 0; i < dim; i++)
            {
                for (var j = 0; j < dim; j++)
                {
                    for (var k = 0; k < dim; k++)
                    {
                        single[i * dim * dim + j * dim + k] = i * j * k;
                    }
                }
            }
            timer.Stop();
            Console.Write(Format,
                (double)timer.ElapsedTicks / TimeSpan.TicksPerMillisecond);
        }
        Console.WriteLine();
    }
}
Simply put, a multidimensional array is similar to a table in a DBMS.
An array of arrays (jagged array) lets each element hold another array of the same type, of variable length.
So, if you are sure that the structure of the data looks like a table (fixed rows/columns), you can use a multi-dimensional array. A jagged array has a fixed number of elements, and each element can hold an array of variable length.
E.g. pseudocode:
int[,] data = new int[2,2];
data[0,0] = 1;
data[0,1] = 2;
data[1,0] = 3;
data[1,1] = 4;
Think of the above as a 2x2 table:
1 | 2
3 | 4
int[][] jagged = new int[3][];
jagged[0] = new int[4] { 1, 2, 3, 4 };
jagged[1] = new int[2] { 11, 12 };
jagged[2] = new int[3] { 21, 22, 23 };
Think of the above as each row having variable number of columns:
1 | 2 | 3 | 4
11 | 12
21 | 22 | 23
Update .NET 6:
With the release of .NET 6, I decided it was a good time to revisit this topic. I rewrote the test code for the new .NET and ran it with the requirement that each part run for at least a second. The benchmark was done on an AMD Ryzen 5600X.
Results? It's complicated. It seems that the single array is the most performant for smaller and larger arrays (< ~25x25x25 and > ~200x200x200), with jagged arrays being the fastest in between. Unfortunately, from my testing, multi-dimensional arrays are by far the slowest option, at best running twice as slow as the fastest option. But! It depends on what you need the arrays for, because jagged arrays can take much longer to initialize: on a 50^3 cube the initialization was roughly 3 times longer than for the single-dimensional array. Multi-dimensional was only a little bit slower than single-dimensional.
The conclusion? If you need fast code, benchmark it yourself on the machine it's going to run on. CPU architecture can completely change the relative performance of each method.
Numbers!
Method name Ticks/Iteration Scaled to the best
Array size 1x1x1 (10,000,000 iterations):
Jagged: 0.15 4.28
Single: 0.035 1
Multi-dimensional: 0.77 22
Array size 10x10x10 (25,000 iterations):
Jagged: 15 1.67
Single: 9 1
Multi-dimensional: 56 6.2
Array size 25x25x25 (25,000 iterations):
Jagged: 157 1.3
Single: 120 1
Multi-dimensional: 667 5.56
Array size 50x50x50 (10,000 iterations):
Jagged: 1,140 1
Single: 2,440 2.14
Multi-dimensional: 5,210 4.57
Array size 100x100x100 (10,000 iterations):
Jagged: 9,800 1
Single: 19,800 2
Multi-dimensional: 41,700 4.25
Array size 200x200x200 (1,000 iterations):
Jagged: 161,622 1
Single: 175,507 1.086
Multi-dimensional: 351,275 2.17
Array size 500x500x500 (100 iterations):
Jagged: 4,057,413 1.5
Single: 2,709,301 1
Multi-dimensional: 5,359,393 1.98
Don't trust me? Run it yourself and verify.
Note: the constant size seems to give jagged arrays an edge, but it is not significant enough to change the order in my benchmarks. In some instances I have measured a ~7% decrease in performance when using a size taken from user input for jagged arrays, no difference for single arrays, and a very small difference (~1% or less) for multi-dimensional arrays. It is most prominent in the middle range, where jagged arrays take the lead.
using System.Diagnostics;

const string Format = "{0,7:0.000} ";
const int TotalPasses = 25000;
const int Size = 50;

Stopwatch timer = new();

var functionList = new List<Action> { Jagged, Single, SingleStandard, Multi };

Console.WriteLine("{0,5}{1,20}{2,20}{3,20}{4,20}", "Run", "Ticks", "ms", "Ticks/Instance", "ms/Instance");

foreach (var item in functionList)
{
    var warmup = Test(item);
    var run = Test(item);
    Console.WriteLine($"{item.Method.Name}:");
    PrintResult("warmup", warmup);
    PrintResult("run", run);
    Console.WriteLine();
}

static void PrintResult(string name, long ticks)
{
    Console.WriteLine("{0,10}{1,20}{2,20}{3,20}{4,20}",
        name,
        ticks,
        string.Format(Format, (decimal)ticks / TimeSpan.TicksPerMillisecond),
        (decimal)ticks / TotalPasses,
        (decimal)ticks / TotalPasses / TimeSpan.TicksPerMillisecond);
}

long Test(Action func)
{
    timer.Restart();
    func();
    timer.Stop();
    return timer.ElapsedTicks;
}

static void Jagged()
{
    for (var passes = 0; passes < TotalPasses; passes++)
    {
        var jagged = new int[Size][][];
        for (var i = 0; i < Size; i++)
        {
            jagged[i] = new int[Size][];
            for (var j = 0; j < Size; j++)
            {
                jagged[i][j] = new int[Size];
                for (var k = 0; k < Size; k++)
                {
                    jagged[i][j][k] = i * j * k;
                }
            }
        }
    }
}

static void Multi()
{
    for (var passes = 0; passes < TotalPasses; passes++)
    {
        var multi = new int[Size, Size, Size];
        for (var i = 0; i < Size; i++)
        {
            for (var j = 0; j < Size; j++)
            {
                for (var k = 0; k < Size; k++)
                {
                    multi[i, j, k] = i * j * k;
                }
            }
        }
    }
}

static void Single()
{
    for (var passes = 0; passes < TotalPasses; passes++)
    {
        var single = new int[Size * Size * Size];
        for (var i = 0; i < Size; i++)
        {
            int iOffset = i * Size * Size;
            for (var j = 0; j < Size; j++)
            {
                var jOffset = iOffset + j * Size;
                for (var k = 0; k < Size; k++)
                {
                    single[jOffset + k] = i * j * k;
                }
            }
        }
    }
}

static void SingleStandard()
{
    for (var passes = 0; passes < TotalPasses; passes++)
    {
        var single = new int[Size * Size * Size];
        for (var i = 0; i < Size; i++)
        {
            for (var j = 0; j < Size; j++)
            {
                for (var k = 0; k < Size; k++)
                {
                    single[i * Size * Size + j * Size + k] = i * j * k;
                }
            }
        }
    }
}
Lesson learned: always include the CPU in benchmark results, because it makes a difference. Did it this time? I don't know, but I suspect it might have.
Original answer:
I would like to give an update on this, because in .NET Core multi-dimensional arrays are faster than jagged arrays. I ran the tests from John Leidegren, and these are the results on .NET Core 2.0 preview 2. I increased the dimension value to make any possible influence from background apps less visible.
Debug (code optimization disabled)
Running jagged
187.232 200.585 219.927 227.765 225.334 222.745 224.036 222.396 219.912 222.737
Running multi-dimensional
130.732 151.398 131.763 129.740 129.572 159.948 145.464 131.930 133.117 129.342
Running single-dimensional
91.153 145.657 111.974 96.436 100.015 97.640 94.581 139.658 108.326 92.931
Release (code optimization enabled)
Running jagged
108.503 95.409 128.187 121.877 119.295 118.201 102.321 116.393 125.499 116.459
Running multi-dimensional
62.292 60.627 60.611 60.883 61.167 60.923 62.083 60.932 61.444 62.974
Running single-dimensional
34.974 33.901 34.088 34.659 34.064 34.735 34.919 34.694 35.006 34.796
I looked into the disassemblies and this is what I found:
jagged[i][j][k] = i * j * k; needed 34 instructions to execute
multi[i, j, k] = i * j * k; needed 11 instructions to execute
single[i * dim * dim + j * dim + k] = i * j * k; needed 23 instructions to execute
I wasn't able to identify why single-dimensional arrays were still faster than multi-dimensional ones, but my guess is that it has to do with some optimization made in the CPU.
Preface: This comment is intended to address the answer provided by okutane regarding the performance difference between jagged arrays and multidimensional ones.
The assertion that one type is slower than the other because of the method calls isn't correct. One is slower than the other because of more complicated bounds-checking algorithms. You can easily verify this by looking, not at the IL, but at the compiled assembly. For example, on my 4.5 install, accessing an element (via pointer in edx) stored in a two-dimensional array pointed to by ecx with indexes stored in eax and edx looks like so:
sub eax,[ecx+10]
cmp eax,[ecx+08]
jae oops //jump to throw out of bounds exception
sub edx,[ecx+14]
cmp edx,[ecx+0C]
jae oops //jump to throw out of bounds exception
imul eax,[ecx+0C]
add eax,edx
lea edx,[ecx+eax*4+18]
Here, you can see that there's no overhead from method calls. The bounds checking is just very convoluted thanks to the possibility of non-zero indexes, which is a functionality not on offer with jagged arrays. If we remove the sub, cmp, and jmps for the non-zero cases, the code pretty much resolves to (x*y_max+y)*sizeof(ptr)+sizeof(array_header). This calculation is about as fast (one multiply could be replaced by a shift, since that's the whole reason we choose bytes to be sized as powers of two bits) as anything else for random access to an element.
Another complication is that there are plenty of cases where a modern compiler will optimize away the nested bounds-checking for element access while iterating over a single-dimension array. The result is code that basically just advances an index pointer over the contiguous memory of the array. Naive iteration over multi-dimensional arrays generally involves an extra layer of nested logic, so a compiler is less likely to optimize the operation. So, even though the bounds-checking overhead of accessing a single element amortizes out to constant runtime with respect to array dimensions and sizes, a simple test-case to measure the difference may take many times longer to execute.
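As a rough illustration of that last point (how aggressively this happens depends on the JIT version, so treat this as a sketch): the first pattern below is the one the JIT commonly recognizes and elides the per-access bounds check for, while the nested multidimensional loop generally keeps its checks:

static long SumVector(int[] data)
{
    long sum = 0;
    // The loop bound is data.Length, so the JIT can usually prove the index
    // is always in range and drop the bounds check inside the loop.
    for (int i = 0; i < data.Length; i++)
        sum += data[i];
    return sum;
}

static long SumMulti(int[,] data)
{
    long sum = 0;
    // Each dimension's bound comes from a separate call, and the access goes
    // through the multidimensional-array helper, so the checks are far less
    // likely to be hoisted or eliminated.
    for (int i = 0; i < data.GetLength(0); i++)
        for (int j = 0; j < data.GetLength(1); j++)
            sum += data[i, j];
    return sum;
}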
Multi-dimensional arrays are rectangular, n-dimensional matrices.
So int[,] square = new int[2,2] is a square 2x2 matrix, and int[,,] cube = new int[3,3,3] is a cube, a square 3x3x3 matrix. The dimensions do not have to be equal.
Jagged arrays are just arrays of arrays - an array where each cell contains an array.
So a multi-dimensional array is always rectangular, while a jagged array may not be: each cell can contain an array of arbitrary length!
This might have been mentioned in the above answers, but not explicitly: with a jagged array you can use array[row] to refer to a whole row of data, but this is not allowed for multi-d arrays.
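A two-line illustration (hypothetical values):

int[][] jagged = { new[] { 1, 2, 3 }, new[] { 4, 5, 6 } };
int[] secondRow = jagged[1]; // grabs the whole row as an int[]
// With int[,] multi there is no multi[1]; you would have to copy the row element by element.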
In addition to the other answers, note that a multidimensional array is allocated as one big chunky object on the heap. This has some implications:
Some multidimensional arrays will get allocated on the Large Object Heap (LOH) where their equivalent jagged array counterparts would otherwise not have.
The GC will need to find a single contiguous free block of memory to allocate a multidimensional array, whereas a jagged array might be able to fill in gaps caused by heap fragmentation... this isn't usually an issue in .NET because of compaction, but the LOH doesn't get compacted by default (you have to ask for it, and you have to ask every time you want it).
You'll want to look into <gcAllowVeryLargeObjects> for multidimensional arrays way before the issue will ever come up if you only ever use jagged arrays.
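For reference, the switch mentioned above is the documented <gcAllowVeryLargeObjects> element in the application's configuration file (the 4-billion-element and per-dimension index limits quoted earlier still apply):

<configuration>
  <runtime>
    <gcAllowVeryLargeObjects enabled="true" />
  </runtime>
</configuration>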
I thought I'd chime in here from the future with some performance results from .NET 5, seeing as that will be the platform which everyone uses from now on.
These are the same tests that John Leidegren used (in 2009).
My results (.NET 5.0.1):
Debug:
(Jagged)
5.616 4.719 4.778 5.524 4.559 4.508 5.913 6.107 5.839 5.270
(Multi)
6.336 7.477 6.124 5.817 6.516 7.098 5.272 6.091 25.034 6.023
(Single)
4.688 3.494 4.425 6.176 4.472 4.347 4.976 4.754 3.591 4.403
Release(code optimizations on):
(Jagged)
2.614 2.108 3.541 3.065 2.172 2.936 1.681 1.724 2.622 1.708
(Multi)
3.371 4.690 4.502 4.153 3.651 3.637 3.580 3.854 3.841 3.802
(Single)
1.934 2.102 2.246 2.061 1.941 1.900 2.172 2.103 1.911 1.911
Ran on a 6-core 3.7 GHz AMD Ryzen 1600 machine.
It looks as though the performance ratio is still roughly the same. I'd say unless you're really optimizing hard, just use multi-dimensional arrays as the syntax is slightly easier to use.
Jagged arrays are arrays of arrays, i.e. arrays in which each row holds an array of its own.
These rows can have lengths different from those of the other rows.
Declaring and Allocating an Array of Arrays
The only difference in the declaration of the jagged arrays compared to the regular multidimensional array is that we do not have just one pair of brackets. With the jagged arrays, we have a pair of brackets per dimension. We allocate them this way:
int[][] exampleJaggedArray;
exampleJaggedArray = new int[2][];
exampleJaggedArray[0] = new int[5];
exampleJaggedArray[1] = new int[3];
Initializing an array of arrays:
int[][] exampleJaggedArray = {
    new int[] {5, 7, 2},
    new int[] {10, 20, 40},
    new int[] {3, 25}
};
Memory Allocation
Jagged arrays are an aggregation of references. A jagged array does not directly contain any arrays; rather, it has elements pointing to them. The sizes are unknown, which is why the CLR just keeps references to the internal arrays. After we allocate memory for one array element of the jagged array, the reference starts pointing to the newly created block in dynamic memory.
The variable exampleJaggedArray is stored in the execution stack of the program and points to a block in dynamic memory, which contains a sequence of three references to three other blocks in memory; each of them contains an array of integers, the elements of the jagged array.
I am parsing .il files generated by ildasm to build a database of assemblies, classes, methods, and stored procedures for use in doing a conversion. I came across the following, which broke my parsing.
.method private hidebysig instance uint32[0...,0...]
GenerateWorkingKey(uint8[] key,
bool forEncryption) cil managed
The book Expert .NET 2.0 IL Assembler, by Serge Lidin, Apress, published 2006, Chapter 8, Primitive Types and Signatures, pp. 149-150 explains.
<type>[] is termed a Vector of <type>,
<type>[<bounds> [<bounds>**] ] is termed an array of <type>
** means may be repeated, [ ] means optional.
Examples: Let <type> = int32.
1) int32[...,...] is a two-dimensional array of undefined lower bounds and sizes
2) int32[2...5] is a one-dimensional array of lower bound 2 and size 4.
3) int32[0...,0...] is a two-dimensional array of lower bounds 0 and undefined size.
Tom
Using a test based on the one by John Leidegren, I benchmarked the result using .NET 4.7.2, which is the relevant version for my purposes, and thought I would share. I originally started from this comment in the dotnet core GitHub repository.
It appears that the performance varies greatly as the array size changes, at least on my setup: one Xeon processor with 4 physical / 8 logical cores.
w = initialize an array and put int i * j in it.
wr = do w, then in another loop set int x to [i, j].
As the array size grows, multidimensional appears to outperform jagged.
Size | rw | Method | Mean | Error | StdDev | Gen 0/1k Op | Gen 1/1k Op | Gen 2/1k Op | Allocated Memory/Op
1800*500 | w | Jagged | 2.445 ms | 0.0959 ms | 0.1405 ms | 578.1250 | 281.2500 | 85.9375 | 3.46 MB
1800*500 | w | Multi | 3.079 ms | 0.2419 ms | 0.3621 ms | 269.5313 | 269.5313 | 269.5313 | 3.43 MB
2000*4000 | w | Jagged | 50.29 ms | 3.262 ms | 4.882 ms | 5937.5000 | 3375.0000 | 937.5000 | 30.62 MB
2000*4000 | w | Multi | 26.34 ms | 1.797 ms | 2.690 ms | 218.7500 | 218.7500 | 218.7500 | 30.52 MB
2000*4000 | wr | Jagged | 55.30 ms | 3.066 ms | 4.589 ms | 5937.5000 | 3375.0000 | 937.5000 | 30.62 MB
2000*4000 | wr | Multi | 32.23 ms | 2.798 ms | 4.187 ms | 285.7143 | 285.7143 | 285.7143 | 30.52 MB
1000*2000 | wr | Jagged | 11.18 ms | 0.5397 ms | 0.8078 ms | 1437.5000 | 578.1250 | 234.3750 | 7.69 MB
1000*2000 | wr | Multi | 6.622 ms | 0.3238 ms | 0.4847 ms | 210.9375 | 210.9375 | 210.9375 | 7.63 MB
Update: the last two tests use double[,] instead of int[,]. The difference appears significant considering the errors. With int, the ratio of means for jagged vs multidimensional is between 1.53x and 1.86x; with doubles it is between 1.88x and 2.42x.
Size | rw | Method | Mean | Error | StdDev | Gen 0/1k Op | Gen 1/1k Op | Gen 2/1k Op | Allocated Memory/Op
1000*2000 | wr | Jagged | 26.83 ms | 1.221 ms | 1.790 ms | 3062.5000 | 1531.2500 | 531.2500 | 15.31 MB
1000*2000 | wr | Multi | 12.61 ms | 1.018 ms | 1.524 ms | 156.2500 | 156.2500 | 156.2500 | 15.26 MB
