BitArray vs bool[] - C#

I expected to find an existing question about this here on SO, but I didn't.
What is the advantage of using a Bitarray when you can store your bool values in a bool[]?
System.Collections.BitArray biArray = new System.Collections.BitArray(8);
biArray[4] = true;
bool[] boArray = new bool[8];
boArray[4] = true;
The bool[] seems a little more handy to me because there are more (extension) methods for working with an array than with a BitArray.

There's a memory/performance tradeoff. BitArray stores 8 entries per byte, but accessing a single entry requires a bunch of logical operations under the hood. A bool array stores each entry as one byte, so it takes up more memory but requires fewer CPU cycles to access.
Essentially, BitArray is a memory optimization over bool[], and there's no point in using it unless memory is scarce.
Edit:
Created a simple performance test.
Inverting 500M elements using BitArray takes 6 seconds on my machine:
const int limit = 500000000;
var bitarray = new BitArray(limit);
for (var i = 0; i < limit; ++i)
{
bitarray[i] = !bitarray[i];
}
The same test using a bool array takes about 1.5 seconds:
const int limit = 500000000;
var boolarray = new bool[limit];
for (var i = 0; i < limit; ++i)
{
boolarray[i] = !boolarray[i];
}

BitArray is compact and allows you to perform bitwise operations. From the MSDN forum:
A BitArray uses one bit for each value, while a bool[] uses one byte
for each value. You can pass to the BitArray constructor either an
array of bools, an array of bytes or an array of integers. You can
also pass an integer value specifying the desired length and
(optionally) a boolean argument that specifies if the individual bits
should be set or not.
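A quick illustrative sketch (my own addition, not from the answers above) of the bitwise operations BitArray provides out of the box; with a bool[] you would have to write these loops yourself:

using System;
using System.Collections;

var a = new BitArray(8);
var b = new BitArray(8);
a[1] = true; a[4] = true;
b[4] = true; b[5] = true;

// And/Or/Not modify the instance they are called on and return it,
// so copy first if you want to keep the originals.
var both = new BitArray(a).And(b);      // only bit 4 is set
var either = new BitArray(a).Or(b);     // bits 1, 4 and 5 are set
var flipped = new BitArray(a).Not();    // every bit inverted

Console.WriteLine(both[4]);             // True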


HashSet for byte arrays [duplicate]

This question already has answers here:
How to create a HashSet<List<Int>> with distinct elements?
(5 answers)
I need a HashSet for byte arrays in order to check if a given byte array exists in the collection. But it seems like this doesn't work for byte arrays (or perhaps any array).
Here is my test code:
void test()
{
byte[] b1 = new byte[] { 1, 2, 3 };
byte[] b2 = new byte[] { 1, 2, 3 };
HashSet<byte[]> set = new HashSet<byte[]>();
set.Add(b1);
set.Add(b2);
Text = set.Count.ToString();//returns 2 instead of the expected 1.
}
Is there a way to make a HashSet for byte arrays?
Construct the HashSet with an IEqualityComparer<byte[]>. You don't want to use an interface here. While byte[] does in fact implement interfaces such as IEnumerable<byte>, IList<byte>, etc., using them is a bad idea due to the weight involved. You don't use the fact that string implements IEnumerable<char> much at all, so don't for byte[] either.
public class bytearraycomparer : IEqualityComparer<byte[]> {
public bool Equals(byte[] a, byte[] b)
{
if (a.Length != b.Length) return false;
for (int i = 0; i < a.Length; i++)
if (a[i] != b[i]) return false;
return true;
}
public int GetHashCode(byte[] a)
{
uint b = 0;
for (int i = 0; i < a.Length; i++)
b = ((b << 23) | (b >> 9)) ^ a[i];
return unchecked((int)b);
}
}
void test()
{
byte[] b1 = new byte[] { 1, 2, 3 };
byte[] b2 = new byte[] { 1, 2, 3 };
HashSet<byte[]> set = new HashSet<byte[]>(new bytearraycomparer());
set.Add(b1);
set.Add(b2);
Text = set.Count.ToString();
}
https://msdn.microsoft.com/en-us/library/bb359100(v=vs.110).aspx
If you were to use the answers in the proposed duplicate question, you would end up with one function call and one array bounds check per byte processed. You don't want that. Expressed in the simplest way, as above, the JIT will inline the fetches, then notice that the bounds checks cannot fail (arrays can't be resized) and omit them. Only one function call for the entire array. Yay.
Lists tend to have only a few elements compared to a byte array, so the dirt-simple hash function foreach (var item in list) hashcode = hashcode * 5 + item.GetHashCode(); is often good enough there. If you use that kind of hash function for byte arrays, you will have problems: the multiply-by-a-small-odd-number trick becomes biased too quickly for comfort here. The particular hash function given above is probably not optimal, but we have run tests on this family and it performs quite well with three million entries. The multiply-by-odd approach got into trouble too quickly, producing numerous collisions between arrays that were only two bytes long/different. If you avoid the degenerate numbers, this family has no collisions in two bytes, and most members have no collisions in three bytes.
Considering actual use cases: by far the two most likely things here are byte strings and actual files being checked for sameness. In either case, taking a hash code of only the first few bytes is most likely a bad idea. String's hash code uses the whole string, so byte strings should do the same, and most duplicated files don't have a unique prefix in the first few bytes. For N entries, if you start getting hash collisions around the square root of N, you might as well have walked the entire array when generating the hash code, to say nothing of the fact that compares are slower than hashes.
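For what it's worth, a shorter comparer is possible on newer runtimes (my addition; assumes .NET 6 or later for HashCode.AddBytes): span-based comparison for equality, and a hash that still covers every byte of the array:

using System;
using System.Collections.Generic;

public sealed class SpanByteArrayComparer : IEqualityComparer<byte[]>
{
    public bool Equals(byte[] a, byte[] b)
    {
        if (ReferenceEquals(a, b)) return true;
        if (a is null || b is null) return false;
        return a.AsSpan().SequenceEqual(b);   // vectorized element-wise comparison
    }

    public int GetHashCode(byte[] a)
    {
        var hc = new HashCode();
        hc.AddBytes(a);                       // hashes the whole array, as recommended above
        return hc.ToHashCode();
    }
}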

Need a fast method of deserializing 1 million Strings & Guids in c#

I want to deserialize a list of 1 million pairs of (String,Guid) for a performance critical app. The format can be anything I choose, and serialization does not have the same performance requirements.
What sort of approach is best? Text or binary? Write each pair (string,guid) consecutively, or write all strings followed by all guids?
I started playing in LinqPad (with the simpler example of deserializing strings only) and found that, slightly counter-intuitively, using a TextReader and ReadLine() was a fair bit faster than using a BinaryReader and ReadString(). (Is the file system cache playing tricks on me?)
public string[] DeSerializeBinary()
{
var tmr = System.Diagnostics.Stopwatch.StartNew();
long ms = 0;
string[] arr = null;
using (var rdr = new BinaryReader(new FileStream(file, FileMode.Open, FileAccess.Read)))
{
var num = rdr.ReadInt32();
arr = new String[num];
for (int i = 0; i < num; i++)
{
arr[i] = rdr.ReadString();
}
tmr.Stop();
ms = tmr.ElapsedMilliseconds;
Console.WriteLine("DeSerializeBinary took {0}ms", ms);
}
return arr;
}
public string[] DeserializeText()
{
var tmr = System.Diagnostics.Stopwatch.StartNew();
long ms = 0;
string[] arr = null;
using (var rdr = File.OpenText(file))
{
var num = Int32.Parse(rdr.ReadLine());
arr = new String[num];
for (int i = 0; i < num; i++)
{
arr[i] = rdr.ReadLine();
}
tmr.Stop();
ms = tmr.ElapsedMilliseconds;
Console.WriteLine("DeserializeText took {0}ms", ms);
}
return arr;
}
Some Edits:
I used RamMap to clear the file system cache, and it turns out there was very little difference between the text and binary readers for strings only.
I have a fairly simple class that holds the string and guid. It also holds an int index which corresponds to its position in the list. Obviously there's no need to include this in serialization.
In a test of (binary) deserializing strings and GUIDs alternately, I get around 500ms.
Ideal timing is 50ms, or as close as I can get. However, a simple experiment showed it takes at least 120ms to read the (compressed) file into memory from a reasonably fast SSD drive, without any sort of parsing at all. So 50ms seems unlikely.
Our strings have no theoretical length restrictions. However, we can assume that the performance target only applies if they are all 20 characters or less.
Timings include opening the file.
Reading the strings is the clear bottleneck now (hence my experiments with serializing strings only). JIT_NewFast took 30% before I preallocated a 16-byte array for reading GUIDs.
It's not surprising that reading a bunch of strings is faster with StreamReader than with BinaryReader. StreamReader reads in blocks from the underlying stream, and parses the strings from that buffer. BinaryReader doesn't have a buffer like that. It reads the string length from the underlying stream, and then reads that many characters. So BinaryReader makes more calls to the base stream's Read method.
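If you want to check that cheaply (my suggestion, not something measured in this answer), you can slot a larger buffer between BinaryReader and the file and see whether the string-reading time moves; FileStream already buffers a little on its own, so this may make no difference:

// Same shape as the question's DeSerializeBinary(), reusing its 'file' field.
using (var fs = new FileStream(file, FileMode.Open, FileAccess.Read))
using (var buffered = new BufferedStream(fs, 1 << 16))   // 64 KB buffer
using (var rdr = new BinaryReader(buffered))
{
    var num = rdr.ReadInt32();
    // ... read the strings exactly as before ...
}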
But there's more to deserializing a (String, Guid) pair than just reading. You also have to parse the Guid. If you write the file in binary then the Guid is written in binary, which makes it much easier and faster to create a Guid structure. If it's a string, then you have to call new Guid(string) to parse the text and create a Guid, after you split the line into its two fields.
Hard to say which of those will be faster.
I can't imagine that we're talking about a whole lot of time here. Certainly reading a file with a million lines will take around a second. Unless the string is really long. A GUID is only 36 characters if you count the separators, right?
With BinaryWriter, you can write the file like this:
writer.Write(count); // integer number of records
foreach (var pair in pairs)
{
writer.Write(pair.theString);
writer.Write(pair.theGuid.ToByteArray());
}
And to read it, you have:
count = reader.ReadInt32();
byte[] guidBytes = new byte[16];
for (int i = 0; i < count; ++i)
{
string s = reader.ReadString();
reader.Read(guidBytes, 0, guidBytes.Length);
pairs.Add(new Pair(s, new Guid(guidBytes)));
}
Whether that's faster than splitting a string and calling the Guid constructor that takes a string parameter, I don't know.
I suspect that any difference is going to be pretty slight. I'd probably go with the simplest method: a text file.
If you want to get really crazy, you can write a custom format that you can easily slurp up in just a couple of large reads (a header, an index, and two arrays for strings and GUIDs), and do everything else in memory. That would almost certainly be faster. But faster enough to warrant the extra work? Doubtful.
Update
Or maybe not doubtful. Here's some code that writes and reads a custom binary format. The format is:
count (int32)
guids (count * 16 bytes)
strings (one big concatenated string)
index (index of each string's starting character in the big string)
I assume you're using a Dictionary<string, Guid> to hold these things. But your data structure doesn't really matter. The code would be substantially the same.
Note that I tested this very briefly. I won't say that the code is 100% bug free, but I think you can get the idea of what I'm doing.
private void WriteGuidFile(string filename, Dictionary<string, Guid> guids)
{
using (var fs = File.Create(filename))
{
using (var writer = new BinaryWriter(fs, Encoding.UTF8))
{
List<int> stringIndex = new List<int>(guids.Count);
StringBuilder bigString = new StringBuilder();
// write count
writer.Write(guids.Count);
// Write the GUIDs and build the string index
foreach (var pair in guids)
{
writer.Write(pair.Value.ToByteArray(), 0, 16);
stringIndex.Add(bigString.Length);
bigString.Append(pair.Key);
}
// Add one more entry to the string index.
// makes deserializing easier
stringIndex.Add(bigString.Length);
// Write the string that contains all of the strings, combined
writer.Write(bigString.ToString());
// write the index
foreach (var ix in stringIndex)
{
writer.Write(ix);
}
}
}
}
Reading is just slightly more involved:
private Dictionary<string, Guid> ReadGuidFile(string filename)
{
using (var fs = File.OpenRead(filename))
{
using (var reader = new BinaryReader(fs, Encoding.UTF8))
{
// read the count
int count = reader.ReadInt32();
// The guids are in a huge byte array sized 16*count
byte[] guidsBuffer = new byte[16*count];
reader.Read(guidsBuffer, 0, guidsBuffer.Length);
// Strings are all concatenated into one
var bigString = reader.ReadString();
// Index is an array of int. We can read it as an array of
// ((count+1) * 4) bytes.
byte[] indexBuffer = new byte[4*(count+1)];
reader.Read(indexBuffer, 0, indexBuffer.Length);
var guids = new Dictionary<string, Guid>(count);
byte[] guidBytes = new byte[16];
int startix = 0;
int endix = 0;
for (int i = 0; i < count; ++i)
{
endix = BitConverter.ToInt32(indexBuffer, 4*(i+1));
string key = bigString.Substring(startix, endix - startix);
Buffer.BlockCopy(guidsBuffer, (i*16),
guidBytes, 0, 16);
guids.Add(key, new Guid(guidBytes));
startix = endix;
}
return guids;
}
}
}
A couple of notes here. First, I'm using BitConverter to convert the data in the byte arrays to integers. It would be faster to use unsafe code and just index into the arrays using an int32*.
You might gain some speed by using pointers to index into the guidBuffer and calling Guid Constructor (Int32, Int16, Int16, Byte, Byte, Byte, Byte, Byte, Byte, Byte, Byte) rather than using Buffer.BlockCopy to copy the GUID into the temporary array.
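Staying in safe code, a sketch of that idea (mine, not tested against the code above) is to read the Guid fields straight out of guidsBuffer with BitConverter, skipping Buffer.BlockCopy and the temporary guidBytes array. On a little-endian machine this reconstructs the same Guid that ToByteArray produced:

// Replaces the Buffer.BlockCopy + new Guid(guidBytes) pair inside the loop.
int offset = i * 16;
var guid = new Guid(
    BitConverter.ToInt32(guidsBuffer, offset),        // bytes 0-3
    BitConverter.ToInt16(guidsBuffer, offset + 4),    // bytes 4-5
    BitConverter.ToInt16(guidsBuffer, offset + 6),    // bytes 6-7
    guidsBuffer[offset + 8], guidsBuffer[offset + 9],
    guidsBuffer[offset + 10], guidsBuffer[offset + 11],
    guidsBuffer[offset + 12], guidsBuffer[offset + 13],
    guidsBuffer[offset + 14], guidsBuffer[offset + 15]);
guids.Add(key, guid);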
You could make the string index an index of lengths rather than the starting positions. That would eliminate the need for the extra value at the end of the array, but it's unlikely that it'd make any difference in the speed.
There might be other optimization opportunities, but I think you get the general idea here.

A workaround for a big multidimensional array (Jagged Array) C#?

I'm trying to initialize a three-dimensional array to load a voxel world.
The total size of the map should be 2048 x 1024 x 2048. I tried to initialize a jagged array of int, but it throws a memory exception. What is the size limit?
Size of my table: 2048 * 1024 * 2048 = 4,294,967,296
Does anyone know a way around this problem?
// System.OutOfMemoryException here !
int[][][] matrice = CreateJaggedArray<int[][][]>(2048,1024,2048);
// if i try normal Initialization I also throws the exception
int[, ,] matrice = new int[2048,1024,2048];
static T CreateJaggedArray<T>(params int[] lengths)
{
return (T)InitializeJaggedArray(typeof(T).GetElementType(), 0, lengths);
}
static object InitializeJaggedArray(Type type, int index, int[] lengths)
{
Array array = Array.CreateInstance(type, lengths[index]);
Type elementType = type.GetElementType();
if (elementType != null)
{
for (int i = 0; i < lengths[index]; i++)
{
array.SetValue(
InitializeJaggedArray(elementType, index + 1, lengths), i);
}
}
return array;
}
The maximum size of a single object in C# is 2GB. Since you are creating a multi-dimensional array rather than a jagged array (despite the name of your method) it is a single object that needs to contain all of those items, not several. If you actually used a jagged array then you wouldn't have a single item with all of that data (even though the total memory footprint would be a tad larger, not smaller, it's just spread out more).
Thank you so much to everyone who tried to help me understand and solve my problem.
I tried several solutions to load that much data and store it in an array.
After two days, here are my tests and, finally, the solution that can store 4,294,967,296 entries in one array.
I'm adding my final solution, hoping it helps someone.
The goal
I recall the goal: initialize an integer array of 2048 x 1024 x 2048 to store 4,294,967,296 values.
Test 1: with JaggedArray method (failure)
System.OutOfMemoryException thrown.
/* ******************** */
/* Jagged Array method */
/* ******************** */
// allocate the first dimension;
bigData = new int[2048][][];
for (int x = 0; x < 2048; x++)
{
// allocate the second dimension;
bigData[x] = new int[1024][];
for (int y = 0; y < 1024; y++)
{
// the last dimension allocation
bigData[x][y] = new int[2048];
}
}
Test 2: with List method (failure)
System.OutOfMemoryException thrown (dividing the big array into several smaller arrays does not work, because List<> unfortunately has the same 2 GB allocation limit as a simple array).
/* ******************** */
/* List method */
/* ******************** */
List<int[,,]> bigData = new List<int[,,]>(512);
for (int a = 0; a < 512; a++)
{
bigData.Add(new int[256, 128, 256]);
}
Test 3: with MemoryMappedFile (Solution)
I finally found the solution!
Use the MemoryMappedFile class, which maps the contents of a file into virtual memory.
MemoryMappedFile MSDN
Use it with a custom class that I found on CodeProject here. The initialization is long, but it works well!
/* ************************ */
/* MemoryMappedFile method */
/* ************************ */
string path = AppDomain.CurrentDomain.BaseDirectory;
var myList = new GenericMemoryMappedArray<int>(2048L*1024L*2048L, path);
using (myList)
{
myList.AutoGrow = false;
/*
for (int a = 0; a < (2048L * 1024L * 2048L); a++)
{
myList[a] = a;
}
*/
myList[12456] = 8;
myList[1939848234] = 1;
// etc...
}
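For reference, here is a minimal sketch (my own, not the CodeProject class) of the same idea using only the built-in APIs. Note that the backing file ends up around 16 GB (2048 * 1024 * 2048 ints of 4 bytes each):

using System.IO;
using System.IO.MemoryMappedFiles;

const long elements = 2048L * 1024L * 2048L;
using var mmf = MemoryMappedFile.CreateFromFile(
    "voxels.bin", FileMode.Create, null, elements * sizeof(int));
using var accessor = mmf.CreateViewAccessor();

accessor.Write(12456L * sizeof(int), 8);                 // myList[12456] = 8;
accessor.Write(1939848234L * sizeof(int), 1);            // myList[1939848234] = 1;
int value = accessor.ReadInt32(12456L * sizeof(int));    // reads back 8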
From the MSDN documentation on Arrays (emphasis added)
By default, the maximum size of an Array is 2 gigabytes (GB). In a
64-bit environment, you can avoid the size restriction by setting the
enabled attribute of the gcAllowVeryLargeObjects configuration element
to true in the run-time environment. However, the array will still be
limited to a total of 4 billion elements, and to a maximum index of
0X7FEFFFFF in any given dimension (0X7FFFFFC7 for byte arrays and
arrays of single-byte structures).
So despite the above answers, even if you set the flag to allow a larger object size, an array is still limited to a 32-bit count of elements.
EDIT: You'll likely have to redesign to eliminate the need for a multidimensional array as you're currently using it (as others have suggested, there are a few ways to do this, from actual jagged arrays to other collections of the dimensions). Given the scale of the number of elements, it may be best to use a design that dynamically allocates objects/memory as they are used, instead of arrays that have to pre-allocate everything (unless you don't mind using many gigabytes of memory). EDITx2: That is, perhaps you can define data structures that describe only the filled content rather than every possible voxel in the world, including the "empty" ones (I'm assuming the vast majority of voxels are "empty" rather than "filled").
EDIT: Although it's not trivial, especially if most of the space is considered "empty", your best bet is to introduce some sort of spatial tree that lets you efficiently query your world to see what objects are in a particular area, for example octrees (as Eric suggested) or R-trees.
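As a deliberately simple, illustrative-only starting point for the "store only the filled voxels" idea (well short of an octree), a hash map keyed by coordinates uses memory proportional to the number of filled voxels rather than to the full 2048 x 1024 x 2048 volume:

using System.Collections.Generic;

class SparseVoxels
{
    // Only voxels written with a non-zero value are stored at all.
    private readonly Dictionary<(int x, int y, int z), int> filled = new();

    public int this[int x, int y, int z]
    {
        get => filled.TryGetValue((x, y, z), out var v) ? v : 0;   // 0 means "empty"
        set
        {
            if (value == 0) filled.Remove((x, y, z));
            else filled[(x, y, z)] = value;
        }
    }
}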
Creating this object as described, either as a standard array or as a jagged array, is going to destroy the locality of reference that allows your CPU to be performant. I recommend you use a structure like this instead:
class BigArray
{
ArrayCell[,,] arrayCell = new ArrayCell[32,16,32];
public int this[int i, int j, int k]
{
get { return (arrayCell[i/64, j/64, k/64])[i%64, j%64, k%64]; }
}
}
class ArrayCell
{
int[,,] cell = new int[64,64,64];
public int this[int i, int j, int k]
{
get { return cell[i,j,k]; }
}
}

Fastest algorithm to create an array of known size

I need to create an array of boolean values, which could be on the scale of 100,000s or even millions of entries. It also needs to be super-fast, so every millisecond per iteration counts.
At the time of beginning the loop, I will already know how many entries there are going to be in the array. The question is, will it be faster to create a bool array up front and fill in the values by index (which is random access - could be slow?), or should I create a List<bool>, keep adding entries to the list, and at the end return .ToArray()?
In other words:
Option 1
var array = new bool[size];
for (var n=0; n<size; n++)
array[n] = GetValue(n);
return array;
Option 2
var list = new List<bool>();
for (var n=0; n<size; n++)
list.Add(GetValue(n));
return list.ToArray();
Or maybe there's a 3rd way that's even faster?
Use a System.Collections.BitArray and don't worry about speed.
What you are suggesting above will only waste your memory. It optimizes for both speed and size, and will pack your bool values nicely (8 per byte, as the gods intended :).
Reply to below comments: If you use a BitArray, everything will be zero at first. Set only those bits for which you have GetValue == true.
The following code seems to show (at least to me) that of the methods discussed on this page, the simple allocation to a bool[] using a loop is quickest.
The code also seems to show me that unless GetValue(n) is computationally trivial, the overhead of allocating the bytes is not the part of the process I would be hoping to optimise.
Hope this helps in some way.
edit: added the results from the run (on my machine)
-- 187ms BitArray
-- 171ms List<bool>().ToArray
-- 168ms bool[] set only if true
-- 130ms bool[] always set
--11460ms bool[] always set with 'complex' GetValue()
class Program
{
static void Main(string[] args)
{
BitArray bitArray = new BitArray(10000000);
bool[] boolArray = new bool[10000000];
Stopwatch sw1 = new Stopwatch();
sw1.Start();
for (int i = 0; i < 10000000; i++)
{
bitArray[i] = GetMod2(i);
}
Console.WriteLine(sw1.ElapsedMilliseconds);
sw1.Restart();
var list = new List<bool>();
for (int i = 0; i < 10000000; i++)
list.Add(GetMod2(i));
var boolArray2 = list.ToArray();
Console.WriteLine(sw1.ElapsedMilliseconds);
sw1.Restart();
for (int i = 0; i < 10000000; i++)
{
bool nextVal = GetMod2(i);
if (nextVal)
bitArray[i] = true;
}
Console.WriteLine(sw1.ElapsedMilliseconds);
sw1.Restart();
for (int i = 0; i < 10000000; i++)
{
boolArray[i] = GetMod2(i);
}
Console.WriteLine(sw1.ElapsedMilliseconds);
sw1.Restart();
for (int i = 0; i < 10000000; i++)
{
boolArray[i] = GetRand(i);
}
Console.WriteLine(sw1.ElapsedMilliseconds);
Console.ReadLine();
}
static bool GetMod2(int i)
{
return (i % 2) == 1;
}
static bool GetRand(int i)
{
return new Random().Next(2) == 1;
}
}
Go with the first. The only reason it might be "slow" is if it keeps paging data from outside the processor cache.
The list will have exactly the same problem, except it will also need to perform several memory allocations and copies.
Now here's a funny old thing. Inspired by #paul, I ran these benchmark tests myself, on 10,000,000 booleans. The results (in milliseconds) are very surprising, given the discussion in the comments to this question:
BitArray: 517
BitArray + CopyTo(array): 536
List + ToArray(): 455
bool array: 483
And what a turn-up for the books! Despite the fact that the List<bool> is adding a new entry every time, while the bool[] and BitArray start out initialized to false and I only updated them where the value should be true, the List<bool> comes out on top, consistently, even including the .ToArray() call.
Yet another case where practical application is better than textbook knowledge, it seems... :)

Differences between a multidimensional array "[,]" and an array of arrays "[][]" in C#?

What are the differences between multidimensional arrays double[,] and array of arrays double[][] in C#?
If there is a difference?
What is the best use for each one?
Arrays of arrays (jagged arrays) are faster than multidimensional arrays and can be used more effectively. Multidimensional arrays have nicer syntax.
If you write some simple code using jagged and multidimensional arrays and then inspect the compiled assembly with an IL disassembler you will see that the storage and retrieval from jagged (or single dimensional) arrays are simple IL instructions while the same operations for multidimensional arrays are method invocations which are always slower.
Consider the following methods:
static void SetElementAt(int[][] array, int i, int j, int value)
{
array[i][j] = value;
}
static void SetElementAt(int[,] array, int i, int j, int value)
{
array[i, j] = value;
}
Their IL will be the following:
.method private hidebysig static void SetElementAt(int32[][] 'array',
int32 i,
int32 j,
int32 'value') cil managed
{
// Code size 7 (0x7)
.maxstack 8
IL_0000: ldarg.0
IL_0001: ldarg.1
IL_0002: ldelem.ref
IL_0003: ldarg.2
IL_0004: ldarg.3
IL_0005: stelem.i4
IL_0006: ret
} // end of method Program::SetElementAt
.method private hidebysig static void SetElementAt(int32[0...,0...] 'array',
int32 i,
int32 j,
int32 'value') cil managed
{
// Code size 10 (0xa)
.maxstack 8
IL_0000: ldarg.0
IL_0001: ldarg.1
IL_0002: ldarg.2
IL_0003: ldarg.3
IL_0004: call instance void int32[0...,0...]::Set(int32,
int32,
int32)
IL_0009: ret
} // end of method Program::SetElementAt
When using jagged arrays you can easily perform operations such as row swapping and row resizing. Maybe in some cases multidimensional arrays are safer to use, but even Microsoft's FxCop recommends jagged arrays over multidimensional ones when you use it to analyse your projects.
A multidimensional array creates a nice linear memory layout while a jagged array implies several extra levels of indirection.
Looking up the value jagged[3][6] in a jagged array declared as var jagged = new int[10][] (with each row allocated as new int[5]) works like this:
Look up the element at index 3 (which is an array).
Look up the element at index 6 in that array (which is a value).
For each dimension there's an additional lookup in this case (an expensive memory access pattern).
A multidimensional array is laid out linearly in memory; the actual location is computed from the indexes. Given the array var mult = new int[10,30], the Length property of that multidimensional array returns the total number of elements, i.e. 10 * 30 = 300.
The Rank property of a jagged array is always 1, but a multidimensional array can have any rank. The GetLength method of any array can be used to get the length of each dimension. For the multidimensional array in this example mult.GetLength(1) returns 30.
Indexing into the multidimensional array is a single calculation: for the example above, mult[1,7] lives at flat offset 30 * 1 + 7 = 37, and the element is fetched from that index. This is a better memory access pattern because only one memory block is involved, starting at the base address of the array.
A multidimensional array therefore occupies one contiguous memory block, while a jagged array does not have to be rectangular: jagged[1].Length does not have to equal jagged[2].Length, which would be true for any multidimensional array.
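A tiny sketch (my own) of the properties mentioned above, using the same example sizes:

using System;

var mult = new int[10, 30];
Console.WriteLine(mult.Length);        // 300, the total element count, 10 * 30
Console.WriteLine(mult.Rank);          // 2
Console.WriteLine(mult.GetLength(1));  // 30, the length of the second dimension

var jagged = new int[10][];
Console.WriteLine(jagged.Rank);        // 1, a jagged array is just an array of arrays
Console.WriteLine(jagged.Length);      // 10, the number of inner arrays, not total elements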
Performance
Performance-wise, multidimensional arrays should be faster. A lot faster. But due to a really poor CLR implementation they are not.
23.084 16.634 15.215 15.489 14.407 13.691 14.695 14.398 14.551 14.252
25.782 27.484 25.711 20.844 19.607 20.349 25.861 26.214 19.677 20.171
5.050 5.085 6.412 5.225 5.100 5.751 6.650 5.222 6.770 5.305
The first row shows timings of jagged arrays, the second shows multidimensional arrays, and the third, well, that's how it should be. The program is shown below; FYI this was tested running Mono. (The Windows timings are vastly different, mostly due to the CLR implementation variations.)
On Windows, the timings of the jagged arrays are greatly superior, about the same as my own interpretation of what multidimensional array lookup should be like, see 'Single()'. Sadly the Windows JIT compiler is really stupid, and this unfortunately makes these performance discussions difficult; there are too many inconsistencies.
These are the timings I got on Windows, same deal here: the first row is jagged arrays, the second multidimensional, and the third my own implementation of multidimensional. Note how much slower this is on Windows compared to Mono.
8.438 2.004 8.439 4.362 4.936 4.533 4.751 4.776 4.635 5.864
7.414 13.196 11.940 11.832 11.675 11.811 11.812 12.964 11.885 11.751
11.355 10.788 10.527 10.541 10.745 10.723 10.651 10.930 10.639 10.595
Source code:
using System;
using System.Diagnostics;
static class ArrayPref
{
const string Format = "{0,7:0.000} ";
static void Main()
{
Jagged();
Multi();
Single();
}
static void Jagged()
{
const int dim = 100;
for(var passes = 0; passes < 10; passes++)
{
var timer = new Stopwatch();
timer.Start();
var jagged = new int[dim][][];
for(var i = 0; i < dim; i++)
{
jagged[i] = new int[dim][];
for(var j = 0; j < dim; j++)
{
jagged[i][j] = new int[dim];
for(var k = 0; k < dim; k++)
{
jagged[i][j][k] = i * j * k;
}
}
}
timer.Stop();
Console.Write(Format,
(double)timer.ElapsedTicks/TimeSpan.TicksPerMillisecond);
}
Console.WriteLine();
}
static void Multi()
{
const int dim = 100;
for(var passes = 0; passes < 10; passes++)
{
var timer = new Stopwatch();
timer.Start();
var multi = new int[dim,dim,dim];
for(var i = 0; i < dim; i++)
{
for(var j = 0; j < dim; j++)
{
for(var k = 0; k < dim; k++)
{
multi[i,j,k] = i * j * k;
}
}
}
timer.Stop();
Console.Write(Format,
(double)timer.ElapsedTicks/TimeSpan.TicksPerMillisecond);
}
Console.WriteLine();
}
static void Single()
{
const int dim = 100;
for(var passes = 0; passes < 10; passes++)
{
var timer = new Stopwatch();
timer.Start();
var single = new int[dim*dim*dim];
for(var i = 0; i < dim; i++)
{
for(var j = 0; j < dim; j++)
{
for(var k = 0; k < dim; k++)
{
single[i*dim*dim+j*dim+k] = i * j * k;
}
}
}
timer.Stop();
Console.Write(Format,
(double)timer.ElapsedTicks/TimeSpan.TicksPerMillisecond);
}
Console.WriteLine();
}
}
Simply put, multidimensional arrays are similar to a table in a DBMS.
An array of arrays (jagged array) lets each element hold another array of the same type, of variable length.
So, if you are sure that the structure of the data looks like a table (fixed rows/columns), you can use a multidimensional array. A jagged array has a fixed number of elements, and each element can hold an array of variable length.
E.g., pseudocode:
int[,] data = new int[2,2];
data[0,0] = 1;
data[0,1] = 2;
data[1,0] = 3;
data[1,1] = 4;
Think of the above as a 2x2 table:
1 | 2
3 | 4
int[][] jagged = new int[3][];
jagged[0] = new int[4] { 1, 2, 3, 4 };
jagged[1] = new int[2] { 11, 12 };
jagged[2] = new int[3] { 21, 22, 23 };
Think of the above as each row having variable number of columns:
1 | 2 | 3 | 4
11 | 12
21 | 22 | 23
Update .NET 6:
With the release of .NET 6 I decided it was a good time to revisit this topic. I rewrote the test code for new .NET and ran it with the requirement of each part running at least a second. The benchmark was done on AMD Ryzen 5600x.
Results? It's complicated. It seems that the single array is the most performant for small and large arrays (< ~25x25x25 and > ~200x200x200), with jagged arrays fastest in between. Unfortunately, from my testing, multi-dimensional arrays are by far the slowest option, at best performing twice as slowly as the fastest option. But! It depends on what you need the arrays for, because jagged arrays can take much longer to initialize: on a 50^3 cube the initialization took roughly 3 times longer than for the single-dimensional array, while multi-dimensional was only a little slower than single-dimensional.
The conclusion? If you need fast code, benchmark it yourself on the machine it's going to run on. CPU architecture can completely change the relative performance of each method.
Numbers!
(Each method line below shows Ticks/Iteration, then that figure scaled to the best method.)
Array size 1x1x1 (10,000,000 iterations):
Jagged: 0.15 4.28
Single: 0.035 1
Multi-dimensional: 0.77 22
Array size 10x10x10 (25,000 iterations):
Jagged: 15 1.67
Single: 9 1
Multi-dimensional: 56 6.2
Array size 25x25x25 (25,000 iterations):
Jagged: 157 1.3
Single: 120 1
Multi-dimensional: 667 5.56
Array size 50x50x50 (10,000 iterations):
Jagged: 1,140 1
Single: 2,440 2.14
Multi-dimensional: 5,210 4.57
Array size 100x100x100 (10,000 iterations):
Jagged: 9,800 1
Single: 19,800 2
Multi-dimensional: 41,700 4.25
Array size 200x200x200 (1,000 iterations):
Jagged: 161,622 1
Single: 175,507 1.086
Multi-dimensional: 351,275 2.17
Array size 500x500x500 (100 iterations):
Jagged: 4,057,413 1.5
Single: 2,709,301 1
Multi-dimensional: 5,359,393 1.98
Don't trust me? Run it yourself and verify.
Note: using a constant size seems to give jagged arrays an edge, but it is not significant enough to change the order in my benchmarks. In some instances I measured a ~7% decrease in performance when the size came from user input for jagged arrays, no difference for single arrays, and a very small difference (~1% or less) for multi-dimensional arrays. It is most prominent in the middle range, where jagged arrays take the lead.
using System.Diagnostics;
const string Format = "{0,7:0.000} ";
const int TotalPasses = 25000;
const int Size = 50;
Stopwatch timer = new();
var functionList = new List<Action> { Jagged, Single, SingleStandard, Multi };
Console.WriteLine("{0,5}{1,20}{2,20}{3,20}{4,20}", "Run", "Ticks", "ms", "Ticks/Instance", "ms/Instance");
foreach (var item in functionList)
{
var warmup = Test(item);
var run = Test(item);
Console.WriteLine($"{item.Method.Name}:");
PrintResult("warmup", warmup);
PrintResult("run", run);
Console.WriteLine();
}
static void PrintResult(string name, long ticks)
{
Console.WriteLine("{0,10}{1,20}{2,20}{3,20}{4,20}", name, ticks, string.Format(Format, (decimal)ticks / TimeSpan.TicksPerMillisecond), (decimal)ticks / TotalPasses, (decimal)ticks / TotalPasses / TimeSpan.TicksPerMillisecond);
}
long Test(Action func)
{
timer.Restart();
func();
timer.Stop();
return timer.ElapsedTicks;
}
static void Jagged()
{
for (var passes = 0; passes < TotalPasses; passes++)
{
var jagged = new int[Size][][];
for (var i = 0; i < Size; i++)
{
jagged[i] = new int[Size][];
for (var j = 0; j < Size; j++)
{
jagged[i][j] = new int[Size];
for (var k = 0; k < Size; k++)
{
jagged[i][j][k] = i * j * k;
}
}
}
}
}
static void Multi()
{
for (var passes = 0; passes < TotalPasses; passes++)
{
var multi = new int[Size, Size, Size];
for (var i = 0; i < Size; i++)
{
for (var j = 0; j < Size; j++)
{
for (var k = 0; k < Size; k++)
{
multi[i, j, k] = i * j * k;
}
}
}
}
}
static void Single()
{
for (var passes = 0; passes < TotalPasses; passes++)
{
var single = new int[Size * Size * Size];
for (var i = 0; i < Size; i++)
{
int iOffset = i * Size * Size;
for (var j = 0; j < Size; j++)
{
var jOffset = iOffset + j * Size;
for (var k = 0; k < Size; k++)
{
single[jOffset + k] = i * j * k;
}
}
}
}
}
static void SingleStandard()
{
for (var passes = 0; passes < TotalPasses; passes++)
{
var single = new int[Size * Size * Size];
for (var i = 0; i < Size; i++)
{
for (var j = 0; j < Size; j++)
{
for (var k = 0; k < Size; k++)
{
single[i * Size * Size + j * Size + k] = i * j * k;
}
}
}
}
}
Lesson learned: Always include CPU in benchmarks, because it makes a difference. Did it this time? I don't know but I suspect it might've.
Original answer:
I would like to update on this, because in .NET Core multi-dimensional arrays are faster than jagged arrays. I ran the tests from John Leidegren and these are the results on .NET Core 2.0 preview 2. I increased the dimension value to make any possible influences from background apps less visible.
Debug (code optimization disabled)
Running jagged
187.232 200.585 219.927 227.765 225.334 222.745 224.036 222.396 219.912 222.737
Running multi-dimensional
130.732 151.398 131.763 129.740 129.572 159.948 145.464 131.930 133.117 129.342
Running single-dimensional
91.153 145.657 111.974 96.436 100.015 97.640 94.581 139.658 108.326 92.931
Release (code optimization enabled)
Running jagged
108.503 95.409 128.187 121.877 119.295 118.201 102.321 116.393 125.499 116.459
Running multi-dimensional
62.292 60.627 60.611 60.883 61.167 60.923 62.083 60.932 61.444 62.974
Running single-dimensional
34.974 33.901 34.088 34.659 34.064 34.735 34.919 34.694 35.006 34.796
I looked into disassemblies and this is what I found
jagged[i][j][k] = i * j * k; needed 34 instructions to execute
multi[i, j, k] = i * j * k; needed 11 instructions to execute
single[i * dim * dim + j * dim + k] = i * j * k; needed 23 instructions to execute
I wasn't able to identify why single-dimensional arrays were still faster than multi-dimensional ones, but my guess is that it has to do with some optimization made in the CPU.
Preface: This comment is intended to address the answer provided by okutane regarding the performance difference between jagged arrays and multidimensional ones.
The assertion that one type is slower than the other because of the method calls isn't correct. One is slower than the other because of more complicated bounds-checking algorithms. You can easily verify this by looking, not at the IL, but at the compiled assembly. For example, on my 4.5 install, accessing an element (via pointer in edx) stored in a two-dimensional array pointed to by ecx with indexes stored in eax and edx looks like so:
sub eax,[ecx+10]
cmp eax,[ecx+08]
jae oops //jump to throw out of bounds exception
sub edx,[ecx+14]
cmp edx,[ecx+0C]
jae oops //jump to throw out of bounds exception
imul eax,[ecx+0C]
add eax,edx
lea edx,[ecx+eax*4+18]
Here, you can see that there's no overhead from method calls. The bounds checking is just very convoluted thanks to the possibility of non-zero indexes, which is a functionality not on offer with jagged arrays. If we remove the sub, cmp, and jmps for the non-zero cases, the code pretty much resolves to (x*y_max+y)*sizeof(ptr)+sizeof(array_header). This calculation is about as fast (one multiply could be replaced by a shift, since that's the whole reason we choose bytes to be sized as powers of two bits) as anything else for random access to an element.
Another complication is that there are plenty of cases where a modern compiler will optimize away the nested bounds-checking for element access while iterating over a single-dimension array. The result is code that basically just advances an index pointer over the contiguous memory of the array. Naive iteration over multi-dimensional arrays generally involves an extra layer of nested logic, so a compiler is less likely to optimize the operation. So, even though the bounds-checking overhead of accessing a single element amortizes out to constant runtime with respect to array dimensions and sizes, a simple test-case to measure the difference may take many times longer to execute.
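Here is a small sketch (mine) of the two iteration patterns being compared. Whether the JIT actually removes the inner checks depends on the runtime version, but the single-dimensional loop below is the canonical pattern it recognizes:

static long SumSingle(int[] data)
{
    long sum = 0;
    // The loop bound is the array's own Length, so the JIT can typically prove
    // the index is in range and drop the per-access bounds check.
    for (int i = 0; i < data.Length; i++)
        sum += data[i];
    return sum;
}

static long SumMulti(int[,] data)
{
    long sum = 0;
    // Accesses go through the multi-dimensional code path and are much less
    // likely to have their bounds checks eliminated.
    for (int i = 0; i < data.GetLength(0); i++)
        for (int j = 0; j < data.GetLength(1); j++)
            sum += data[i, j];
    return sum;
}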
Multidimensional arrays are rectangular n-dimensional matrices.
So int[,] square = new int[2,2] is a square 2x2 matrix, and int[,,] cube = new int[3,3,3] is a cube, a 3x3x3 matrix. The dimensions do not have to be equal.
Jagged arrays are just arrays of arrays: an array where each cell contains another array.
So multidimensional arrays are always rectangular, while jagged arrays may not be: each cell can contain an array of arbitrary length!
This might have been mentioned in the above answers, but not explicitly: with a jagged array you can use array[row] to refer to a whole row of data, but this is not possible with multi-dimensional arrays.
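For example (an illustrative snippet of my own):

int[][] jagged = { new[] { 1, 2, 3 }, new[] { 4, 5, 6 } };
int[] row = jagged[1];            // the whole second row, no copying

int[,] multi = { { 1, 2, 3 }, { 4, 5, 6 } };
// int[] row2 = multi[1];         // does not compile; you would have to copy element by element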
In addition to the other answers, note that a multidimensional array is allocated as one big chunky object on the heap. This has some implications:
Some multidimensional arrays will get allocated on the Large Object Heap (LOH) where their equivalent jagged array counterparts would otherwise not have.
The GC will need to find a single contiguous free block of memory to allocate a multidimensional array, whereas a jagged array might be able to fill in gaps caused by heap fragmentation... this isn't usually an issue in .NET because of compaction, but the LOH doesn't get compacted by default (you have to ask for it, and you have to ask every time you want it).
You'll want to look into <gcAllowVeryLargeObjects> for multidimensional arrays way before the issue will ever come up if you only ever use jagged arrays.
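The "you have to ask for it" part looks like this (a small sketch of mine; the setting exists since .NET Framework 4.5.1, applies to the next blocking full collection, and then resets itself):

using System;
using System.Runtime;

GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();   // the LOH is compacted during this collection, then the mode returns to Default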
I thought I'd chime in here from the future with some performance results from .NET 5, seeing as that will be the platform which everyone uses from now on.
These are the same tests that John Leidegren used (in 2009).
My results (.NET 5.0.1):
Debug:
(Jagged)
5.616 4.719 4.778 5.524 4.559 4.508 5.913 6.107 5.839 5.270
(Multi)
6.336 7.477 6.124 5.817 6.516 7.098 5.272 6.091 25.034 6.023
(Single)
4.688 3.494 4.425 6.176 4.472 4.347 4.976 4.754 3.591 4.403
Release(code optimizations on):
(Jagged)
2.614 2.108 3.541 3.065 2.172 2.936 1.681 1.724 2.622 1.708
(Multi)
3.371 4.690 4.502 4.153 3.651 3.637 3.580 3.854 3.841 3.802
(Single)
1.934 2.102 2.246 2.061 1.941 1.900 2.172 2.103 1.911 1.911
Ran on a 6-core 3.7 GHz AMD Ryzen 1600 machine.
It looks as though the performance ratio is still roughly the same. I'd say unless you're really optimizing hard, just use multi-dimensional arrays as the syntax is slightly easier to use.
Jagged arrays are arrays of arrays: arrays in which each row holds an array of its own.
These row arrays can have lengths different from those in the other rows.
Declaration and Allocation an Array of Arrays
The only difference in the declaration of the jagged arrays compared to the regular multidimensional array is that we do not have just one pair of brackets. With the jagged arrays, we have a pair of brackets per dimension. We allocate them this way:
int[][] exampleJaggedArray;
exampleJaggedArray = new int[2][];
exampleJaggedArray[0] = new int[5];
exampleJaggedArray[1] = new int[3];
Initializing an array of arrays:
int[][] exampleJaggedArray = {
new int[] {5, 7, 2},
new int[] {10, 20, 40},
new int[] {3, 25}
};
Memory Allocation
Jagged arrays are an aggregation of references. A jagged array does not directly contain any arrays; rather, it has elements that point to them. The sizes are not known up front, which is why the CLR just keeps references to the inner arrays. After we allocate memory for one array element of the jagged array, the reference starts pointing to the newly created block in dynamic memory.
The variable exampleJaggedArray is stored in the execution stack of the program and points to a block in the dynamic memory, which contains a sequence of three references to three other blocks in memory; each of them contains an array of integer numbers, the elements of the jagged array.
I am parsing .il files generated by ildasm to build a database of assemblies, classes, methods, and stored procedures, for use in doing a conversion. I came across the following, which broke my parsing.
.method private hidebysig instance uint32[0...,0...]
GenerateWorkingKey(uint8[] key,
bool forEncryption) cil managed
The book Expert .NET 2.0 IL Assembler, by Serge Lidin, Apress, published 2006, Chapter 8, Primitive Types and Signatures, pp. 149-150 explains.
<type>[] is termed a Vector of <type>,
<type>[<bounds> [<bounds>**] ] is termed an array of <type>
** means may be repeated, [ ] means optional.
Examples: Let <type> = int32.
1) int32[...,...] is a two-dimensional array of undefined lower bounds and sizes
2) int32[2...5] is a one-dimensional array of lower bound 2 and size 4.
3) int32[0...,0...] is a two-dimensional array of lower bounds 0 and undefined size.
Tom
Using a test based on the one by John Leidegren, I benchmarked the result using .NET 4.7.2, which is the relevant version for my purposes and thought I could share. I originally started with this comment in the dotnet core GitHub repository.
It appears that the performance varies greatly as the array size changes, at least on my setup: one Xeon processor with 4 physical / 8 logical cores.
w = initialize an array and put int i * j in it.
wr = do w, then in another loop set an int x to the value at [i,j].
As array size grows, multidimensional appears to outperform.
Size | rw | Method | Mean | Error | StdDev | Gen 0/1k Op | Gen 1/1k Op | Gen 2/1k Op | Allocated Memory/Op
1800*500 | w | Jagged | 2.445 ms | 0.0959 ms | 0.1405 ms | 578.1250 | 281.2500 | 85.9375 | 3.46 MB
1800*500 | w | Multi | 3.079 ms | 0.2419 ms | 0.3621 ms | 269.5313 | 269.5313 | 269.5313 | 3.43 MB
2000*4000 | w | Jagged | 50.29 ms | 3.262 ms | 4.882 ms | 5937.5000 | 3375.0000 | 937.5000 | 30.62 MB
2000*4000 | w | Multi | 26.34 ms | 1.797 ms | 2.690 ms | 218.7500 | 218.7500 | 218.7500 | 30.52 MB
2000*4000 | wr | Jagged | 55.30 ms | 3.066 ms | 4.589 ms | 5937.5000 | 3375.0000 | 937.5000 | 30.62 MB
2000*4000 | wr | Multi | 32.23 ms | 2.798 ms | 4.187 ms | 285.7143 | 285.7143 | 285.7143 | 30.52 MB
1000*2000 | wr | Jagged | 11.18 ms | 0.5397 ms | 0.8078 ms | 1437.5000 | 578.1250 | 234.3750 | 7.69 MB
1000*2000 | wr | Multi | 6.622 ms | 0.3238 ms | 0.4847 ms | 210.9375 | 210.9375 | 210.9375 | 7.63 MB
Update: the last two tests use double[,] instead of int[,]. The difference appears significant considering the errors. With int, the ratio of means for jagged vs multidimensional is between 1.53x and 1.86x; with doubles it is between 1.88x and 2.42x.
Size | rw | Method | Mean | Error | StdDev | Gen 0/1k Op | Gen 1/1k Op | Gen 2/1k Op | Allocated Memory/Op
1000*2000 | wr | Jagged | 26.83 ms | 1.221 ms | 1.790 ms | 3062.5000 | 1531.2500 | 531.2500 | 15.31 MB
1000*2000 | wr | Multi | 12.61 ms | 1.018 ms | 1.524 ms | 156.2500 | 156.2500 | 156.2500 | 15.26 MB
