Rapid stack allocations vs accessing a single heap allocation - c#

I have a situation where an array T[] must be copied quickly and passed to a function accepting a ReadOnlySpan<T>. I found two solutions to this problem, but I'm interested in which one gives the better performance.
Considerations:
The array has between 1 and 3 elements (so it's extremely small);
T is a readonly struct with a managed size of 16 bytes.
Option 1: I create a second array T[] globally (which will be heap allocated), copy the first array into it with the .CopyTo() extension method, and then pass down the second array.
Option 2: I create a Span<T> locally using stackalloc (which will be stack allocated) and then use .CopyTo<T>(), just like in the previous version.
The difference is that the second approach requires me to do this every time the function is called, whereas with the first approach the array is already initialized before the function is called even the first time.
Which approach do you think is the better one?
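A minimal sketch of the two variants, assuming a 16-byte readonly struct and a ReadOnlySpan<T>-accepting consumer; the Item struct and the method names are illustrative only, not the original code:

// Variant 1: a heap array allocated once and reused on every call.
// Variant 2: a stack buffer created per call with stackalloc.
public readonly struct Item
{
    public readonly long A;
    public readonly long B;
    public Item(long a, long b) { A = a; B = b; }
}

static readonly Item[] _reusableBuffer = new Item[3];

static void CallWithHeapBuffer(Item[] source)
{
    source.CopyTo(_reusableBuffer, 0);
    Consume(_reusableBuffer.AsSpan(0, source.Length));
}

static void CallWithStackBuffer(Item[] source)
{
    Span<Item> buffer = stackalloc Item[3];   // new stack buffer on every call
    source.CopyTo(buffer);
    Consume(buffer.Slice(0, source.Length));
}

static void Consume(ReadOnlySpan<Item> items) { /* ... */ }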

OK, so I ran some BenchmarkDotNet tests. The results were almost identical in terms of speed, with a small edge in favor of the stackalloc version, but only when I used it in combination with the [SkipLocalsInit] attribute. Total allocations were also higher for this version. So to summarize, I came to the conclusion that this is just a micro-optimization, and both variants are fine as long as you keep an eye on the stack size; otherwise the first version is 10x better.
And one last thing: if you thought ReadOnlySpan<T> is always passed by value, think again when looking at my example.


Why most of the data structures in generic collections use array despite of Large Object Heap fragmentation?

I can see that CoreCLR and CoreFX use arrays internally for most of the generic collections. What is the main driving factor for going with arrays, and how do they handle any side effects of LOH fragmentation?
What, other than arrays, should collections be?
More importantly, what, other than arrays, could collections be?
In practice, collections boil down to "arrays - and stuff we wrap around arrays, for ease of use":
The pure thing (arrays), which do offer some conveniences like bounds checks in C#/.NET
Self-growing arrays (Lists)
Two synchronized arrays that allow the mapping of any input to any element (Dictionary key/value pairs)
Three synchronized arrays: key, value and a hash value to quickly identify non-matching keys (Hashtable).
Under the hood - regardless of how hard .NET makes it to use pointers - it all boils down to some code doing C/C++ style pointer arithmetic to get the next element.
Edit 1: As I learned elsewhere, .NET Dictionaries are actually implemented as hash tables; the Hashtable class is just the pre-generics version. Object has a GetHashCode function with sensible default behavior which can be used, but also fully overridden.
Fragmentation-wise, the "best" would be an array of references. It can be as small as the reference width (a pointer, or slightly bigger), and the GC can move the instances around to defragment memory. Of course you then get the slight overhead of dereferencing rather than just counting up a pointer, so as usual it is a memory vs. speed tradeoff. However, this might go into speed-rant territory of detail.
Edit 2: As Markus Appel pointed out in the comments, there is something even better for fragmentation avoidance: linked lists. Even that single array of references - if you just make it big enough - will take quite some memory in one indivisible chunk, so it might run into object-size limits or array-indexer limits. A linked list will do neither. But as a result, the performance is like that of a disk that was never defragmented.
Generics are just a convenience to have type safety in collections and other places. They avoid having to use the dreaded Object as the type, which ruins all compile-time type safety. As far as I know, they add nothing else to this situation. List<string> works the same as a StringList would.
Array access is faster because it is linear storage. If arrays can solve a problem well enough, they are better for traversal than structures that must always work out where the next object is stored. For large data structures this performance benefit is amplified further.
Using arrays can cause fragmentation if used carelessly. In the general case though, the performance gains outweigh the cost.
When the buffer runs out, the collection allocates a new one with double the size. If the code inserts a lot of items without specifying a capacity, this results in log2(N) reallocations. If the code does specify a capacity though, even a very rough approximation, there may be no fragmentation issues at all.
Removal is another expensive case as the collection will have to move the items after the deleted item(s) to the left.
In general though, array storage offers far better performance than other storage structures, for reading, inserting and allocating memory alike. Deletions are rare in most cases.
For example, inserting N items in a linked list requires allocating N objects to hold that value and storing N pointers. That cost will be paid for every insertion, while the GC will have a lot more objects to track and collect. Inserting 100K items in a linked list would allocate 100K node objects that would need tracking.
With an array there won't be any allocations unless the buffer runs out. In the majority of cases insertion means simply writing to a buffer location and updating a count. When the buffer runs out there will be a single reallocation and an (expensive) copy operation. For 100K items, that's 17 allocations. In most cases, that's an acceptable cost.
To reduce or even get rid of allocations, the code can specify a capacity that's used as the initial buffer size. Specifying even a very rough estimate can reduce allocations a lot. Specifying 1024 as the initial capacity for 100K items would reduce reallocations to 7.
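A quick sketch of that capacity point, reusing the numbers from the example above:

// Without a capacity the buffer is reallocated and copied roughly 17 times
// for 100K items; pre-sizing it removes those reallocations entirely.
var withoutCapacity = new List<int>();
var withCapacity = new List<int>(100_000);

for (int i = 0; i < 100_000; i++)
{
    withoutCapacity.Add(i);
    withCapacity.Add(i);
}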

runtime optimization of multithreading code

Sorry for my last question, my code was so stupid.
My base situation is: I want to construct a state tree which has 8! items in the last state, so the total count of iterations is about 100,000 (8!*2 + 7! + 6! + ...).
It currently takes less than one second, and I need to construct it every time my artificial intelligence makes a move. Of course, alpha/beta search is a solution, but before thinking about that I want to optimize my code so I really have the best possible performance.
What I have already done:
Replaced every LINQ function with precalculations or collections with faster access (Dictionary); added more precalculations to skip whole operations; used some approximations to avoid heavy calculations; and used List constructors only when there's actually a change - if not, I just reuse the reference.
There will be more calculations coming, so I really need more ideas for reducing the work - maybe something about which collection is fastest for my purpose.
My code
It's about the BuildChildNodes function and the TryCollect function it calls. My constructor does some small precalculations. My state tree knows everything, even the cards which aren't actually shown.
As a comment pointed out: I'm not asking you to read and understand my code to provide content-wise advice. I'm asking about the functions, operators, data types and classes I'm using, and whether there is a replacement that runs a bit faster - e.g. whether there's a faster collection for my purpose, or a better way than the collection constructors for adding and removing elements afterwards.
Edit: OK, List is definitely the best type I can use. I tried arrays ([]) and even Dictionaries, and last of all I even tried LinkedLists - all with a significant loss.
I can see that RemoveAt() could be expensive, as it is proportional to the size of the list.
You can always use the Visual Studio performance profiler to find out where you should optimize your code the most.
If you can find a way to use fixed-size arrays that you allocate when your program starts, instead of dynamically allocated data structures like List, you will save a lot on memory allocation and management overhead.
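A hedged sketch of that idea; encoding a state as an int and the buffer size are assumptions for illustration, not taken from the poster's code:

// Allocate the working buffer once at startup and reuse it with a manual
// count, instead of paying List<T> growth and GC costs on every AI move.
const int MaxStates = 8 * 7 * 6 * 5 * 4 * 3 * 2;   // 8! = 40320, per the question
static readonly int[] _stateBuffer = new int[MaxStates];
static int _stateCount;

static void ResetStates() => _stateCount = 0;

static void AddState(int encodedState)
{
    _stateBuffer[_stateCount++] = encodedState;    // no allocation, no copying
}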

Is this a good fit for a class or struct (speed is more important than memory)?

Normally, I'd never have to ask myself whether a given scenario is better suited to a struct or a class, and frankly I did not ask that question before going the class route in this case. Now that I'm optimizing, things are getting a little confusing.
I'm writing a number crunching application that deals with extremely large numbers containing millions of Base10 digits. The numbers are (x,y) coordinates in 2D space. The main algorithm is pretty sequential and has no more than 200 instances of the class Cell (listed below) in memory at any given time. Each instance of the class takes up approximately 5MB of memory resulting in no more than 1GB in total peak memory for the application. The finished product will run on a 16 core machine with 20GB of RAM and no other applications hogging up the resources.
Here is the class:
// Inheritance is convenient but not absolutely necessary here.
public sealed class Cell : CellBase
{
    // Will contain numbers with millions of digits (512KB on average).
    public System.Numerics.BigInteger X = 0;
    // Will contain numbers with millions of digits (512KB on average).
    public System.Numerics.BigInteger Y = 0;

    public double XLogD = 0D;
    // Size of the array is roughly Base2Log(this.X).
    public byte[] XBytes = null;

    public double YLogD = 0D;
    // Size of the array is roughly Base2Log(this.Y).
    public byte[] YBytes = null;

    // Tons of other properties for scientific calculations on X and Y.
    // NOTE: 90% of the other fields and properties are structs (similar to BigInteger).

    public Cell(System.Numerics.BigInteger x, System.Numerics.BigInteger y)
    {
        this.X = x;
        this.XLogD = System.Numerics.BigInteger.Log(x, 2);
        this.XBytes = x.ToByteArray();
        this.Y = y;
        this.YLogD = System.Numerics.BigInteger.Log(y, 2);
        this.YBytes = y.ToByteArray();
    }
}
I chose to use a class instead of a struct simply because it 'felt' more natural. The number of fields, methods and memory all instinctively pointed to classes as opposed to structs. I further justified that by considering how much overhead temporary assignment calls would have since the underlying primary objects are instances of BigInteger, which itself is a struct.
The question is, have I chosen wisely here considering speed efficiency is the ultimate goal in this case?
Here's a bit about the algorithm in case it helps. In each iteration:
Sorting performed once on all 200 instances. 20% of execution time.
Calculating neighboring (x,y) coordinates of interest. 60% of execution time.
Parallel/Threading overhead for point 2 above. 10% of execution time.
Branching overhead. 10% of execution time.
The most expensive function: BigInteger.ToByteArray() (implementation).
This would be a better fit as a class, for many reasons, including:
It doesn't logically represent a single value
It's larger than 16 bytes
It's mutable
For details, see Choosing Between Classes and Structures.
In addition, I'd also suggest that it's better suited to a class given:
It contains reference types (arrays). Structures containing reference types are rarely a good design idea.
This is especially true, though, given what you're doing. If you were to use a struct, sorting would require copies of the entire struct, instead of just copies of the references. Method calls (unless passed by ref) would incur a huge overhead, as well, since you'd be copying all of the data.
Parallelization of items in a collection could also likely incur huge overhead, as the bounds checking on any array of the struct (i.e. if it's kept in a List<Cell> or similar) would cause bad false sharing, since all access into the list would access the memory at the start of the list.
I would recommend leaving this as a class, and, in addition, I would suggest trying to move the fields into properties, and making the class as immutable as possible. This will help keep your design clean, and less likely to be problematic when multithreading.
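As a rough sketch of that suggestion (not the poster's actual code), moving the fields into get-only properties set once in the constructor could look roughly like this:

public sealed class Cell : CellBase
{
    public System.Numerics.BigInteger X { get; }
    public System.Numerics.BigInteger Y { get; }
    public double XLogD { get; }
    public double YLogD { get; }
    public byte[] XBytes { get; }
    public byte[] YBytes { get; }

    public Cell(System.Numerics.BigInteger x, System.Numerics.BigInteger y)
    {
        X = x;
        XLogD = System.Numerics.BigInteger.Log(x, 2);
        XBytes = x.ToByteArray();
        Y = y;
        YLogD = System.Numerics.BigInteger.Log(y, 2);
        YBytes = y.ToByteArray();
    }
}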
It's hard to tell based on what you've written (we don't know how often you end up copying a value of type Cell for example) but I would strongly expect a class to be the correct approach here.
The number of methods in the class is irrelevant, but if it has lots of fields you need to consider the impact of copying all those fields any time you pass a value to another method (etc).
Fundamentally it doesn't feel like a value type to start with - but I understand that if performance is particularly important, the philosophical aspects may not be as interesting to you.
So yes, I think you've made the right decision, and I see no reason to believe anything else at the moment - but of course if you can easily change the decision and test it as a struct, that would be better than guesswork. Performance is remarkably difficult to predict accurately.
Since your class contains arrays which consume most of your memory, and you have only around 200 Cell instances, the memory consumption of the class itself is not an issue. You were right that a class felt more natural; it is indeed the right choice. My guess would be that the comparison of XBytes[] and YBytes[] limits your sorting time. It all depends on how big your arrays are and how you perform the comparison.
Let's start ignoring the performance matters, and work up to them.
Structs are ValueTypes and ValueTypes are value-types. Integers and DateTimes are value-types and a good comparison. There's no sense in talking about how one 1 is or isn't the same as another 1, or how one 2010-02-03T12:45:23.321Z is or isn't the same as another 2010-02-03T12:45:23.321Z. They may have different significance in different uses, but the fact that 1 == 1 and 1 != 2, and that 2010-02-03T12:45:23.321Z == 2010-02-03T12:45:23.321Z and 2010-02-03T12:45:23.321Z != 2931-03-05T09:21:29.43Z, is inherent to the nature of integers and date-times, and that's what makes them value-types.
That's the purest way of thinking about this. If it matches the above it's a value-type, if it doesn't, it's a reference type. Nothing else comes into it.
Extension 1: If an X can have an X then it has to be a reference type. Whether this logically follows from what was said above is debatable, but whatever you think on the matter you can't have a struct that has an instance of another one of itself as a member (directly or indirectly) in practice, so that's that.
Extension 2: Some say that the difficulties that come from mutable structs come from the above, and some do not. Again though, whatever you think on the matter, there are practical difficulties. A mutable struct can be useful in a few cases, but they cause enough confusion that they should be restricted to private cases as an optimisation rather than public cases as a matter of course.
Here comes the performance bit...
Value types and reference types have different characteristics in different cases that affect speed, memory use, and the way that memory use affects garbage collection, giving each different pros and cons as far as performance goes. How much attention we pay to that depends on how much we need to get down to that level. It's worth saying right now that the ways in which they differ tend to balance out to a win if you follow the above rule for deciding between struct and class, so if we start thinking beyond that, we're at least bordering on optimisation territory.
Optimisation level 1.
If a value type instance will contain more than 16 bytes per instance, it should probably be made a reference type. This is sometimes even stated as a "natural" difference rather than one of optimisation. Strictly, there's nothing in "value type" that entails "16 or fewer bytes", but it does tend to balance out that way.
Moving away from the simplistic "16 bytes" rule: the smaller it is, the faster it is to copy, and vice versa, so bending the rule for a 20-byte instance has less impact than bending it for a 200-byte instance.
Will you need to box and unbox a lot? Since the introduction of generics we've been able to avoid many of the cases where we would have boxed and unboxed with 1.0 and 1.1, so this isn't as big a deal as it once was, but if you do, it will hurt performance.
Optimisation level 2.
The fact that value types can be placed on the stack, placed directly in an array (rather than references to them), and be direct fields of a struct or class (again, rather than references to them) can make access to them and to their fields faster.
If you're going to create an array of them and all-zero values are a useful starting point for you, you get that immediately, whereas with reference types you get an array of nulls. This can make structs faster.
Edit: Something that extends from the above: if you are going to be iterating through arrays rapidly, then as well as the direct access giving a boost over following references, you'll be loading a couple of instances into the CPU cache at a time (64 bytes' worth on current x86-32 or x86-64/AMD64, 128 bytes' worth on IA-64). It has to be a pretty tight loop to matter, but there are cases where it does.
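A small illustration of the direct-storage and zero-default points; the types here are purely illustrative:

struct Point { public int X, Y; }

static void Demo()
{
    var points = new Point[10];   // ten usable (0,0) values, stored inline and contiguously
    var boxes  = new object[10];  // ten null references; each element needs its own heap allocation
    Console.WriteLine(points[3].X);        // 0 - valid immediately
    Console.WriteLine(boxes[3] == null);   // True
}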
Pretty much most "I went for struct rather than class for performance" comes down to either the first point, or the first in combination with the second.
Optimisation level 3.
If you will have cases where some of the values you are concerned with are duplicates of each other, and they are large in size, then with immutable instances (or mutable instances you simply never mutate once you start doing what follows), you can deliberately alias different references so that you save a lot of memory, because your e.g. 20 duplicate objects of 2 KiB in size are actually the same object, hence saving 38 KiB in that case. It can also make comparisons faster, because the cases where you can short-cut on identity are more frequent. This can only be done with reference types.
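A rough sketch of that deliberate aliasing, assuming the byte arrays are treated as immutable once pooled; the pool itself is illustrative only:

static readonly Dictionary<string, byte[]> _pool = new Dictionary<string, byte[]>();

static byte[] Intern(byte[] data)
{
    string key = Convert.ToBase64String(data);
    byte[] existing;
    if (_pool.TryGetValue(key, out existing))
        return existing;        // duplicates end up sharing one instance
    _pool[key] = data;
    return data;
}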
Optimisation level 4.
Structs that have arrays do, however, alias the contained array and could internally use the above technique, balancing out that point, though it's somewhat more involved.
Optimisation level X.
It doesn't matter how much thinking about these pros and cons comes to a particular answer if actually measuring the results comes to a different one. Since there are both pros and cons, it's always possible to get this wrong.
Thinking about levels 1 through 4, along with the differences between value and reference types aside from such optimisation concerns, I think you should go for a class.
Thinking about level X, I wouldn't be amazed if actually testing it proved me wrong. The best bit is: if it is arduous to change from class to struct (you make heavy use of aliasing or the possibility of a null value), then you can be pretty confident that doing so is a loss. If it isn't arduous, then you can just do so and measure! I'd strongly suggest measuring with a test that involves a real run rather than doing one thing 10,000 times - who cares if you can do a given operation 10,000 times a few seconds faster if you do a different operation 20 times more often in the real thing?
A struct can only contain an array-type field safely if either (1) the state of the struct depends upon the identity of the array rather than its contents (as is the case with ArraySegment), or (2) no reference to the array will ever be held by anything that might try to mutate it (typically, this means that the array field will be private, and the struct itself will create the array and perform all modifications that will ever be done to it, before storing a reference in the field).
I advocate using structs much more commonly than other people here, but the fact that your data storage thingie would have two array-type fields would seem a strong argument against using a struct.
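A rough sketch of case (2), under a hypothetical type name: the struct builds and owns the array and never hands the reference out:

public readonly struct DigitBlock
{
    private readonly byte[] _bytes;

    public DigitBlock(System.Numerics.BigInteger value)
    {
        // The only reference to the array is created here and never exposed.
        _bytes = value.ToByteArray();
    }

    public int Length => _bytes.Length;
    public byte this[int index] => _bytes[index];
}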

C# List<T>.ToArray performance is bad?

I'm using .NET 3.5 (C#) and I've heard that the performance of C# List<T>.ToArray is "bad", since it memory-copies all elements to form a new array. Is that true?
No, that's not true. Performance is good, since all it does is memory-copy all elements (*) to form a new array.
Of course it depends on what you define as "good" or "bad" performance.
(*) references for reference types, values for value types.
EDIT
In response to your comment, using Reflector is a good way to check the implementation (see below). Or just think for a couple of minutes about how you would implement it, and take it on trust that Microsoft's engineers won't come up with a worse solution.
public T[] ToArray()
{
    T[] destinationArray = new T[this._size];
    Array.Copy(this._items, 0, destinationArray, 0, this._size);
    return destinationArray;
}
Of course, "good" or "bad" performance only has a meaning relative to some alternative. If in your specific case, there is an alternative technique to achieve your goal that is measurably faster, then you can consider performance to be "bad". If there is no such alternative, then performance is "good" (or "good enough").
EDIT 2
In response to the comment: "No re-construction of objects?" :
No reconstruction for reference types. For value types the values are copied, which could loosely be described as reconstruction.
Reasons to call ToArray()
If the returned value is not meant to be modified, returning it as an array makes that fact a bit clearer.
If the caller is expected to perform many non-sequential accesses to the data, there can be a performance benefit to an array over a List<>.
If you know you will need to pass the returned value to a third-party function that expects an array.
Compatibility with calling functions that need to work with .NET version 1 or 1.1. These versions don't have the List<> type (or any generic types, for that matter).
Reasons not to call ToArray()
If the caller ever does need to add or remove elements, a List<> is absolutely required.
The performance benefits are not necessarily guaranteed, especially if the caller is accessing the data in a sequential fashion. There is also the additional step of converting from List<> to array, which takes processing time.
The caller can always convert the list to an array themselves.
taken from here
Yes, it's true that it does a memory copy of all elements. Is it a performance problem? That depends on your performance requirements.
A List contains an array internally to hold all the elements. The array grows if the capacity is no longer sufficient for the list. Any time that happens, the list will copy all elements into a new array. That happens all the time, and for most people that is no performance problem.
E.g. a list created with the default constructor starts with an empty buffer; the first .Add() allocates a buffer of 4 elements, and when you .Add() the 5th element it creates a new array of size 8, copies the 4 old values and adds the 5th.
The size difference is also the reason why ToArray() returns a new array instance instead of passing the private reference.
This is what Microsoft's official documentation says about List.ToArray's time complexity
The elements are copied using Array.Copy, which is an O(n) operation, where n is Count.
Then, looking at Array.Copy, we see that it usually does not clone the data but instead copies references:
If sourceArray and destinationArray are both reference-type arrays or are both arrays of type Object, a shallow copy is performed. A shallow copy of an Array is a new Array containing references to the same elements as the original Array. The elements themselves or anything referenced by the elements are not copied. In contrast, a deep copy of an Array copies the elements and everything directly or indirectly referenced by the elements.
So in conclusion, this is a pretty efficient way of getting an array from a list.
It creates new references in an array, but that's the only thing that method could and should do...
Performance has to be understood in relative terms. Converting a List to an array (or vice versa) involves copying the elements, and the cost of that will depend on the size of the collection. But you have to compare that cost to the other things your program is doing. How did you obtain the information to put into the list in the first place? If it was by reading from the disk, or a network connection, or a database, then an array copy in memory is very unlikely to make a detectable difference to the time taken.
For any kind of List/ICollection where it knows the length, it can allocate an array of exactly the right size from the start.
T[] destinationArray = new T[this._size];
Array.Copy(this._items, 0, destinationArray, 0, this._size);
return destinationArray;
If your source type is an IEnumerable (not a List/ICollection), then the source is essentially:
items = new TElement[4];
// ...
if (count == items.Length)  // no more space
{
    TElement[] newItems = new TElement[checked(count * 2)];
    Array.Copy(items, 0, newItems, 0, count);
    items = newItems;
}
It starts at size 4 and grows exponentially, doubling each time it runs out of space. Each time it doubles, it has to reallocate memory and copy the data over.
If we know the source data size, we can avoid this slight overhead. However, in most cases (e.g. array size <= 1024) it will execute so quickly that we don't even need to think about this implementation detail.
References: Enumerable.cs, List.cs (F12ing into them), Joe's answer
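A small sketch of that difference in practice; whether a given LINQ operator reports its count up front is an implementation detail, so treat this as illustrative:

// ToArray on a List<int>: the length is known, so a single array of the
// exact size is allocated and filled with one copy.
var list = new List<int>(Enumerable.Range(0, 100_000));
int[] fromList = list.ToArray();

// ToArray on a lazily evaluated sequence: the length is unknown, so an
// internal buffer starts small and doubles as it fills.
IEnumerable<int> lazy = Enumerable.Range(0, 100_000).Where(i => i % 2 == 0);
int[] fromLazy = lazy.ToArray();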

Is the C# compiler deciding to use stackalloc by itself?

I found a blog entry which suggests that sometimes the C# compiler may decide to put an array on the stack instead of the heap:
Improving Performance Through Stack Allocation (.NET Memory Management: Part 2)
This guy claims that:
The compiler will also sometimes decide to put things on the stack on its own. I did an experiment with TestStruct2 in which I allocated it in both an unsafe and a normal context. In the unsafe context the array was put on the heap, but in the normal context, when I looked into memory, the array had actually been allocated on the stack.
Can someone confirm that?
I was trying to repeat his example, but every time I tried, the array was allocated on the heap.
If the C# compiler can do such a trick without the 'unsafe' keyword, I'm especially interested in it. I have code that works on many small byte arrays (8-10 bytes long), so using the heap for each new byte[...] is a waste of time and memory (especially since each object on the heap has 8 bytes of overhead needed by the garbage collector).
EDIT: I just want to describe why it's important to me:
I'm writing a library that communicates with a Gemalto.NET smart card, which can have .NET code running inside it. When I call a method that returns something, the smart card returns 8 bytes that describe the exact Type of the return value. These 8 bytes are calculated using an MD5 hash and some byte array concatenations.
The problem is that when I receive an array that is not known to me, I must scan all types in all loaded assemblies and calculate those 8 bytes for each one until I find a match.
I don't know another way to find the type, so I'm trying to speed it up as much as possible.
Author of the linked-to article here.
It seems impossible to force stack allocation outside of an unsafe context. This is likely the case to prevent some classes of stack overflow condition.
Instead, I recommend using a memory recycler class which would allocate byte arrays as needed but also allow you to "turn them in" afterward for reuse. It's as simple as keeping a stack of unused byte arrays and, when the stack is empty, allocating new ones.
Stack<Byte[]> _byteStack = new Stack<Byte[]>();

Byte[] AllocateArray()
{
    Byte[] outArray;
    if (_byteStack.Count > 0)
        outArray = _byteStack.Pop();   // reuse a previously recycled buffer
    else
        outArray = new Byte[8];        // nothing to reuse, allocate a fresh one
    return outArray;
}

void RecycleArray(Byte[] inArray)
{
    _byteStack.Push(inArray);
}
If you are trying to match a hash with a type, it seems the best idea would be to use a Dictionary for fast lookups. In this case you could load all relevant types at startup; if this makes program startup too slow, you might want to consider caching them the first time each type is used.
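A minimal sketch of that lookup, assuming the 8-byte descriptor can be packed into a ulong key; the registration step and the hashing scheme are the poster's own and not shown here:

static readonly Dictionary<ulong, Type> _typesByDescriptor = new Dictionary<ulong, Type>();

static void RegisterType(Type type, byte[] descriptor)   // descriptor: the 8 bytes computed for this type
{
    _typesByDescriptor[BitConverter.ToUInt64(descriptor, 0)] = type;
}

static Type LookupType(byte[] descriptor)
{
    Type type;
    return _typesByDescriptor.TryGetValue(BitConverter.ToUInt64(descriptor, 0), out type) ? type : null;
}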
From your line:
I have a code that is working on many small byte arrays (8-10 bytes long)
Personally, I'd be more interested in allocating a spare buffer somewhere that different parts of your code can re-use (while processing the same block). Then you don't have any creation/GC to worry about. In most cases (where the buffer is used for very discrete operations) with a scratch buffer, you can even assume that it is "all yours" - i.e. every method that needs it can assume that it can start writing at zero.
I use this single-buffer approach in some binary serialization code (while encoding data); it is a big boost to performance. In my case, I pass a "context" object between the layers of serialization (that encapsulates the scratch-buffer, the output-stream (with some additional local buffering), and a few other oddities).
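A loose sketch of that context-object idea; the SerializationContext name and layout are made up for illustration:

sealed class SerializationContext
{
    // Reused by every layer; each caller may assume it owns it from index 0.
    public readonly byte[] Scratch = new byte[16];
    public System.IO.Stream Output;
}

static void WriteInt32(SerializationContext ctx, int value)
{
    ctx.Scratch[0] = (byte)value;
    ctx.Scratch[1] = (byte)(value >> 8);
    ctx.Scratch[2] = (byte)(value >> 16);
    ctx.Scratch[3] = (byte)(value >> 24);
    ctx.Output.Write(ctx.Scratch, 0, 4);   // no per-call array allocation
}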
System.Array (the class representing an array) is a reference type and lives on the heap. You can only have an array on the stack if you use unsafe code.
I can't see where it says otherwise in the article that you refer to. If you want to have a stack allocated array, you can do something like this:
decimal* stackAllocatedDecimals = stackalloc decimal[4];
Personally I wouldn't bother- how much performance do you think you will gain by this approach?
This CodeProject article might be useful to you though.
