Concurrent collection with fastest possible Add, Remove and Find the highest - C#

I am doing some heavy computations in C# .NET, and when doing these computations in a Parallel.For loop I must collect some data in a collection. Because of limited memory I can't collect all results, so I only store the best ones.
Those computations must be as fast as possible because they are already taking too much time. After a lot of optimizing I found that the slowest thing was my ConcurrentDictionary collection. I am wondering whether I should switch to something with faster add, remove and find-the-highest operations (perhaps a sorted collection) and just use locks for my main operation, or whether I can do something with the ConcurrentDictionary to speed it up a little.
Here is my actual code. I know it's bad because of this huge lock, but without it I seem to lose consistency and a lot of my remove attempts fail.
public class SignalsMultiValueConcurrentDictionary : ConcurrentDictionary<double, ConcurrentBag<Signal>>
{
    public int Limit { get; set; }
    public double WorstError { get; private set; }

    public SignalsDictionaryState TryAddSignal(double key, Signal signal, out Signal removed)
    {
        SignalsDictionaryState state;
        removed = null;

        if (this.Count >= Limit && signal.AbsoluteError > WorstError)
            return SignalsDictionaryState.NoAddedNoRemoved;

        lock (this)
        {
            if (this.Count >= Limit)
            {
                ConcurrentBag<Signal> signals;
                if (TryRemove(WorstError, out signals))
                {
                    removed = signals.FirstOrDefault();
                    state = SignalsDictionaryState.AddedAndRemoved;
                }
                else
                    state = SignalsDictionaryState.AddedFailedRemoved;
            }
            else
                state = SignalsDictionaryState.AddedNoRemoved;

            this.Add(key, signal);
            WorstError = Keys.Max();
        }
        return state;
    }

    private void Add(double key, Signal value)
    {
        ConcurrentBag<Signal> values;
        if (!TryGetValue(key, out values))
        {
            values = new ConcurrentBag<Signal>();
            this[key] = values;
        }
        values.Add(value);
    }
}
Note also that because I use the absolute error of the signal as the key, I sometimes (it should be very rare) store more than one value under one key.
The only operation used in my computations is TryAddSignal, because it does what I want: if I have more signals than the limit, it removes the signal with the highest error and adds the new signal.
Because I set the Limit property at the start of the computations, I don't need a resizable collection.
The main problem is that even without that huge lock, Keys.Max() is a little too slow. So maybe I need another collection?

Keys.Max() is the killer. That's O(N). There's no need for a dictionary if you do this.
You can't incrementally compute the max value because you are adding and removing. So you are better off using a data structure that is made for this. Trees usually are. The BCL has SortedList, SortedSet and SortedDictionary; SortedSet and SortedDictionary are backed by a red-black tree, and SortedSet exposes Min and Max directly.
Or, use a .NET collection library with a priority queue.
Bug: Add is racy. You might overwrite a non-empty collection.
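To illustrate the tree-based idea, here is a minimal sketch (not drop-in code: it assumes your Signal type exposes AbsoluteError, keeps everything behind one small lock, and breaks ties on equal errors with a sequence number so duplicates can coexist in the set):

public sealed class BoundedBestSignals
{
    private sealed class Entry
    {
        public double Error;
        public long Seq;
        public Signal Signal;
    }

    private sealed class ByErrorThenSeq : IComparer<Entry>
    {
        public int Compare(Entry x, Entry y)
        {
            int c = x.Error.CompareTo(y.Error);
            return c != 0 ? c : x.Seq.CompareTo(y.Seq);
        }
    }

    private readonly SortedSet<Entry> _set = new SortedSet<Entry>(new ByErrorThenSeq());
    private readonly object _sync = new object();
    private long _seq;

    public int Limit { get; private set; }

    public BoundedBestSignals(int limit) { Limit = limit; }

    // Keeps at most Limit signals with the smallest errors.
    // Returns false when the candidate is worse than everything kept.
    public bool TryAdd(double error, Signal signal, out Signal removed)
    {
        removed = null;
        lock (_sync)
        {
            if (_set.Count >= Limit)
            {
                var worst = _set.Max;   // O(log n) on the red-black tree
                if (error > worst.Error)
                    return false;
                _set.Remove(worst);     // evict the current worst, O(log n)
                removed = worst.Signal;
            }
            _set.Add(new Entry { Error = error, Seq = _seq++, Signal = signal });
            return true;
        }
    }
}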

The large lock statement is at least dubious. An easier improvement, given that you say Keys.Max() is slow, is to calculate the maximum value incrementally. You need a full recalculation only after removing a key:

// Inside TryAddSignal, on the removal path:
if (TryRemove(WorstError, out signals))
{
    WorstError = Keys.Max(); // full O(N) scan only when a key was removed
    // ...
}
// ...
// After an add, the maximum can be maintained incrementally:
WorstError = Math.Max(WorstError, key);

What I did in the end was to implement a heap based on a binary tree, as suggested by @usr. My final collection is not concurrent but synchronized (I used locks). I checked performance, though, and it does the job fast enough.
Here is pseudocode:
public class SynchronizedCollectionWithMaxOnTop
{
    double Max => _items[0].AbsoluteError;

    public ItemChangeState TryAdd(Item item, out Item removed)
    {
        ItemChangeState state;
        removed = null;

        if (_items.Count >= Limit && item.AbsoluteError > Max)
            return ItemChangeState.NoAddedNoRemoved;

        lock (this)
        {
            if (_items.Count >= Limit)
            {
                removed = Remove();
                state = ItemChangeState.AddedAndRemoved;
            }
            else
                state = ItemChangeState.AddedNoRemoved;

            Insert(item);
        }
        return state;
    }

    private void Insert(Item item)
    {
        _items.Add(item);
        HeapifyUp(_items.Count - 1);
    }

    private Item Remove()
    {
        var result = new Item(_items[0]);
        var lastIndex = _items.Count - 1;
        _items[0] = _items[lastIndex];
        _items.RemoveAt(lastIndex);
        HeapifyDown(0);
        return result;
    }
}
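The pseudocode calls HeapifyUp and HeapifyDown without showing them; for completeness, a standard max-heap version could look like this (a sketch assuming _items is a List<Item> with the largest AbsoluteError kept at index 0):

private void HeapifyUp(int i)
{
    // Parent of a 0-based heap node i sits at (i - 1) / 2.
    while (i > 0)
    {
        int parent = (i - 1) / 2;
        if (_items[i].AbsoluteError <= _items[parent].AbsoluteError)
            break;                  // heap property holds
        Swap(i, parent);
        i = parent;
    }
}

private void HeapifyDown(int i)
{
    // Children of node i sit at 2 * i + 1 and 2 * i + 2.
    while (true)
    {
        int left = 2 * i + 1, right = 2 * i + 2, largest = i;
        if (left < _items.Count && _items[left].AbsoluteError > _items[largest].AbsoluteError)
            largest = left;
        if (right < _items.Count && _items[right].AbsoluteError > _items[largest].AbsoluteError)
            largest = right;
        if (largest == i)
            break;                  // both children are smaller
        Swap(i, largest);
        i = largest;
    }
}

private void Swap(int a, int b)
{
    var tmp = _items[a];
    _items[a] = _items[b];
    _items[b] = tmp;
}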

Related

While updating a value in a ConcurrentDictionary, is it better to lock the dictionary or the value?

I am performing updates on a value I get from TryGetValue, and I would like to know which of these is better.
Option 1: Locking only the out value?
if (HubMemory.AppUsers.TryGetValue(ConID, out OnlineInfo onlineinfo))
{
    lock (onlineinfo)
    {
        onlineinfo.SessionRequestId = 0;
        onlineinfo.AudioSessionRequestId = 0;
        onlineinfo.VideoSessionRequestId = 0;
    }
}
Option 2: Locking the whole dictionary?
if (HubMemory.AppUsers.TryGetValue(ConID, out OnlineInfo onlineinfo))
{
    lock (HubMemory.AppUsers)
    {
        onlineinfo.SessionRequestId = 0;
        onlineinfo.AudioSessionRequestId = 0;
        onlineinfo.VideoSessionRequestId = 0;
    }
}
I'm going to suggest something different.
Firstly, you should be storing immutable types in the dictionary to avoid a lot of threading issues. As it is, any code could modify the contents of any item in the dictionary just by retrieving it and changing its properties.
Secondly, ConcurrentDictionary provides the TryUpdate() method to allow you to update values in the dictionary without having to implement explicit locking.
TryUpdate() requires three parameters: the key of the item to update, the updated item, and the original item that you got from the dictionary and then updated.
TryUpdate() then checks that the original has NOT been updated by comparing the value currently in the dictionary with the original that you pass to it. Only if it is the SAME does it actually update it with the new value and return true. Otherwise it returns false without updating it.
This allows you to detect and respond appropriately to cases where some other thread has changed the value of the item you're updating while you were updating it. You can either ignore this (in which case the first change gets priority) or try again until you succeed (in which case the last change gets priority). What you do depends on your situation.
Note that this requires that your type implements IEquatable<T>, since that is used by the ConcurrentDictionary to compare values.
Here's a sample console app that demonstrates this:
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

namespace Demo
{
    sealed class Test : IEquatable<Test>
    {
        public Test(int value1, int value2, int value3)
        {
            Value1 = value1;
            Value2 = value2;
            Value3 = value3;
        }

        public Test(Test other) // Copy ctor.
        {
            Value1 = other.Value1;
            Value2 = other.Value2;
            Value3 = other.Value3;
        }

        public int Value1 { get; }
        public int Value2 { get; }
        public int Value3 { get; }

        #region IEquatable<Test> implementation (generated using Resharper)

        public bool Equals(Test other)
        {
            if (other is null)
                return false;
            if (ReferenceEquals(this, other))
                return true;
            return Value1 == other.Value1 && Value2 == other.Value2 && Value3 == other.Value3;
        }

        public override bool Equals(object obj)
        {
            return ReferenceEquals(this, obj) || obj is Test other && Equals(other);
        }

        public override int GetHashCode()
        {
            unchecked
            {
                return (Value1 * 397) ^ Value2;
            }
        }

        public static bool operator ==(Test left, Test right)
        {
            return Equals(left, right);
        }

        public static bool operator !=(Test left, Test right)
        {
            return !Equals(left, right);
        }

        #endregion
    }

    static class Program
    {
        static void Main()
        {
            var dict = new ConcurrentDictionary<int, Test>();
            dict.TryAdd(0, new Test(1000, 2000, 3000));
            dict.TryAdd(1, new Test(4000, 5000, 6000));
            dict.TryAdd(2, new Test(7000, 8000, 9000));
            Parallel.Invoke(() => update(dict), () => update(dict));
        }

        static void update(ConcurrentDictionary<int, Test> dict)
        {
            for (int i = 0; i < 100000; ++i)
            {
                for (int attempt = 0; ; ++attempt)
                {
                    var original = dict[1];
                    var modified = new Test(original.Value1 + 1, original.Value2 + 1, original.Value3 + 1);
                    var updatedOk = dict.TryUpdate(1, modified, original);

                    if (updatedOk) // Updated OK so don't try again.
                        break;     // In some cases you might not care, so you would never try again.

                    Console.WriteLine($"dict.TryUpdate() returned false in iteration {i} attempt {attempt} on thread {Thread.CurrentThread.ManagedThreadId}");
                }
            }
        }
    }
}
There's a lot of boilerplate code there to support the IEquatable<T> implementation and also to support the immutability.
Fortunately, C# 9 has introduced the record type which makes immutable types much easier to implement. Here's the same sample console app that uses a record instead. Note that record types are immutable and also implement IEquatable<T> for you:
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

namespace System.Runtime.CompilerServices // Remove this if compiling with .Net 5
{                                         // This is to allow earlier versions of .Net to use records.
    class IsExternalInit {}
}

namespace Demo
{
    record Test(int Value1, int Value2, int Value3);

    static class Program
    {
        static void Main()
        {
            var dict = new ConcurrentDictionary<int, Test>();
            dict.TryAdd(0, new Test(1000, 2000, 3000));
            dict.TryAdd(1, new Test(4000, 5000, 6000));
            dict.TryAdd(2, new Test(7000, 8000, 9000));
            Parallel.Invoke(() => update(dict), () => update(dict));
        }

        static void update(ConcurrentDictionary<int, Test> dict)
        {
            for (int i = 0; i < 100000; ++i)
            {
                for (int attempt = 0; ; ++attempt)
                {
                    var original = dict[1];
                    var modified = original with
                    {
                        Value1 = original.Value1 + 1,
                        Value2 = original.Value2 + 1,
                        Value3 = original.Value3 + 1
                    };
                    var updatedOk = dict.TryUpdate(1, modified, original);

                    if (updatedOk) // Updated OK so don't try again.
                        break;     // In some cases you might not care, so you would never try again.

                    Console.WriteLine($"dict.TryUpdate() returned false in iteration {i} attempt {attempt} on thread {Thread.CurrentThread.ManagedThreadId}");
                }
            }
        }
    }
}
Note how much shorter record Test is compared to class Test, even though it provides the same functionality. (Also note that I added class IsExternalInit to allow records to be used with .Net versions prior to .Net 5. If you're using .Net 5, you don't need that.)
Finally, note that you don't need to make your class immutable. The code I posted for the first example will work perfectly well if your class is mutable; it just won't stop other code from breaking things.
Addendum 1:
You may look at the output and wonder why so many retry attempts are made when the TryUpdate() fails. You might expect it to only need to retry a few times (depending on how many threads are concurrently attempting to modify the data). The answer to this is simply that the Console.WriteLine() takes so long that it's much more likely that some other thread changed the value in the dictionary again while we were writing to the console.
We can change the code slightly to only print the number of attempts OUTSIDE the loop like so (modifying the second example):
static void update(ConcurrentDictionary<int, Test> dict)
{
    for (int i = 0; i < 100000; ++i)
    {
        int attempt = 0;

        while (true)
        {
            var original = dict[1];
            var modified = original with
            {
                Value1 = original.Value1 + 1,
                Value2 = original.Value2 + 1,
                Value3 = original.Value3 + 1
            };
            var updatedOk = dict.TryUpdate(1, modified, original);

            if (updatedOk) // Updated OK so don't try again.
                break;     // In some cases you might not care, so you would never try again.

            ++attempt;
        }

        if (attempt > 0)
            Console.WriteLine($"dict.TryUpdate() took {attempt} retries in iteration {i} on thread {Thread.CurrentThread.ManagedThreadId}");
    }
}
With this change, we see that the number of retry attempts drops significantly. This shows the importance of minimising the amount of time spent in code between TryUpdate() attempts.
Addendum 2:
As noted by Theodor Zoulias below, you could also use ConcurrentDictionary<TKey,TValue>.AddOrUpdate(), as the example below shows. This is probably a better approach, but it is slightly harder to understand:
static void update(ConcurrentDictionary<int, Test> dict)
{
    for (int i = 0; i < 100000; ++i)
    {
        int attempt = 0;

        dict.AddOrUpdate(
            1,                        // Key to update.
            key => new Test(1, 2, 3), // Create new element; won't actually be called for this example.
            (key, existing) =>        // Update existing element. Key not needed for this example.
            {
                ++attempt;
                return existing with
                {
                    Value1 = existing.Value1 + 1,
                    Value2 = existing.Value2 + 1,
                    Value3 = existing.Value3 + 1
                };
            }
        );

        if (attempt > 1)
            Console.WriteLine($"dict.AddOrUpdate() took {attempt - 1} retries in iteration {i} on thread {Thread.CurrentThread.ManagedThreadId}");
    }
}
If you just need to lock the dictionary value, for instance to make sure the 3 values are set at the same time, then it doesn't really matter what reference type you lock on, as long as it is a reference type, it's the same instance, and everything else that needs to read or modify those values also locks on the same instance.
You can read more on how the Microsoft CLR implementation deals with locking, and how and why locks work with reference types, here:
Why Do Locks Require Instances In C#?
If, on the other hand, you are trying to maintain consistency between the dictionary and the value, that is to say, to protect not only the internal consistency of the dictionary but also the setting and reading of the objects in it, then your lock is not appropriate at all.
You would need to place a lock around the entire statement (including the TryGetValue) and every other place where you add to the dictionary or read/modify the value. Once again, the object you lock on is not important, as long as it's consistent.
Note 1: it is normal to use a dedicated instance to lock on (i.e. some instantiated object), either static or an instance member depending on your needs, as there is less chance of shooting yourself in the foot.
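For example, a hypothetical sketch (using the question's names) where one dedicated lock object guards both the lookup and the writes, so no other code path can interleave between them:

private static readonly object _sync = new object();

void ResetSessions(int ConID)
{
    lock (_sync)
    {
        if (HubMemory.AppUsers.TryGetValue(ConID, out OnlineInfo onlineinfo))
        {
            // Both the lookup and the writes happen under the same lock.
            onlineinfo.SessionRequestId = 0;
            onlineinfo.AudioSessionRequestId = 0;
            onlineinfo.VideoSessionRequestId = 0;
        }
    }
}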
Note 2: there are a lot more ways to implement thread safety here, depending on your needs: whether you are happy with stale values, whether you need every ounce of performance, and how much effort and baked-in safety you want (minimal-lock coding is a discipline of its own). That is entirely up to you and your solution.
The first option (locking on the entry of the dictionary) is more efficient because it is unlikely to create significant contention for the lock. For that to happen, two threads would have to try to update the same entry at the same time. The second option (locking on the entire dictionary) is quite likely to create contention under heavy usage, because two threads will be serialized even if they try to update different entries concurrently.
The first option is also more in the spirit of using a ConcurrentDictionary<K,V> in the first place. If you are going to lock on the entire dictionary, you might as well use a normal Dictionary<K,V> instead. Regarding this dilemma, you may find this question interesting: When should I use ConcurrentDictionary and Dictionary?

Concurrent modification of double[][] elements without locking

I have a jagged double[][] array that may be modified concurrently by multiple threads. I should like to make it thread-safe, but if possible, without locks. The threads may well target the same element in the array, that is why the whole problem arises. I have found code to increment double values atomically using the Interlocked.CompareExchange method: Why is there no overload of Interlocked.Add that accepts Doubles as parameters?
My question is: will it stay atomic if there is a jagged array reference in Interlocked.CompareExchange? Your insights are much appreciated.
With an example:
public class Example
{
    double[][] items;

    public void AddToItem(int i, int j, double addendum)
    {
        double newCurrentValue = items[i][j];
        double currentValue;
        double newValue;
        SpinWait spin = new SpinWait();

        while (true)
        {
            currentValue = newCurrentValue;
            newValue = currentValue + addendum;
            // This is the step of which I am uncertain:
            newCurrentValue = Interlocked.CompareExchange(ref items[i][j], newValue, currentValue);
            if (newCurrentValue == currentValue) break;
            spin.SpinOnce();
        }
    }
}
Yes, it will still be atomic and thread-safe. Any calls to the same cell will be passing the same address-to-a-double. Details like whether it is in an array or a field on an object are irrelevant.
However, the line:
double newCurrentValue = items[i][j];
is not atomic - that could in theory give a torn value (especially on x86). That's actually OK in this case because in the torn-value scenario it will just hit the loop, count as a collision, and redo - this time using the known-atomic value from CompareExchange.
It seems that you eventually want to add some value to an array item.
I suppose there are only updates of values (the array itself stays the same piece of memory) and all updates are done via this AddToItem method.
So you have to re-read the updated value on each attempt (otherwise you lose changes done by other threads, or get an infinite loop):
public class Example
{
    double[][] items;

    public void AddToItem(int i, int j, double addendum)
    {
        var spin = new SpinWait();

        while (true)
        {
            var valueAtStart = Volatile.Read(ref items[i][j]);
            var newValue = valueAtStart + addendum;
            var oldValue = Interlocked.CompareExchange(ref items[i][j], newValue, valueAtStart);
            if (oldValue.Equals(valueAtStart))
                break;
            spin.SpinOnce();
        }
    }
}
Note that we have to use a volatile read of items[i][j]. Volatile.Read is used to avoid some unwanted optimizations that are allowed under the .NET memory model (see the ECMA-334 and ECMA-335 specifications).
Since we update the value atomically (via Interlocked.CompareExchange), it's enough to read items[i][j] via Volatile.Read.
If not all changes to this array are done in this method, then it's better to write a loop in which you create a local copy of the (inner) array, modify it, and then update the reference to the new array using Volatile.Read and Interlocked.CompareExchange, as sketched below.
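A minimal sketch of that copy-on-write approach (assuming the outer items array itself is never replaced and rows are only ever swapped wholesale):

public void AddToItemCopyOnWrite(int i, int j, double addendum)
{
    var spin = new SpinWait();
    while (true)
    {
        double[] oldRow = Volatile.Read(ref items[i]);
        double[] newRow = (double[])oldRow.Clone(); // private copy of the row
        newRow[j] += addendum;                      // modify the copy freely

        // Publish the new row only if no other thread replaced it meanwhile.
        if (ReferenceEquals(Interlocked.CompareExchange(ref items[i], newRow, oldRow), oldRow))
            break;

        spin.SpinOnce();
    }
}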

C# data structure: a list which can dynamically resize up to a given limit and allows fast access to any index

I'm implementing a memory system for an AI agent. It needs to have an internal list of state transitions which is capped at some number, say 10000.
If at capacity, adding a new memory should automatically remove the oldest memory.
Importantly, I should also need to be able to quickly access any item in this list.
A wrapper around Queue at first seemed obvious, but Queue does not allow fast access to an arbitrary element (that's O(n)).
Similarly, removing an item from the beginning of a List takes O(n).
LinkedLists allow fast additions and removals, but again do not allow quick access to every index.
An array would allow random access, but obviously it's not dynamically resizable, and deletion is problematic.
I've seen a HashMap suggested, but I'm unsure how that might be implemented.
Suggestions?
If you want the queue to be a fixed length, you could use a circular buffer which enables O(1) enqueue, dequeue and indexing operations and automatically overwrites old entries when the queue is full.
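For illustration, a minimal sketch of such a buffer (the type and member names are mine, not a BCL API):

public sealed class CircularBuffer<T>
{
    private readonly T[] _buffer;
    private int _head;  // index of the oldest element
    private int _count;

    public CircularBuffer(int capacity)
    {
        if (capacity <= 0) throw new ArgumentOutOfRangeException("capacity");
        _buffer = new T[capacity];
    }

    public int Count { get { return _count; } }

    // O(1): appends an item, overwriting the oldest entry when full.
    public void Add(T item)
    {
        _buffer[(_head + _count) % _buffer.Length] = item;
        if (_count == _buffer.Length)
            _head = (_head + 1) % _buffer.Length; // the oldest entry was just overwritten
        else
            _count++;
    }

    // O(1): index 0 is the oldest element, Count - 1 the newest.
    public T this[int index]
    {
        get
        {
            if (index < 0 || index >= _count) throw new ArgumentOutOfRangeException("index");
            return _buffer[(_head + index) % _buffer.Length];
        }
    }
}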
Try using a Dictionary with a LinkedList. The keys of the Dictionary are the indexes of the LinkedList nodes and the values of the Dictionary are of type LinkedListNode; that is, the LinkedList nodes.
The Dictionary would give you almost an O(1) on its operations and removing/adding LinkedListNode(s) to the beginning or end of a LinkedList is of O(1) as well.
Another alternative is to use a HashTable. However, in this case you have to know the capacity of the table beforehand (See Hashtable.Add Method) in order to get the O(1) performance:
If Count is less than the capacity of the Hashtable, this method is an O(1) operation. If the capacity needs to be increased to accommodate the new element, this method becomes an O(n) operation, where n is Count.
In the first solution, no matter what the capacity of the LinkedList or the Dictionary is, you still get almost O(1) from both. Of course that's a constant 3 or 4 sub-operations per add or remove, depending on how many operations you perform on the Dictionary and the LinkedList inside your memory class. Search access is always O(1) because you only use the Dictionary for it.
HashMap is from Java, so the closest equivalent is Dictionary (see C# Java HashMap equivalent), but I wouldn't say that this is the ultimate answer.
If you implement it as a Dictionary whose key == the content, then you can search the content in O(1). However, you cannot have duplicate keys, and because it is not ordered, you may not know which content is the oldest.
If you implement it as a Dictionary whose key == index and value == the content, searching for the content still takes O(n) because you don't know the location of the content.
A List or an Array will cost O(1) if you search the content by index reference, so please double-check your statement that it takes O(n).
If searching by index is sufficient, then the circular array/buffer @Lee mentioned is good enough.
Otherwise, similar to a DB, you might want to maintain two separate structures: one for storing the data (a circular array) and one for searching (a hash).
EDIT: @Lee has it right. A circular buffer seems to give you what you want. Answer left in place though.
I think the data structure you want might be a priority queue -- it depends on what you mean by 'quickly access any item'. If you mean 'able to enumerate all items in O(N)', then a priority queue fits the bill. If you mean 'enumerate the list in historical order', then it won't.
Assuming you need these operations:
add an item and associate it with a time
remove the oldest item
enumerate all existing items in arbitrary order
Then you could easily extend this priority queue implementation I wrote a little while ago.
You'll want to implement IEnumerable as a loop through the T[] data array from 0 to cursor. This will give you your enumeration.
Implement a GetItem(i) function which returns this.data[i] as long as i <= cursor.
Implement an automatic size limit by putting this into the Push() method:

if (queue.Size >= 10000) {
    queue.Pop();
}

I think this is O(log n) for push and pop, and O(N) to enumerate ALL items, or O(1) to fetch ANY single item by index, so long as you don't need them in order.
using System;

namespace Mindfire.DataStructures
{
    public class PiorityQueue<T>
    {
        private int[] priorities;
        private T[] data;
        private int cursor;
        private int capacity;

        public int Size
        {
            get
            {
                return cursor + 1;
            }
        }

        public PiorityQueue(int capacity)
        {
            this.cursor = -1;
            this.capacity = capacity;
            this.priorities = new int[this.capacity];
            this.data = new T[this.capacity];
        }

        public T Pop()
        {
            if (this.Size == 0)
            {
                throw new InvalidOperationException($"The {this.GetType().Name} is Empty");
            }

            var result = this.data[0];
            this.data[0] = this.data[cursor];
            this.priorities[0] = this.priorities[cursor];
            this.cursor--;

            // Sift down: in a 0-based heap the children of node i sit at
            // 2 * i + 1 and 2 * i + 2; always swap with the larger child,
            // otherwise the heap property can be broken.
            var loc = 0;
            while (true)
            {
                var l = loc * 2 + 1;
                var r = loc * 2 + 2;
                var largest = loc;

                if (l <= cursor && this.priorities[largest] < this.priorities[l])
                {
                    largest = l;
                }
                if (r <= cursor && this.priorities[largest] < this.priorities[r])
                {
                    largest = r;
                }
                if (largest == loc)
                {
                    break;
                }

                Swap(loc, largest);
                loc = largest;
            }

            return result;
        }

        public void Push(int priority, T v)
        {
            this.cursor++;
            if (this.cursor == this.capacity)
            {
                Resize(this.capacity * 2);
            }

            this.data[this.cursor] = v;
            this.priorities[this.cursor] = priority;

            // Sift up: swap the new item with its parent while it is bigger,
            // walking up one level at a time.
            var child = this.cursor;
            while (child > 0)
            {
                var parent = (child - 1) / 2;
                if (this.priorities[parent] >= this.priorities[child])
                {
                    break;
                }
                this.Swap(parent, child);
                child = parent;
            }
        }

        private void Swap(int a, int b)
        {
            if (a == b) { return; }
            var data = this.data[b];
            var priority = this.priorities[b];
            this.data[b] = this.data[a];
            this.priorities[b] = this.priorities[a];
            this.priorities[a] = priority;
            this.data[a] = data;
        }

        private void Resize(int newCapacity)
        {
            var newPriorities = new int[newCapacity];
            var newData = new T[newCapacity];
            this.priorities.CopyTo(newPriorities, 0);
            this.data.CopyTo(newData, 0);
            this.data = newData;
            this.priorities = newPriorities;
            this.capacity = newCapacity;
        }

        public PiorityQueue() : this(1)
        {
        }

        public T Peek()
        {
            if (this.cursor >= 0)
            {
                return this.data[0];
            }
            else
            {
                return default(T);
            }
        }

        public void Push(T item, int priority)
        {
            // Convenience overload (left empty in the original post).
            Push(priority, item);
        }
    }
}

How to avoid object allocations? How to reuse objects instead of allocating them?

My financial software constantly processes almost the same objects. For example, I have data like this coming in online:
HP 100 1
HP 100 2
HP 100.1 1
etc.
I get about 1000 updates every second.
Each update is stored in an object, but I do not want to allocate these objects on the fly; I want to improve latency.
I use objects only for a short period of time: I receive them, apply them, and free them. Once an object is freed it can actually be reused for another pack of data.
So I need some storage (likely a ring buffer) that allocates the required number of objects once and then allows me to "obtain" and "free" them. What is the best way to do that in C#?
Each object has an id, and I assign ids sequentially and free them sequentially too.
For example, I receive ids 1, 2 and 3, then I free 1, 2, 3. So any FIFO collection would work, but I'm looking for a library class that covers the required functionality.
I.e. I need a FIFO collection that does not allocate objects but reuses them and allows them to be reconfigured.
Update:
I've added my implementation of what I want. This is untested code and probably has bugs.
The idea is simple: the writer should call the Obtain/Commit methods, and the reader should call the TryGet method. Reader and writer can access this structure from different threads:
public sealed class ArrayPool<T> where T : class
{
    readonly T[] array;
    private readonly uint MASK;

    private volatile uint curWriteNum;
    private volatile uint curReadNum;

    public ArrayPool(uint length = 1024) // length must be a power of 2
    {
        if (length == 0 || (length & (length - 1)) != 0)
            throw new ArgumentOutOfRangeException("length");
        array = new T[length];
        MASK = length - 1;
    }

    /// <summary>
    /// TryGet() itself is not thread safe and should be called from one thread.
    /// However TryGet() and Obtain/Commit can be called from different threads.
    /// </summary>
    public T TryGet()
    {
        if (curReadNum == curWriteNum)
        {
            return null;
        }
        T result = array[curReadNum & MASK];
        curReadNum++;
        return result;
    }

    public T Obtain()
    {
        return array[curWriteNum & MASK];
    }

    public void Commit()
    {
        curWriteNum++;
    }
}
Comments about my implementation are welcome; perhaps some library class can replace this simple one?
I don't think you should leap at this, as per my comments on the question - however, a simple approach would be something like:
public sealed class MicroPool<T> where T : class
{
    readonly T[] array;

    public MicroPool(int length = 10)
    {
        if (length <= 0) throw new ArgumentOutOfRangeException("length");
        array = new T[length];
    }

    public T TryGet()
    {
        T item;
        for (int i = 0; i < array.Length; i++)
        {
            if ((item = Interlocked.Exchange(ref array[i], null)) != null)
                return item;
        }
        return null;
    }

    public void Recycle(T item)
    {
        if (item == null) return;
        for (int i = 0; i < array.Length; i++)
        {
            if (Interlocked.CompareExchange(ref array[i], item, null) == null)
                return;
        }
        using (item as IDisposable) { } // cleanup if needed
    }
}
If the loads come in burst, you may be able to use the GC's latency modes to offset the overhead by delaying collects. This is not a silver bullet, but in some cases it can be very helpful.
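For instance, a sketch of raising the GC latency mode around a burst and restoring it afterwards (GCSettings lives in System.Runtime; ProcessBurst is a hypothetical stand-in for your own burst handling):

using System.Runtime;

GCLatencyMode oldMode = GCSettings.LatencyMode;
try
{
    // Ask the GC to avoid blocking collections while the burst is processed.
    GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;
    ProcessBurst();
}
finally
{
    GCSettings.LatencyMode = oldMode; // always restore the previous mode
}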
I am not sure if this is what you need, but you could always make a pool of objects that are going to be used. Initialize a List of the object type; when you need an object, remove it from the list, and add it back when you are done with it.
http://www.codeproject.com/Articles/20848/C-Object-Pooling is a good start.
Hope I've helped, even if only a little :)
If you are just worried about the time taken for the GC to run, then don't be - it can't be beaten by anything you can do yourself.
However, if your objects' constructors do some work it might be quicker to cache them.
A fairly straightforward way to do this is to use a ConcurrentBag
Essentially what you do is to pre-populate it with a set of objects using ConcurrentBag.Add() (that is if you want - or you can start with it empty and let it grow).
Then when you need a new object you use ConcurrentBag.TryTake() to grab an object.
If TryTake() fails then you just create a new object and use that instead.
Regardless of whether you grabbed an object from the bag or created a new one, once you're done with it you just put that object back into the bag using ConcurrentBag.Add()
Generally your bag will get to a certain size but no larger (but you might want to instrument things just to check it).
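For illustration, a minimal sketch of this pattern (the pool type and its Take/Return names are mine, not a library API):

using System.Collections.Concurrent;

public sealed class BagPool<T> where T : class, new()
{
    private readonly ConcurrentBag<T> _bag = new ConcurrentBag<T>();

    // Reuse a pooled instance if one is available; otherwise allocate.
    public T Take()
    {
        T item;
        return _bag.TryTake(out item) ? item : new T();
    }

    // Done with the object: put it back for the next caller.
    public void Return(T item)
    {
        _bag.Add(item);
    }
}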
In any case, I would always do some timings to see if changes like this actually make any difference. Unless the object constructors are doing a fair bit of work, chances are it won't make much difference.

Is one of these for loops faster than the other?

for (var keyValue = 0; keyValue < dwhSessionDto.KeyValues.Count; keyValue++)
{...}

var count = dwhSessionDto.KeyValues.Count;
for (var keyValue = 0; keyValue < count; keyValue++)
{...}
I know there's a difference between the two, but is one of them faster than the other? I would think the second is faster.
Yes, the first version is much slower. After all, I'm assuming you're dealing with types like this:
public class SlowCountProvider
{
    public int Count
    {
        get
        {
            Thread.Sleep(1000);
            return 10;
        }
    }
}

public class KeyValuesWithSlowCountProvider
{
    public SlowCountProvider KeyValues
    {
        get { return new SlowCountProvider(); }
    }
}
Here, your first loop will take ~10 seconds, whereas your second loop will take ~1 second.
Of course, you might argue that the assumption that you're using this code is unjustified - but my point is that the right answer will depend on the types involved, and the question doesn't state what those types are.
Now if you're actually dealing with a type where accessing KeyValues and Count is cheap (which is quite likely) I wouldn't expect there to be much difference. Mind you, I'd almost always prefer to use foreach where possible:
foreach (var pair in dwhSessionDto.KeyValues)
{
    // Use pair here
}
That way you never need the count. But then, you haven't said what you're trying to do inside the loop either. (Hint: to get more useful answers, provide more information.)
It depends how difficult it is to compute dwhSessionDto.KeyValues.Count. If it's just a simple field read, the speed of each version will be the same. However, if the Count value needs to be calculated, it will be calculated on every iteration and therefore impede performance.
EDIT -- here's some code to demonstrate that the condition is always re-evaluated:
public class Temp
{
    public int Count { get; set; }
}

static void Main(string[] args)
{
    var t = new Temp() { Count = 5 };
    for (int i = 0; i < t.Count; i++)
    {
        Console.WriteLine(i);
        t.Count--;
    }
    Console.ReadLine();
}
The output is 0, 1, 2 only!
See comments for reasons why this answer is wrong.
If there is a difference, it's the other way round: indeed, the first one might be faster. That's because the compiler recognizes that you are iterating from 0 to the end of the array, and it can therefore elide bounds checks within the loop (i.e. when you access dwhSessionDto.KeyValues[i]).
However, I believe the compiler only applies this optimization to arrays so there probably will be no difference here.
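For example, the JIT can typically elide the per-access bounds check in this kind of loop, because the condition tests the array's own Length (an illustrative snippet; this applies to arrays, not to List<T>):

double[] values = new double[1000];
double sum = 0;
for (int i = 0; i < values.Length; i++)
{
    sum += values[i]; // the JIT knows i is within bounds here
}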
It is impossible to say without knowing the implementation of dwhSessionDto.KeyValues.Count and the loop body.
Assume a global variable bool foo = false; and then the following implementations:
/* Loop body... */
{
    if (foo) Thread.Sleep(1000);
}

/* ... */
public int Count
{
    get
    {
        foo = !foo;
        return 10;
    }
}
/* ... */
Now, the first loop will perform approximately twice as fast as the second ;D
However, assuming non-moronic implementation, the second one is indeed more likely to be faster.
No. There is no performance difference between these two loops; with JIT and code optimization, it does not make any difference.
There is no difference, but if you think there is, could you please post your findings? If you look at the implementation of item insertion in Dictionary using Reflector:
private void Insert(TKey key, TValue value, bool add)
{
    int freeList;
    if (key == null)
    {
        ThrowHelper.ThrowArgumentNullException(ExceptionArgument.key);
    }
    if (this.buckets == null)
    {
        this.Initialize(0);
    }
    int num = this.comparer.GetHashCode(key) & 0x7fffffff;
    int index = num % this.buckets.Length;
    for (int i = this.buckets[index]; i >= 0; i = this.entries[i].next)
    {
        if ((this.entries[i].hashCode == num) && this.comparer.Equals(this.entries[i].key, key))
        {
            if (add)
            {
                ThrowHelper.ThrowArgumentException(ExceptionResource.Argument_AddingDuplicate);
            }
            this.entries[i].value = value;
            this.version++;
            return;
        }
    }
    if (this.freeCount > 0)
    {
        freeList = this.freeList;
        this.freeList = this.entries[freeList].next;
        this.freeCount--;
    }
    else
    {
        if (this.count == this.entries.Length)
        {
            this.Resize();
            index = num % this.buckets.Length;
        }
        freeList = this.count;
        this.count++;
    }
    this.entries[freeList].hashCode = num;
    this.entries[freeList].next = this.buckets[index];
    this.entries[freeList].key = key;
    this.entries[freeList].value = value;
    this.buckets[index] = freeList;
    this.version++;
}
count is an internal member of this class which is incremented each time you insert an item into the dictionary, so I believe that there is no difference at all.
The second version can be faster, sometimes. The point is that the condition is re-evaluated after every iteration, so if, for example, the getter of Count actually counts the elements of an IEnumerable or interrogates a database, this will slow things down.
So I'd say that if you don't affect the value of Count inside the for loop, the second version is safer.
