I am looking at the Roslyn September 2012 CTP with Reflector, and I noticed that the ChildSyntaxList struct has the following:
public struct ChildSyntaxList : IEnumerable<SyntaxNodeOrToken>
{
    private readonly SyntaxNode node;
    private readonly int count;

    public Enumerator GetEnumerator()
    {
        return node == null ? new Enumerator() : new Enumerator(node, count);
    }

    IEnumerator<SyntaxNodeOrToken> IEnumerable<SyntaxNodeOrToken>.GetEnumerator()
    {
        return node == null
            ? SpecializedCollections.EmptyEnumerator<SyntaxNodeOrToken>()
            : new EnumeratorImpl(node, count);
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return node == null
            ? SpecializedCollections.EmptyEnumerator<SyntaxNodeOrToken>()
            : new EnumeratorImpl(node, count);
    }

    public struct Enumerator
    {
        internal Enumerator(SyntaxNode node, int count)
        {
            /* logic */
        }

        public SyntaxNodeOrToken Current { get { /* logic */ } }

        public bool MoveNext()
        {
            /* logic */
        }

        public void Reset()
        {
            /* logic */
        }
    }

    private class EnumeratorImpl : IEnumerator<SyntaxNodeOrToken>
    {
        private Enumerator enumerator;

        internal EnumeratorImpl(SyntaxNode node, int count)
        {
            enumerator = new Enumerator(node, count);
        }

        public SyntaxNodeOrToken Current { get { return enumerator.Current; } }

        object IEnumerator.Current { get { return enumerator.Current; } }

        public void Dispose()
        {
        }

        public bool MoveNext()
        {
            return enumerator.MoveNext();
        }

        public void Reset()
        {
            enumerator.Reset();
        }
    }
}
That is, there is a GetEnumerator method which returns a struct.
It looks like:
using a struct is a performance gain, similar to the BCL List<T>.Enumerator struct, as noted in this answer; and
the struct does not implement IDisposable so as not to have to worry about bugs that could arise from doing so, as noted on Eric Lippert's blog.
However, unlike the BCL List<T> class, there is a nested EnumeratorImpl class. Is the purpose of this to
avoid having a disposable struct, and
avoid boxing within the explicitly implemented IEnumerable<SyntaxNodeOrToken>.GetEnumerator and IEnumerable.GetEnumerator methods?
Are there any other reasons?
None come to mind. You seem to have accurately described the purposes of this rather odd implementation of the sequence pattern.
I hasten to add: Roslyn is an unusual .NET application in its complexity, in its performance requirements, and in the number of objects that it generates. A compiler that analyzes programs with thousands of files, millions of lines and tens of millions of characters while the user is typing has to do some pretty unusual things to ensure that it does not overwhelm the garbage collector. Roslyn therefore uses pooling strategies, mutable value types, and other out-of-the-mainstream practices to help achieve these performance goals. I do not recommend taking on the expense and difficulty associated with these practices unless you have empirical evidence identifying a serious performance problem that these practices mitigate. Just because this code was written by the C# compiler team does not mean that this is the gold standard for how you should be writing your mainstream business objects.
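To make the mechanics concrete, here is a minimal, self-contained sketch of the same pattern (IntRange and all of its members are illustrative names of mine, not Roslyn's code). foreach binds to the public, pattern-based GetEnumerator() and never boxes the struct; only callers going through IEnumerable<T> pay for a heap allocation:

using System;
using System.Collections;
using System.Collections.Generic;

public struct IntRange : IEnumerable<int>
{
    private readonly int count;

    public IntRange(int count) { this.count = count; }

    // foreach binds to this pattern-based method, so iterating an IntRange
    // directly uses the struct enumerator and allocates nothing.
    public Enumerator GetEnumerator() { return new Enumerator(count); }

    // Only callers going through the interfaces pay for a heap-allocated
    // wrapper around the struct enumerator.
    IEnumerator<int> IEnumerable<int>.GetEnumerator() { return new EnumeratorImpl(count); }
    IEnumerator IEnumerable.GetEnumerator() { return new EnumeratorImpl(count); }

    public struct Enumerator // no interfaces, no IDisposable: nothing to box
    {
        private readonly int count;
        private int index;

        internal Enumerator(int count) { this.count = count; index = -1; }

        public int Current { get { return index; } }
        public bool MoveNext() { return ++index < count; }
    }

    private class EnumeratorImpl : IEnumerator<int>
    {
        private Enumerator enumerator; // deliberately a mutable struct field

        internal EnumeratorImpl(int count) { enumerator = new Enumerator(count); }

        public int Current { get { return enumerator.Current; } }
        object IEnumerator.Current { get { return enumerator.Current; } }
        public bool MoveNext() { return enumerator.MoveNext(); }
        public void Reset() { throw new NotSupportedException(); }
        public void Dispose() { }
    }
}

// foreach (var i in new IntRange(3)) Console.WriteLine(i);  // prints 0 1 2, no boxing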
According to the Albahari brothers (page 273), IEnumerable is used because:
By defining a single method returning an enumerator, IEnumerable provides flexibility in that:
-> the iteration logic can be farmed off to another class (understood)
-> moreover, it means that several consumers can enumerate the collection at once without interfering with each other (not understood)
I am not able to understand the second point!
How does using IEnumerable instead of IEnumerator enable multiple consumers to enumerate the collection at once?
IEnumerable defines a single method, GetEnumerator(), which returns an IEnumerator. Each time the method is called, a new IEnumerator is returned, with its own state. In this way, multiple threads can iterate over the same collection without any danger of one thread changing the current position of another.
If a collection implements IEnumerator itself, then it can effectively be iterated over by only a single thread at a time. Consider the following code:
public class EnumeratorList : IEnumerator
{
    private object[] _list = new object[10];
    private int _currentIndex = -1;

    public object Current { get { return _list[_currentIndex]; } }

    public bool MoveNext()
    {
        return ++_currentIndex < 10;
    }

    public void Reset()
    {
        _currentIndex = -1;
    }
}
Given that implementation, if two threads attempt to loop through the EnumeratorList at the same time, they will get interleaved results, and will not see the entire list.
If we refactor it into an IEnumerable, multiple threads can access the same list without such issues.
public class EnumerableList : IEnumerable
{
    private object[] _list = new object[10];

    public IEnumerator GetEnumerator()
    {
        return new ListEnumerator(this);
    }

    private object this[int i]
    {
        get { return _list[i]; }
    }

    private class ListEnumerator : IEnumerator
    {
        private EnumerableList _list;
        private int _currentIndex = -1;

        public ListEnumerator(EnumerableList list)
        {
            _list = list;
        }

        public object Current { get { return _list[_currentIndex]; } }

        public bool MoveNext()
        {
            return ++_currentIndex < 10;
        }

        public void Reset()
        {
            _currentIndex = -1;
        }
    }
}
Now this is a simple, contrived example, of course, but I hope this helps make it more clear.
The MSDN article for IEnumerable provides a good example of proper usage.
To directly answer your question: when correctly implemented, the IEnumerator object used to walk the collection is unique to each caller. This means that several consumers can run foreach over your collection at once, each with its own enumerator and its own index into the collection.
Note that this by itself does not protect you from concurrent modifications to the collection; for that, you must employ proper lock() blocks.
Consider the code:
class Program
{
    static void Main(string[] args)
    {
        var test = new EnumTest();
        test.ConsumeEnumerable2Times();
        Console.ReadKey();
    }
}

public class EnumTest
{
    public IEnumerable<int> CountTo10()
    {
        for (var i = 0; i <= 10; i++)
            yield return i;
    }

    public void ConsumeEnumerable2Times()
    {
        var enumerable = CountTo10();
        foreach (var n in enumerable)
        {
            foreach (int i in enumerable)
            {
                Console.WriteLine("Outer: {0}, Inner: {1}", n, i);
            }
        }
    }
}
This code will produce the output:
Outer: 0, Inner: 0
Outer: 0, Inner: 1
Outer: 0, Inner: 2
...
Outer: 1, Inner: 0
Outer: 1, Inner: 1
...
Outer: 10, Inner: 10
Using IEnumerable you can enumerate the same collection over and over; IEnumerable returns a new instance of IEnumerator for every enumeration request.
In the example above, CountTo10() was called once, but the IEnumerable it returned was enumerated twice, each time counting to 10 independently.
This is why "several consumers can enumerate the collection at once without interfering with each other". You can pass the same IEnumerable object to 2 methods and they will enumerate the collection independently. With IEnumerator you can't achieve that.
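To illustrate (reusing CountTo10 from the example above; Sum and CountItems are my own names):

static int Sum(IEnumerable<int> source)
{
    int total = 0;
    foreach (var n in source) total += n;   // obtains its own enumerator
    return total;
}

static int CountItems(IEnumerable<int> source)
{
    int count = 0;
    foreach (var n in source) count++;      // obtains another, independent enumerator
    return count;
}

// Both methods enumerate the same object independently:
var numbers = new EnumTest().CountTo10();
Console.WriteLine(Sum(numbers));        // 55
Console.WriteLine(CountItems(numbers)); // 11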
A type that implements IEnumerator must have a method and properties that let you iterate over it, namely MoveNext(), Reset(), and Current. What happens if multiple threads try to iterate over this object at the same time? They'll step on each other as they both call the same MoveNext() method, which advances the same Current property.
A type that implements IEnumerable must be able to provide an instance of an IEnumerator. Now what happens when multiple threads iterate over the object? Each thread gets a separate instance of an IEnumerator object. The IEnumerators returned are not the same type of object as your collection; they are something different entirely. However, they do know how to get the next item and expose the current item of your collection, and each of those objects has its own internal data about the current state of the enumeration. Thus, they don't step on each other and can safely iterate your collection from separate threads.
Sometimes a collection type will implement IEnumerator for itself (it is its own enumerator) and then implement IEnumerable by just returning itself. In this case you gain nothing for multiple threads, because they are all still using the same object for the enumeration. This is backwards. Instead, the correct procedure is to first implement a separate (possibly nested) enumerator type for your collection, and then implement IEnumerable by returning a new instance of that enumerator type from each GetEnumerator() call.
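A minimal sketch of that backwards design (my own example, not from any library): returning this from GetEnumerator means every consumer shares one cursor:

public class SelfEnumerating : IEnumerable, IEnumerator
{
    private readonly object[] _items = new object[10];
    private int _index = -1;

    // Backwards: every caller receives the same object,
    // so all of them share the single _index cursor.
    public IEnumerator GetEnumerator() { return this; }

    public object Current { get { return _items[_index]; } }
    public bool MoveNext() { return ++_index < _items.Length; }
    public void Reset() { _index = -1; }
}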
I was thrilled to see the new System.Collections.Concurrent namespace in .Net 4.0, quite nice! I've seen ConcurrentDictionary, ConcurrentQueue, ConcurrentStack, ConcurrentBag and BlockingCollection.
One thing that seems to be mysteriously missing is a ConcurrentList<T>. Do I have to write that myself (or get it off the web :) )?
Am I missing something obvious here?
I gave it a try a while back (also: on GitHub). My implementation had some problems, which I won't get into here. Let me tell you, more importantly, what I learned.
Firstly, there's no way you're going to get a full implementation of IList<T> that is lockless and thread-safe. In particular, random insertions and removals are not going to work, unless you also forget about O(1) random access (i.e., unless you "cheat" and just use some sort of linked list and let the indexing suck).
What I thought might be worthwhile was a thread-safe, limited subset of IList<T>: in particular, one that would allow an Add and provide random read-only access by index (but no Insert, RemoveAt, etc., and also no random write access).
This was the goal of my ConcurrentList<T> implementation. But when I tested its performance in multithreaded scenarios, I found that simply synchronizing adds to a List<T> was faster. Basically, adding to a List<T> is lightning fast already; the complexity of the computational steps involved is minuscule (increment an index and assign to an element in an array; that's really it). You would need a ton of concurrent writes to see any sort of lock contention on this; and even then, the average performance of each write would still beat the more expensive, albeit lock-free, implementation in ConcurrentList<T>.
In the relatively rare event that the list's internal array needs to resize itself, you do pay a small cost. So ultimately I concluded that this was the one niche scenario where an add-only ConcurrentList<T> collection type would make sense: when you want guaranteed low overhead of adding an element on every single call (so, as opposed to an amortized performance goal).
It's simply not nearly as useful a class as you would think.
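For reference, the "simply synchronizing adds to a List<T>" baseline is nothing more exotic than this sketch (not my original benchmark code):

public class SynchronizedList<T>
{
    private readonly List<T> _list = new List<T>();
    private readonly object _sync = new object();

    public void Add(T item)
    {
        // The lock is held only for an index increment and an array store,
        // so contention is rare even under heavy concurrent writing.
        lock (_sync) { _list.Add(item); }
    }

    public T this[int index]
    {
        get { lock (_sync) { return _list[index]; } }
    }

    public int Count
    {
        get { lock (_sync) { return _list.Count; } }
    }
}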
What would you use a ConcurrentList for?
The concept of a Random Access container in a threaded world isn't as useful as it may appear. The statement
if (i < MyConcurrentList.Count)
    x = MyConcurrentList[i];
as a whole would still not be thread-safe.
Instead of creating a ConcurrentList, try to build solutions with what's there. The most common classes are the ConcurrentBag and especially the BlockingCollection.
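For example, a producer/consumer handoff, the scenario a shared list is often (mis)used for, maps directly onto BlockingCollection. A sketch using .NET 4.0 APIs:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ProducerConsumerDemo
{
    static void Main()
    {
        var queue = new BlockingCollection<int>(boundedCapacity: 100);

        var producer = Task.Factory.StartNew(() =>
        {
            for (int i = 0; i < 1000; i++)
                queue.Add(i); // blocks while the collection is full
            queue.CompleteAdding();
        });

        var consumer = Task.Factory.StartNew(() =>
        {
            // Blocks until items arrive; completes after CompleteAdding.
            foreach (var item in queue.GetConsumingEnumerable())
                Console.WriteLine(item);
        });

        Task.WaitAll(producer, consumer);
    }
}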
With all due respect to the great answers provided already, there are times that I simply want a thread-safe IList. Nothing advanced or fancy. Performance is important in many cases, but at times that just isn't a concern. Yes, there are always going to be challenges without methods like "TryGetValue" etc., but in most cases I just want something that I can enumerate without needing to worry about putting locks around everything. And yes, somebody can probably find some "bug" in my implementation that might lead to a deadlock or something (I suppose), but let's be honest: when it comes to multithreading, if you don't write your code correctly, it is going to deadlock anyway. With that in mind, I decided to make a simple ConcurrentList implementation that provides these basic needs.
And for what it's worth: I did a basic test of adding 10,000,000 items to a regular List and to ConcurrentList, and the results were:
List finished in: 7793 milliseconds.
Concurrent finished in: 8064 milliseconds.
public class ConcurrentList<T> : IList<T>, IDisposable
{
#region Fields
private readonly List<T> _list;
private readonly ReaderWriterLockSlim _lock;
#endregion
#region Constructors
public ConcurrentList()
{
this._lock = new ReaderWriterLockSlim(LockRecursionPolicy.NoRecursion);
this._list = new List<T>();
}
public ConcurrentList(int capacity)
{
this._lock = new ReaderWriterLockSlim(LockRecursionPolicy.NoRecursion);
this._list = new List<T>(capacity);
}
public ConcurrentList(IEnumerable<T> items)
{
this._lock = new ReaderWriterLockSlim(LockRecursionPolicy.NoRecursion);
this._list = new List<T>(items);
}
#endregion
#region Methods
public void Add(T item)
{
try
{
this._lock.EnterWriteLock();
this._list.Add(item);
}
finally
{
this._lock.ExitWriteLock();
}
}
public void Insert(int index, T item)
{
try
{
this._lock.EnterWriteLock();
this._list.Insert(index, item);
}
finally
{
this._lock.ExitWriteLock();
}
}
public bool Remove(T item)
{
try
{
this._lock.EnterWriteLock();
return this._list.Remove(item);
}
finally
{
this._lock.ExitWriteLock();
}
}
public void RemoveAt(int index)
{
try
{
this._lock.EnterWriteLock();
this._list.RemoveAt(index);
}
finally
{
this._lock.ExitWriteLock();
}
}
public int IndexOf(T item)
{
try
{
this._lock.EnterReadLock();
return this._list.IndexOf(item);
}
finally
{
this._lock.ExitReadLock();
}
}
public void Clear()
{
try
{
this._lock.EnterWriteLock();
this._list.Clear();
}
finally
{
this._lock.ExitWriteLock();
}
}
public bool Contains(T item)
{
try
{
this._lock.EnterReadLock();
return this._list.Contains(item);
}
finally
{
this._lock.ExitReadLock();
}
}
public void CopyTo(T[] array, int arrayIndex)
{
try
{
this._lock.EnterReadLock();
this._list.CopyTo(array, arrayIndex);
}
finally
{
this._lock.ExitReadLock();
}
}
public IEnumerator<T> GetEnumerator()
{
return new ConcurrentEnumerator<T>(this._list, this._lock);
}
IEnumerator IEnumerable.GetEnumerator()
{
return new ConcurrentEnumerator<T>(this._list, this._lock);
}
~ConcurrentList()
{
this.Dispose(false);
}
public void Dispose()
{
this.Dispose(true);
}
private void Dispose(bool disposing)
{
if (disposing)
GC.SuppressFinalize(this);
this._lock.Dispose();
}
#endregion
#region Properties
public T this[int index]
{
get
{
try
{
this._lock.EnterReadLock();
return this._list[index];
}
finally
{
this._lock.ExitReadLock();
}
}
set
{
try
{
this._lock.EnterWriteLock();
this._list[index] = value;
}
finally
{
this._lock.ExitWriteLock();
}
}
}
public int Count
{
get
{
try
{
this._lock.EnterReadLock();
return this._list.Count;
}
finally
{
this._lock.ExitReadLock();
}
}
}
public bool IsReadOnly
{
get { return false; }
}
#endregion
}
public class ConcurrentEnumerator<T> : IEnumerator<T>
{
#region Fields
private readonly IEnumerator<T> _inner;
private readonly ReaderWriterLockSlim _lock;
#endregion
#region Constructor
public ConcurrentEnumerator(IEnumerable<T> inner, ReaderWriterLockSlim @lock)
{
this._lock = @lock;
this._lock.EnterReadLock();
this._inner = inner.GetEnumerator();
}
#endregion
#region Methods
public bool MoveNext()
{
return _inner.MoveNext();
}
public void Reset()
{
_inner.Reset();
}
public void Dispose()
{
this._lock.ExitReadLock();
}
#endregion
#region Properties
public T Current
{
get { return _inner.Current; }
}
object IEnumerator.Current
{
get { return _inner.Current; }
}
#endregion
}
The reason there is no ConcurrentList is that one fundamentally cannot be written: several important operations in IList rely on indices, and that just plain won't work. For example:
int catIndex = list.IndexOf("cat");
list.Insert(catIndex, "dog");
The effect that the author is going after is to insert "dog" before "cat", but in a multithreaded environment, anything can happen to the list between those two lines of code. For example, another thread might do list.RemoveAt(0), shifting the entire list to the left, but crucially, catIndex will not change. The impact here is that the Insert operation will actually put the "dog" after the cat, not before it.
The several implementations that you see offered as "answers" to this question are well-meaning, but as the above shows, they don't offer reliable results. If you really want list-like semantics in a multithreaded environment, you can't get there by putting locks inside the list implementation methods. You have to ensure that any index you use lives entirely inside the context of the lock. The upshot is that you can use a List in a multithreaded environment with the right locking, but the list itself cannot be made to exist in that world.
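Concretely, the cat-and-dog sequence only becomes safe when the caller holds one lock across the whole read-then-write span (a sketch):

private readonly object _sync = new object();
private readonly List<string> _list = new List<string>();

public void InsertDogBeforeCat()
{
    // The index is computed and consumed under a single lock,
    // so no other thread can shift the list in between.
    lock (_sync)
    {
        int catIndex = _list.IndexOf("cat");
        if (catIndex >= 0)
            _list.Insert(catIndex, "dog");
    }
}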
If you think you need a concurrent list, there are really just two possibilities:
What you really need is a ConcurrentBag
You need to create your own collection, perhaps implemented with a List and your own concurrency control.
If you have a ConcurrentBag and are in a position where you need to pass it as an IList, then you have a problem, because the method you're calling may try to do something like the cat-and-dog example above. In most cases, that means the method you're calling is simply not built to work in a multithreaded environment. Either refactor it so that it is or, if you can't, handle it very carefully: you'll almost certainly need to create your own collection with its own locks and call the offending method within a lock.
ConcurrentList (as a resizeable array, not a linked list) is not easy to write with nonblocking operations. Its API doesn't translate well to a "concurrent" version.
In cases where reads greatly outnumber writes, or (however frequent) writes are non-concurrent, a copy-on-write approach may be appropriate.
The implementation shown below is
lockless
blazingly fast for concurrent reads, even while concurrent modifications are ongoing - no matter how long they take
because "snapshots" are immutable, lockless atomicity is possible, i.e. var snap = _list; snap[snap.Count - 1]; will never (well, except for an empty list of course) throw, and you also get thread-safe enumeration with snapshot semantics for free.. how I LOVE immutability!
implemented generically, applicable to any data structure and any type of modification
dead simple, i.e. easy to test, debug, verify by reading the code
usable in .Net 3.5
For copy-on-write to work, you have to keep your data structures effectively immutable, i.e. no one is allowed to change them after you made them available to other threads. When you want to modify, you
clone the structure
make modifications on the clone
atomically swap in the reference to the modified clone
Code
static class CopyOnWriteSwapper
{
    public static void Swap<T>(ref T obj, Func<T, T> cloner, Action<T> op)
        where T : class
    {
        while (true)
        {
            // Read the current reference, clone it, and modify only the clone.
            // (Volatile.Read requires .NET 4.5; on older frameworks substitute
            // an equivalent volatile read.)
            var objBefore = Volatile.Read(ref obj);
            var newObj = cloner(objBefore);
            op(newObj);

            // Publish atomically; retry if another thread swapped in the meantime.
            if (Interlocked.CompareExchange(ref obj, newObj, objBefore) == objBefore)
                return;
        }
    }
}
Usage
CopyOnWriteSwapper.Swap(ref _myList,
    orig => new List<string>(orig),
    clone => clone.Add("asdf"));
If you need more performance, it will help to ungenerify the method, e.g. create one method for every type of modification (Add, Remove, ...) you want, and hard-code the cloner and op delegates.
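For instance, a hard-coded, list-specific Add might look like this sketch (AddCopyOnWrite is my own name):

private static List<string> _myList = new List<string>();

static void AddCopyOnWrite(string item)
{
    while (true)
    {
        var before = Volatile.Read(ref _myList);
        var clone = new List<string>(before) { item }; // clone and modify in one step
        // Publish atomically; retry if another writer got there first.
        if (Interlocked.CompareExchange(ref _myList, clone, before) == before)
            return;
    }
}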
N.B. #1 It is your responsibility to make sure that no one modifies the (supposedly) immutable data structure. There's nothing we can do in a generic implementation to prevent that, but when specializing to List<T>, you could guard against modification using List.AsReadOnly().
N.B. #2 Be careful about the values in the list. The copy-on-write approach above guards their list membership only; if you put mutable objects in there rather than strings, you have to take care of their thread safety (e.g. locking). But that is orthogonal to this solution, and locking of the mutable values can easily be used without issues. You just need to be aware of it.
N.B. #3 If your data structure is huge and you modify it frequently, the copy-all-on-write approach might be prohibitive both in terms of memory consumption and the CPU cost of copying involved. In that case, you might want to use MS's Immutable Collections instead.
System.Collections.Generic.List<T> is already thread-safe for multiple readers. Trying to make it thread-safe for multiple writers wouldn't make sense (for the reasons Henk and Stephen already mentioned).
Some people have highlighted some good points (and some of my thoughts):
It could look insane to disable the random accessor (indexer), but to me it seems fine. You only have to accept that there are many methods on multithreaded collections that can fail, like the indexer and Delete. You could also define a failure (fallback) action for write accessors, like "fail" or simply "add at the end".
Just because a collection is multithreaded does not mean it will always be used in a multithreaded context; it could also be used by only one writer and one reader.
Another way to use the indexer safely would be to wrap actions in a lock on the collection using its root (if it is made public).
For many people, making a root lock visible goes against "good practice". I'm not 100% sure about this point, because if it is hidden you remove a lot of flexibility from the user. We always have to remember that multithreaded programming is not for everyone. We can't prevent every kind of wrong usage.
Microsoft will have to do some work and define some new standards to introduce proper usage of multithreaded collections. First, IEnumerator should not have a MoveNext method, but rather a GetNext method that returns true or false and takes an out parameter of type T, so that iteration is no longer blocking (a sketch of this idea appears below). Also, Microsoft already uses "using" internally in foreach, but sometimes uses the IEnumerator directly without wrapping it in "using" (a bug in collection view, and probably in more places), even though wrapping usage of IEnumerator is a practice Microsoft itself recommends. This bug removes good potential for a safe iterator: one that locks the collection in its constructor and unlocks it in its Dispose method, for a blocking foreach.
This is not an answer; these are only comments that do not really fit in a specific place.
... My conclusion: Microsoft has to make some deep changes to foreach to make multithreaded collections easier to use, and it has to follow its own rules of IEnumerator usage. Until then, we can easily write a MultiThreadList that uses a blocking iterator, but it will not follow IList. Instead, you would have to define your own "IListPersonnal" interface that can fail on insert, remove, and the random accessor (indexer) without throwing. But who will want to use it if it is not standard?
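Here is a sketch of the GetNext idea mentioned above; the interface and class are hypothetical, not anything Microsoft ships:

// Hypothetical interface: combines MoveNext and Current into one atomic
// step, so the "is there a next element?" test and the read cannot be
// torn apart by another thread.
public interface IConcurrentEnumerator<T>
{
    bool GetNext(out T value);
}

public class LockedListEnumerator<T> : IConcurrentEnumerator<T>
{
    private readonly List<T> _list;
    private readonly object _sync;
    private int _index = -1;

    public LockedListEnumerator(List<T> list, object sync)
    {
        _list = list;
        _sync = sync;
    }

    public bool GetNext(out T value)
    {
        lock (_sync) // one atomic check-and-read per element
        {
            if (++_index < _list.Count)
            {
                value = _list[_index];
                return true;
            }
            value = default(T);
            return false;
        }
    }
}

// Usage: while (e.GetNext(out item)) { ... } -- no window between checking and reading.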
I implemented one similar to Brian's. Mine is different:
I manage the array directly.
I don't enter the locks within the try block.
I use yield return for producing an enumerator.
I support lock recursion. This allows reads from the list during iteration.
I use upgradable read locks where possible.
I provide DoSync and GetSync methods allowing sequential interactions that require exclusive access to the list.
The code:
public class ConcurrentList<T> : IList<T>, IDisposable
{
private ReaderWriterLockSlim _lock = new ReaderWriterLockSlim(LockRecursionPolicy.SupportsRecursion);
private int _count = 0;
public int Count
{
get
{
_lock.EnterReadLock();
try
{
return _count;
}
finally
{
_lock.ExitReadLock();
}
}
}
public int InternalArrayLength
{
get
{
_lock.EnterReadLock();
try
{
return _arr.Length;
}
finally
{
_lock.ExitReadLock();
}
}
}
private T[] _arr;
public ConcurrentList(int initialCapacity)
{
_arr = new T[initialCapacity];
}
public ConcurrentList():this(4)
{ }
public ConcurrentList(IEnumerable<T> items)
{
_arr = items.ToArray();
_count = _arr.Length;
}
public void Add(T item)
{
_lock.EnterWriteLock();
try
{
var newCount = _count + 1;
EnsureCapacity(newCount);
_arr[_count] = item;
_count = newCount;
}
finally
{
_lock.ExitWriteLock();
}
}
public void AddRange(IEnumerable<T> items)
{
if (items == null)
throw new ArgumentNullException("items");
_lock.EnterWriteLock();
try
{
var arr = items as T[] ?? items.ToArray();
var newCount = _count + arr.Length;
EnsureCapacity(newCount);
Array.Copy(arr, 0, _arr, _count, arr.Length);
_count = newCount;
}
finally
{
_lock.ExitWriteLock();
}
}
private void EnsureCapacity(int capacity)
{
if (_arr.Length >= capacity)
return;
int doubled;
checked
{
try
{
doubled = _arr.Length * 2;
}
catch (OverflowException)
{
doubled = int.MaxValue;
}
}
var newLength = Math.Max(doubled, capacity);
Array.Resize(ref _arr, newLength);
}
public bool Remove(T item)
{
_lock.EnterUpgradeableReadLock();
try
{
var i = IndexOfInternal(item);
if (i == -1)
return false;
_lock.EnterWriteLock();
try
{
RemoveAtInternal(i);
return true;
}
finally
{
_lock.ExitWriteLock();
}
}
finally
{
_lock.ExitUpgradeableReadLock();
}
}
public IEnumerator<T> GetEnumerator()
{
_lock.EnterReadLock();
try
{
for (int i = 0; i < _count; i++)
// deadlocking potential mitigated by lock recursion enforcement
yield return _arr[i];
}
finally
{
_lock.ExitReadLock();
}
}
IEnumerator IEnumerable.GetEnumerator()
{
return this.GetEnumerator();
}
public int IndexOf(T item)
{
_lock.EnterReadLock();
try
{
return IndexOfInternal(item);
}
finally
{
_lock.ExitReadLock();
}
}
private int IndexOfInternal(T item)
{
return Array.FindIndex(_arr, 0, _count, x => x.Equals(item));
}
public void Insert(int index, T item)
{
_lock.EnterUpgradeableReadLock();
try
{
if (index > _count)
throw new ArgumentOutOfRangeException("index");
_lock.EnterWriteLock();
try
{
var newCount = _count + 1;
EnsureCapacity(newCount);
// shift everything right by one, starting at index
Array.Copy(_arr, index, _arr, index + 1, _count - index);
// insert
_arr[index] = item;
_count = newCount;
}
finally
{
_lock.ExitWriteLock();
}
}
finally
{
_lock.ExitUpgradeableReadLock();
}
}
public void RemoveAt(int index)
{
_lock.EnterUpgradeableReadLock();
try
{
if (index >= _count)
throw new ArgumentOutOfRangeException("index");
_lock.EnterWriteLock();
try
{
RemoveAtInternal(index);
}
finally
{
_lock.ExitWriteLock();
}
}
finally
{
_lock.ExitUpgradeableReadLock();
}
}
private void RemoveAtInternal(int index)
{
Array.Copy(_arr, index + 1, _arr, index, _count - index-1);
_count--;
// release last element
Array.Clear(_arr, _count, 1);
}
public void Clear()
{
_lock.EnterWriteLock();
try
{
Array.Clear(_arr, 0, _count);
_count = 0;
}
finally
{
_lock.ExitWriteLock();
}
}
public bool Contains(T item)
{
_lock.EnterReadLock();
try
{
return IndexOfInternal(item) != -1;
}
finally
{
_lock.ExitReadLock();
}
}
public void CopyTo(T[] array, int arrayIndex)
{
_lock.EnterReadLock();
try
{
if(_count > array.Length - arrayIndex)
throw new ArgumentException("Destination array was not long enough.");
Array.Copy(_arr, 0, array, arrayIndex, _count);
}
finally
{
_lock.ExitReadLock();
}
}
public bool IsReadOnly
{
get { return false; }
}
public T this[int index]
{
get
{
_lock.EnterReadLock();
try
{
if (index >= _count)
throw new ArgumentOutOfRangeException("index");
return _arr[index];
}
finally
{
_lock.ExitReadLock();
}
}
set
{
_lock.EnterUpgradeableReadLock();
try
{
if (index >= _count)
throw new ArgumentOutOfRangeException("index");
_lock.EnterWriteLock();
try
{
_arr[index] = value;
}
finally
{
_lock.ExitWriteLock();
}
}
finally
{
_lock.ExitUpgradeableReadLock();
}
}
}
public void DoSync(Action<ConcurrentList<T>> action)
{
GetSync(l =>
{
action(l);
return 0;
});
}
public TResult GetSync<TResult>(Func<ConcurrentList<T>,TResult> func)
{
_lock.EnterWriteLock();
try
{
return func(this);
}
finally
{
_lock.ExitWriteLock();
}
}
public void Dispose()
{
_lock.Dispose();
}
}
In sequentially executing code, the data structures used are different from those in (well-written) concurrently executing code. The reason is that sequential code implies implicit order. Concurrent code does not imply any order; better yet, it implies the lack of any defined order!
Because of this, data structures with implied order (like List) are not very useful for solving concurrent problems. A list implies order, but it does not clearly define what that order is. Consequently, the execution order of the code manipulating the list will determine (to some degree) the implicit order of the list, which is in direct conflict with an efficient concurrent solution.
Remember: concurrency is a data problem, not a code problem! You cannot implement the code first (or rewrite existing sequential code) and get a well-designed concurrent solution. You need to design the data structures first, keeping in mind that implicit ordering doesn't exist in a concurrent system.
A lock-free copy-and-write approach works great if you're not dealing with too many items.
Here's a class I wrote:
public class CopyAndWriteList<T>
{
    public static List<T> Clear(List<T> list)
    {
        var a = new List<T>(list);
        a.Clear();
        return a;
    }

    public static List<T> Add(List<T> list, T item)
    {
        var a = new List<T>(list);
        a.Add(item);
        return a;
    }

    public static List<T> RemoveAt(List<T> list, int index)
    {
        var a = new List<T>(list);
        a.RemoveAt(index);
        return a;
    }

    public static List<T> Remove(List<T> list, T item)
    {
        var a = new List<T>(list);
        a.Remove(item);
        return a;
    }
}
Example usage (where orders_BUY is a List<Order>):
orders_BUY = CopyAndWriteList<Order>.Clear(orders_BUY);
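One caveat (my addition): the plain reassignment above is only safe with a single writer. With several writers, the read-copy-assign sequence can lose updates; an Interlocked.CompareExchange retry loop closes that gap. Order stands in for whatever element type the list holds:

private static List<Order> orders_BUY = new List<Order>();

static void AddOrder(Order order)
{
    while (true)
    {
        var before = orders_BUY;
        var after = CopyAndWriteList<Order>.Add(before, order);
        // Publish only if no other writer swapped the list in the meantime.
        if (Interlocked.CompareExchange(ref orders_BUY, after, before) == before)
            return;
    }
}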
I'm surprised no-one has mentioned using LinkedList as a base for writing a specialised class.
Often we don't need the full APIs of the various collection classes, and if you write mostly functional, side-effect-free code, using immutable classes as far as possible, then you'll actually NOT want to mutate the collection, favouring various snapshot implementations instead.
LinkedList solves some difficult problems of creating snapshot copies/clones of large collections. I also use it to create "threadsafe" enumerators to enumerate over the collection. I can cheat because I know that I'm not changing the collection in any way other than appending; I can keep track of the list size and only lock on changes to the list size. Then my enumerator code simply enumerates from 0 to n for any thread that wants a "snapshot" of the append-only collection, which is guaranteed to represent a snapshot of the collection at a moment in time, regardless of what other threads are appending to the end of the collection.
I'm pretty certain that most requirements are extremely simple, and you often need only 2 or 3 methods. Writing a truly generic library is awfully difficult, but solving your own code's needs can sometimes be easy with a trick or two.
Long live LinkedList and good functional programming.
P.S. A sample hack AppendOnly class is here: https://github.com/goblinfactory/AppendOnly
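The core of the trick reads roughly like this simplified sketch (my own reconstruction of the idea, not the code from the link):

public class AppendOnlyList<T>
{
    private readonly LinkedList<T> _items = new LinkedList<T>();
    private readonly object _sync = new object();
    private int _count; // only ever grows, and only under the lock

    public void Append(T item)
    {
        lock (_sync)
        {
            _items.AddLast(item); // fully linked before _count is bumped
            _count++;
        }
    }

    // Enumerates a snapshot: only the first n nodes as of the call, so
    // concurrent appends past that point are simply not visited.
    public IEnumerable<T> Snapshot()
    {
        int n;
        lock (_sync) { n = _count; } // the lock also publishes the first n nodes

        var node = _items.First;
        for (int i = 0; i < n; i++)
        {
            yield return node.Value;
            node = node.Next;
        }
    }
}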
I have the following helper class (simplified):
public static class Cache
{
    private static readonly object _syncRoot = new object();
    private static Dictionary<Type, string> _lookup = new Dictionary<Type, string>();

    public static void Add(Type type, string value)
    {
        lock (_syncRoot)
        {
            _lookup.Add(type, value);
        }
    }

    public static string Lookup(Type type)
    {
        string result;
        lock (_syncRoot)
        {
            _lookup.TryGetValue(type, out result);
        }
        return result;
    }
}
Add will be called roughly 10 to 100 times in the application, and Lookup will be called by many threads, many thousands of times. What I would like is to get rid of the read lock.
How do you normally get rid of the read lock in this situation?
I have the following ideas:
Require that _lookup is stable before the application starts operating. It could be built up from an attribute; this is done automatically through the static constructor of the type the attribute is assigned to. Requiring the above would mean going through all types that could have the attribute and calling RuntimeHelpers.RunClassConstructor, which is an expensive operation;
Move to COW semantics.
public static void Add(Type type, string value)
{
    lock (_syncRoot)
    {
        var lookup = new Dictionary<Type, string>(_lookup);
        lookup.Add(type, value);
        _lookup = lookup;
    }
}
(With the lock (_syncRoot) removed from the Lookup method.) The problem with this is that it uses an unnecessary amount of memory (which might not be a problem), and I would probably have to make _lookup volatile, but I'm not sure how that should be applied. (Jon Skeet's comment here gives me pause.)
Using ReaderWriterLock. I believe this would make things worse since the region being locked is small.
Suggestions are very welcome.
UPDATE:
The values of the cache are immutable.
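To make idea 2 concrete, here is a sketch of the copy-on-write cache with a volatile field (untested):

public static class Cache
{
    private static readonly object _syncRoot = new object();
    private static volatile Dictionary<Type, string> _lookup = new Dictionary<Type, string>();

    public static void Add(Type type, string value)
    {
        lock (_syncRoot) // writers still serialize among themselves
        {
            var copy = new Dictionary<Type, string>(_lookup);
            copy.Add(type, value);
            _lookup = copy; // volatile write publishes the new snapshot
        }
    }

    public static string Lookup(Type type)
    {
        // Lock-free: each read sees some consistent, never-mutated snapshot.
        string result;
        _lookup.TryGetValue(type, out result);
        return result;
    }
}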
To remove locks completely (slightly different from "lock-free", where locks are almost eliminated and the remaining ones are cleverly replaced with Interlocked instructions), you need to make sure that your dictionary is immutable. If items in the dictionary are not immutable (and as a result have their own locks), you probably should not worry about locking at the dictionary level.
Option 1 (a stable _lookup before the application starts) is the best and easiest solution if you can use it.
Option 2 (copy-on-write) is reasonable and easy to debug. (Note: as written, it does not work well for concurrent adds of the same item. Consider the double-checked locking pattern if needed; see "Double-checked locking in .NET".)
Option 3 (ReaderWriterLock): I would not do it if option 1 or 2 is available.
If you can use the new .NET 4.0 collections, ConcurrentDictionary matches your criteria (see http://msdn.microsoft.com/en-us/library/dd997305.aspx and http://blogs.msdn.com/b/pfxteam/archive/2010/01/26/9953725.aspx).
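With it, the whole cache collapses to a sketch like this (TryAdd rather than Add, so a duplicate registration doesn't throw):

using System;
using System.Collections.Concurrent;

public static class Cache
{
    private static readonly ConcurrentDictionary<Type, string> _lookup =
        new ConcurrentDictionary<Type, string>();

    public static void Add(Type type, string value)
    {
        _lookup.TryAdd(type, value); // or GetOrAdd / AddOrUpdate as appropriate
    }

    public static string Lookup(Type type)
    {
        string result;
        _lookup.TryGetValue(type, out result); // no user-visible lock on reads
        return result;
    }
}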
I'm at work at the moment, so nothing elegant, but I came up with this (untested):
public static class Cache
{
    private static readonly object _syncRoot = new object();
    private static Dictionary<Type, string> _lookup = new Dictionary<Type, string>();

    public static class OneToManyLocker
    {
        private static readonly Object WriteLocker = new Object();
        private static readonly List<Object> ReadLockers = new List<Object>();
        private static readonly Object myLocker = new Object();

        public static Object GetLock(LockType lockType)
        {
            lock (WriteLocker)
            {
                if (lockType == LockType.Read)
                {
                    var newReadLocker = new Object();
                    lock (myLocker)
                    {
                        ReadLockers.Add(newReadLocker);
                    }
                    return newReadLocker;
                }

                foreach (var readLocker in ReadLockers)
                {
                    lock (readLocker) { }
                }
                return WriteLocker;
            }
        }

        public enum LockType { Read, Write };
    }

    public static void Add(Type type, string value)
    {
        lock (OneToManyLocker.GetLock(OneToManyLocker.LockType.Write))
        {
            _lookup.Add(type, value);
        }
    }

    public static string Lookup(Type type)
    {
        string result;
        lock (OneToManyLocker.GetLock(OneToManyLocker.LockType.Read))
        {
            _lookup.TryGetValue(type, out result);
        }
        return result;
    }
}
You will need some sort of cleanup for the read lockers, but this should be thread-safe, allowing multiple reads at a time while also locking on writes, unless I'm totally missing something.
Either:
Don't use normal locks; use a spinlock if the lookup is fast (a dictionary lookup is not).
If that is not the case, then use http://msdn.microsoft.com/en-us/library/system.threading.readerwriterlock.aspx. This allows multiple readers and only one writer.
I was reading this question and came across this response:
This is actually a fantastic feature. This lets you have a closure that accesses something normally hidden, say, a private class variable, and let it manipulate it in a controlled way as a response to something like an event.
You can simulate what you want quite easily by creating a local copy of the variable, and using that.
Would we need to implement Lock() in this situation?
What would that look like?
According to Eric Lippert, the compiler makes the code look like this:
private class Locals
{
    public int count;

    public void Anonymous()
    {
        this.count++;
    }
}

public Action Counter()
{
    Locals locals = new Locals();
    locals.count = 0;
    Action counter = new Action(locals.Anonymous);
    return counter;
}
What would the lambda look like, as well as the long-form code?
If you have a reason to lock, then yes, there's nothing stopping you from putting a lock statement in a closure.
For example, you could do this:
public static Action<T> GetLockedAdd<T>(IList<T> list)
{
    var lockObj = new object();
    return x =>
    {
        lock (lockObj)
        {
            list.Add(x);
        }
    };
}
What does this look like, in terms of compiler-generated code? Ask yourself: what is captured?
A local object used for locking.
The IList<T> passed in.
These will be captured as instance fields in a compiler-generated class. So the result will look something like this:
class LockedAdder<T>
{
    // This field serves the role of the lockObj variable; it will be
    // initialized when the type is instantiated.
    public object LockObj = new object();

    // This field serves as the list parameter; it will be set within
    // the method.
    public IList<T> List;

    // This is the method for the lambda.
    public void Add(T x)
    {
        lock (LockObj)
        {
            List.Add(x);
        }
    }
}

public static Action<T> GetLockedAdd<T>(IList<T> list)
{
    // Initializing the lockObj variable becomes equivalent to
    // instantiating the generated class.
    var lockedAdder = new LockedAdder<T> { List = list };

    // The lambda becomes a method call on the instance we have
    // just made.
    return new Action<T>(lockedAdder.Add);
}
Does that make sense?
Yes, it is possible.
Just make sure you do not mutate the variable holding the locked object instance; otherwise the lock will be useless.
You could have a function like this:
static Func<int> GetIncrementer()
{
    object locker = new object();
    int i = 0;
    return () => { lock (locker) { return i++; } };
}
When you call it, it will return a function that increments an internal counter in a thread-safe manner. Although not the best way to implement such a function, it does demonstrate a lock inside of a closure.
I came across this on my internet travels and I know it's a very old question, but I thought I'd propose an alternative answer to it.
It is possible to lock inside a lambda with the help of a wrapper function, which allows a relatively elegant syntax.
Here's the helper function (in a static class):
public static class Locking
{
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    [DebuggerNonUserCode, DebuggerStepThrough]
    public static T WithLock<T>(this object threadSync, Func<T> selector)
    {
        lock (threadSync)
        {
            return selector();
        }
    }
}
And here's how you use it:
private readonly object _threadSync = new object();
private int _myProperty;
public int MyProperty
=> _threadSync.WithLock(() => _myProperty);
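A void overload added to the same Locking class (my addition, following the same pattern) makes a locked setter just as terse:

public static void WithLock(this object threadSync, Action action)
{
    lock (threadSync)
    {
        action();
    }
}

// Usage:
public int MyProperty
{
    get => _threadSync.WithLock(() => _myProperty);
    set => _threadSync.WithLock(() => { _myProperty = value; });
}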