Safe to get Count value from generic collection without locking the collection? (C#)

I have two threads, a producer thread that places objects into a generic List collection and a consumer thread that pulls those objects out of the same generic List. I've got the reads and writes to the collection properly synchronized using the lock keyword, and everything is working fine.
What I want to know is if it is ok to access the Count property without first locking the collection.
In his blog, JaredPar describes using the Count property as a decision procedure that can lead to race conditions, like this:
if (list.Count > 0)
{
    return list[0];
}
If the list has one item and that item is removed after the Count property is accessed but before the indexer, an exception will occur. I get that.
But would it be ok to use the Count property to, say, determine the initial size of a completely different collection? The MSDN documentation says that instance members are not guaranteed to be thread safe, so should I just lock the collection before accessing the Count property?

I suspect it's "safe" in terms of "it's not going to cause anything to go catastrophically wrong" - but that you may get stale data. That's because I suspect it's just held in a simple variable, and that that's likely to be the case in the future. That's not the same as a guarantee though.
Personally I'd keep it simple: if you're accessing shared mutable data, only do so in a lock (using the same lock for the same data). Lock-free programming is all very well if you've got appropriate isolation in place (so you know you've got appropriate memory barriers, and you know that you'll never be modifying it in one thread while you're reading from it in another) but it sounds like that isn't the case here.
The good news is that acquiring an uncontested lock is incredibly cheap - so I'd go for the safe route if I were you. Threading is hard enough without introducing race conditions which are likely to give no significant performance benefit but at the cost of rare and unreproducible bugs.
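For the specific case in the question, sizing a new collection from Count, staleness is mostly harmless because an initial capacity is only a hint. Here is a minimal sketch under that assumption (the _sync, _items and Item names are placeholders, not from the question):

private readonly object _sync = new object();
private readonly List<Item> _items = new List<Item>(); // Item stands in for whatever the list holds

public List<Item> CreateBatchBuffer()
{
    int size;
    lock (_sync) // the same lock that guards every other read/write of _items
    {
        size = _items.Count;
    }
    // Even if the count is stale by the time it is used, List<T> simply
    // grows past its initial capacity, so a low guess costs only a reallocation.
    return new List<Item>(size);
}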

Related

Should I use a C# Dictionary if I only need fast lookup of keys, and values are irrelevant?

I am in need of a data type that is able to insert entries and then be able to quickly determine if an entry has already been inserted. A Dictionary seems to suit this need (see example). However, I have no use for the dictionary's values. Should I still use a dictionary or is there another better suited data type?
public class Foo
{
    private Dictionary<string, bool> Entities;
    ...

    public void AddEntity(string bar)
    {
        if (!Entities.ContainsKey(bar))
        {
            // bool value true here has no use and is just a placeholder
            Entities.Add(bar, true);
        }
    }

    public string[] GetEntities()
    {
        return Entities.Keys.ToArray();
    }
}
You can use HashSet<T>.
The HashSet<T> class provides high-performance set operations. A set
is a collection that contains no duplicate elements, and whose
elements are in no particular order.
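As a rough sketch, the class from the question could then drop the placeholder values entirely (same AddEntity/GetEntities shape assumed; ToArray requires System.Linq):

using System.Collections.Generic;
using System.Linq;

public class Foo
{
    private readonly HashSet<string> entities = new HashSet<string>();

    public void AddEntity(string bar)
    {
        // Add returns false for duplicates, so no ContainsKey-style pre-check is needed.
        entities.Add(bar);
    }

    public string[] GetEntities()
    {
        return entities.ToArray();
    }
}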
Habib's answer is excellent, but in multi-threaded environments, if you use a HashSet<T> then as a consequence you have to protect access to it with locks. I find myself more prone to creating deadlocks with lock statements. Also, locks yield a worse speedup per Amdahl's law, because adding a lock statement reduces the percentage of your code that is actually parallel.
For those reasons, a ConcurrentDictionary<T,object> fits the bill in multi-threaded environments. If you end up using one, then wrap it like you did in your question. Just new up objects to toss in as values as needed, since the values won't be important. You can verify that there are no lock statements in its source code.
If you didn't need mutability of the collection then this would be moot. But your question implies that you do need it, since you have an AddEntity method.
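A minimal sketch of that wrapping, assuming the same shape as the class in the question (null serves as the throwaway value):

using System.Collections.Concurrent;
using System.Linq;

public class Foo
{
    private readonly ConcurrentDictionary<string, object> entities =
        new ConcurrentDictionary<string, object>();

    public void AddEntity(string bar)
    {
        // TryAdd is atomic: it returns false if the key already exists,
        // so there is no separate check-then-add race.
        entities.TryAdd(bar, null);
    }

    public string[] GetEntities()
    {
        return entities.Keys.ToArray();
    }
}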
Additional info 2017-05-19 - actually, ConcurrentDictionary does use locks internally, although not lock statements per se--it uses Monitor.Enter (check out the TryAddInternal method). However, it seems to lock on individual buckets within the dictionary, which means there will be less contention than putting the entire thing in a lock statement.
So all in all, ConcurrentDictionary is often better for multithreaded environments.
It's actually quite difficult (impossible?) to make a concurrent hash set using only the Interlocked methods. I tried on my own and kept running into the problem of needing to alter two things at the same time--something that only locking can do in general. One workaround I found was to use singly-linked lists for the hash buckets and intentionally create cycles in a list when one thread needed to operate on a node without interference from other threads; this would cause other threads to get caught spinning around in the same spot until that thread was done with its node and undid the cycle. Sure, it technically didn't use locks, but it did not scale well.

Iterate and do operations on a generic collection concurrently

I've created a game emulation program using C# async sockets. I need to remove/add and iterate over a collection (a list that holds clients) concurrently. I am currently using "lock", but it's a huge performance drop. I also do not want to use "local lists/copies" to keep the list up to date. I've heard about "ConcurrentBag", but I am not sure how thread safe it is for iteration (for instance, if a thread removes an element from the list while another thread is iterating over it!?).
What do you suggest?
Edit: here is a situation.
This is when a packet is sent to all the users in a room:
lock (parent.gameClientList)
{
    for (int i = 0; i <= parent.gameClientList.Count() - 1; i++)
    {
        if (parent.gameClientList[i].zoneId == zoneId)
        {
            parent.gameClientList[i].SendXt(packetElements); // if the room matches, SendXt sends a packet
        }
    }
}
When a new client connects
Client connectedClient = new Client(socket, this);
lock (gameClientList)
{
    gameClientList.Add(connectedClient);
}
Same case when a client disconnects.
I am asking for a better alternative (performance-wise) because the locks slow down everything.
It sounds like the problem is that you're doing all the work within your foreach loop, and it's locking out the add/remove methods for too long. The way around this is to quickly make a copy of the collection while it's locked, and then you can close the lock and iterate on the copy.
Thing[] copy;
lock (myLock)
{
    copy = _collection.ToArray();
}
foreach (var thing in copy) { ... }
The drawback is that by the time you get around to operating on some object of that copy, it may have been removed from the original collection, so maybe you don't want to operate on it anymore. That's another requirement you'll have to figure out. If that's a problem, a simple option would be to lock each iteration of the loop; of course this will slow things down, but at least it won't hold the lock for the entire duration of the loop:
foreach (var thing in copy)
{
    lock (myLock)
    {
        // check that it's still in the original collection
        if (_collection.Contains(thing))
            DoWork(thing); // Moving this outside the lock would make the app snappier, but it could cause problems if DoWork does something "dangerous".
    }
}
If this is what you meant by "local copies", then you can disregard this option, but I figured I'd offer it in case you meant something else.
Every time you do something concurrently you are going to have losses due to task management (i.e. locks). I suggest you look at what the bottleneck in your process is. You seem to have a shared-memory model, as opposed to a message-passing model. If you need to modify the entire collection at once, there may not be a good solution. But if you make changes in a particular order, you can leverage that order to prevent delays. Locks are an implementation of pessimistic concurrency; you could switch to an optimistic concurrency model. In one the cost is waiting, in the other the cost is retrying. Again, the actual solution depends on your use case.
One problem with ConcurrentBag is that it is unordered, so you cannot pull items out by index the way you are doing it currently. However, you can iterate it via foreach to get the same effect. The iteration is thread-safe; it will not go berserk if an item is added or removed while the iteration is happening.
There is another problem with ConcurrentBag, though. It actually copies the contents to a new List internally to make the enumerator work correctly. So even if you just wanted to pick off a single item via the enumerator, it would still be an O(n) operation because of the way the enumerator works. You can verify this by disassembling it.
However, based on context clues from your update, I assume that this collection is going to be small. It appears that there is only one entry per "game client", which means it will probably store a small number of items, right? If that is correct, then the performance of the GetEnumerator method will be mostly insignificant.
You should also consider ConcurrentDictionary as well. I noticed that you are trying to match items from the collection based on zoneId. If you store the items in the ConcurrentDictionary keyed by zoneId then you would not need to iterate the collection at all. Of course, this assumes that there is only one entry per zoneId which may not be the case.
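A hedged sketch of that idea, assuming at most one client per zoneId and reusing the Client type, zoneId field and SendXt method from the question:

using System.Collections.Concurrent;

public class Rooms
{
    private readonly ConcurrentDictionary<int, Client> clientsByZone =
        new ConcurrentDictionary<int, Client>();

    public void OnConnect(Client connectedClient)
    {
        clientsByZone.TryAdd(connectedClient.zoneId, connectedClient);
    }

    public void SendToZone(int zoneId, object packetElements)
    {
        Client target;
        // A keyed lookup replaces the whole-list scan from the question.
        if (clientsByZone.TryGetValue(zoneId, out target))
        {
            target.SendXt(packetElements);
        }
    }

    public void OnDisconnect(int zoneId)
    {
        Client removed;
        clientsByZone.TryRemove(zoneId, out removed);
    }
}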
You mentioned that you did not want to use "local lists/copies", but you never said why. I think you should reconsider this for the following reasons.
Iterations could be lock-free.
Adding and removing items appears to be infrequent, based on context clues from your code.
There are a couple of patterns you can use to make the list copying strategy work really well. I talk about them in my answers here and here.
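One such pattern, sketched here on my own assumptions rather than taken from those linked answers, is copy-on-write: writers build a fresh list and publish it atomically, while readers iterate whatever snapshot they grabbed without taking any lock:

using System.Collections.Generic;
using System.Threading;

public class CopyOnWriteList<T>
{
    // Readers always see an immutable snapshot; writers publish a new one atomically.
    private List<T> _snapshot = new List<T>();

    public void Add(T item)
    {
        while (true)
        {
            List<T> current = Volatile.Read(ref _snapshot);
            List<T> next = new List<T>(current) { item };
            // Publish only if no other writer swapped the list in the meantime.
            if (Interlocked.CompareExchange(ref _snapshot, next, current) == current)
                return;
        }
    }

    // Lock-free read: iterate the returned snapshot freely; it never changes.
    public IReadOnlyList<T> GetSnapshot()
    {
        return Volatile.Read(ref _snapshot);
    }
}

The trade-off is that every Add copies the list, which fits the "infrequent writes, frequent iterations" profile described above.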

Multi-threaded access to C# dictionary

I understand that C# dictionaries are not thread-safe when it comes to adding, reading and removing elements; however, can you access the Count property of a C# dictionary in a thread-safe way while another thread is writing, reading, and removing from the dictionary?
Since a property is a method call under the hood, the situation is not as simple as it looks at first glance.
T1: accesses the Count property
T1: get_Count() call (some kind of JMP/GOTO ASM instruction)
T1: reads the variable that represents the number of items == 1
T2: adds a new item, the real count becomes 2
T1: returns 1, but there are really already two items
So if your application logic relies on the Count property value, theoretically you could end up with a race condition.
It should be threadsafe in that it won't blow up, but I'm fairly sure it is not threadsafe in that it might not give you the correct count due to other threads manipulating the dictionary.
[Edit: as usr points out, just because it's currently threadsafe at this level does not mean that it will continue to be so. You have no guarantees]
First: This is dangerous thinking. Be very careful if you put such a thing into production. Probably, you should just use a lock or ConcurrentDictionary.
Are you comfortable with an approximate answer?
But: Reflector shows that Count just reads a field and returns it. This is unlikely to change. So you could reasonably take the risk in this case.
You could create the dictionary as a static readonly object, and lock it when modifying it... that should deal with most problems, I would think.
Depending on what you want to achieve, you can protect the dictionary with a ReaderWriterLockSlim. This way you can have faster reads and only lock the dictionary when performing mutating operations.
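A minimal sketch of that approach (the names are mine, not from any particular library):

using System.Collections.Generic;
using System.Threading;

public class GuardedDictionary<TKey, TValue>
{
    private readonly ReaderWriterLockSlim _rwLock = new ReaderWriterLockSlim();
    private readonly Dictionary<TKey, TValue> _dict = new Dictionary<TKey, TValue>();

    public int Count
    {
        get
        {
            _rwLock.EnterReadLock(); // many readers may hold this concurrently
            try { return _dict.Count; }
            finally { _rwLock.ExitReadLock(); }
        }
    }

    public void Add(TKey key, TValue value)
    {
        _rwLock.EnterWriteLock(); // writers get exclusive access
        try { _dict.Add(key, value); }
        finally { _rwLock.ExitWriteLock(); }
    }
}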
Rather than assuming, I went and looked at it in reflector:
public int Count
{
    get
    {
        // count and freeCount are private instance fields
        return (this.count - this.freeCount);
    }
}
So yes, it is "thread safe" in that accessing it will not cause corruption in the dictionary object. That said, I am not sure that I can think of any use case where getting the count with any accuracy is important if the dictionary is being accessed by another thread.

Thread safety of List<T> with One Writer, No Enumerators

While going through some database code looking for a bug unrelated to this question, I noticed that in some places List<T> was being used inappropriately. Specifically:
There were many threads concurrently accessing the List as readers, but using indexes into the list instead of enumerators.
There was a single writer to the list.
There was zero synchronization, readers and writers were accessing the list at the same time, but because of code structure the last element would never be accessed until the method that executed the Add() returned.
No elements were ever removed from the list.
By the C# documentation, this should not be thread safe.
Yet it has never failed. I am wondering, because of the specific implementation of List (I assume internally it's an array that re-allocates when it runs out of space), is the 1-writer, 0-enumerator, n-reader, add-only scenario accidentally thread safe, or is there some unlikely scenario where this could blow up in the current .NET 4 implementation?
Edit: an important detail I left out, which I realized reading some of the replies: the readers treat the List and its contents as read-only.
This can and will blow up. It just hasn't yet. Stale indices are usually the first thing that goes. It will blow up just when you don't want it to. You are probably lucky at the moment.
As you are using .Net 4.0, I'd suggest changing the list to a suitable collection from System.Collections.Concurrent which is guaranteed to be thread safe. I'd also avoid using array indices and switch to ConcurrentDictionary if you need to look up something:
http://msdn.microsoft.com/en-us/library/dd287108.aspx
Just because it has never failed and your application doesn't crash doesn't mean that this scenario is thread safe. For instance, suppose the writer thread updates a field within the list, let's say a long field, at the same time as a reader thread is reading that field. The value returned may be a bitwise combination of the two values, the old one and the new one! That can happen because the reader thread starts reading the value from memory, but before it finishes reading, the writer thread updates it.
Edit: That is, of course, if we suppose that the reader threads just read the data without updating anything. I am sure they don't change the values in the array itself, but they could change a property or field within a value they read. For instance:
for (int index = 0; index < list.Count; index++)
{
    MyClass myClass = list[index]; // ok, we are just reading the value from the list
    myClass.SomeInteger++; // boom: the same variable can be updated from other threads...
}
This example is not about the thread safety of the list itself, but about the shared objects the list exposes.
The conclusion is that you have to use a synchronization mechanism such as lock before interacting with the list, even if it has only one writer and no items are removed. That will help you prevent tiny bugs and failure scenarios that are avoidable in the first place.
Thread safety only matters when data is modified more than once at a time. The number of readers does not matter. Even when someone is writing while someone reads, the reader gets either the old data or the new; it still works. The fact that elements can only be accessed after Add() returns prevents parts of an element being read separately. If you started using the Insert() method, readers could get wrong data.
It follows then, that if the architecture is 32 bits, writing a field bigger than 32 bits, such as long and double, is not a thread safe operation; see the documentation for System.Double:
Assigning an instance of this type is not thread safe on all hardware platforms because the binary representation of that instance might be too large to assign in a single atomic operation.
If the list is fixed in size, however, this situation matters only if the List is storing value types greater than 32 bits. If the list is only holding reference types, then any thread safety issues stem from the reference types themselves, not from their storage and retrieval from the List. For instance, immutable reference types are less likely to cause thread safety issues than mutable reference types.
Moreover, you can't control the implementation details of List: that class was mainly designed for performance, and it's likely to change in the future with that aspect, rather than thread safety, in mind.
In particular, adding elements to a list or otherwise changing its size is not thread safe even if the list's elements are 32 bits long, since there is more involved in inserting, adding, or removing than just placing the element in the list. If such operations are needed after other threads have access to the list, then locking access to the list or using a concurrent list implementation is a better choice.
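To make the torn-read risk concrete, here is a sketch of my own (not from the thread): compiled and run as a 32-bit process, the unsynchronized reader below can observe a long value that is half old and half new; on 64-bit hardware, aligned 64-bit writes are atomic and the program may spin forever:

using System;
using System.Runtime.CompilerServices;
using System.Threading;

class TornReadDemo
{
    static long shared; // plain field: 64-bit reads/writes are not atomic on 32-bit platforms

    // NoInlining keeps the JIT from hoisting or coalescing the accesses.
    [MethodImpl(MethodImplOptions.NoInlining)]
    static void WriteShared(long v) { shared = v; }

    [MethodImpl(MethodImplOptions.NoInlining)]
    static long ReadShared() { return shared; }

    static void Main()
    {
        new Thread(() =>
        {
            while (true)
            {
                WriteShared(0L);  // both 32-bit halves zero
                WriteShared(-1L); // both 32-bit halves all ones
            }
        }) { IsBackground = true }.Start();

        while (true)
        {
            long seen = ReadShared();
            if (seen != 0L && seen != -1L) // half old, half new: a torn read
            {
                Console.WriteLine("Torn read: 0x{0:X16}", seen);
                return;
            }
        }
    }
}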
First off, to some of the posts and comments, since when was documentation reliable?
Second, this answer is more to the general question than the specifics of the OP.
I agree with MrFox in theory because this all boils down to two questions:
Is the List class implemented as a flat array?
If yes, then:
Can a write instruction be preempted in the middle of a write?
I believe this is not the case -- the full write will happen before anything can read that DWORD or whatever. In other words, it will never happen that I write two of the four bytes of a DWORD and then you read 1/2 of the new value and 1/2 of the old one.
So, if you're indexing an array by providing an offset to some pointer, you can read safely without thread-locking. If the List is doing more than just simple pointer math, then it is not thread safe.
If the List was not using a flat array, I think you would have seen it crash by now.
My own experience is that it is safe to read a single item from a List via index without thread-locking. This is all just IMHO though, so take it for what it's worth.
Worst case, such as if you need to iterate through the list, the best thing to do is:
lock the List
create an array the same size
use CopyTo() to copy the List to the array
unlock the List
then iterate through the array instead of the list.
In C++/CLI (or whatever you call .NET C++):

List<Object^>^ objects = gcnew List<Object^>();

// in some reader thread:
Monitor::Enter(objects);
array<Object^>^ objs = gcnew array<Object^>(objects->Count);
objects->CopyTo(objs);
Monitor::Exit(objects);
// use objs array
Even with the memory allocation, this will be faster than locking the List and iterating through the entire thing before unlocking it.
Just a heads up though: if you want a fast system, thread-locking is your worst enemy. Use ZeroMQ instead. I can speak from experience, message-based synch is the right way to go.

When implementing a thread safe queue or list, is it required to lock before returning Count?

When implementing a thread safe list or queue, do you need to lock before returning the Count, i.e.:
//...
public int Count
{
    get
    {
        lock (_syncObject)
        {
            return _list.Count;
        }
    }
}
//...
Is it necessary to lock because the underlying _list.Count may not be a volatile variable?
Yes, that is necessary, but mostly irrelevant. Without the lock you could be reading stale values or even, depending on the inner workings and the type of _list.Count, introduce errors.
But note that using that Count property is problematic; any calling code cannot really rely on it:
if (myStore.Count > 0) // this Count's getter locks internally
{
    var item = myStore.Dequeue(); // not safe, myStore could be empty
}
So you should aim for a design where checking the Count and acting on it are combined:
ItemType GetNullOrFirst()
{
    lock (_syncObject)
    {
        if (_list.Count > 0)
        {
            ....
        }
    }
}
Additional:
Is it necessary to lock because the underlying _list.Count may not be a volatile variable?
The _list.Count is not a variable but a property. It cannot be marked volatile. Whether it is thread-safe depends on the getter code of the property, but it will usually be safe. But unreliable.
It depends.
Now, theoretically, because the underlying value is not threadsafe, just about anything could go wrong because of something the implementation does. In practice though, it's a read from a variable of 32-bits and so guaranteed to be atomic. Hence it might be stale, but it will be a real stale value rather than garbage caused by reading half a value before a change and the other half after it.
So the question is, does staleness matter? Possibly it doesn't. The more you can put up with staleness the better for performance (because the more you can put up with it the less you need to do to ensure you don't have anything stale). E.g. if you were putting the count out to the UI and the collection was changing rapidly, then just the time it takes for the person to read the number and process it in their own brain is going to be enough to make it obsolete anyway, and hence stale.
If however, you needed it to ensure an operation was reasonable before it was attempted, then that staleness is going to cause problems.
However, that staleness is going to happen anyway, because in between your locked (and guaranteed to be fresh) read and the operation happening, there's the opportunity for the collection to change, so you've got a race condition even when locking.
Hence, if freshness is important, you are going to have to lock at a higher level. With this being the case, the lower-level lock is just a waste. Hence you can probably do without it at that point.
The important thing here is that even with a class that is threadsafe in every regard, the best it can guarantee is that each operation will be fresh (though even that may not be guaranteed; indeed, when more than one core is involved, "fresh" begins to become meaningless, as changes that are near to truly simultaneous can happen) and that each operation won't put the object into an invalid state. It is still possible to write non-threadsafe code with threadsafe objects (indeed, very many non-threadsafe objects are composed of ints, strings and bools, or of objects that are in turn composed of them, and each of those is threadsafe in and of itself).
One thing that can be useful with mutable classes intended for multithreaded use is synchronised "Try" methods. E.g. to use a List as a stack we need the following operations:
See if the list is empty, reporting failure if it is.
Obtain the "top" value.
Remove the top value.
Even if we synchronise each of those individually, the code doing so won't be threadsafe, as something can happen between each step. However, we can provide a Pop() method that does all three in one synchronised method. Analogous approaches are also useful with lock-free classes (where different techniques are used to ensure the method either succeeds or fails completely, and without damage to the object).
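A sketch of such a combined method, reusing the _list and _syncObject fields from the question's snippet and treating the end of the list as the stack's top:

public bool TryPop(out ItemType value)
{
    lock (_syncObject)
    {
        if (_list.Count == 0) // step 1: see if the list is empty
        {
            value = default(ItemType);
            return false;     // report failure instead of throwing
        }
        int last = _list.Count - 1;
        value = _list[last];  // step 2: obtain the "top" value
        _list.RemoveAt(last); // step 3: remove it, all within one critical section
        return true;
    }
}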
No, you don't need to lock, but the caller should; otherwise something like this might happen:
count is n
thread asks for count
Count returns n
another thread dequeues
count is n - 1
thread asking for count sees count is n when count is actually n - 1
