C# - Locking a resource when obtained from dictionary

I have a Dictionary that tracks objects (ClientObject). Both the dictionary and the ClientObjects are accessed by multiple threads. When I modify or read any object in this dictionary, I obtain a read or write lock on the dictionary using ReaderWriterLockSlim (rwl_clients), then obtain an exclusive lock on the actual object.
I just wanted to know if I am using these .NET threading facilities correctly.
Example:
rwl_clients.EnterReadLock();
ClientObject clobj;
if (!m_clients.TryGetValue(key, out clobj))
    return;
rwl_clients.ExitReadLock();
SomeMethod(clobj);
SomeMethod(ClientObject clobj) would do something like:
lock (clobj) {
    // Read / write operations on clobj
}
Does getting and locking a value (ClientObject) from the dictionary in one thread mean that other threads will respect that lock? In other words, does .net see a value in the dictionary as a single resource (and not a copy) and will therefore respect a lock on that resource in all threads?
One more question: when removing a resource from the dictionary, should I lock it before performing Remove()?
Example:
rwl_clients.EnterWriteLock();
ClientObject clobj;
if (m_clients.TryGetValue(key, out clobj)) {
    lock (clobj) {
        m_clients.Remove(key);
    }
}
rwl_clients.ExitWriteLock();
I have learned so much from this site and appreciate any responses!
Thanks.

Does getting and locking a value (ClientObject) from the dictionary in one thread mean that other threads will respect that lock? In other words, does .net see a value in the dictionary as a single resource (and not a copy) and will therefore respect a lock on that resource in all threads?
It depends on the type - if a reference type then yes, if a value type no. This is also why you should never, ever lock on a value type since the value type will be boxed and any subsequent attempts to lock on that value will actually acquire a lock on a different object.
One more question, when removing a resource from the dictionary - should I lock it before performing Remove()
Yes, you should lock before any operation that mutates the state of the object.
As a side note - are you sure that this setup is the best possible solution to your problem? Mutable objects shared across threads tend to create more problems than they solve.

If you are adding or removing items from the dictionary, lock the dictionary.
When you put an object in the dictionary, you are putting a REFERENCE to that object in the dictionary. To prevent that object from being changed by a second thread while the first thread is in the process of changing it, lock the object, not the dictionary.
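Putting both points together, a safer shape for the lookup in the question might look like the sketch below (it reuses the field names from the question and is only an illustration, not the one true way). The try/finally ensures the read lock is released even on the early return, and the dictionary lock is never held while working on the object:

ClientObject clobj = null;
rwl_clients.EnterReadLock();
try
{
    if (!m_clients.TryGetValue(key, out clobj))
        return; // the finally block still releases the read lock
}
finally
{
    rwl_clients.ExitReadLock();
}

// Other threads locking this same reference will wait here.
lock (clobj)
{
    // Read / write operations on clobj
}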

Related

Is list copy thread safe?

Is it safe to use the following pattern in a multithreaded scenario?:
var collection = new List<T>(sharedCollection);
Where sharedCollection can be modified at the same time by another thread (i.e. have elements added or removed from it)?
The scenario I'm currently dealing with is copying the items from a BindingList, but the question should be relevant to any standard collection type.
If it isn't thread safe, should I put a lock on the sharedCollection, or are there better solutions?
You seem to have answered your own question(s). No, copying a changing list to another list is not thread-safe, and yes, you could lock on sharedCollection. Note that it's not enough to lock sharedCollection while copying it; you need to lock it anytime you read or change its contents as well.
Edit: just a note about when it's bad to lock on the object you're modifying--if the object reference itself can be changed (like `sharedCollection = new List<T>()`) or if it can be null, then make a separate object to lock on as a member of the class where the reading/writing is happening.
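A minimal sketch of that advice, assuming a class that owns the shared list (the _syncLock and CopySnapshot names are just illustrative):

private readonly object _syncLock = new object(); // dedicated lock object, never reassigned and never null
private readonly List<string> _sharedCollection = new List<string>();

public List<string> CopySnapshot()
{
    lock (_syncLock) // the same lock must guard every read and write of _sharedCollection
    {
        return new List<string>(_sharedCollection);
    }
}

public void AddItem(string item)
{
    lock (_syncLock)
    {
        _sharedCollection.Add(item);
    }
}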
You can lock the SyncRoot object of sharedCollection.
Explained here:
Lock vs. ToArray for thread safe foreach access of List collection

Multi threading on dictionary with fixed keys

I have a dictionary with a fixed collection of keys, which I create at the beginning of the program. Later, I have some threads updating the dictionary with values.
No pairs are added or removed once the threads have started.
Each thread has its own key, meaning only one thread will access a certain key.
The thread might update the value.
The question is, should I lock the dictionary?
UPDATE:
Thanks all for the answers,
I tried to simplify the situation when I asked this question, just to understand the behaviour of the dictionary.
To make myself clear, here is the full version:
I have a dictionary with ~3000 entries (fixed keys), and I have more than one thread accessing the keys (a shared resource), but I know for a fact that only one thread is accessing a key entry at a time.
So, should I lock the dictionary? And, now that you have the full version, is a dictionary the right choice at all?
Thanks!
FROM MSDN
A Dictionary can support multiple readers concurrently, as long as the collection is not modified.
To allow the collection to be accessed by multiple threads for reading and writing, you must implement your own synchronization.
For a thread-safe alternative, see ConcurrentDictionary<TKey, TValue>.
Let's deal with your question one interpretation at a time.
First interpretation: Given how Dictionary<TKey, TValue> is implemented, with the context I've given, do I need to lock the dictionary?
No, you don't.
Second interpretation: Given how Dictionary<TKey, TValue> is documented, with the context I've given, do I need to lock the dictionary?
Yes, you definitely should.
There is no guarantee that access which happens to be OK today will be OK tomorrow in a multithreaded world, since the type is documented as not thread-safe. That documentation allows the programmers of the type to make certain assumptions about its state and integrity that they would otherwise have to build in guarantees for.
A hotfix or update to .NET, or a whole new version, might change the implementation and break your code, and that would be your fault for relying on undocumented behavior.
Third interpretation: Given the context I've given, is a dictionary the right choice?
No it isn't. Either switch to a threadsafe type, or simply don't use a dictionary at all. Why not just use a variable per thread instead?
Conclusion: If you intend to use the dictionary, lock the dictionary. If it's OK to switch to something else, do it.
Use a ConcurrentDictionary; don't reinvent the wheel.
Better still, refactor your code to avoid this unnecessary contention.
If there is no communication between the threads, you could just do something like this.
Assuming a function that changes a value:
private static KeyValuePair<TKey, TValue> ValueChanger<TKey, TValue>(
    KeyValuePair<TKey, TValue> initial)
{
    // I don't know what you do, so I'll just return the value.
    return initial;
}
Let's say you have some starting data:
var start = Enumerable.Range(1, 3000)
    .Select(i => new KeyValuePair<int, object>(i, new object()));
You could process them all at once like this:
var results = start.AsParallel().Select(ValueChanger);
When results is evaluated, all 3000 ValueChanger calls will run concurrently, yielding an IEnumerable<KeyValuePair<int, object>>.
There will be no interaction between the threads, thus no possible concurrency problems.
If you want to turn the results into a Dictionary, you could:
var resultsDictionary = results.ToDictionary(p => p.Key, p => p.Value);
This may or may not be useful in your situation, but without more detail it's hard to say.
If each thread accesses only one "value" and you don't care about the others, I'd say you don't need a Dictionary at all. You can use ThreadLocal or ThreadStatic variables.
If you do need a Dictionary, you definitely need a lock.
If you're on .NET 4.0 or above, I strongly suggest you use ConcurrentDictionary; you don't need to synchronize access when using ConcurrentDictionary because it is already thread-safe.
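For example, a small ThreadLocal sketch, assuming each thread really only needs its own value (the names here are made up for illustration):

// ThreadLocal<T> lives in System.Threading (.NET 4.0+).
// Each thread gets its own counter; there is no shared dictionary and no locking.
static readonly ThreadLocal<int> _perThreadValue = new ThreadLocal<int>(() => 0);

static void Work()
{
    _perThreadValue.Value++;                  // only the current thread sees this value
    Console.WriteLine(_perThreadValue.Value);
}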
The Dictionary is not thread safe, but in your code you do not have to do anything about that; you said only one thread updates each value, so you do not have a multithreading problem!
I do not have the code, so I'm not sure 100%.
Also check this: Making dictionary access thread-safe?
If you're not adding keys, but simply modifying values, why not completely remove the need to write directly to the Dictionary by storing complex objects as the values and modifying a value within the complex type? That way, you respect the thread-safety constraints of the dictionary.
So:
class ValueWrapper<T>
{
    public T Value { get; set; }
}
//...
var myDic = new Dictionary<KeyType, ValueWrapper<ValueType>>();
//...
myDic[someKey].Value = newValue;
You're now not writing directly to the dictionary but you can modify values.
Don't try to do the same with keys; they necessarily should be immutable.
Given the constraint "I know for a fact that only one thread is accessing a key entry at a time", I don't think you have any problem.
Possible modifications of a Dictionary are: add, update and remove.
If the Dictionary is modified or allowed to be modified, you must use a synchronization mechanism of choice to eliminate the potential race condition, in which one thread reads the old dirty value while a second thread is currently replacing the value/updating the key.
To save you some work, use the ConcurrentDictionary in this scenario.
If the Dictionary is never modified after creation, there won't be any race conditions. A synchronization is therefore not required.
This is a special scenario in which you can replace the table with a read-only table. To add the important robustness, like guarding against potential bugs that accidentally manipulate the table, you should make the Dictionary immutable (or read-only). To make such mistakes visible to the developer, such an immutable implementation must throw an exception on any manipulation attempt.
To save you some work, you can use the ReadOnlyDictionary in this scenario. Note that the underlying Dictionary of the ReadOnlyDictionary is still mutable and that its changes are propagated to the ReadOnlyDictionary facade. The ReadOnlyDictionary only helps to ensure that the table is not accidentally modified by its consumers.
This means: Dictionary is never an option in a multithreaded context.
Rather use the ConcurrentDictionary or a synchronization mechanism in general (or use the ReadOnlyDictionary if you can guarantee that the original source collection never changes).
Since you allow and expect manipulations of the table ("[...] the thread might update the value"), you must use a synchronization mechanism of choice or the ConcurrentDictionary.
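For the scenario in the question, a ConcurrentDictionary sketch could look roughly like this (int keys and string values are assumed purely for illustration):

// ConcurrentDictionary<TKey, TValue> lives in System.Collections.Concurrent,
// Parallel in System.Threading.Tasks (.NET 4.0+).
var table = new ConcurrentDictionary<int, string>();

// Pre-populate the fixed set of keys once, before the worker threads start.
for (int i = 0; i < 3000; i++)
    table[i] = string.Empty;

// Each worker only touches "its" keys; the indexer write is thread-safe,
// so no explicit lock is needed.
Parallel.For(0, 3000, i =>
{
    table[i] = "value " + i;
});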

Lock for ConcurrentDictionary when AddOrUpdate-ing?

I use a ConcurrentDictionary<string, HashSet<string>> to access some data across many threads.
I read in this article (scroll down) that the method AddOrUpdate is not executed in the lock, so it could endanger thread-safety.
My code is as follows:
// keys and bar are not the concern here
ConcurrentDictionary<string, HashSet<string>> foo = new ...;
foreach (var key in keys) {
    foo.AddOrUpdate(key, new HashSet<string> { bar }, (key, val) => {
        val.Add(bar);
        return val;
    });
}
Should I enclose the AddOrUpdate call in a lock statement in order to be sure everything is thread-safe?
Locking during AddOrUpdate on its own wouldn't help - you'd still have to lock every time you read from the set.
If you're going to treat this collection as thread-safe, you really need the values to be thread-safe too. You need a ConcurrentSet, ideally. Now that doesn't exist within the framework (unless I've missed something) but you could probably create your own ConcurrentSet<T> which used a ConcurrentDictionary<T, int> (or whatever TValue you like) as its underlying data structure. Basically you'd ignore the value within the dictionary, and just treat the presence of the key as the important part.
You don't need to implement everything within ISet<T> - just the bits you actually need.
You'd then create a ConcurrentDictionary<string, ConcurrentSet<string>> in your application code, and you're away - no need for locking.
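A sketch of what such a ConcurrentSet<T> might look like, implementing only a few members (this is an illustration of the idea above, not a complete ISet<T>):

using System.Collections.Concurrent;
using System.Collections.Generic;

public class ConcurrentSet<T>
{
    // The byte values are ignored; only the presence of a key matters.
    private readonly ConcurrentDictionary<T, byte> _items =
        new ConcurrentDictionary<T, byte>();

    public bool Add(T item)
    {
        return _items.TryAdd(item, 0); // false if the item was already present
    }

    public bool Contains(T item)
    {
        return _items.ContainsKey(item);
    }

    public bool Remove(T item)
    {
        byte ignored;
        return _items.TryRemove(item, out ignored);
    }

    public IEnumerable<T> Items
    {
        get { return _items.Keys; }
    }
}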
You'll need to fix this code; it creates a lot of garbage. You create a new HashSet even if none is required. Use the other overload, the one that accepts the valueFactory delegate, so the HashSet is only created when the key isn't yet present in the dictionary.
The valueFactory might be called multiple times if multiple threads concurrently try to add the same key and it is not yet present. Very low odds, but not zero. Only one of these hash sets will be used. That's not a problem: creating the HashSet has no side effects that could cause threading trouble, and the extra copies just get garbage collected.
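Using that overload, the loop from the question might look like this (same names as the question; the lambda parameter is renamed to k so it doesn't clash with the loop variable):

foreach (var key in keys)
{
    foo.AddOrUpdate(
        key,
        k => new HashSet<string> { bar },           // addValueFactory: runs only when the key is missing
        (k, val) => { val.Add(bar); return val; }); // updateValueFactory
}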
The article states that the add delegate is not executed in the dictionary's lock, and that the element you get might not be the element created in that thread by the add delegate. That's not a thread safety issue; the dictionary's state will be consistent and all callers will get the same instance, even if a different instance was created for each of them (and all but one get dropped).
It seems the better answer would be to use Lazy, per this article on the methods that pass in a delegate.
There is also another good article here on lazy-loading the add delegate.
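One common shape of that Lazy pattern, sketched with the question's types (note that, as pointed out above, the HashSet itself still needs its own locking):

var foo = new ConcurrentDictionary<string, Lazy<HashSet<string>>>();

foreach (var key in keys)
{
    // GetOrAdd may invoke the factory on more than one thread, but only one Lazy
    // wins, and its HashSet is only created when .Value is first read.
    var set = foo.GetOrAdd(key,
        k => new Lazy<HashSet<string>>(() => new HashSet<string>())).Value;

    lock (set) // HashSet<string> is not thread-safe on its own
    {
        set.Add(bar);
    }
}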

Multithreaded Collections lock types

Is there a difference in the below code segments in the way we lock?
public Hashtable mySet = new Hashtable(); // mySet is visible to other threads

lock (mySet)
{
    mySet.Add("Hello", "World");
}
and
public Hashtable mySet = new Hashtable();

lock (mySet.SyncRoot)
{
    mySet.Add("Hello", "World");
}
lock doesn't actually lock the object in question, so it makes no difference which object is used. Instead it uses the object to establish a protocol and as long as all threads use the same object the protocol guarantees that only one thread will execute code guarded by that lock.
You can think of the object as the microphone on a talk show. Whoever holds the microphone is the only one allowed to talk (I know that is not always how it turns out on some of the shows, but that's the idea anyway).
As the object passed to the lock will only be used as a "flag holder", this will not make any difference.
Please see this.
According to the MSDN documentation here, only a lock on the SyncRoot of a collection guarantees thread safety:
Enumerating through a collection is intrinsically not a thread-safe procedure. Even when a collection is synchronized, other threads can still modify the collection, which causes the enumerator to throw an exception. To guarantee thread safety during enumeration, you can either lock the collection during the entire enumeration or catch the exceptions resulting from changes made by other threads.
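In practice that means holding the same lock for the entire enumeration, for example (a sketch reusing the Hashtable from the question):

lock (mySet.SyncRoot)
{
    foreach (DictionaryEntry entry in mySet)
    {
        // Threads that add or remove entries must also lock mySet.SyncRoot,
        // otherwise the enumerator can still throw.
        Console.WriteLine(entry.Key);
    }
}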

Why lock when reading from a dictionary

I am confused by a code listing in a book I am reading, C# 3 in a Nutshell, on threading.
In the topic on Thread Safety in Application Servers, the code below is given as an example of a UserCache:
static class UserCache
{
    static Dictionary<int, User> _users = new Dictionary<int, User>();

    internal static User GetUser(int id)
    {
        User u = null;

        lock (_users)                  // Why lock this???
            if (_users.TryGetValue(id, out u))
                return u;

        u = RetrieveUser(id);          // Method to retrieve from database
        lock (_users) _users[id] = u;  // Why lock this???
        return u;
    }
}
The authors explain why the RetrieveUser method is not in a lock: this is to avoid locking the cache for a longer period.
I am confused as to why lock the TryGetValue and the update of the dictionary, since even with the above the dictionary is being updated twice if two threads call simultaneously with the same unretrieved id.
What is being achieved by locking the dictionary read?
Many thanks in advance for all your comments and insights.
The Dictionary<TKey, TValue> class is not threadsafe.
If one thread writes one key to the dictionary while a different thread reads the dictionary, it may get messed up. (For example, if the write operation triggers an array resize, or if the two keys are a hash collision)
Therefore, the code uses a lock to prevent concurrent writes.
There is a benign race condition when writing to the dictionary; it is possible, as you stated, for two threads to determine there is not a matching entry in the cache. In this case, both of them will read from the DB and then attempt to insert. Only the object inserted by the last thread is kept; the other object will be garbage collected when the first thread is done with it.
The read to the dictionary needs to be locked because another thread may be writing at the same time, and the read needs to search over a consistent structure.
Note that the ConcurrentDictionary introduced in .NET 4.0 pretty much replaces this kind of idiom.
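For comparison, a rough sketch of the same cache on top of ConcurrentDictionary (keeping the RetrieveUser method from the book's example; as with the original, RetrieveUser can still run more than once for the same id under contention, but only one result is kept):

static class UserCache
{
    // ConcurrentDictionary<TKey, TValue> lives in System.Collections.Concurrent (.NET 4.0+).
    static readonly ConcurrentDictionary<int, User> _users =
        new ConcurrentDictionary<int, User>();

    internal static User GetUser(int id)
    {
        // GetOrAdd does the locking internally; the factory runs outside the
        // dictionary's internal locks, so it may be invoked by more than one
        // thread, but every caller gets back the single stored instance.
        return _users.GetOrAdd(id, i => RetrieveUser(i));
    }
}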
That's a common practice for accessing any non-thread-safe structures like lists, dictionaries, common shared values, etc.
And answering the main question: by locking a read we guarantee that the dictionary will not be changed by another thread while we are reading its value. This is not implemented inside the dictionary itself, and that is why it's called non-thread-safe :)
If two threads call in simultaneously and the id exists, then they will both return the correct User information. The first lock is to prevent errors like SLaks said: if someone is writing to the dictionary while you are trying to read it, you'll have issues. In this scenario, the second lock will never be reached.
If two threads call in simultaneously and the id does not exist, one thread will lock and enter TryGetValue; this will return false and set u to a default value. This first lock is, again, to prevent the errors described by SLaks. At this point, that first thread will release the lock and the second thread will enter and do the same. Both will then set u to information from RetrieveUser(id); this should be the same information. One thread will then lock the dictionary and assign _users[id] to the value of u. This second lock is so that two threads are not trying to write values to the same memory locations simultaneously and corrupting that memory. I don't know what the second thread will do when it enters the assignment; it will either return early, ignoring the update, or overwrite the existing data from the first thread. Regardless, the Dictionary will contain the same information, because both threads should have received the same data in u from RetrieveUser.
For performance, the author compared two scenarios: the one above, which will be extremely rare and will block while two threads try to write the same data, and a second, far more likely one, where two threads call in simultaneously, one requesting data that still needs to be retrieved and one requesting data that already exists. For example, threadA and threadB call in simultaneously and threadA locks for an id that doesn't exist. There is no reason to make threadB wait for a lookup while threadA is working on RetrieveUser. This situation is probably far more likely than the duplicate ids described above, so for performance the author chose not to lock the whole block.
