Is this sentence true or false?
It is optimized for usage where writes from multiple sources are
common
ReaderWriterLockSlim lets a thread take either a read lock or a write lock, and hold only one lock at a time. But what about many threads - does it allow two threads to hold a lock at the same time or not? I'm confused...
Reader/writer locks, both the slim and the fat variety, are optimized for situations where there are multiple readers but few writers.
Both lock types allow multiple readers to access the resource simultaneously, but only one writer. If a writer requests access, it is queued up until all current readers have exited; no new readers are allowed to enter during this process, and the writer thread then has exclusive access until it releases its write lock again.
The main difference between slim and normal is that the former (slim) is newer and has better performance characteristics for most common scenarios.
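To make that behaviour concrete, here is a minimal sketch, assuming a simple dictionary-backed cache (the SharedCache type and its members are invented for this example, not taken from the question):

```csharp
using System.Collections.Generic;
using System.Threading;

// Minimal sketch: many threads may read the cache concurrently,
// but only one thread may modify it at a time.
public class SharedCache
{
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
    private readonly Dictionary<string, string> _data = new Dictionary<string, string>();

    public string Read(string key)
    {
        _lock.EnterReadLock();          // many threads can hold the read lock at once
        try
        {
            return _data.TryGetValue(key, out var value) ? value : null;
        }
        finally
        {
            _lock.ExitReadLock();
        }
    }

    public void Write(string key, string value)
    {
        _lock.EnterWriteLock();         // exclusive: waits until all readers have exited
        try
        {
            _data[key] = value;
        }
        finally
        {
            _lock.ExitWriteLock();
        }
    }
}
```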
Related
I am confused about the warning for ReaderWriterLockSlim SupportsRecursion
From MSDN
By default, new instances of ReaderWriterLockSlim are created with the
LockRecursionPolicy.NoRecursion flag and do not allow recursion. This
default policy is recommended for all new development, because
recursion introduces unnecessary complications and makes your code
more prone to deadlocks.
What I don't understand is why this warning doesn't apply to the built-in lock statement, which is recursive?
As explained here, the lock keyword in C# is based on a Monitor object, an exclusive synchronization mechanism. "Exclusive" means that, when the first thread enters the critical section, any subsequent threads are blocked.
ReaderWriterLockSlim, on the other hand, distinguishes between reader locks and writer locks. They are intended to be used in (and provide improved concurrency in) scenarios where there are many readers but only occasional write updates. Reader/Writer locks are non-exclusive.
A lock knows which thread it was locked on, so if that thread re-enters the critical section, it just increments a counter and continues.
A ReaderWriterLockSlim is in a more complicated position. Because it distinguishes between read locks and write locks, and because it is sometimes desirable to take a write lock without creating a race condition, ReaderWriterLockSlim offers an upgradeable read lock that allows you to temporarily promote the lock to write access without worrying about a race condition caused by a rogue write from another thread coming in while you transition to write mode.
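As a rough sketch of that upgrade path, assuming a dictionary-backed lookup (the LazyLookup type and the ComputeValue helper are invented for illustration):

```csharp
using System.Collections.Generic;
using System.Threading;

// Only one thread may hold the upgradeable read lock at a time, but it can
// coexist with ordinary read locks until it actually upgrades to a write lock.
public class LazyLookup
{
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
    private readonly Dictionary<int, string> _items = new Dictionary<int, string>();

    public string GetOrAdd(int key)
    {
        _lock.EnterUpgradeableReadLock();
        try
        {
            if (_items.TryGetValue(key, out var value))
                return value;                 // read path: no write lock needed

            _lock.EnterWriteLock();           // upgrade: no rogue writer can sneak in between
            try
            {
                value = ComputeValue(key);
                _items[key] = value;
                return value;
            }
            finally
            {
                _lock.ExitWriteLock();
            }
        }
        finally
        {
            _lock.ExitUpgradeableReadLock();
        }
    }

    private static string ComputeValue(int key) => "value-" + key;   // stand-in for real work
}
```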
As you can see, ReaderWriterLockSlim offers a more feature-rich, but also more complex model for synchronization than lock does. Its requirement to declare your intention to use recursion is an admission of this additional complexity.
Further Reading
Synchronization Essentials: Locking
Advanced Threading: Reader/Writer Locks
Why Lock Recursion is Generally a Bad Idea Anyway
I have a few threads writing and reading different files.
Is it ok to use a single lock {} (the same variable for all protected regions) for all disk operations? That way I don't have two threads simultaneously reading and writing to disk, to avoid seeks.
I also heard that I could use one thread for reads and another for writes, is this always true? why?
If each thread reads or writes to a different file, I don't see why you need any synchronization.
Usually, there are multiple threads accessing the same file (resource) for reading and writing.
In that scenario, when a thread is writing to the file, all the other threads have to wait.
This is a classic concurrency problem called "Readers-Writers".
You can find more information here:
http://en.wikipedia.org/wiki/Readers-writers_problem
If the threads are not touching each other's data, then a single synchronization object would be enough, but it would lengthen the queue of threads waiting for the resource. One sync object per resource, or per group of resources, would be the better option.
Your requirement seems somewhat confusing and keeps morphing. One comment says 'the threads are writing to the same file' and another 'all write to the same collection of files simultaneously'.
There are some choices:
1) Lock up the reads and writes with one lock. This is the simplest method, but it has the highest probability of contention between the calling threads because the lock is held for the duration of a disk operation.
2) Lock up the reads and writes with one reader/writer lock per file. This is better than (1) in that contention on different files does not happen; there can still be contention between reads and writes to the same file.
3) Queue the reads/writes off to one writer thread (see the sketch after this list). This tends to exercise the disk more because it has to swap around between files as it dequeues and executes write requests, but it minimizes write contention in the calling threads - they only have to lock a queue for the time taken to push a pointer on. Reading becomes a slow operation because the calling threads have to wait on a synchronization object until their read request is completed. Low contention on writes but high latency on all reads.
4) Like (3), but using a thread per file. This can get expensive memory-wise for several files and only really helps over (3) if the output files are spread over several physical disks. Like (3), low contention and slow reads.
5) Queue the writes off as threadpool tasks. I'm not sure how to do this exactly - the file context would have to be passed as a parameter and access to it would probably need locking up - this may not work effectively. Like (3), low contention and slow reads.
6) Redesign your app to avoid this requirement entirely?
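A minimal sketch of option (3), assuming a BlockingCollection-based queue and a single background task doing the actual file I/O (the SingleWriter and WriteRequest names are invented for this example):

```csharp
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;

// All writes are pushed onto a queue; one background task performs the disk I/O,
// so callers only contend on the queue, never on the files themselves.
public sealed class SingleWriter : IDisposable
{
    private sealed class WriteRequest
    {
        public string Path;
        public byte[] Data;
    }

    private readonly BlockingCollection<WriteRequest> _queue = new BlockingCollection<WriteRequest>();
    private readonly Task _worker;

    public SingleWriter()
    {
        _worker = Task.Run(() =>
        {
            foreach (var request in _queue.GetConsumingEnumerable())
            {
                // Only this thread ever touches the files, so no per-file locking is needed.
                using (var stream = new FileStream(request.Path, FileMode.Append, FileAccess.Write))
                {
                    stream.Write(request.Data, 0, request.Data.Length);
                }
            }
        });
    }

    public void QueueWrite(string path, byte[] data) =>
        _queue.Add(new WriteRequest { Path = path, Data = data });

    public void Dispose()
    {
        _queue.CompleteAdding();   // let the worker drain the queue and exit
        _worker.Wait();
        _queue.Dispose();
    }
}
```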
Using only one lock could slow your application. If a thread is writing a file for a long time, maybe other threads should be allowed to read some other files.
Could you be more precise on how which threads access which files?
I'm not entirely sure how best accomplish this multi-threading scenario so any input would be appreciated.
I have one block, that reads data, that several threads can access at once. I have another block that writes data, only one thread can write at any time. Also it can't start writing as long as any thread is reading the data. Is ReaderWriterLockSlim the way to go here, will it wait for the read threads to exit before blocking the thread for writing?
Yes, ReaderWriterLockSlim is perfect for frequent reader/infrequent writer scenarios.
The behaviour is as you guessed: a single writer only, writers block until all current readers are done, and readers cannot get access while a writer is in progress.
Be careful that you hold the lock (whether for read or write) only for as long as it takes to protect the shared data, and no longer.
Yes, it sounds like ReaderWriterLockSlim is what you want.
A write lock will not be acquired as long as read locks are in place. I suggest you read the documentation for a complete description of the behavior (locking queues, etc):
http://msdn.microsoft.com/en-us/library/system.threading.readerwriterlockslim.aspx
Please explain what are the main differences and when should I use what.
The focus on web multi-threaded applications.
lock allows only one thread to execute the code at the same time. ReaderWriterLock may allow multiple threads to read at the same time or have exclusive access for writing, so it might be more efficient. If you are using .NET 3.5 ReaderWriterLockSlim is even faster. So if your shared resource is being read more often than being written, use ReaderWriterLockSlim. A good example for using it is a file that you read very often (on each request) and you update the contents of the file rarely. So when you read from the file you enter a read lock so that many requests can open it for reading and when you decide to write you enter a write lock. Using a lock on the file will basically mean that you can serve one request at a time.
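A minimal sketch of that file scenario, assuming the file path and type name shown here (they are not from the question):

```csharp
using System.IO;
using System.Threading;

// Read on every request under a read lock (many concurrent readers),
// update rarely under a write lock (exclusive).
public static class CachedFile
{
    private static readonly ReaderWriterLockSlim Lock = new ReaderWriterLockSlim();
    private const string FilePath = "settings.txt";   // assumed path

    public static string ReadContents()
    {
        Lock.EnterReadLock();
        try { return File.ReadAllText(FilePath); }
        finally { Lock.ExitReadLock(); }
    }

    public static void UpdateContents(string newContents)
    {
        Lock.EnterWriteLock();                         // waits for current readers to finish
        try { File.WriteAllText(FilePath, newContents); }
        finally { Lock.ExitWriteLock(); }
    }
}
```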
Consider using ReaderWriterLock if you have lots of threads that only need to read the data, these threads are getting blocked waiting for the lock, and you don't often need to change the data.
However ReaderWriterLock may block a thread that is waiting to write for a long time.
Therefore only use ReaderWriterLock after you have confirmed you get high contention for the lock in “real life” and you have confirmed you can’t redesign your locking design to reduce how long the lock is held for.
Also consider whether you could instead store the shared data in a database and let it take care of all the locking; this is a lot less likely to give you a hard time tracking down bugs, if a database is fast enough for your application.
In some cases you may also be able to use the ASP.NET cache to handle shared data, and just remove the item from the cache when the data changes. The next read can put a fresh copy in the cache.
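A rough sketch of that cache pattern, using MemoryCache from System.Runtime.Caching here rather than the ASP.NET cache specifically (the cache key and file path are assumptions):

```csharp
using System.IO;
using System.Runtime.Caching;

// Reads go through the cache; an update removes the cached item so that
// the next read loads a fresh copy.
public static class SharedData
{
    private const string Key = "shared-data";
    private const string FilePath = "data.txt";    // assumed backing store

    public static string Get()
    {
        var cache = MemoryCache.Default;
        var value = cache.Get(Key) as string;
        if (value == null)
        {
            value = File.ReadAllText(FilePath);               // cache miss: load a fresh copy
            cache.Set(Key, value, new CacheItemPolicy());
        }
        return value;
    }

    public static void Update(string newValue)
    {
        File.WriteAllText(FilePath, newValue);
        MemoryCache.Default.Remove(Key);                      // invalidate; next read repopulates
    }
}
```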
Remember
"The best kind of locking is the
locking you don't need (i.e. don't
share data between threads)."
Monitor and the underlying "syncblock" that can be associated with any reference object—the underlying mechanism under C#'s lock—support exclusive execution. Only one thread can ever have the lock. This is simple and efficient.
ReaderWriterLock (or, in V3.5, the better ReaderWriterLockSlim) provide a more complex model. Avoid unless you know it will be more efficient (i.e. have performance measurements to support yourself).
The best kind of locking is the locking you don't need (i.e. don't share data between threads).
ReaderWriterLock allows you to have multiple threads hold the ReadLock at the same time... so that your shared data can be consumed by many threads at once. As soon as a WriteLock is requested no more ReadLocks are granted and the code waiting for the WriteLock is blocked until all the threads with ReadLocks have released them.
The WriteLock can only ever be held by one thread, allowing your 'data updates' to appear atomic from the point of view of the consuming parts of your code.
The Lock on the other hand only allows one thread to enter at a time, with no allowance for threads that are simply trying to consume the shared data.
ReaderWriterLockSlim is a newer, more performant version of ReaderWriterLock, with better support for recursion and the ability to have a thread move smoothly from what is essentially a read lock to the write lock (UpgradeableReadLock).
ReaderWriterLock/Slim is specifically designed to help you efficiently lock in a multiple consumer/ single producer scenario. Doing so with the lock statement is possible, but not efficient. RWL/S gets the upper hand by being able to aggressively spinlock to acquire the lock. That also helps you avoid lock convoys, a problem with the lock statement where a thread relinquishes its thread quantum when it cannot acquire the lock, making it fall behind because it won't be rescheduled for a while.
It is true that ReaderWriterLockSlim is FASTER than ReaderWriterLock. But the memory consumption of ReaderWriterLockSlim is outright outrageous. Try attaching a memory profiler and see for yourself. I would pick ReaderWriterLock any day over ReaderWriterLockSlim.
I would suggest looking through http://www.albahari.com/threading/part4.aspx#_Reader_Writer_Locks. It talks about ReaderWriterLockSlim (which you want to use instead of ReaderWriterLock).
Under what circumstances should each of the following synchronization objects be used?
ReaderWriter lock
Semaphore
Mutex
Since wait() will return once for each time post() is called, semaphores are a basic producer-consumer model - the simplest form of inter-thread message except maybe signals. They are used so one thread can tell another thread that something has happened that it's interested in (and how many times), and for managing access to resources which can have at most a fixed finite number of users. They offer ordering guarantees needed for multi-threaded code.
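As a small illustration of that producer-consumer use, here is a sketch using SemaphoreSlim; the queue and the item count are made up for the example:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// Each Release ("post") lets exactly one Wait through, so the semaphore's
// count tracks how many items are pending for the consumer.
class SemaphoreDemo
{
    static readonly ConcurrentQueue<int> Items = new ConcurrentQueue<int>();
    static readonly SemaphoreSlim Available = new SemaphoreSlim(0);

    static void Main()
    {
        var consumer = Task.Run(() =>
        {
            for (int i = 0; i < 10; i++)
            {
                Available.Wait();                     // blocks until a producer has posted
                Items.TryDequeue(out var item);
                Console.WriteLine("consumed " + item);
            }
        });

        for (int i = 0; i < 10; i++)
        {
            Items.Enqueue(i);
            Available.Release();                      // one Release per item produced
        }

        consumer.Wait();
    }
}
```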
Mutexes do what they say on the tin - "mutual exclusion". They ensure that the right to access some resource is "held" by only one thread at a time. This gives the guarantees of atomicity and ordering needed for multi-threaded code. On most OSes, they also offer reasonably sophisticated waiter behaviour, in particular to avoid priority inversion.
Note that a semaphore can easily be used to implement mutual exclusion, but that because a semaphore does not have an "owner thread", you don't get priority inversion avoidance with semaphores. So they are not suitable for all uses which require a "lock".
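A minimal sketch of mutual exclusion with a Mutex in C# (the mutex name is an assumption; for purely in-process exclusion you would normally just use lock):

```csharp
using System.Threading;

class MutexDemo
{
    static void Main()
    {
        // A named mutex is visible to other processes as well, which is the
        // main reason to reach for Mutex rather than lock/Monitor.
        using (var mutex = new Mutex(false, "Global\\ExampleAppMutex"))
        {
            mutex.WaitOne();            // acquire: only one holder at a time
            try
            {
                // ... work on the shared resource ...
            }
            finally
            {
                mutex.ReleaseMutex();   // always release, even if an exception is thrown
            }
        }
    }
}
```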
ReaderWriter locks are an optimisation over mutexes, in cases where you will have a lot of contention, most accesses are read-only, and simultaneous reads are permissible for the data structure being protected. In such cases, exclusion is required only when a writer is involved - readers don't need to be excluded from each other. To promote a reader to writer all other readers must finish (or abort and start waiting to retry if they also wish to become writers) before the writer lock is acquired. ReaderWriter locks are likely to be slower in cases where they aren't faster, due to the additional book-keeping they do over mutexes.
Condition variables are for allowing threads to wait on certain facts or combinations of facts being true, where the condition in question is more complex than just "it has been poked" as for semaphores, or "nobody else is using it" for mutexes and the writer part of reader-writer locks, or "no writers are using it" for the reader part of reader-writer locks. They are also used where the triggering condition is different for different waiting threads, but depends on some or all of the same state (memory locations or whatever).
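In C#, Monitor.Wait and Monitor.Pulse play the condition-variable role. A sketch, assuming a bounded queue as the shared state (the type name and capacity are invented):

```csharp
using System.Collections.Generic;
using System.Threading;

// Monitor.Wait releases the lock and blocks until another thread pulses it,
// so each side can wait on a condition that is more complex than "it was poked".
public class BoundedQueue<T>
{
    private readonly Queue<T> _queue = new Queue<T>();
    private readonly object _sync = new object();
    private const int Capacity = 16;

    public void Add(T item)
    {
        lock (_sync)
        {
            while (_queue.Count >= Capacity)   // condition: "there is room"
                Monitor.Wait(_sync);
            _queue.Enqueue(item);
            Monitor.PulseAll(_sync);           // wake threads waiting for an item
        }
    }

    public T Take()
    {
        lock (_sync)
        {
            while (_queue.Count == 0)          // condition: "there is an item"
                Monitor.Wait(_sync);
            var item = _queue.Dequeue();
            Monitor.PulseAll(_sync);           // wake threads waiting for room
            return item;
        }
    }
}
```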
Spin locks are for when you will be waiting a very short period of time (like a few cycles) on one processor or core, while another core (or piece of hardware such as an I/O bus) simultaneously does some work that you care about. In some cases they give a performance enhancement over other primitives such as semaphores or interrupts, but must be used with extreme care (since lock-free algorithms are difficult in modern memory models) and only when proven necessary (since bright ideas to avoid system primitives are often premature optimisation).
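For completeness, a tiny sketch of .NET's SpinLock; the counter is invented, and the pattern is only sensible because the protected region is a few instructions long:

```csharp
using System.Threading;

class SpinLockDemo
{
    // SpinLock is a struct: keep a single field and never copy it.
    private static SpinLock _spin = new SpinLock();
    private static long _counter;

    static void Increment()
    {
        bool taken = false;
        try
        {
            _spin.Enter(ref taken);   // spins instead of blocking the thread
            _counter++;
        }
        finally
        {
            if (taken) _spin.Exit();
        }
    }
}
```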
Btw, these answers aren't C# specific (hence for example the comment about "most OSes"). Richard makes the excellent point that in C# you should be using plain old locks where appropriate. I believe Monitors are a mutex/condition variable pair rolled into one object.
I would say each of them can be "the best" - depends on the use case ;-)
Simple answer: almost never.
The best type of locking is to not need a lock (no shared mutable state).
If you do need a lock, try to use a Monitor (via a lock statement), unless you have specific needs for something different (in which case see Onebyone's answer).
Additionally, prefer ReaderWriterLockSlim to ReaderWriterLock (except in the extremely rare case of requiring the latter's fairness).