Is it equivalent to use a ConcurrentBag (to handle a one-writer / multiple-readers scenario) instead of guarding a List<> with a ReaderWriterLockSlim?
UPDATE 1:
The scenario is that multiple threads can reach a static list; some need to read and others need to write. What I want is:
1- Allow only one thread to add/edit/delete items in the list, and only while no other thread is adding/editing/deleting.
2- Allow many threads to read from it at the same time, as long as no thread is adding/editing/deleting.
In your scenario it sounds like you should be using a ReaderWriterLockSlim on a list.
A ConcurrentBag does not support removing a specific item at all (TryTake removes an arbitrary one), and editing items in place is not safe.
Locking a list with a ReaderWriterLockSlim will allow safe deletion and will allow safe editing provided the editing is done within the write lock scope.
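A minimal sketch of that approach, assuming a list of ints; the class and member names here are illustrative, not from the question:

```csharp
using System.Collections.Generic;
using System.Threading;

class GuardedList
{
    private readonly List<int> _items = new List<int>();
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

    public void Add(int value)
    {
        _lock.EnterWriteLock();              // exclusive: blocks readers and other writers
        try { _items.Add(value); }
        finally { _lock.ExitWriteLock(); }
    }

    public bool Contains(int value)
    {
        _lock.EnterReadLock();               // shared: many readers may hold this at once
        try { return _items.Contains(value); }
        finally { _lock.ExitReadLock(); }
    }
}
```

Deletion and in-place edits would follow the same shape as Add, inside the write lock.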
Even though both constructs are related to synchronization and threading they are definitely not interchangeable.
ConcurrentBag is a collection to which you can add items, and from which you can take, peek, and (most importantly) enumerate, all in a thread-safe way.
ReaderWriterLockSlim is a synchronization object that lets you take read locks or write locks around whatever you want.
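A hedged sketch of the ConcurrentBag operations mentioned above (values are arbitrary examples):

```csharp
using System;
using System.Collections.Concurrent;

var bag = new ConcurrentBag<int>();
bag.Add(1);
bag.Add(2);

if (bag.TryPeek(out int top))       // look at some item without removing it
    Console.WriteLine(top);

if (bag.TryTake(out int taken))     // removes *some* item - you cannot choose which
    Console.WriteLine(taken);

foreach (var item in bag)           // enumeration works on a snapshot; safe under concurrency
    Console.WriteLine(item);
```

Note that TryTake illustrates the point above: you can take an arbitrary item out, but you cannot delete a specific one.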
I am working on implementing threading to my C# program. Each thread requires access to the same array, but does not need to write to it, only read data. Should this array be deep copied for each thread?
The reason I think this might be important (from my very limited knowledge of threading) is so that threads on different CPU cores can store copies of the array in their own cache instead of constantly requesting data from a single core's cache where the array is stored, but perhaps the compiler or something else will optimise away this inefficiency?
Any insight would be much appreciated.
Since you haven't specified the hardware architecture you are running on, I'm going to assume it is either an Intel or AMD x64 processor, in which case I recommend trusting the processor to handle this situation. Each core keeps its own cached copy of the lines it reads, and because nothing ever writes to the array, those cached lines are never invalidated. By creating multiple copies that the compiler cannot know are duplicates, you would force the processor to use more memory and spread the available cache space over more data, lessening its effectiveness.
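A minimal sketch of the shared-read pattern, assuming the workload is something simple like summing; no copies are needed because no worker writes to the array:

```csharp
using System.Linq;
using System.Threading.Tasks;

int[] data = Enumerable.Range(0, 1_000_000).ToArray();  // the one shared array

const int Workers = 4;
long[] partialSums = new long[Workers];

Parallel.For(0, Workers, worker =>
{
    long sum = 0;
    // Each worker reads its own stripe of the shared array.
    for (int i = worker; i < data.Length; i += Workers)
        sum += data[i];
    partialSums[worker] = sum;                           // each worker writes only its own slot
});

long total = partialSums.Sum();
```

Since `data` is only read, every core simply caches the parts it touches and no cache-coherence traffic is generated.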
I am reading Concurrency in C# Cookbook and the book told me that:
ConcurrentDictionary<TKey, TValue> is best when you have multiple
threads reading and writing to a shared collection. If the updates are
not constant (if they’re more rare), then ImmutableDictionary<TKey,TValue>
may be a better choice.
I know that adding to or removing from a large immutable collection can be slow. My question is: is there any other difference between them? Given that both are thread safe, why is ImmutableDictionary the better choice when updates are rare?
These two classes, ConcurrentDictionary and ImmutableDictionary, are compared for the simple reason that both are thread safe.
However, ImmutableDictionary is not a good fit for write-heavy multithreading. It is designed for data that is loaded once and rarely modified afterwards: any modification creates a new ImmutableDictionary instance, which is not efficient if updates are frequent.
An immutable dictionary can't be changed: no add/remove alters the existing dictionary; instead, a new one is created containing everything but the deleted item. ConcurrentDictionary, by contrast, gets its thread safety from internal locking, which has nothing to do with immutability. Locks are needed to coordinate write operations performed by more than one thread on the same piece of memory (i.e., the same object).
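A hedged sketch of what "modification creates a new instance" looks like in practice, and how rare concurrent updates can still be published atomically (the key/value choices are arbitrary examples):

```csharp
using System.Collections.Immutable;
using System.Threading;

ImmutableDictionary<string, int> map = ImmutableDictionary<string, int>.Empty;

// SetItem does not modify 'map'; it returns a brand-new dictionary.
var map2 = map.SetItem("a", 1);
// 'map' is still empty; 'map2' holds the entry.

// For rare concurrent writes, swap the shared reference atomically
// instead of locking around every read:
ImmutableInterlocked.AddOrUpdate(ref map, "a", 1, (_, old) => old + 1);
```

This is why ImmutableDictionary suits read-mostly workloads: readers never take a lock at all, and the (rare) writers pay the cost of building a new instance.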
I want to use the ThreadStatic attribute in my code. Will there be a performance issue in IIS if I use ThreadStatic fields in my application, given that multiple threads will access those fields? I want to know whether IIS resources could be overused, and anything else I should keep in mind before implementing this.
There is no direct performance issue in using ThreadStatic under IIS, but you have to take into consideration that IIS uses a thread pool.
That means your thread-static value is not freed after a single call; the pooled thread keeps it for its next use.
On the other hand, a single web request can be executed across multiple threads (a page, for example, though not a web service), so the same "client request" may not stay on one thread.
If you don't free the ThreadStatic value yourself, it can cost memory.
If you set a ThreadStatic value in a synchronous method that calls only synchronous code, and free it in a finally block at the end of that same method, you can use it without side effects.
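A sketch of that last pattern; the class, field, and method names here are hypothetical, not from the question:

```csharp
using System;

static class RequestContext
{
    [ThreadStatic] private static string _userId;

    public static void Run(string userId, Action work)
    {
        _userId = userId;
        try
        {
            work();            // must stay synchronous: async work may resume on another pooled thread
        }
        finally
        {
            _userId = null;    // clear it so the pooled thread carries no stale state
        }
    }
}
```

The finally block is the important part: without it, the next request served by the same pooled thread would see the previous request's value.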
What would be the cleanest solution for the following pattern?
Given a class for reading and writing some file/resource, with "read()" and "write()" already implemented, create "Read()" and "Write()" wrappers around "read()" and "write()" that prevent threads from interfering, as follows:
a. multiple threads are allowed to Read;
b. only one thread is allowed to Write, so if a thread is already writing, the other threads must wait;
c. writing must be prevented while a thread is reading, and vice versa.
Use ReaderWriterLockSlim (https://msdn.microsoft.com/en-us/library/system.threading.readerwriterlockslim(v=vs.110).aspx) as the most efficient construct.
Quote from MSDN:
ReaderWriterLockSlim is similar to ReaderWriterLock, but it has simplified rules for recursion and for upgrading and downgrading lock state. ReaderWriterLockSlim avoids many cases of potential deadlock. In addition, the performance of ReaderWriterLockSlim is significantly better than ReaderWriterLock. ReaderWriterLockSlim is recommended for all new development.
As indicated in the comments, there's already a well-established pattern to solve this - reader-writer locks. This pattern works exactly how you describe - at any given time, you can have either an arbitrary number of readers or a single writer.
The .NET Framework already has an implementation of this pattern.
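A hedged sketch of the wrapper the question describes, assuming existing read()/write() methods on the class (their bodies are stubbed here):

```csharp
using System.Threading;

class SafeResource
{
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

    private string read() { /* existing implementation */ return ""; }
    private void write(string s) { /* existing implementation */ }

    public string Read()
    {
        _lock.EnterReadLock();           // (a) many concurrent readers allowed
        try { return read(); }
        finally { _lock.ExitReadLock(); }
    }

    public void Write(string s)
    {
        _lock.EnterWriteLock();          // (b) and (c): exclusive, waits for readers and writers
        try { write(s); }
        finally { _lock.ExitWriteLock(); }
    }
}
```

The try/finally around each call guarantees the lock is released even if read() or write() throws.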
I'm looking for a way (I guess using semaphores) to temporarily stop all threads from entering a certain region. Once every thread currently in the region has left it (so no thread remains inside), I want to do something that affects the region. After that, the region should again be enterable by many threads at the same time.
I don't see how to do this with Semaphore: in the documentation I can't find any way to, e.g., change how many threads may enter the region, or even to find out how many are currently inside.
How could I do this?
It sounds a bit like you're looking for the ReaderWriterLockSlim class. Many threads can enter under the read lock at once, but only one thread can hold the write lock.
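One way to map this onto the question: treat "being in the region" as holding a read lock, and the maintenance step as taking the write lock, which waits until the region is empty and keeps new threads out while it runs. A sketch (the class and method names are illustrative):

```csharp
using System;
using System.Threading;

class GatedRegion
{
    private readonly ReaderWriterLockSlim _gate = new ReaderWriterLockSlim();

    public void EnterRegion(Action body)
    {
        _gate.EnterReadLock();           // many threads may be inside at once
        try { body(); }
        finally { _gate.ExitReadLock(); }
    }

    public void Maintain(Action change)
    {
        _gate.EnterWriteLock();          // blocks until every reader has left,
        try { change(); }                // and no thread can enter meanwhile
        finally { _gate.ExitWriteLock(); }
    }
}
```

This also sidesteps the counting problem from the question: you never need to know how many threads are inside, because EnterWriteLock waits for all of them for you.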