Suppose there is a Hashtable created by Hashtable.Synchronized() that is accessed by multiple threads, and the key/value pairs in the Hashtable are Guid and Object.
One thread needs to poll this Hashtable until a specific Guid key has been added to it by another thread.
Below is my code.
public Hashtable syncHt = new Hashtable();

public void Init()
{
    Hashtable ht = new Hashtable();
    syncHt = Hashtable.Synchronized(ht);
}
In the application initialization I call Init().
In one of the threads I call isExist to look for the specific Guid that is added by some other thread:
public bool isExist(Guid sId)
{
    while (true)
    {
        if (syncHt.ContainsKey(sId))
        {
            return true;
        }
    }
}
I was wondering whether this loop will ever end. How can I know that the Hashtable changed during the polling? Thanks.
Take a look at the concurrent collections, especially ConcurrentBag<T>.
Update
About IsExist, here is a better solution:
Change the Hashtable to a ConcurrentDictionary<Guid, object>, so no lock is required.
Add items to the repository without any lock:
ConcurrentDictionary<Guid, object> repository = new ConcurrentDictionary<Guid, object>();
Check the repository for existing items:
public bool IsExist(Guid id)
{
    SpinWait.SpinUntil(() => repository.ContainsKey(id)); // you can add a timeout
    return true;
}
Here is more about SpinWait
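If the key might never arrive, the SpinUntil overload that takes a timeout is useful (a minimal sketch; the TimeSpan parameter is the caller's choice):

public bool IsExist(Guid id, TimeSpan timeout)
{
    // Returns false if the key did not appear within the timeout.
    return SpinWait.SpinUntil(() => repository.ContainsKey(id), timeout);
}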
Reading and, more importantly, assigning to a reference is always atomic in .NET.
To perform atomic operations, use the System.Threading.Interlocked class. See MSDN.
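For example, a minimal sketch of an atomic counter with Interlocked (the field and method names are illustrative):

private static int _counter;

public static int NextValue()
{
    // Atomically increments the shared counter and returns the new value;
    // no lock statement is required.
    return Interlocked.Increment(ref _counter);
}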
I was wondering whether this loop will ever end.
It will end when another thread (only one writer is allowed) inserts the wanted value, yes.
On MSDN: Hashtable is thread safe for use by multiple reader threads and a single writing thread.
But your solution is very inefficient. The busy loop can consume a lot of CPU time for nothing, and storing (boxed) Guids in an old-style collection isn't perfect either.
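One way to avoid the busy wait entirely (a sketch, not part of the original answer; the class and method names are illustrative) is to block on a per-key event that the writer signals:

private static readonly ConcurrentDictionary<Guid, ManualResetEventSlim> _signals =
    new ConcurrentDictionary<Guid, ManualResetEventSlim>();

// The reader blocks instead of spinning.
public static void WaitForKey(Guid id)
{
    _signals.GetOrAdd(id, k => new ManualResetEventSlim(false)).Wait();
}

// The writer calls this right after adding the key to the shared collection.
public static void SignalKeyAdded(Guid id)
{
    _signals.GetOrAdd(id, k => new ManualResetEventSlim(false)).Set();
}

// Note: this sketch never removes or disposes the events; a real
// implementation would need a cleanup policy.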
I have an ASP.NET application where I use a sequence number from a database sequence when creating a record in a table with Entity Framework. I have a stored procedure, called from Entity Framework, that retrieves the next value in this sequence, and I want to ensure that this retrieval is thread-safe.
I used this answer to attempt this in the following class:
public static class SequenceNumber
{
    private static object Lock = new object();

    public static long Next(Context db)
    {
        long next = 0;
        lock (Lock)
        {
            next = db.GetNextComplaintNumber().Single().Value;
        }
        return next;
    }
}
Will this ensure thread-safety?
The Next(...) method will be thread-safe, but db.GetNextComplaintNumber() will not be (unless it is implemented to be). The lock ensures that any thread calling Next(...) has to wait its turn to run whatever is inside the lock statement. However, if any other threads have access to the db object, they can effectively call db.GetNextComplaintNumber() without going through Next(...), so that call would not be protected by the lock statement. If you can guarantee that db.GetNextComplaintNumber() is only called from Next(...), you should be safe; otherwise you would want to build the thread safety into db.GetNextComplaintNumber() itself.
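If that guarantee is hard to make, one option (a sketch, assuming Context is a partial class as Entity Framework typically generates; NextComplaintNumberSafe is a hypothetical name) is to funnel every caller through a single locked wrapper:

public partial class Context
{
    private static readonly object _seqLock = new object();

    // All callers use this wrapper rather than calling
    // GetNextComplaintNumber() directly, so the lock cannot be bypassed.
    public long NextComplaintNumberSafe()
    {
        lock (_seqLock)
        {
            return GetNextComplaintNumber().Single().Value;
        }
    }
}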
I have an interesting problem with deadlocks in my application. There is an in-memory data store that uses a ReaderWriterLockSlim to synchronize reads and writes. One of the read methods uses Parallel.ForEach to search the store given a set of filters. It's possible that one of the filters requires a constant-time read of the same store. Here is the scenario that's producing a deadlock:
UPDATE: Example code below. Steps updated with actual method calls
Given a singleton instance store of ConcreteStoreThatExtendsGenericStore:
Thread1 gets a read lock on the store - store.Search(someCriteria)
Thread2 attempts to update the store with a write lock - store.Update() - and blocks behind Thread1
Thread1 executes Parallel.ForEach against the store to run a set of filters
Thread3 (spawned by Thread1's Parallel.ForEach) attempts a constant-time read of the store. It tries to get a read lock but is blocked behind Thread2's write lock.
Thread1 cannot finish because it can't join Thread3. Thread2 can't finish because it's blocked behind Thread1.
Ideally what I'd like to do is not try to acquire a read lock if an ancestor thread of the current thread already has the same lock. Is there any way to do this? Or is there a another/better approach?
public abstract class GenericStore<TKey, TValue>
{
    private ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
    private List<IFilter> _filters; //contains an instance of ExampleOffendingFilter

    protected Dictionary<TKey, TValue> Store { get; private set; }

    public void Update()
    {
        _lock.EnterWriteLock();
        //update the store
        _lock.ExitWriteLock();
    }

    public TValue GetByKey(TKey key)
    {
        TValue value;
        //TODO don't enter read lock if current thread
        //was started by a thread holding this lock
        _lock.EnterReadLock();
        value = Store[key];
        _lock.ExitReadLock();
        return value;
    }

    public List<TValue> Search(Criteria criteria)
    {
        List<TValue> matches = new List<TValue>();
        //TODO don't enter read lock if current thread
        //was started by a thread holding this lock
        _lock.EnterReadLock();
        Parallel.ForEach(Store.Values, item =>
        {
            bool isMatch = true;
            foreach (IFilter filter in _filters)
            {
                if (!filter.Check(criteria, item))
                {
                    isMatch = false;
                    break;
                }
            }
            if (isMatch)
            {
                lock (matches)
                {
                    matches.Add(item);
                }
            }
        });
        _lock.ExitReadLock();
        return matches;
    }
}

public class ExampleOffendingFilter : IFilter
{
    private ConcreteStoreThatExtendsGenericStore _sameStore;

    public bool Check(Criteria criteria, ConcreteValueType item)
    {
        _sameStore.GetByKey(item.SomeRelatedProperty);
        return trueOrFalse;
    }
}
It's unclear what kind of concurrency, memory, and performance requirements you actually have, so here are a few options.
If you are using .NET 4.0, you could replace your Dictionary with a ConcurrentDictionary and remove your ReaderWriterLockSlim. Keep in mind that doing so will reduce your locking scope and change your method semantics, allowing changes to the contents while you're enumerating (among other things), but on the other hand it will give you a thread-safe enumerator that won't block reads or writes. You'll have to determine whether that's an acceptable change for your situation.
If you really do need to lock down the entire collection in this way, you might be able to support a recursive lock policy (new ReaderWriterLockSlim(LockRecursionPolicy.SupportsRecursion)) if you can keep all operations on the same thread. Is performing your search in parallel a necessity?
Alternatively, you may want to just take a snapshot of your current collection of values (locking around that operation) and then perform your search against the snapshot. It won't be guaranteed to have the latest data, and you'll have to spend a little time on the conversion, but maybe that's an acceptable tradeoff for your situation. A sketch of that idea follows.
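Here is a minimal sketch of the snapshot approach, reusing the names from the question's GenericStore (the PLINQ-based filtering is illustrative and assumes a using System.Linq directive):

public List<TValue> Search(Criteria criteria)
{
    List<TValue> snapshot;
    _lock.EnterReadLock();
    try
    {
        // Copy the values while holding the read lock...
        snapshot = Store.Values.ToList();
    }
    finally
    {
        _lock.ExitReadLock();
    }
    // ...then filter outside the lock, so a filter's nested GetByKey
    // call can no longer deadlock against a pending writer.
    return snapshot.AsParallel()
        .Where(item => _filters.All(f => f.Check(criteria, item)))
        .ToList();
}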
I'm using a named mutex to lock access to a file (with path 'strFilePath') in a construction like this:
private void DoSomethingsWithAFile(string strFilePath)
{
    Mutex mutex = new Mutex(false, strFilePath.Replace("\\", ""));
    try
    {
        mutex.WaitOne();
        //do something with the file....
    }
    catch (Exception ex)
    {
        //handle exception
    }
    finally
    {
        mutex.ReleaseMutex();
    }
}
So, this way the code will only block the thread when the same file is being processed already.
Well, I tested this and seemed to work okay, but I really would like to know your thoughts about this.
Since you are talking about a producer-consumer situation with multiple threads, the standard solution would be to use BlockingCollection, which is part of .NET 4 and up - several links with information (a short sketch follows the links):
http://msdn.microsoft.com/en-us/library/dd997371.aspx
http://blogs.msdn.com/b/csharpfaq/archive/2010/08/12/blocking-collection-and-the-producer-consumer-problem.aspx
http://geekswithblogs.net/BlackRabbitCoder/archive/2011/03/03/c.net-little-wonders-concurrentbag-and-blockingcollection.aspx
http://www.albahari.com/threading/part5.aspx
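For illustration, here is a minimal producer/consumer sketch with BlockingCollection (the string items and the Process method are assumptions, not from the original answer):

BlockingCollection<string> queue = new BlockingCollection<string>();

// Producer: add work items as they arrive; call CompleteAdding when done.
Task.Factory.StartNew(() =>
{
    queue.Add("some work item");
    queue.CompleteAdding();
});

// Consumer: blocks while the collection is empty and exits the loop
// once the producer calls CompleteAdding and the collection drains.
foreach (string item in queue.GetConsumingEnumerable())
{
    Process(item); // hypothetical worker method
}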
If you just want to make the locking process work, then:
Use a ConcurrentDictionary in combination with the TryAdd method call. If it returns true, the file was not "locked" and is now "locked", so the thread can proceed; "unlock" it by calling Remove at the end. Any other thread gets false in the meantime and can decide what to do.
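A minimal sketch of that pattern (the method name and the Action parameter are illustrative):

ConcurrentDictionary<string, bool> inUse = new ConcurrentDictionary<string, bool>();

bool TryProcessFile(string path, Action work)
{
    // TryAdd returns false if another thread already "locked" this path.
    if (!inUse.TryAdd(path, true))
        return false;
    try
    {
        work();
    }
    finally
    {
        // "Unlock" the path so other threads may proceed.
        bool dummy;
        inUse.TryRemove(path, out dummy);
    }
    return true;
}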
I would definitely recommend the BlockingCollection approach though!
I ran into the same problem with many threads that can write to the same file.
One of the reasons the mutex is not a good fit is that it is slow:
duration of call mutexSyncTest: 00:00:08.9795826
duration of call NamedLockTest: 00:00:00.2565797
The BlockingCollection idea is very good, but in my case, with rare collisions, parallel writes are better than serialized writes. The dictionary approach is also much easier to implement.
I use this solution (UPDATED):
public class NamedLock
{
    private class LockAndRefCounter
    {
        public long refCount;
    }

    private ConcurrentDictionary<string, LockAndRefCounter> locksDictionary = new ConcurrentDictionary<string, LockAndRefCounter>();

    public void DoWithLockBy(string key, Action actionWithLock)
    {
        var lockObject = new LockAndRefCounter();
        var keyLock = locksDictionary.GetOrAdd(key, lockObject);
        Interlocked.Increment(ref keyLock.refCount);

        lock (keyLock)
        {
            actionWithLock();

            Interlocked.Decrement(ref keyLock.refCount);
            if (Interlocked.Read(ref keyLock.refCount) <= 0)
            {
                LockAndRefCounter removed;
                locksDictionary.TryRemove(key, out removed);
            }
        }
    }
}
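A usage sketch, applying NamedLock to the file scenario from the question (strFilePath as the key):

NamedLock fileLocks = new NamedLock();

fileLocks.DoWithLockBy(strFilePath, () =>
{
    //do something with the file....
});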
An alternative would be to have one consumer thread that works on a queue and blocks when it is empty. You can have several producer threads adding file paths to this queue and informing the consumer.
Since .NET 4.0 there's a nice new class: System.Collections.Concurrent.BlockingCollection<T>
A while ago I had the same issue here on Stack Overflow - How do I implement my own advanced Producer/Consumer scenario?
I have an infinite loop in a separate thread operating on a list of strings. I want to be able to add strings to this list while the thread is running. I have a feeling the code I am writing is 'wrong'. In the infinite loop I am iterating through each string in the list and performing operations on it, so it seems like I can't just add a string to this list from my main thread, as I would be interfering with a variable that is concurrently being accessed by another thread. Here's what my code looks like -
class StringTest
{
    public List<string> ListOfStrings = new List<string>();
    public Task MainLoopTask;
    bool IsRunning = false;

    public void AddToList(string myString)
    {
        ListOfStrings.Add(myString); // Adding a string to the list
        if (!IsRunning)
        {
            IsRunning = true;
            MainLoopTask = Task.Factory.StartNew(MainLoop);
        }
    }

    public void MainLoop()
    {
        while (true)
        {
            foreach (string s in ListOfStrings) // Operating on the list in a separate thread
            {
                ...
                ...
                ...
            }
        }
    }
}
Is this bad code or is it OK? If it's bad, what can I do to fix it?
That is not safe, and will eventually fail spectacularly in production.
Instead, you should use a thread-safe collection; probably a concurrent queue.
List<T> is safe for concurrent reading. That is, it's perfectly safe (from a stability standpoint) to have multiple threads reading the list, but writing to the list can only be done from one thread and you may not allow other threads to read from it while writing is taking place.
The simplest solution, especially if you only have two threads, is to use a simple lock statement to prevent two threads from interacting with it at the same time.
For instance:
Replace this:
ListOfStrings.Add(myString);
With this:
lock (ListOfStrings)
{
    ListOfStrings.Add(myString);
}
And this:
foreach (string s in ListOfStrings) // Operating on the list in a separate thread
{
    ...
    ...
    ...
}
With this:
lock (ListOfStrings)
{
    foreach (string s in ListOfStrings) // Operating on the list in a separate thread
    {
        ...
        ...
        ...
    }
}
This will make sure that your code blocks don't execute at the same time by creating an exclusive lock on the ListOfStrings object. If the list is small and the operations are trivial, this is likely sufficient. If either is not true (the list is large or the operations are non-trivial), then you'll probably want something more robust, such as creating a copy of the list and clearing the original within the body of the lock statement, then having your thread operate on that copy of the list.
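A sketch of that copy-and-clear variant (assuming processed strings can be removed from the list):

List<string> batch;
lock (ListOfStrings)
{
    // Snapshot and clear while holding the lock...
    batch = new List<string>(ListOfStrings);
    ListOfStrings.Clear();
}
// ...then do the slow per-item work without blocking writers.
foreach (string s in batch)
{
    // operate on the snapshot
}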
This is not really an answer, but I cannot put code in a comment, so I am writing it as a reply to your comment to @SLaks that says "How so?".
If your AddToList method attempts to add a string while your MainLoop is in the middle of the foreach loop, your AddToList method will have to wait until the foreach loop is done. This can be remedied as follows:
public void MainLoop()
{
    while (true)
    {
        string item;
        lock (ListOfStrings)
        {
            if (ListOfStrings.Count == 0)
                continue;
            item = ListOfStrings[0];
            ListOfStrings.RemoveAt(0);
        }
        //do something with item
    }
}
But using a thread-safe collection as SLaks proposed is still better because it is less hassle.
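For comparison, here is a sketch of the same drain loop using a ConcurrentQueue (the kind of collection SLaks' answer suggests); TryDequeue is atomic, so no lock statement is needed, though this version still spins when the queue is empty:

ConcurrentQueue<string> items = new ConcurrentQueue<string>();

public void MainLoop()
{
    while (true)
    {
        string item;
        if (!items.TryDequeue(out item))
            continue; // nothing queued yet
        //do something with item
    }
}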
I have a search application that takes some time (10 to 15 seconds) to return results for some requests. It's not uncommon to have multiple concurrent requests for the same information. As it stands, I have to process those independently, which makes for quite a bit of unnecessary processing.
I've come up with a design that should allow me to avoid the unnecessary processing, but there's one lingering problem.
Each request has a key that identifies the data being requested. I maintain a dictionary of requests, keyed by the request key. The request object has some state information and a WaitHandle that is used to wait on the results.
When a client calls my Search method, the code checks the dictionary to see if a request already exists for that key. If so, the client just waits on the WaitHandle. If no request exists, I create one, add it to the dictionary, and issue an asynchronous call to get the information. Again, the code waits on the event.
When the asynchronous process has obtained the results, it updates the request object, removes the request from the dictionary, and then signals the event.
This all works great. Except I don't know when to dispose of the request object. That is, since I don't know when the last client is using it, I can't call Dispose on it. I have to wait for the garbage collector to come along and clean up.
Here's the code:
class SearchRequest : IDisposable
{
    public readonly string RequestKey;
    public string Results { get; set; }
    public ManualResetEvent WaitEvent { get; private set; }

    public SearchRequest(string key)
    {
        RequestKey = key;
        WaitEvent = new ManualResetEvent(false);
    }

    public void Dispose()
    {
        WaitEvent.Dispose();
        GC.SuppressFinalize(this);
    }
}
ConcurrentDictionary<string, SearchRequest> Requests = new ConcurrentDictionary<string, SearchRequest>();
string Search(string key)
{
    SearchRequest req;
    bool addedNew = false;
    req = Requests.GetOrAdd(key, (s) =>
    {
        // Create a new request.
        var r = new SearchRequest(s);
        Console.WriteLine("Added new request with key {0}", key);
        addedNew = true;
        return r;
    });
    if (addedNew)
    {
        // A new request was created.
        // Start a search.
        ThreadPool.QueueUserWorkItem((obj) =>
        {
            // Get the results
            req.Results = DoSearch(req.RequestKey); // DoSearch takes several seconds

            // Remove the request from the pending list
            SearchRequest trash;
            Requests.TryRemove(req.RequestKey, out trash);

            // And signal that the request is finished
            req.WaitEvent.Set();
        });
    }
    Console.WriteLine("Waiting for results from request with key {0}", key);
    req.WaitEvent.WaitOne();
    return req.Results;
}
Basically, I don't know when the last client will be released. No matter how I slice it here, I have a race condition. Consider:
Thread A creates a new request, starts Thread B, and waits on the wait handle.
Thread B begins processing the request.
Thread C detects that there's a pending request, and then gets swapped out.
Thread B completes the request, removes the item from the dictionary, and sets the event.
Thread A's wait is satisfied, and it returns the result.
Thread C wakes up, calls WaitOne, is released, and returns the result.
If I use some kind of reference counting so that the "last" client calls Dispose, then the object would be disposed by Thread A in the above scenario. Thread C would then die when it tried to wait on the disposed WaitHandle.
The only way I can see to fix this is to use a reference counting scheme and protect access to the dictionary with a lock (in which case using ConcurrentDictionary is pointless) so that a lookup is always accompanied by an increment of the reference count. Whereas that would work, it seems like an ugly hack.
Another solution would be to ditch the WaitHandle and use an event-like mechanism with callbacks. But that, too, would require me to protect the lookups with a lock, and I have the added complication of dealing with an event or a naked multicast delegate. That seems like a hack, too.
This probably isn't a problem currently, because this application doesn't yet get enough traffic for those abandoned handles to add up before the next GC pass comes and cleans them up. And maybe it won't ever be a problem? It worries me, though, that I'm leaving them to be cleaned up by the GC when I should be calling Dispose to get rid of them.
Ideas? Is this a potential problem? If so, do you have a clean solution?
Consider using Lazy<T> for SearchRequest.Results maybe? But that would probably entail a bit of redesign. Haven't thought this out completely.
But what would probably be almost a drop-in replacement for your use case is to implement your own Wait() and Set() methods in SearchRequest. Something like:
private readonly object _resultLock = new object();
private bool _hasResult; // set once the results arrive

void Wait()
{
    lock (_resultLock)
    {
        while (!_hasResult)
            Monitor.Wait(_resultLock);
    }
}

void Set(string results)
{
    lock (_resultLock)
    {
        Results = results;
        _hasResult = true;
        Monitor.PulseAll(_resultLock);
    }
}
No need to dispose. :)
I think that your best bet to make this work is to use the TPL for all of your multi-threading needs. That's what it is good at.
As per my comment on your question, you need to keep in mind that ConcurrentDictionary does have side effects. If multiple threads try to call GetOrAdd at the same time, then the factory can be invoked for all of them, but only one will win. The values produced for the other threads will just be discarded; however, by then the compute has been done.
Since you also said that doing searches is expensive, the cost of taking a lock and then using a standard dictionary would be minimal.
So this is what I suggest:
private Dictionary<string, Task<string>> _requests
    = new Dictionary<string, Task<string>>();

public string Search(string key)
{
    Task<string> task;
    lock (_requests)
    {
        if (_requests.ContainsKey(key))
        {
            task = _requests[key];
        }
        else
        {
            task = Task<string>
                .Factory
                .StartNew(() => DoSearch(key));
            _requests[key] = task;
            task.ContinueWith(t =>
            {
                lock (_requests)
                {
                    _requests.Remove(key);
                }
            });
        }
    }
    return task.Result;
}
This option nicely runs the search, remembers the task throughout the duration of the search and then removes it from the dictionary when it completes. All requests for the same key while a search is executing get the same task and so will get the same result once the task is complete.
I've tested the code and it works.