I have a class with an inner state that can be changed.
These state changes are never simple, and often consist of several asynchronous operations occurring across multiple threads, such as opening a connection and sending some data.
By using a lock and a boolean to indicate whether the state is currently changing, I can ensure that only one operation ever has access to the state at any given time:
lock (thisLock) {
while (stateChanging)
Monitor.Wait(thisLock);
stateChanging = true;
//now free to go away and do other things while maintaining exclusive access to the inner state
}
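For completeness, the matching release would presumably look something like this, so that threads blocked in Monitor.Wait are woken:

lock (thisLock) {
    stateChanging = false;
    Monitor.PulseAll(thisLock);  // wake any threads waiting for the state to become free
}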
This works fine, but it means there is needless blocking in threads waiting to get exclusive access to the state.
So what I envision is a lock based on callbacks, where a state-changing operation does something like this:
sharedLock.ObtainLock(delegate() {
//we now have exclusive access to the state
//do some work
socket = new Socket();
socket.BeginConnect(hostname, connectcallback);
});
void connectcallback(IAsyncResult result) {
socket.EndConnect(result);
isConnected = true;
sharedLock.ReleaseLock();
}
Is such a concept common? Does it have a name? Am I approaching things incorrectly?
I ended up creating an asynchronous semaphore and it works really really well without feeling hacky in any way.
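A minimal sketch of what such a callback-based asynchronous semaphore might look like (illustrative only, not the poster's actual class): a single token, with waiters queued as callbacks.

using System;
using System.Collections.Generic;

public class AsyncSemaphore
{
    private readonly object _sync = new object();
    private readonly Queue<Action> _waiters = new Queue<Action>();
    private bool _taken;

    public void ObtainLock(Action continuation)
    {
        lock (_sync)
        {
            if (_taken)
            {
                _waiters.Enqueue(continuation);  // run later, when the token is released
                return;
            }
            _taken = true;
        }
        continuation();  // run outside the lock
    }

    public void ReleaseLock()
    {
        Action next = null;
        lock (_sync)
        {
            if (_waiters.Count > 0)
                next = _waiters.Dequeue();       // hand the token straight to the next waiter
            else
                _taken = false;
        }
        if (next != null)
            next();                              // note: runs on the releasing thread
    }
}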
Usually you would use a Mutex or a Semaphore for this purpose. For example, if a semaphore has just one token and one operation has taken that token, no other operation can be executed until the first operation is finished and the token has been put back into the semaphore.
Within your second code example you just call ObtainLock and ReleaseLock, but then the sharedLock does not know which operation called Obtain/Release. This is why ObtainLock usually returns a token that can be either disposed or released when the operation has finished.
IDisposable myLock;
myLock = sharedLock.ObtainLock(delegate() {
socket = new Socket();
socket.BeginConnect(hostname, connectcallback);
});
void connectcallback(IAsyncResult result) {
socket.EndConnect(result);
isConnected = true;
myLock.Dispose();
}
The class that implements your sharedLock manages these tokens, and from the state of each token it knows whether it is busy or not. It is really nothing more than a reference counter.
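A hypothetical sketch of such a token (SharedLock and its ReleaseLock method are assumed names): disposing the token returns it to the lock exactly once, even if Dispose is called twice.

using System;
using System.Threading;

public sealed class LockToken : IDisposable
{
    private SharedLock _owner;

    internal LockToken(SharedLock owner) { _owner = owner; }

    public void Dispose()
    {
        SharedLock owner = Interlocked.Exchange(ref _owner, (SharedLock)null);
        if (owner != null)
            owner.ReleaseLock();  // double-Dispose is harmless
    }
}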
As another alternative you might use ManualResetEvent or AutoResetEvent objects as the tokens that you return from ObtainLock. When your operation is finished, just call event.Set().
The system I am working on is composed of a Windows service hosting various WCF services. Multiple clients can talk to the same service, but only one client can talk to a service at a time.
I am therefore using a "lock ()" to prevent multiple clients from conflicting. If Client1 makes a first request to a service and then Client2 makes another request while the former is still executing, the "lock" puts that second request on hold until the first one is done.
So far so good. Now the problem is that I have to deal with events (I don't think a simple callback would do the trick here).
In other words, while the 1st request is running, something may happen that Client2 needs to know about.
When that happens, Client2 will receive that event and should eventually stop using that service all together.
The problem here is that our 2nd request is already in the locker's queue.
So how would I prevent that second request from running? Can it be cancelled?
Maybe it is just a matter of adding a dirty flag but I am hoping there are better ways to do that.
Here is what it may look like on the service side with the dirty flag "canRun":
lock (_locker)
{
if (canRun)
return SomeMethod();
else
return null;
}
If you can switch to async/await and something like a SemaphoreSlim, then CancellationToken is what you're looking for; a CancellationTokenSource can be marked as cancelled at any time, and code can respond accordingly. Most framework/library code already handles cancellation correctly, for example during async semaphore acquisition.
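A small sketch of how that composes (the _gate field, the Result type and the DoWorkAsync helper are illustrative names, not part of the original service):

private readonly SemaphoreSlim _gate = new SemaphoreSlim(1, 1);

public async Task<Result> SomeMethodAsync(CancellationToken cancellationToken)
{
    // Throws OperationCanceledException if the token is cancelled while still queued.
    await _gate.WaitAsync(cancellationToken);
    try
    {
        cancellationToken.ThrowIfCancellationRequested();  // re-check after acquiring
        return await DoWorkAsync(cancellationToken);
    }
    finally
    {
        _gate.Release();
    }
}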
If you need to stay in a synchronous world, then your best bet is probably to switch to a looped Monitor pattern that re-checks on every timeout, for example:
bool lockTaken = false;
try
{
do
{
if (!canRun) throw new OperationCanceledException();
Monitor.TryEnter(_locker, someTimeout, ref lockTaken);
}
while (!lockTaken);
// now we have the conch
SomeMethod();
}
finally
{
if (lockTaken) Monitor.Exit(_locker);
}
Note that if canRun is a field you would want to ensure that it is either volatile or otherwise accessed suitably to avoid being cached in a register or similar.
The situation I am uncertain of concerns the usage of a "threadsafe" PipeStream where multiple threads can add messages to be written. If there is no queue of messages to be written, the current thread will begin writing to the reading party. If there is a queue, and the queue grows while the pipe is writing, I want the thread that begun writing to deplete the queue.
I "hope" that this design (demonstrated below) discourages the continuous entering/releasing of the SemaphoreSlim and decrease the number of tasks scheduled. I say "hope" because I should test whether this complication has any positive performance implications. However, before even testing this I should first understand if the code does what I think it will, so please consider the following class, and below it a sequence of events;
Note: I understand that execution of tasks is not tied to any particular thread, but I find this is the easiest way to explain.
class SemaphoreExample
{
// Wrapper around a NamedPipeClientStream
private readonly MessagePipeClient m_pipe =
new MessagePipeClient("somePipe");
private readonly SemaphoreSlim m_semaphore =
new SemaphoreSlim(1, 1);
private readonly BlockingCollection<Message> m_messages =
new BlockingCollection<Message>(new ConcurrentQueue<Message>());
public Task Send<T>(T content)
where T : class
{
if (!this.m_messages.TryAdd(new Message<T>(content)))
throw new InvalidOperationException("No more requests!");
Task dequeue = TryDequeue();
return Task.FromResult(true);
// In reality this class (and method) is more complex.
// There is a similar pipe (and wrkr) in the other direction.
// The "sent jobs" is kept in a dictionary and this method
// returns a task belonging to a completionsource tied
// to the "sent job". The wrkr responsible for the other
// pipe reads a response and sets the corresponding
// completionsource.
}
private async Task TryDequeue()
{
if (!this.m_semaphore.Wait(0))
return; // someone else is already here
try
{
Message message;
while (this.m_messages.TryTake(out message))
{
await this.m_pipe.WriteAsync(message);
}
}
finally { this.m_semaphore.Release(); }
}
}
Wrkr1 finishes writing to the pipe. (in TryDequeue)
Wrkr1 determines queue is empty. (in TryDequeue)
Wrkr2 adds item to queue. (in Send)
Wrkr2 determines Wrkr1 occupies the Semaphore, returns. (in Send)
Wrkr1 releases the Semaphore. (in TryDequeue)
Queue is left with 1 item that won't be acted upon for x amount of time.
Is this sequence of events possible? Should I forget this idea altogether and have every call to "Send" await on "TryDequeue" and the semaphore within it? Perhaps the potential performance implications of scheduling another task per method call are negligible, even at a "high" frequency.
UPDATE:
Following the advice of Alex I am doing the following;
Let the caller of "Send" specify a "maxWorkload" integer that specifies how many items the caller is prepared to process (for other callers, in the worst case) before delegating the extra work to another thread. However, before creating the new thread, other callers of "Send" are given an opportunity to enter the semaphore, thereby possibly preventing the use of an additional thread.
To not let any work be left lingering in the queue, any worker who successfully entered the semaphore and did some work must check whether any new work was added after exiting the semaphore. If so, the same worker will try to re-enter (if "maxWorkload" is not reached) or delegate work as described above.
Example below: Send now sets up "TryPool" as a continuation of "TryDequeue". "TryPool" only begins if "TryDequeue" returns true (i.e. it did some work while it held the semaphore).
// maxWorkload cannot be -1 for this method
private async Task<bool> TryDequeue(int maxWorkload)
{
int currWorkload = 0;
while (this.m_messages.Count != 0 && this.m_semaphore.Wait(0))
{
try
{
currWorkload = await Dequeue(currWorkload, maxWorkload);
if (currWorkload >= maxWorkload)
return true;
}
finally
{
this.m_semaphore.Release();
}
}
return false;
}
private Task TryPool()
{
if (this.m_messages.Count == 0 || !this.m_semaphore.Wait(0))
return Task<bool>.FromResult(false);
return Task.Run(async () =>
{
do
{
try
{
await Dequeue(0, -1);
}
finally
{
this.m_semaphore.Release();
}
}
while (this.m_messages.Count != 0 && this.m_semaphore.Wait(0));
});
}
private async Task<int> Dequeue(int currWorkload, int maxWorkload)
{
while (currWorkload < maxWorkload || maxWorkload == -1)
{
Message message;
if (!this.m_messages.TryTake(out message))
return currWorkload;
await this.m_pipe.WriteAsync(message);
currWorkload++;
}
return maxWorkload;
}
I tend to call this pattern the "GatedBatchWriter": the first thread through the gate handles a batch of tasks, its own and a number of others on behalf of other writers, until it has done enough work.
This pattern is primarily useful, when it is more efficient to batch work, because of overheads associated with that work. E.g. writing larger blocks to disk in one go, instead of multiple small ones.
And yes, this particular pattern has a specific race condition to be aware of: the "responsible writer", i.e. the one that got through the gate, determines that no more messages are in the queue and stops. Between that check and its release of the semaphore (i.e. its write responsibility), a second writer arrives, enqueues a message, and fails to acquire the write responsibility. Now there is a message in the queue that will not be delivered (or will be delivered late, when the next writer arrives).
Additionally, what you are doing now is not fair in terms of scheduling. If there are many messages, the queue might never be empty, and the writer that got through the gate will be busy writing messages on behalf of the others for all eternity. You need to limit the batch size for the responsible writer.
Some other things you may want to change (a rough sketch follows this list):
Have your Message contain a task completion token.
Have writers that could not acquire the write responsibility enqueue their message and wait for any of two task completions: the task completion associated with their message, the releasing of the write responsibility.
Have the responsible writer set the completion for messages that it processed.
Have the responsible writer release its write responsibility when it has done enough work.
When a waiting writer is woken up by one of the two task completions:
if it was due to the completion token on its message, it can go its merry way.
otherwise, try to acquire the write responsibility, rinse, repeat...
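A rough sketch of those points, assuming the Message type gains a Completion property of type TaskCompletionSource&lt;bool&gt; and reusing the m_pipe, m_messages and m_semaphore fields from the question (SendAsync and MaxBatch are illustrative names):

private const int MaxBatch = 32;  // bound the responsible writer's workload

public async Task SendAsync(Message message)
{
    if (!this.m_messages.TryAdd(message))
        throw new InvalidOperationException("No more requests!");

    while (!message.Completion.Task.IsCompleted)
    {
        using (var cancelGate = new CancellationTokenSource())
        {
            Task gate = this.m_semaphore.WaitAsync(cancelGate.Token);
            Task winner = await Task.WhenAny(message.Completion.Task, gate);

            if (winner == gate)
            {
                // We became the responsible writer: drain a bounded batch.
                try
                {
                    Message next;
                    int written = 0;
                    while (written < MaxBatch && this.m_messages.TryTake(out next))
                    {
                        await this.m_pipe.WriteAsync(next);
                        next.Completion.TrySetResult(true);
                        written++;
                    }
                }
                finally { this.m_semaphore.Release(); }
            }
            else
            {
                // Our message was written by another responsible writer; stop waiting for the gate.
                cancelGate.Cancel();
                try { await gate; this.m_semaphore.Release(); }  // the pending wait got in anyway
                catch (OperationCanceledException) { /* never entered the gate */ }
            }
        }
    }
}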
One more note: if there are a lot of messages, i.e. a high message load on average, a dedicated thread / long running task handling the queue will generally have a better performance.
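A minimal sketch of that dedicated-consumer alternative, again reusing the question's fields (RunWriterLoopAsync is an illustrative name):

private async Task RunWriterLoopAsync(CancellationToken token)
{
    // Single dedicated consumer: blocks while idle, writes strictly in order,
    // and exits with OperationCanceledException when the token is cancelled.
    foreach (Message message in this.m_messages.GetConsumingEnumerable(token))
    {
        await this.m_pipe.WriteAsync(message);
    }
}

// Started once, e.g.:
// var writerLoop = Task.Run(() => RunWriterLoopAsync(cts.Token));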
My team is developing a multi-threaded application using async/await in C# 5.0. In the process of implementing thread synchronization, after several iterations, we came up with a (possibly novel?) SynchronizationContext implementation with an internal lock that:
When calling Post:
if a lock can be taken, the delegate is executed immediately
if a lock cannot be taken, the delegate is queued
When calling Send:
if a lock can be taken the delegate is executed
if a lock cannot be taken, the thread is blocked
In all cases, before executing the delegate, the context sets itself as the current context and restores the original context when the delegate returns.
It’s an unusual pattern and since we’re clearly not the first people writing such an application I’m wondering:
Is the pattern really safe?
Are there better ways of achieving thread synchronization?
Here’s the source for SerializingSynchronizationContext and a demo on GitHub.
Here’s how it’s used:
Each class wanting protection creates its own instance of the context like a mutex.
The context is awaitable so that statements like the following are possible.
await myContext;
This simply causes the rest of the method to be run under protection of the context (a sketch of how such an awaiter can be wired up appears after this list).
All methods and properties of the class use this pattern to protect data. Between awaits, only one thread can run on the context at a time and so state will remain consistent. When an await is reached, the next scheduled thread is allowed to run on the context.
The custom SynchronizationContext can be used in combination with AsyncLock if needed to maintain atomicity even with awaited expressions.
Synchronous methods of a class can use custom context for protection as well.
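A rough sketch (not the linked source) of how the "await myContext" line can work: an extension GetAwaiter whose OnCompleted posts the continuation to the context.

using System;
using System.Runtime.CompilerServices;
using System.Threading;

public static class SynchronizationContextAwaitExtensions
{
    public static SynchronizationContextAwaiter GetAwaiter(this SynchronizationContext context)
    {
        return new SynchronizationContextAwaiter(context);
    }

    public struct SynchronizationContextAwaiter : INotifyCompletion
    {
        private readonly SynchronizationContext _context;

        public SynchronizationContextAwaiter(SynchronizationContext context) { _context = context; }

        public bool IsCompleted { get { return false; } }  // always yield and post
        public void GetResult() { }

        public void OnCompleted(Action continuation)
        {
            _context.Post(state => ((Action)state)(), continuation);
        }
    }
}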
Having a sync context that never runs more than one operation at a time is certainly not novel, and also not bad at all. Here you can see Stephen Toub describing how to make one two years ago. (In this case it's used simply as a tool to create a message pump, which actually sounds like it might be exactly what you want, but even if it's not, you could pull the sync context out of the solution and use it separately.)
It of course makes perfect conceptual sense to have a single-threaded synchronization context. All of the sync contexts representing UI states are like this. The WinForms, WPF, Windows Phone, etc. sync contexts all ensure that only a single operation from that context is ever running at one time.
The one worrying bit is this:
In all cases, before executing the delegate, the context sets itself as the current context and restores the original context when the delegate returns.
I'd say that the context itself shouldn't be doing this. If the caller wants this sync context to be the current context, they can set it. If they want to use it for something other than the current context, you should allow them to do so. Sometimes you want to use a sync context without setting it as the current one, to synchronize access to a certain resource; in such a case only operations specifically accessing that resource would need to use this context.
Regarding the use of locks: this question would be more appropriate for Code Review, but at first glance I don't think your SerializingSynchronizationContext.Post is doing the right thing. Try calling it in a tight loop. Because of the Task.Run((Action)ProcessQueue), you'll quickly end up with more and more ThreadPool threads blocked on lock (_lock) while waiting to acquire it inside ProcessQueue().
[EDITED] To address the comment, here is your current implementation:
public override void Post(SendOrPostCallback d, object state)
{
_queue.Enqueue(new CallbackInfo(d, state));
bool lockTaken = false;
try
{
Monitor.TryEnter(_lock, ref lockTaken);
if (lockTaken)
{
ProcessQueue();
}
else
{
Task.Run((Action)ProcessQueue);
}
}
finally
{
if (lockTaken)
{
Monitor.Exit(_lock);
}
}
}
// ...
private void ProcessQueue()
{
if (!_queue.IsEmpty)
{
lock (_lock)
{
var outer = SynchronizationContext.Current;
try
{
SynchronizationContext.SetSynchronizationContext(this);
CallbackInfo callback;
while (_queue.TryDequeue(out callback))
{
try
{
callback.D(callback.State);
}
catch (Exception e)
{
Console.WriteLine("Exception in posted callback on {0}: {1}",
GetType().FullName, e);
}
}
}
finally
{
SynchronizationContext.SetSynchronizationContext(outer);
}
}
}
}
In Post, why enqueue a callback with _queue.Enqueue and then occupy a new thread from the pool with Task.Run((Action)ProcessQueue), when ProcessQueue() is already pumping the _queue in a loop on another pool thread and dispatching callbacks? In this case, Task.Run looks like a waste of a pool thread to me.
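One hedged alternative (not from the question's repository, and it only addresses Post; Send, which must run the delegate inline, still needs its own coordination): a single "drain scheduled/running" flag, so Post never parks a second pool thread on the lock while a drain is already in progress.

private int _pumping;  // 0 = idle, 1 = a ProcessQueue pass is scheduled or running

public override void Post(SendOrPostCallback d, object state)
{
    _queue.Enqueue(new CallbackInfo(d, state));
    if (Interlocked.CompareExchange(ref _pumping, 1, 0) == 0)
        Task.Run((Action)ProcessQueue);
}

private void ProcessQueue()
{
    var outer = SynchronizationContext.Current;
    try
    {
        SynchronizationContext.SetSynchronizationContext(this);
        CallbackInfo callback;
        while (_queue.TryDequeue(out callback))
        {
            try { callback.D(callback.State); }
            catch (Exception e) { Console.WriteLine("Exception in posted callback: {0}", e); }
        }
    }
    finally
    {
        SynchronizationContext.SetSynchronizationContext(outer);
        Volatile.Write(ref _pumping, 0);
        // An item may have been enqueued after the last TryDequeue but before the
        // flag was cleared; if so, schedule another pass.
        if (!_queue.IsEmpty && Interlocked.CompareExchange(ref _pumping, 1, 0) == 0)
            Task.Run((Action)ProcessQueue);
    }
}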
I have a thread that I am trying to discontinue. What I have done is the following.
randomImages = new Thread(new ThreadStart(this.chooseRandomImage));
randomImages.Start();
This is the method called by the thread
bool threadAlive = true;
public void chooseRandomImage()
{
while(threadAlive)
{
try
{
//do stuff
}
catch (Exception exe)
{
MessageBox.Show(exe.Message, "Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
}
}
}
Now, upon clicking a stop thread button I simply set threadAlive to false.
The problem is that the thread doesn't stop immediately, as if it has gathered a form of momentum.
How can I stop a thread instantly, and possibly restart it again?
private void butStopThread_Click(object sender, EventArgs e)
{
threadAlive = false;
if(threadAlive == false)
{
//do stuff
}
}
I am sorry, but that IS the best way to do it. From .NET 4.0 upward you should use tasks, not threads, and then there is this thing called CancellationToken that does pretty much the same as your variable.
Then, after cancelling, you wait until the processing has finished. If that needs to happen fast, then make the check for cancellation more granular, i.e. check more often.
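A minimal sketch of that Task + CancellationToken version (the field and method names here are illustrative):

private CancellationTokenSource _cts;
private Task _worker;

private void StartWorker()
{
    _cts = new CancellationTokenSource();
    CancellationToken token = _cts.Token;
    _worker = Task.Run(() =>
    {
        while (!token.IsCancellationRequested)
        {
            // do one small unit of work, then loop so the check happens often
        }
    }, token);
}

private void butStopThread_Click(object sender, EventArgs e)
{
    _cts.Cancel();  // the loop observes this at its next check
    // To "restart", call StartWorker() again with a fresh CancellationTokenSource.
}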
Aborting threads has possibly significant side effects as explained at http://www.interact-sw.co.uk/iangblog/2004/11/12/cancellation - this is why the method generally should not be used.
And no, stopped threads etc. cannot be restarted magically; you have to build this into your logic (restart points, save points, long-running transactions in steps, remembering where it finished).
As a side note: if you insist on not using tasks and have access to the latest version of .NET, volatile is not needed if you use the Interlocked class methods, which go down to assembler instructions that are thread-safe by definition.
It is possible to terminate a thread from another thread with a call to Abort, but this forcefully terminates the affected thread without concern for whether it has completed its task and provides no opportunity for the cleanup of resources. The technique shown in this example is preferred.
You would need to use the Abort method, BUT IT IS NOT RECOMMENDED.
From the information provided by you, it seems the threadAlive variable is being accessed by both the worker thread and the UI thread. Try declaring threadAlive with the volatile keyword, which ensures cross-thread access happens without synchronization issues.
volatile bool threadAlive;
To restart the thread, you first need to ensure that it performs all necessary cleanup. Use the Join method call on your thread object in the main/UI thread to make sure your thread terminates safely. To restart, create a new Thread instance and call Start on it; a Thread object that has finished running cannot be started again.
randomImages.Join();
I have a class that implements the Begin/End invocation pattern where I initially used ThreadPool.QueueUserWorkItem() to thread my work. The work done on the thread doesn't loop but does take a bit of time to process, so the work itself is not easily stopped.
I now have the side effect where someone using my class is calling Begin (with callback) a ton of times to do a lot of processing, so ThreadPool.QueueUserWorkItem is creating a ton of threads to do the processing. That in itself isn't bad, but there are instances where they want to abandon the processing and start a new process, yet they are forced to wait for their first request to finish.
Since ThreadPool.QueueUserWorkItem() doesn't allow me to cancel the threads, I am trying to come up with a better way to queue up the work and maybe use an explicit FlushQueue() method in my class to allow the caller to abandon work in my queue.
Anyone have any suggestion on a threading pattern that fits my needs?
Edit: I'm currently targeting the 2.0 framework. I'm currently thinking that a Consumer/Producer queue might work. Does anyone have thoughts on the idea of flushing the queue?
Edit 2 Problem Clarification:
Since I'm using the Begin/End pattern in my class every time the caller uses the Begin with callback I create a whole new thread on the thread pool. This call does a very small amount of processing and is not where I want to cancel. It's the uncompleted jobs in the queue I wish to stop.
The fact that the ThreadPool can create up to 250 threads per processor by default means that if you ask the ThreadPool to queue a large number of items with QueueUserWorkItem(), you end up creating a huge number of concurrent threads that you have no way of stopping.
The caller is able to push the CPU to 100% not only with the work but with the creation of the work, because of the way I queued the threads.
I was thinking that by using the Producer/Consumer pattern I could queue these work items into my own queue, which would allow me to moderate how many threads I create to avoid the CPU spike of creating all the concurrent threads, and that I might be able to allow the caller of my class to flush all the jobs in the queue when they are abandoning the requests.
I am currently trying to implement this myself but figured SO was a good place to have someone say "look at this code", or "you won't be able to flush because of this", or "flushing isn't the right term, you mean this".
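A minimal .NET 2.0-style sketch of the producer/consumer-with-flush idea described above (illustrative only; WorkQueue and its members are assumed names, and jobs already running are allowed to finish while queued ones are dropped):

using System.Collections.Generic;
using System.Threading;

class WorkQueue
{
    private readonly object _sync = new object();
    private readonly Queue<WaitCallback> _jobs = new Queue<WaitCallback>();
    private readonly int _maxWorkers;
    private int _activeWorkers;

    public WorkQueue(int maxWorkers) { _maxWorkers = maxWorkers; }

    public void Enqueue(WaitCallback job)
    {
        lock (_sync)
        {
            _jobs.Enqueue(job);
            if (_activeWorkers < _maxWorkers)
            {
                _activeWorkers++;
                ThreadPool.QueueUserWorkItem(Worker);  // bounded number of pool workers
            }
        }
    }

    public void FlushQueue()
    {
        lock (_sync) { _jobs.Clear(); }  // abandon everything not yet started
    }

    private void Worker(object state)
    {
        while (true)
        {
            WaitCallback job;
            lock (_sync)
            {
                if (_jobs.Count == 0) { _activeWorkers--; return; }
                job = _jobs.Dequeue();
            }
            job(null);  // execute one queued item
        }
    }
}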
EDIT My answer does not apply since OP is using 2.0. Leaving up and switching to CW for anyone who reads this question and using 4.0
If you are using C# 4.0, or can take a dependency on one of the earlier versions of the parallel frameworks, you can use their built-in cancellation support. It's not as easy as cancelling a thread, but the framework is much more reliable (cancelling a thread is very attractive but also very dangerous).
Reed did an excellent article on this that you should take a look at:
http://reedcopsey.com/2010/02/17/parallelism-in-net-part-10-cancellation-in-plinq-and-the-parallel-class/
A method I've used in the past, though it's certainly not a best practice, is to dedicate a class instance to each thread and have an abort flag on the class. Then create a ThrowIfAborting method on the class that is called periodically from the thread (particularly if the thread is running a loop, just call it every iteration). If the flag has been set, ThrowIfAborting will simply throw an exception, which is caught in the main method for the thread. Just make sure to clean up your resources as you're aborting.
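A rough illustration of that abort-flag approach (the names here are hypothetical, not from an existing library):

using System;

class Worker
{
    private volatile bool _abortRequested;

    public void Abort() { _abortRequested = true; }

    private void ThrowIfAborting()
    {
        if (_abortRequested)
            throw new OperationCanceledException("Worker aborted.");
    }

    public void Run()
    {
        try
        {
            while (true)
            {
                ThrowIfAborting();
                // ... one unit of work ...
            }
        }
        catch (OperationCanceledException)
        {
            // clean up resources here
        }
    }
}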
You could extend the Begin/End pattern to become the Begin/Cancel/End pattern. The Cancel method could set a cancel flag that the worker thread polls periodically. When the worker thread detects a cancel request, it can stop its work, clean-up resources as needed, and report that the operation was canceled as part of the End arguments.
I've solved what I believe to be your exact problem by using a wrapper class around 1+ BackgroundWorker instances.
Unfortunately, I'm not able to post my entire class, but here's the basic concept along with its limitations.
Usage:
You simply create an instance and call RunOrReplace(...) when you want to cancel your old worker and start a new one. If the old worker was busy, it is asked to cancel and then another worker is used to immediately execute your request.
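Hypothetical usage of the wrapper defined below (OnDoWork and OnWorkCompleted are assumed handler names matching DoWorkEventHandler and RunWorkerCompletedEventHandler):

var worker = new BackgroundWorkerReplaceable(OnDoWork, OnWorkCompleted);
worker.RunOrReplace(someParameter);  // cancels a busy run and immediately starts a fresh one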
public class BackgroundWorkerReplaceable : IDisposable
{
BackgroundWorker activeWorker = null;
object activeWorkerSyncRoot = new object();
List<BackgroundWorker> workerPool = new List<BackgroundWorker>();
DoWorkEventHandler doWork;
RunWorkerCompletedEventHandler runWorkerCompleted;
public bool IsBusy
{
get { return activeWorker != null ? activeWorker.IsBusy : false; }
}
public BackgroundWorkerReplaceable(DoWorkEventHandler doWork, RunWorkerCompletedEventHandler runWorkerCompleted)
{
this.doWork = doWork;
this.runWorkerCompleted = runWorkerCompleted;
ResetActiveWorker();
}
public void RunOrReplace(Object param, ...) // Overloads could include ProgressChangedEventHandler and other stuff
{
try
{
lock(activeWorkerSyncRoot)
{
if(activeWorker.IsBusy)
{
ResetActiveWorker();
}
// This works because if IsBusy was false above, there is no way for it to become true without another thread obtaining a lock
if(!activeWorker.IsBusy)
{
// Optionally handle ProgressChangedEventHandler and other features (under the lock!)
// Work on this new param
activeWorker.RunWorkerAsync(param);
}
else
{ // This should never happen since we create new workers when there's none available!
throw new LogicException(...); // assert or similar
}
}
}
catch(...) // InvalidOperationException and Exception
{ // In my experience, it's safe to just show the user an error and ignore these, but that's going to depend on what you use this for and where you want the exception handling to be
}
}
public void Cancel()
{
ResetActiveWorker();
}
public void Dispose()
{ // You should implement a proper Dispose/Finalizer pattern
if(activeWorker != null)
{
activeWorker.CancelAsync();
}
foreach(BackgroundWorker worker in workerPool)
{
worker.CancelAsync();
worker.Dispose();
// perhaps use a for loop instead so you can set worker to null? This might help the GC, but it's probably not needed
}
}
void ResetActiveWorker()
{
lock(activeWorkerSyncRoot)
{
if(activeWorker == null)
{
activeWorker = GetAvailableWorker();
}
else if(activeWorker.IsBusy)
{ // Current worker is busy - issue a cancel and set another active worker
activeWorker.CancelAsync(); // WorkerSupportsCancellation must be set to true [Link9372]
// Optionally handle ProgressEventHandler -=
activeWorker = GetAvailableWorker(); // Ensure that the activeWorker is available
}
//else - do nothing, activeWorker is already ready for work!
}
}
BackgroundWorker GetAvailableWorker()
{
// Loop through workerPool and return a worker if IsBusy is false
// if the loop exits without returning...
if(activeWorker != null)
{
workerPool.Add(activeWorker); // Save the old worker for possible future use
}
return GenerateNewWorker();
}
BackgroundWorker GenerateNewWorker()
{
BackgroundWorker worker = new BackgroundWorker();
worker.WorkerSupportsCancellation = true; // [Link9372]
//worker.WorkerReportsProgress
worker.DoWork += doWork;
worker.RunWorkerCompleted += runWorkerCompleted;
// Other stuff
return worker;
}
} // class
Pro/Con:
This has the benefit of having a very low delay in starting your new execution, since new threads don't have to wait for old ones to finish.
This comes at the cost of a theoretical never-ending growth of BackgroundWorker objects that never get GC'd. However, in practice the code above attempts to recycle old workers, so you shouldn't normally encounter a large pool of idle threads. If you are worried about this because of how you plan to use this class, you could implement a Timer which fires a CleanUpExcessWorkers(...) method, or have ResetActiveWorker() do this cleanup (at the cost of a longer RunOrReplace(...) delay).
The main cost of using this is precisely why it's beneficial: it doesn't wait for the previous thread to exit. So, for example, if DoWork is performing a database call and you execute RunOrReplace(...) 10 times in rapid succession, the database call might not be immediately cancelled when the thread is, so you'll have 10 queries running, making all of them slow! This generally tends to work fine with Oracle, causing only minor delays, but I do not have experience with other databases (to speed up the cleanup, I have the cancelled worker tell Oracle to cancel the command). Proper use of the EventArgs described below mostly solves this.
Another minor cost is that whatever code this BackgroundWorker is performing must be compatible with this concept: it must be able to safely recover from being cancelled. The DoWorkEventArgs and RunWorkerCompletedEventArgs have a Cancel/Cancelled property which you should use. For example, if you do database calls in the DoWork method (mainly what I use this class for), you need to make sure you periodically check these properties and perform the appropriate clean-up.