Exactly when does WaitHandle WaitOne(int timeout) return? Does it return when the timeout has elapsed? I see some code online which suggests polling WaitOne() when implementing logic which does some cleanup before exiting. This implies that WaitOne() does not return when the timeout elapses; instead it returns whether or not it is signaled immediately after it is called.
public void SomeMethod()
{
    while (!yourEvent.WaitOne(POLLING_INTERVAL))
    {
        if (IsShutdownRequested())
        {
            // Add code to end gracefully here.
        }
    }

    // Your event was signaled so now we can proceed.
}
What I am trying to achieve here is a way to signal the WaitHandle using a CancellationToken while it is blocking the calling thread.
"I want to essentially stop blocking the calling thread while it is waiting even before the WaitHandle times out or is signaled" -- under what condition would you want the thread to become unblocked? Do you already have a CancellationToken object you're using?
If so, then you could do something like this:
public void SomeMethod(CancellationToken token)
{
    int waitResult;

    while ((waitResult = WaitHandle.WaitAny(
        new[] { yourEvent, token.WaitHandle }, POLLING_INTERVAL)) == WaitHandle.WaitTimeout)
    {
        if (IsShutdownRequested())
        {
            // Add code to end gracefully here.
        }
    }

    if (waitResult == 0)
    {
        // Your event was signaled so now we can proceed.
    }
    else if (waitResult == 1)
    {
        // The wait was cancelled via the token.
    }
}
Note that the use of WaitHandle is not necessarily ideal. .NET has modern, managed thread synchronization mechanisms that work more efficiently than WaitHandle (which is based on native OS objects that incur greater overhead). But if you must use WaitHandle to start with, the above is probably an appropriate way to extend your current implementation to work with CancellationToken.
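For example, if you are able to switch to a managed primitive such as ManualResetEventSlim, its Wait overload accepts both a timeout and a CancellationToken directly. A minimal sketch, assuming yourSlimEvent, POLLING_INTERVAL and IsShutdownRequested() stand in for the members from the code above:
public void SomeMethod(CancellationToken token)
{
    try
    {
        // Wait returns false on timeout, true when signaled,
        // and throws OperationCanceledException if the token is canceled.
        while (!yourSlimEvent.Wait(POLLING_INTERVAL, token))
        {
            if (IsShutdownRequested())
            {
                // Add code to end gracefully here.
            }
        }
        // The event was signaled so now we can proceed.
    }
    catch (OperationCanceledException)
    {
        // The wait was cancelled via the token.
    }
}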
If the above does not address your question, please improve the question by providing a good, minimal, complete code example that clearly illustrates the scenario, along with a detailed explanation of what that code example does now and how that's different from what you want it to do.
Related
I've just started using the Nito.AsyncEx package and AsyncLock instead of a normal lock() { ... } section where I have async calls within the locked section (since you can't use lock() in such cases, for good reasons I've just read about). This is within a job that I'm running from Hangfire. Let's call this the 'worker' thread.
In another thread, from an ASP.NET controller, I'd like to check if there's a thread that's currently executing within the locked section. If there's no thread in the locked section then I'll schedule a background job via Hangfire. If there's already a thread in the locked section then I don't want to schedule another one. (Yes, this might sound a little weird, but that's another story).
Is there a way to check this using the Nito.AsyncEx objects, or should I just set a flag at the start of the locked section and unset it at the end?
e.g. I'd like to do this:
public async Task DoAJobInTheBackground(string queueName, int someParam)
{
    // do other stuff...

    // Ensure I'm the only job in this section
    using (await _asyncLock.LockAsync())
    {
        await _aService.CallSomethingAsync();
    }

    // do other stuff...
}
and from a service called by a controller use my imaginary method IsSomeoneInThereNow():
public void ScheduleAJobUnlessOneIsRunning(string queueName, int someParam)
{
    if (!_asyncLock.IsSomeoneInThereNow())
    {
        _backgroundJobClient.Enqueue<MyJob>(x =>
            x.DoAJobInTheBackground(queueName, someParam));
    }
}
but so far I can only see how to do this with a separate variable (imagining _isAnybodyInHere is a thread-safe bool or I used Interlocked instead):
public async Task DoAJobInTheBackground(string queueName, int someParam)
{
    // do other stuff...

    // Ensure I'm the only job in this section
    using (await _asyncLock.LockAsync())
    {
        try
        {
            _isAnybodyInHere = true;
            await _aService.CallSomethingAsync();
        }
        finally
        {
            _isAnybodyInHere = false;
        }
    }

    // do other stuff...
}
and from a service called by a controller:
public void ScheduleAJobUnlessOneIsRunning(string queueName, int someParam)
{
    if (!_isAnybodyInHere)
    {
        _backgroundJobClient.Enqueue<MyJob>(x =>
            x.DoAJobInTheBackground(queueName, someParam));
    }
}
Really it feels like there should be a better way. The AsyncLock doc says:
You can call Lock or LockAsync with an already-cancelled CancellationToken
to attempt to acquire the AsyncLock immediately
without actually entering the wait queue.
but I don't understand how to do that, at least using the synchronous Lock method.
I don't understand how to do that
You can create a new CancellationToken and pass true to create one that is already canceled:
using (_asyncLock.Lock(new CancellationToken(canceled: true)))
{
...
}
The call to Lock will throw if the lock is already held.
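For instance, a rough equivalent of your imaginary IsSomeoneInThereNow() could be sketched like this (assuming _asyncLock is the same AsyncLock instance; the canceled wait should surface as an OperationCanceledException):
private bool TryDetectSomeoneInThere()
{
    try
    {
        // Attempt to take the lock without entering the wait queue.
        using (_asyncLock.Lock(new CancellationToken(canceled: true)))
        {
            return false; // Nobody was in the locked section at that instant.
        }
    }
    catch (OperationCanceledException)
    {
        return true; // The lock was held at that instant.
    }
}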
That said, I don't think this is a good solution to your problem. There's always the possibility that the background job is just about to finish, the controller checks the lock and determines it's held, and then the background job releases the lock. In that case, the controller will not trigger a background job.
You must never(!) make any assumptions about any other thread or process!
What you must instead do, in this particular example, is to "schedule another job," unless you have already done so. (To avoid "fork bombs.") Then, the job, once it actually begins executing, must decide: "Should I be doing this?" If not, the job quietly exits.
Or – perhaps the actual question here is: "Has somebody else already scheduled this job?"
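A hedged sketch of that idea, using a hypothetical _jobScheduled flag (static here so that both the scheduler and the job see the same flag within one process; if they run in different processes, the flag would have to live in shared storage instead):
private static int _jobScheduled; // 0 = not scheduled, 1 = scheduled (hypothetical flag)

public void ScheduleAJobUnlessOneIsScheduled(string queueName, int someParam)
{
    // Only the caller that flips the flag from 0 to 1 schedules the job.
    if (Interlocked.CompareExchange(ref _jobScheduled, 1, 0) == 0)
    {
        _backgroundJobClient.Enqueue<MyJob>(x =>
            x.DoAJobInTheBackground(queueName, someParam));
    }
}

public async Task DoAJobInTheBackground(string queueName, int someParam)
{
    try
    {
        // The job itself decides whether it should still run.
        using (await _asyncLock.LockAsync())
        {
            await _aService.CallSomethingAsync();
        }
    }
    finally
    {
        Interlocked.Exchange(ref _jobScheduled, 0); // Clear the flag when done.
    }
}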
I'm really new to multitasking/multithreading, but I'm working on a project where I think I need it. The user will be editing a fairly complex diagram, and I want the program to check the validity of the diagram. The validity check is non-trivial (polynomial time, though, not NP - seconds, not minutes or years, but I don't want to hold the user up for a few seconds after every change), so I would like the program to check validity in the background and highlight inconsistencies when it finds them. When the user makes certain changes to the diagram (changes the structure, not just the labels on elements), the validation will have to throw away what it was doing and start again.

I'm assuming the user will eventually take a break to think/go for a pee/go for a coffee/chat to that rather cute person two cubicles along, but in case they don't, I have to let the validation run to completion in some circumstances (before a save or a printout, for example). Broad-brush, what are the features of C# I'll need to learn, and how do I structure that?
Broad Brush. Here we go.
Q: "What are the features of C# I'll need to learn?"
A: You can get by nicely with a basic toolkit consisting (roughly speaking) of:
System.Threading.Tasks.Task
System.Threading.CancellationTokenSource
System.Threading.SemaphoreSlim
Q: "I don't want to hold the user up for a few seconds after every change"
A: OK, so we will never-ever block the UI thread. Fire off a Task to run a background validation routine that checks every now and then to see if it's been cancelled.
CancellationTokenSource _cts = null;
SemaphoreSlim ssBusy = new SemaphoreSlim(2);

private void ExecValidityCheck()
{
    ssBusy.Wait();
    Task.Run(() =>
    {
        try
        {
            _cts = new CancellationTokenSource();
            LongRunningValidation(_cts.Token);
        }
        finally
        {
            ssBusy.Release();
        }
    })
    .GetAwaiter()
    .OnCompleted(CheckForRestart);
}
We'll call CheckForRestart using GetAwaiter().OnCompleted(). This just means that, without blocking, we'll be notified via a callback when the thread finishes for one of three reasons:
Cancelled
Cancelled, but with an intent to start the validation over from the beginning.
Ran validation to completion
By calling CheckForRestart we determine whether to start it over again or not.
void CheckForRestart()
{
    BeginInvoke((MethodInvoker)delegate
    {
        if (_restart)
        {
            _restart = false;
            ExecValidityCheck();
        }
        else
        {
            buttonCancel.Enabled = false;
        }
    });
}
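LongRunningValidation itself isn't shown above; a hypothetical sketch of what it is assumed to look like, checking the token between slices of work, might be:
private void LongRunningValidation(CancellationToken token)
{
    for (int i = 0; i < 100; i++)
    {
        // Bail out promptly if cancel (or restart) has been requested.
        if (token.IsCancellationRequested)
        {
            return;
        }
        Thread.Sleep(50); // Stand-in for one slice of the real validation work.
    }
}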
Rather than post the complete code here, I pushed a simple working example to GitHub. You can browse it there or clone and run it; there is also a 20-second screen capture. When the RESTART button is clicked in the video, the code checks the CurrentCount property of the semaphore, which determines in a thread-safe way whether the validation routine is already running or not.
I hope I've managed to give you a few ideas about where to start. Sure, the explanation I've given here has a few holes but feel free to address your critical concerns in the comments and I'll try to respond.
You probably need to learn about asynchronous programming with async/await, and about cooperative cancellation. The standard practice for communicating cancellation is to throw an OperationCanceledException. Methods that are intended to be cancelable accept a CancellationToken as an argument, and frequently observe the IsCancellationRequested property of the token. So here is the basic structure of a cancelable Validate method with a boolean result:
bool Validate(CancellationToken token)
{
    for (int i = 0; i < 50; i++)
    {
        // Throw an OperationCanceledException if cancellation is requested
        token.ThrowIfCancellationRequested();
        Thread.Sleep(100); // Simulate some CPU-bound work
    }
    return true;
}
The "driver" of the CancellationToken is a class named CancellationTokenSource. In your case you'll have to create multiple instances of this class, one for every time that the diagram is changed. You must store them somewhere so that you can call later their Cancel method, so lets make two private fields inside the Form, one for the most recent CancellationTokenSource, and one for the most recent validation Task:
private Task<bool> _validateTask;
private CancellationTokenSource _validateCTS;
Finally you'll have to write the logic for the event handler of the Diagram_Changed event. It is probably not desirable to have multiple validation tasks running side by side, so it's a good idea to await for the completion of the previous task before launching a new one. It is important that awaiting a task doesn't block the UI. This introduces the complexity that multiple Diagram_Changed events, along with other unrelated events, can occur before the completion of the code inside the handler. Fortunately you can count on the single-threaded nature of the UI, and not have to worry about the thread-safety of accessing the _validateTask and _validateCTS fields by multiple asynchronous workflows. You do need to be aware though that after every await these fields may hold different values than before the await.
private async void Diagram_Changed(object sender, EventArgs e)
{
    bool validationResult;
    using (var cts = new CancellationTokenSource())
    {
        _validateCTS?.Cancel(); // Cancel the existing CancellationTokenSource
        _validateCTS = cts;     // Publish the new CancellationTokenSource
        if (_validateTask != null)
        {
            // Await the completion of the previous task before spawning a new one
            try { await _validateTask; }
            catch { } // Ignore any exception
        }
        if (cts != _validateCTS) return; // Preempted (the event was fired again)

        // Run the Validate method in a background thread
        var task = Task.Run(() => Validate(cts.Token), cts.Token);
        _validateTask = task; // Publish the new task
        try
        {
            validationResult = await task; // Await the completion of the task
        }
        catch (OperationCanceledException)
        {
            return; // Preempted (the validation was canceled)
        }
        finally
        {
            // Cleanup before disposing the CancellationTokenSource
            if (_validateTask == task) _validateTask = null;
            if (_validateCTS == cts) _validateCTS = null;
        }
    }
    // Do something here with the result of the validation
}
The Validate method should not include any UI manipulation code, because it will be running in a background thread. Any effects to the UI should occur after the completion of the method, through the returned result of the validation task.
I am writing a Windows service and found an example which suggests writing a polling Windows service as follows:
private void Poll()
{
    CancellationToken cancellationPoll = ctsPoll.Token;
    while (!cancellationPoll.WaitHandle.WaitOne(tsInterval))
    {
        PollDatabase();

        // Occasionally check the cancellation state.
        if (cancellationPoll.IsCancellationRequested)
        {
            break;
        }
    }
}
I am a little confused when it comes to the cancellation: do I need both cancellationPoll.WaitHandle.WaitOne() and cancellationPoll.IsCancellationRequested, or are they doing the same thing so that only one is required?
The !cancellationPoll.WaitHandle.WaitOne(tsInterval) is there to enforce the polling interval, so you will have at least tsInterval between polls (plus the operation duration):
--tsInterval--|--operation--|--tsInterval--|...
If you look at the documentation for CancellationToken.WaitHandle it says the following:
A WaitHandle that is signaled when the token is canceled.
So in your case the operation cancellationPoll.IsCancellationRequested is sufficient because you don't have anything after it. But imagine the situation like this:
while (!cancellationPoll.WaitHandle.WaitOne(tsInterval))
{
    // long operation A
    if (cancellationPoll.IsCancellationRequested)
    {
        break;
    }

    // long operation B
    if (cancellationPoll.IsCancellationRequested)
    {
        break;
    }

    // long operation C
}
In this case it makes sense to occasionally check the cancellation state to avoid running the next long operation.
The wait on the WaitHandle is redundant here: from the result standpoint it does the same as IsCancellationRequested - it indicates that cancellation has been requested (just in a slightly different way). So for your case you can pick a single method: either the WaitHandle or IsCancellationRequested. But please keep in mind that the WaitHandle is IDisposable and requires disposing the associated CancellationTokenSource. If you choose IsCancellationRequested, don't forget to add a call that yields the thread, such as Thread.Sleep, so that you don't over-utilize the CPU.
One scenario where the WaitHandle can be applied is when you need to wait for another handle and would like to introduce cancellation semantics into that wait:
WaitHandle.WaitAny(new [] { handleToWait, cancellationHandle });
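A short sketch of how the result of that WaitAny might be interpreted (handleToWait and cancellationHandle are the placeholders from the line above):
int index = WaitHandle.WaitAny(new[] { handleToWait, cancellationHandle });
if (index == 0)
{
    // handleToWait was signaled; carry on with the normal work.
}
else // index == 1
{
    // The cancellation token's handle was signaled, so stop.
}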
The !cancellationPoll.WaitHandle.WaitOne(tsInterval) is needed so that you do not wait out the whole interval. WaitOne(tsInterval) will return either because the token received a signal to cancel or because the time has run out. If the token received a signal to cancel, WaitOne(tsInterval) will return true and so end the loop.
For example, if you would do something like:
while (true)
{
    // long operation
    if (cancellationPoll.IsCancellationRequested)
    {
        break;
    }
    Thread.Sleep(tsInterval);
}
If the cancellation is requested while the thread is blocked by Thread.Sleep(), the operation would not know that a cancellation was requested until Thread.Sleep() finishes and the next loop iteration reaches the if statement.
I'm doing some asynchronous network I/O using Begin/End style methods. (It's actually a query against Azure Table Storage, but I don't think that matters.) I've implemented a client side timeout using the ThreadPool.RegisterWaitForSingleObject(). This is working fine as far as I can tell.
Because ThreadPool.RegisterWaitForSingleObject() takes a WaitHandle as an argument, I have to begin the I/O operation, then execute ThreadPool.RegisterWaitForSingleObject(). It seems like this introduces the possibility that the I/O completes before I even register the wait.
A simplified code sample:
private void RunQuery(QueryState queryState)
{
    // Start I/O operation
    IAsyncResult asyncResult = queryState.Query.BeginExecuteSegmented(NoopAsyncCallback, queryState);

    // What if the I/O operation completes here?
    queryState.TimeoutWaitHandle = ThreadPool.RegisterWaitForSingleObject(
        asyncResult.AsyncWaitHandle, QuerySegmentCompleted, asyncResult, queryTimeout, true);
}
private void QuerySegmentCompleted(object opState, bool timedOut)
{
    IAsyncResult asyncResult = opState as IAsyncResult;
    QueryState state = asyncResult.AsyncState as QueryState;

    // If the I/O completed quickly, could TimeoutWaitHandle be null here?
    // If so, what do I do about that?
    state.TimeoutWaitHandle.Unregister(asyncResult.AsyncWaitHandle);
}
What's the proper way to handle this? Do I still need to worry about Unregister()'ing the AsyncWaitHandle? If so, is there a fairly easy way to wait for it to be set?
Yep, you and everyone else has this problem. And it does not matter if the IO completed synchronously or not. There is still a race between the callback and the assignment. Microsoft should have provided the RegisteredWaitHandle to that callback function automatically. That would have solved everything. Oh well, hindsight is always 20-20 as they say.
What you need to do is keep reading the RegisteredWaitHandle variable until it is no longer null. It is okay to do this in a tight loop because the race is subtle enough that the loop will not be spinning around very many times.
private void RunQuery(QueryState queryState)
{
    // Start the operation.
    var asyncResult = queryState.Query.BeginExecuteSegmented(NoopAsyncCallback, queryState);

    // Register a callback.
    RegisteredWaitHandle shared = null;
    RegisteredWaitHandle produced = ThreadPool.RegisterWaitForSingleObject(
        asyncResult.AsyncWaitHandle,
        (state, timedOut) =>
        {
            var completedResult = (IAsyncResult)state;
            var completedState = completedResult.AsyncState as QueryState;
            while (true)
            {
                // Keep reading until the value is no longer null.
                RegisteredWaitHandle consumed = Interlocked.CompareExchange(ref shared, null, null);
                if (consumed != null)
                {
                    consumed.Unregister(completedResult.AsyncWaitHandle);
                    break;
                }
            }
        },
        asyncResult, queryTimeout, true);

    // Publish the RegisteredWaitHandle so that the callback can see it.
    Interlocked.CompareExchange(ref shared, produced, null);
}
You do not need to Unregister if the I/O completed before the timeout as it was the completion that signalled your callback. In fact upon reading the docs of the Unregister method it seems totally unnecessary to call it as you are executing only once and you are not Unregistering in an unrelated method.
http://msdn.microsoft.com/en-us/library/system.threading.registeredwaithandle.unregister.aspx
If a callback method is in progress when Unregister executes, waitObject is not signaled until the callback method completes. In particular, if a callback method executes Unregister, waitObject is not signaled until that callback method completes.
I have these two classes: LineWriter and AsyncLineWriter. My goal is to allow a caller to queue up multiple pending asynchronous invocations on AsyncLineWriter’s methods, but under the covers never truly allow them to run in parallel; i.e., they must somehow queue up and wait for one another. That's it. The rest of this question gives a complete example so there's absolutely no ambiguity about what I'm really asking.
LineWriter has a single synchronous method (WriteLine) that takes about 5 seconds to run:
public class LineWriter
{
    public void WriteLine(string line)
    {
        Console.WriteLine("1:" + line);
        Thread.Sleep(1000);
        Console.WriteLine("2:" + line);
        Thread.Sleep(1000);
        Console.WriteLine("3:" + line);
        Thread.Sleep(1000);
        Console.WriteLine("4:" + line);
        Thread.Sleep(1000);
        Console.WriteLine("5:" + line);
        Thread.Sleep(1000);
    }
}
AsyncLineWriter just encapsulates LineWriter and provides an asynchronous interface (BeginWriteLine and EndWriteLine):
public class AsyncLineWriter
{
    public AsyncLineWriter()
    {
        // Async Stuff
        m_LineWriter = new LineWriter();
        DoWriteLine = new WriteLineDelegate(m_LineWriter.WriteLine);

        // Locking Stuff
        m_Lock = new object();
    }

    #region Async Stuff

    private LineWriter m_LineWriter;
    private delegate void WriteLineDelegate(string line);
    private WriteLineDelegate DoWriteLine;

    public IAsyncResult BeginWriteLine(string line, AsyncCallback callback, object state)
    {
        EnterLock();
        return DoWriteLine.BeginInvoke(line, callback, state);
    }

    public void EndWriteLine(IAsyncResult result)
    {
        DoWriteLine.EndInvoke(result);
        ExitLock();
    }

    #endregion

    #region Locking Stuff

    private object m_Lock;

    private void EnterLock()
    {
        Monitor.Enter(m_Lock);
        Console.WriteLine("----EnterLock----");
    }

    private void ExitLock()
    {
        Console.WriteLine("----ExitLock----");
        Monitor.Exit(m_Lock);
    }

    #endregion
}
As I said in the first paragraph, my goal is to only allow one pending asynchronous operation at a time. I’d really like it if I didn’t need to use locks here; i.e., if BeginWriteLine could always return immediately with an IAsyncResult handle and yet retain the expected behavior, that would be great, but I can’t figure out how to do that. So the next best way to explain what I'm after seemed to be to just use locks.
Still, my “Locking Stuff” section isn’t working as expected. I would expect the above code (which runs EnterLock before an async operation begins and ExitLock after an async operation ends) to only allow one pending async operation to run at any given time.
If I run the following code:
static void Main(string[] args)
{
    AsyncLineWriter writer = new AsyncLineWriter();

    var aresult = writer.BeginWriteLine("atest", null, null);
    var bresult = writer.BeginWriteLine("btest", null, null);
    var cresult = writer.BeginWriteLine("ctest", null, null);

    writer.EndWriteLine(aresult);
    writer.EndWriteLine(bresult);
    writer.EndWriteLine(cresult);

    Console.WriteLine("----Done----");
    Console.ReadLine();
}
I expect to see the following output:
----EnterLock----
1:atest
2:atest
3:atest
4:atest
5:atest
----ExitLock----
----EnterLock----
1:btest
2:btest
3:btest
4:btest
5:btest
----ExitLock----
----EnterLock----
1:ctest
2:ctest
3:ctest
4:ctest
5:ctest
----ExitLock----
----Done----
But instead I see the following, which means they’re all just running in parallel as if the locks had no effect:
----EnterLock----
----EnterLock----
----EnterLock----
1:atest
1:btest
1:ctest
2:atest
2:btest
2:ctest
3:atest
3:btest
3:ctest
4:atest
4:btest
4:ctest
5:atest
5:btest
5:ctest
----ExitLock----
----ExitLock----
----ExitLock----
----Done----
I’m assuming the locks are “ignored” because (and correct me if I’m wrong) I’m locking them all from the same thread. My question is: how can I get my expected behavior? And “don’t use asynchronous operations” is not an acceptable answer.
There is a bigger, more complex, real-life use case for this, using asynchronous operations, where sometimes it’s not possible to have multiple truly parallel operations, but I need to at least emulate the behavior even if it means queuing the operations and running them one after another. Specifically, the interface to AsyncLineWriter shouldn’t change, but its internal behavior somehow needs to queue up any async operations it’s given in a thread-safe manner. Another gotcha is that I can't add locks into LineWriter's WriteLine, because in my real case this is a method which I can't change (although, in this example, doing so actually does give me the expected output).
A link to some code designed to solve a similar issue might be good enough to get me on the right path. Or perhaps some alternative ideas. Thanks.
P.S. If you're wondering what kind of use case could possibly use such a thing: it's a class which maintains a network connection, upon which only one operation can be active at a time; I'm using asynchronous calls for each operation. There's no obvious way to have truly parallel lines of communication go on over a single network connection, so they will need to wait for each other in one way or another.
Locks cannot possibly work here. You're trying to enter the lock on one thread (the calling thread) and exit from it on a different thread (the ThreadPool).
Since .NET locks are re-entrant, entering the lock a second time on the caller thread doesn't wait. (Had it waited, it would deadlock, since you can only exit the lock on that thread.)
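A tiny illustration of that re-entrancy (Monitor.Enter on the same object from the same thread just increments a count instead of blocking):
object gate = new object();
Monitor.Enter(gate);
Monitor.Enter(gate); // Does not block: the same thread re-enters the lock.
Console.WriteLine("re-entered without waiting");
Monitor.Exit(gate);  // Must exit once per Enter, on the same thread.
Monitor.Exit(gate);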
Instead, you should create a method that calls the synchronous version in a lock, then call that method asynchronously.
This way, you're entering the lock on the async thread rather than the caller thread, and exiting it on the same thread after the method finishes.
However, this is inefficient, since you're wasting threads waiting on the lock.
It would be better to make a single thread with a queue of delegates to execute on it, so that calling the async method would start one copy of that thread, or add it to the queue if the thread is already running.
When this thread finishes the queue, it would exit, and then be restarted next time you call an async method.
I've done this in a similar context; see my earlier question.
Note that you'll need to implement IAsyncResult yourself, or use a callback pattern (which is generally much simpler).
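A rough sketch of that queue idea, using a BlockingCollection drained by one dedicated worker thread (a hypothetical, simplified class: it keeps the worker alive permanently rather than restarting it on demand, and exposes plain Actions instead of the IAsyncResult pattern; requires System.Collections.Concurrent and System.Threading):
public class SerialWorkQueue : IDisposable
{
    private readonly BlockingCollection<Action> _queue = new BlockingCollection<Action>();
    private readonly Thread _worker;

    public SerialWorkQueue()
    {
        // A single thread drains the queue, so queued actions never overlap.
        _worker = new Thread(() =>
        {
            foreach (Action action in _queue.GetConsumingEnumerable())
            {
                action();
            }
        });
        _worker.IsBackground = true;
        _worker.Start();
    }

    public void Enqueue(Action action)
    {
        _queue.Add(action);
    }

    public void Dispose()
    {
        _queue.CompleteAdding(); // Let the worker finish the remaining items and exit.
        _worker.Join();
    }
}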
I created the following "Asynchronizer" class:
public class Asynchronizer
{
    public readonly Action<Action> DoAction;

    public Asynchronizer()
    {
        m_Lock = new object();
        DoAction = new Action<Action>(ActionWrapper);
    }

    private object m_Lock;

    private void ActionWrapper(Action action)
    {
        lock (m_Lock)
        {
            action();
        }
    }
}
You can use it like this:
public IAsyncResult BeginWriteLine(string line, AsyncCallback callback, object state)
{
    Action action = () => m_LineWriter.WriteLine(line);
    return m_Asynchronizer.DoAction.BeginInvoke(action, callback, state);
}

public void EndWriteLine(IAsyncResult result)
{
    m_Asynchronizer.DoAction.EndInvoke(result);
}
This way it's easy to synchronize multiple asynchronous operations even if they have different method signatures. E.g., we could also do this:
public IAsyncResult BeginWriteLine2(string line1, string line2, AsyncCallback callback, object state)
{
    Action action = () => m_LineWriter.WriteLine(line1, line2);
    return m_Asynchronizer.DoAction.BeginInvoke(action, callback, state);
}

public void EndWriteLine2(IAsyncResult result)
{
    m_Asynchronizer.DoAction.EndInvoke(result);
}
A user can queue up as many BeginWriteLine/EndWriteLine and BeginWriteLine2/EndWriteLine2 calls as they want, and they will be executed in a staggered, synchronized fashion (although you will have as many threads open as there are pending operations, so this is only practical if you know you'll only ever have a few asynchronous operations pending at a time). A better, more complex solution is, as SLaks pointed out, to use a dedicated queue thread and queue the actions onto it.