Program hangs, possibly due to lambda closures or dispatcher issue - C#

I implemented an experimental lightweight in-memory message bus, where recipients can subscribe to messages via the Subscribe() method which I pasted below.
A sender can send messages and the message bus will invoke an internal callback, internalCallback. Before the callback is invoked, the message may be deep-cloned and marshaled onto a UI thread.
This is where my problem arises: when I comment out the UI dispatcher (as is done in the snippet below), the callback is invoked correctly. With the dispatcher active, the entire application hangs (no run-time error is thrown). What confuses me even more is that the code below worked perfectly fine when I ran the calling method on a UI thread. But now the framework may also send messages from different tasks/threads, and this is when the problems arose.
What should I be looking into or possibly adjust?
Thanks
public void Subscribe<T>(string subscriberId, string topic, Action<string, T> callback, bool returnOnUiThread, bool makeDeepCopy, bool catchAll)
{
    //create new peer connection if it does not yet exist
    if (!_peerConnections.ContainsKey(subscriberId))
    {
        var newPeer = new PeerConnection(subscriberId)
        {
            ConnectionStatus = PeerConnectionStatus.Connected,
            CreationTimeStamp = DateTime.Now,
            LastAliveTimeStamp = DateTime.Now
        };
        _peerConnections.Add(subscriberId, newPeer);
    }

    var internalCallBack = new Action<string, object>((header, msg) =>
    {
        //make a deep copy via serialization if requested
        if (makeDeepCopy == true)
        {
            //try deep clone
            var serializedObject = Serializers.JsonStringFromObject(msg);
            msg = Serializers.ObjectFromJsonString<T>(serializedObject);
        }

        var handle = callback;
        handle(header, (T)msg);

        ////return on ui thread if requested
        //if (returnOnUiThread == true)
        //{
        //    Application.Current.Dispatcher.Invoke(() =>
        //    {
        //        var handle = callback;
        //        handle(header, (T)msg);
        //    });
        //}
        //else
        //{
        //    var handle = callback;
        //    handle(header, (T)msg);
        //}
    });

    //adding subscription to collection
    var subscription = new Subscription(subscriberId, topic, internalCallBack, catchAll);
    _subscriptions.Add(subscription);
}

You should use BeginInvoke, as Invoke can lead to a deadlock. Invoke acts like a lock: it blocks the calling thread until the UI thread has finished running the action. If the UI thread is itself waiting (directly or indirectly) on your worker thread, the two wait on each other and the application hangs. BeginInvoke instead posts the request onto the Windows message pump, where the UI thread processes it later without blocking the worker thread.
Even though the Windows GUI isn't multithreaded, you can still deadlock it. Check out the article below.
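As a rough sketch, here is what the callback from the question might look like with that change (Serializers, callback and the flags are the question's own names; the only difference from the commented-out block is BeginInvoke in place of Invoke, so the sending thread never waits on the UI thread):

var internalCallBack = new Action<string, object>((header, msg) =>
{
    //make a deep copy via serialization if requested
    if (makeDeepCopy)
    {
        var serializedObject = Serializers.JsonStringFromObject(msg);
        msg = Serializers.ObjectFromJsonString<T>(serializedObject);
    }

    if (returnOnUiThread)
    {
        //queue the callback onto the UI message loop and return immediately
        Application.Current.Dispatcher.BeginInvoke(new Action(() => callback(header, (T)msg)));
    }
    else
    {
        callback(header, (T)msg);
    }
});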
More
Multithreaded toolkits: A failed dream? (Oracle blog, October 19, 2004)

Related

Cancel Tasks in c# [duplicate]

We could abort a Thread like this:
Thread thread = new Thread(SomeMethod);
// ...
thread.Abort();
But can I abort a Task (in .NET 4.0) in the same way, not via the cancellation mechanism? I want to kill the Task immediately.
The guidance against using a thread abort is controversial. I think there is still a place for it, but only in exceptional circumstances. However, you should always attempt to design around it and see it as a last resort.
Example;
You have a simple Windows Forms application that connects to a blocking, synchronous web service, and it executes a function on that web service inside a parallel loop.
CancellationTokenSource cts = new CancellationTokenSource();
ParallelOptions po = new ParallelOptions();
po.CancellationToken = cts.Token;
po.MaxDegreeOfParallelism = System.Environment.ProcessorCount;
Parallel.ForEach(iListOfItems, po, (item, loopState) =>
{
    Thread.Sleep(120000); // pretend web service call
});
Say in this example the blocking call takes 2 minutes to complete. Now I set my MaxDegreeOfParallelism to, say, ProcessorCount. iListOfItems has 1000 items to process.
The user clicks the process button and the loop commences; we have up to 20 threads executing against the 1000 items in the iListOfItems collection. Each iteration executes on its own thread, and the threads used by Parallel.ForEach are foreground threads. This means that regardless of the main application shutting down, the app domain will be kept alive until all the threads have finished.
However, the user needs to close the application for some reason; say they close the form.
Those 20 threads will continue to execute until all 1000 items are processed. This is not ideal in this scenario: the application will not exit as the user expects and will continue to run behind the scenes, as can be seen by taking a look in Task Manager.
Say the user then tries to rebuild the app (VS 2010): it reports that the exe is locked, so they would have to go into Task Manager to kill it, or just wait until all 1000 items are processed.
I would not blame you for saying: but of course! I should be cancelling these threads using the CancellationTokenSource object and calling Cancel ... but there are some problems with this as of .NET 4.0. Firstly, this is still never going to result in a thread abort, which would offer up an abort exception followed by thread termination; the app domain will instead need to wait for the threads to finish normally, and this means waiting for the last blocking call, i.e. the very last running iteration (thread) that ultimately gets to call po.CancellationToken.ThrowIfCancellationRequested.
In the example this would mean the app domain could still stay alive for up to 2 minutes, even though the form has been closed and Cancel has been called.
Note that calling Cancel on the CancellationTokenSource does not throw an exception on the processing thread(s) (which would interrupt the blocking call, similar to a thread abort, and stop the execution). Instead, an exception is cached, and only once all the other threads (concurrent iterations) eventually finish and return is the exception thrown in the initiating thread (where the loop is declared).
I chose not to use the Cancel option on the CancellationTokenSource object. This is wasteful and arguably commits the well-known anti-pattern of controlling the flow of the code with exceptions.
Instead, it is arguably 'better' to implement a simple thread-safe flag, e.g. a bool stopExecuting. Then, within the loop, check the value of stopExecuting, and if it has been set to true by the external influence, take an alternate path to close down gracefully. Since we should not call Cancel, this precludes checking CancellationTokenSource.IsCancellationRequested, which would otherwise be another option.
Something like the following if condition would be appropriate within the loop:
if (loopState.ShouldExitCurrentIteration || loopState.IsExceptional || stopExecuting) {loopState.Stop(); return;}
The iteration will now exit in a 'controlled' manner, as well as terminating further iterations. But, as I said, this does little for our issue of having to wait on the long-running, blocking call(s) made within each iteration (parallel-loop thread), since those have to complete before each thread can get to the point of checking whether it should stop.
In summary, as the user closes the form, the 20 threads will be signalled to stop via stopExecuting, but they will only stop once they have finished executing their long-running function call.
We can't do anything about the fact that the application domain will stay alive and only be released once all foreground threads have completed, and this means there will be a delay associated with waiting for any blocking calls made within the loop to complete. (A minimal sketch of the flag-based approach follows below.)
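A minimal sketch of that flag-based shutdown, under the assumptions of the example above (the Item type, the ProcessItems wrapper and RequestStop are placeholders, not part of the original code):

// Sketch only: a volatile flag polled at the top of each iteration.
private volatile bool stopExecuting;

private void ProcessItems(IEnumerable<Item> iListOfItems) // Item is a placeholder type
{
    var po = new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount };
    Parallel.ForEach(iListOfItems, po, (item, loopState) =>
    {
        if (loopState.ShouldExitCurrentIteration || loopState.IsExceptional || stopExecuting)
        {
            loopState.Stop();
            return;
        }

        Thread.Sleep(120000); // pretend web service call, as in the example above
    });
}

// Called from e.g. the form's FormClosing handler to request a graceful stop.
public void RequestStop()
{
    stopExecuting = true;
}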
Only a true thread abort can interrupt the blocking call, and you must, in the aborted thread's exception handler, do your best to mitigate leaving the system in an unstable/undefined state; that goes without question. Whether that's appropriate is for the programmer to decide, based on what resource handles they chose to maintain and how easy it is to close them in the thread's finally block. You could register with a token to terminate on cancel as a semi-workaround, i.e.
CancellationTokenSource cts = new CancellationTokenSource();
ParallelOptions po = new ParallelOptions();
po.CancellationToken = cts.Token;
po.MaxDegreeOfParallelism = System.Environment.ProcessorCount;
Parallel.ForEach(iListOfItems, po, (item, loopState) =>
{
    using (cts.Token.Register(Thread.CurrentThread.Abort))
    {
        try
        {
            Thread.Sleep(120000); // pretend web service call
        }
        catch (ThreadAbortException ex)
        {
            // log etc.
        }
        finally
        {
            // clean up here
        }
    }
});
but this will still result in an exception in the declaring thread.
All things considered, interrupting blocking calls made inside the Parallel loop constructs could have been exposed as an option on ParallelOptions, avoiding the use of more obscure parts of the library. And the fact that there is no option to cancel without throwing an exception in the declaring method strikes me as a possible oversight.
But can I abort a Task (in .NET 4.0) in the same way, not via the cancellation mechanism? I want to kill the Task immediately.
Other answerers have told you not to do it. But yes, you can do it. You can supply Thread.Abort() as the delegate to be called by the Task's cancellation mechanism. Here is how you could configure this:
class HardAborter
{
    public bool WasAborted { get; private set; }
    private CancellationTokenSource Canceller { get; set; }
    private Task<object> Worker { get; set; }

    public void Start(Func<object> DoFunc)
    {
        WasAborted = false;

        // start a task with a means to do a hard abort (unsafe!)
        Canceller = new CancellationTokenSource();
        Worker = Task.Factory.StartNew(() =>
        {
            try
            {
                // specify this thread's Abort() as the cancel delegate
                using (Canceller.Token.Register(Thread.CurrentThread.Abort))
                {
                    return DoFunc();
                }
            }
            catch (ThreadAbortException)
            {
                WasAborted = true;
                return false;
            }
        }, Canceller.Token);
    }

    public void Abort()
    {
        Canceller.Cancel();
    }
}
disclaimer: don't do this.
Here is an example of what not to do:
var doNotDoThis = new HardAborter();

// start a thread writing to the console
doNotDoThis.Start(() =>
{
    while (true)
    {
        Thread.Sleep(100);
        Console.Write(".");
    }
    return null;
});

// wait a second to see some output and show the WasAborted value as false
Thread.Sleep(1000);
Console.WriteLine("WasAborted: " + doNotDoThis.WasAborted);

// wait another second, abort, and print the time
Thread.Sleep(1000);
doNotDoThis.Abort();
Console.WriteLine("Abort triggered at " + DateTime.Now);

// wait until the abort finishes and print the time
while (!doNotDoThis.WasAborted) { Thread.CurrentThread.Join(0); }
Console.WriteLine("WasAborted: " + doNotDoThis.WasAborted + " at " + DateTime.Now);

Console.ReadKey();
You shouldn't use Thread.Abort()
Tasks can be Cancelled but not aborted.
The Thread.Abort() method is (severely) deprecated.
Both Threads and Tasks should cooperate when being stopped; otherwise you run the risk of leaving the system in an unstable/undefined state.
If you do need to run a Process and kill it from the outside, the only safe option is to run it in a separate AppDomain.
This answer is about .NET 3.5 and earlier.
Thread-abort handling has been improved since then, among other things by changing the way finally blocks work.
But Thread.Abort is still a suspect solution that you should always try to avoid.
And in .NET Core (and .NET 5+), Thread.Abort() now throws a PlatformNotSupportedException.
Kind of underscoring the 'deprecated' point.
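For completeness, a minimal sketch of the cooperative pattern this answer recommends: the task polls a CancellationToken and ends itself, so nothing is aborted.

using System;
using System.Threading;
using System.Threading.Tasks;

class CooperativeStop
{
    static void Main()
    {
        var cts = new CancellationTokenSource();
        var worker = Task.Run(() =>
        {
            while (true)
            {
                // The task checks the token and exits on its own; no abort involved.
                cts.Token.ThrowIfCancellationRequested();
                Thread.Sleep(100); // simulated unit of work
            }
        }, cts.Token);

        Thread.Sleep(1000);
        cts.Cancel();

        try { worker.Wait(); }
        catch (AggregateException) { /* expected: contains a TaskCanceledException */ }

        Console.WriteLine("Worker status: " + worker.Status); // Canceled
    }
}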
Everyone knows (hopefully) that it's bad to terminate a thread. The problem is when you don't own the piece of code you're calling. If that code is running in some do/while infinite loop, itself calling some native functions, etc., you're basically stuck. When this happens during your own code's termination, Stop or Dispose call, it's kind of OK to start shooting the bad guys (so you don't become a bad guy yourself).
So, for what it's worth, I've written these two blocking functions that use their own native thread, not a thread from the pool or a thread created by the CLR. They will stop the thread if a timeout occurs:
// returns true if the call ran to completion successfully, false otherwise
public static bool RunWithAbort(this Action action, int milliseconds) => RunWithAbort(action, new TimeSpan(0, 0, 0, 0, milliseconds));

public static bool RunWithAbort(this Action action, TimeSpan delay)
{
    if (action == null)
        throw new ArgumentNullException(nameof(action));

    var source = new CancellationTokenSource(delay);
    var success = false;
    var handle = IntPtr.Zero;
    var fn = new Action(() =>
    {
        using (source.Token.Register(() => TerminateThread(handle, 0)))
        {
            action();
            success = true;
        }
    });

    handle = CreateThread(IntPtr.Zero, IntPtr.Zero, fn, IntPtr.Zero, 0, out var id);
    WaitForSingleObject(handle, 100 + (int)delay.TotalMilliseconds);
    CloseHandle(handle);
    return success;
}

// returns what the function should return if the call ran to completion successfully, default(T) otherwise
public static T RunWithAbort<T>(this Func<T> func, int milliseconds) => RunWithAbort(func, new TimeSpan(0, 0, 0, 0, milliseconds));

public static T RunWithAbort<T>(this Func<T> func, TimeSpan delay)
{
    if (func == null)
        throw new ArgumentNullException(nameof(func));

    var source = new CancellationTokenSource(delay);
    var item = default(T);
    var handle = IntPtr.Zero;
    var fn = new Action(() =>
    {
        using (source.Token.Register(() => TerminateThread(handle, 0)))
        {
            item = func();
        }
    });

    handle = CreateThread(IntPtr.Zero, IntPtr.Zero, fn, IntPtr.Zero, 0, out var id);
    WaitForSingleObject(handle, 100 + (int)delay.TotalMilliseconds);
    CloseHandle(handle);
    return item;
}

[DllImport("kernel32")]
private static extern bool TerminateThread(IntPtr hThread, int dwExitCode);

[DllImport("kernel32")]
private static extern IntPtr CreateThread(IntPtr lpThreadAttributes, IntPtr dwStackSize, Delegate lpStartAddress, IntPtr lpParameter, int dwCreationFlags, out int lpThreadId);

[DllImport("kernel32")]
private static extern bool CloseHandle(IntPtr hObject);

[DllImport("kernel32")]
private static extern int WaitForSingleObject(IntPtr hHandle, int dwMilliseconds);
While it's possible to abort a thread, in practice it's almost always a very bad idea to do so. Aborting a thread means the thread is not given a chance to clean up after itself, leaving resources undeleted and things in unknown states.
In practice, if you abort a thread, you should only do so in conjunction with killing the process. Sadly, all too many people think Thread.Abort is a viable way of stopping something and continuing on; it's not.
Since Tasks run as threads, you can call Thread.Abort on them, but as with generic threads you almost never want to do this, except as a last resort.
I faced a similar problem with Excel's Application.Workbooks.
If the application is busy, the method hangs eternally. My approach was to wrap the call in a task and wait; if it takes too long, I just leave the task be and move on (there is no harm "in this case": Excel will unfreeze the moment the user finishes whatever it is busy with).
In this case, it's impossible to use a cancellation token. The advantage is that I don't need excessive code, aborting threads, etc.
public static List<Workbook> GetAllOpenWorkbooks()
{
    //gets all open Excel applications
    List<Application> applications = GetAllOpenApplications();

    //this is what we want to get from the third-party library that may freeze
    List<Workbook> books = null;

    //as Excel may freeze here due to being busy, we try to get the workbooks asynchronously
    Task task = Task.Run(() =>
    {
        try
        {
            books = applications
                .SelectMany(app => app.Workbooks.OfType<Workbook>()).ToList();
        }
        catch { }
    });

    //wait for task completion
    task.Wait(5000);
    return books; //handle outside if books is null
}
This is my implementation of an idea presented by @Simon-Mourier, using a .NET thread; short and simple code:
public static bool RunWithAbort(this Action action, int milliseconds)
{
    if (action == null) throw new ArgumentNullException(nameof(action));

    var success = false;
    var thread = new Thread(() =>
    {
        action();
        success = true;
    });
    thread.IsBackground = true;
    thread.Start();
    thread.Join(milliseconds);
    thread.Abort();
    return success;
}
You can "abort" a task by running it on a thread you control and aborting that thread. This causes the task to complete in a faulted state with a ThreadAbortException. You can control thread creation with a custom task scheduler, as described in this answer. Note that the caveat about aborting a thread applies.
(If you don't ensure the task is created on its own thread, aborting it would abort either a thread-pool thread or the thread initiating the task, neither of which you typically want to do.)
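The linked answer isn't reproduced here, but a rough sketch of the idea, assuming a dedicated thread per task is acceptable (DedicatedThreadTaskScheduler is a hypothetical name, not from the linked answer), could look like this:

using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

// Sketch only: run each queued task on its own dedicated thread so that
// thread can be aborted without touching the thread pool.
class DedicatedThreadTaskScheduler : TaskScheduler
{
    protected override void QueueTask(Task task)
    {
        var thread = new Thread(() => TryExecuteTask(task)) { IsBackground = true };
        thread.Start();
    }

    // Never inline onto the calling thread; we always want the dedicated thread.
    protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued) => false;

    protected override IEnumerable<Task> GetScheduledTasks() => Enumerable.Empty<Task>();
}

A task started via Task.Factory.StartNew(work, CancellationToken.None, TaskCreationOptions.None, new DedicatedThreadTaskScheduler()) then runs on a thread you own, and inside the task body you can register Thread.CurrentThread.Abort with a cancellation token exactly as in the HardAborter example above.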
using System;
using System.Threading;
using System.Threading.Tasks;
...
var cts = new CancellationTokenSource();
var task = Task.Run(() => { while (true) { } });

Parallel.Invoke(() =>
{
    task.Wait(cts.Token);
}, () =>
{
    Thread.Sleep(1000);
    cts.Cancel();
});
This is a simple snippet that uses a CancellationTokenSource to stop waiting on a never-ending task; cancelling the token ends the Wait, though the underlying task itself keeps running.

BrokeredMessage Automatically Disposed after calling OnMessage()

I am trying to queue up items from an Azure Service Bus so I can process them in bulk. I am aware that the Azure Service Bus has a ReceiveBatch(), but it seems problematic for the following reasons:
I can only get a max of 256 messages at a time, and even that can vary based on message size.
Even if I peek to see how many messages are waiting, I don't know how many ReceiveBatch calls to make, because I don't know how many messages each call will give me back. Since messages will keep coming in, I can't just continue to make requests until the queue is empty, since it will never be empty.
I decided to just use the message listener, which is cheaper than doing wasted peeks and gives me more control.
Basically I am trying to let a set number of messages build up and
then process them at once. I use a timer to force a delay but I need
to be able to queue my items as they come in.
Based on my timer requirement, it seemed like BlockingCollection was not a good option, so I am trying to use ConcurrentBag.
var batchingQueue = new ConcurrentBag<BrokeredMessage>();

myQueueClient.OnMessage((m) =>
{
    Console.WriteLine("Queueing message");
    batchingQueue.Add(m);
});

while (true)
{
    var sw = WaitableStopwatch.StartNew();

    BrokeredMessage msg;
    while (batchingQueue.TryTake(out msg)) // <== Object is already disposed
    {
        // ...do this until I have a thousand ready to be written to DB in batch
        Console.WriteLine("Completing message");
        msg.Complete(); // <== ERRORS HERE
    }

    sw.Wait(MINIMUM_DELAY);
}
However as soon as I access the message outside of the OnMessage
pipeline it shows the BrokeredMessage as already being disposed.
I am thinking this must be some automatic behavior of OnMessage and I don't see any way to do anything with the message other than process it right away which I don't want to do.
This is incredibly easy to do with BlockingCollection.
var batchingQueue = new BlockingCollection<BrokeredMessage>();

myQueueClient.OnMessage((m) =>
{
    Console.WriteLine("Queueing message");
    batchingQueue.Add(m);
});
And your consumer thread:
foreach (var msg in batchingQueue.GetConsumingEnumerable())
{
    Console.WriteLine("Completing message");
    msg.Complete();
}
GetConsumingEnumerable returns an iterator that consumes items in the queue until the IsCompleted property is set and the queue is empty. If the queue is empty but IsCompleted is False, it does a non-busy wait for the next item.
To cancel the consumer thread (i.e. shut down the program), you stop adding things to the queue and have the main thread call batchingQueue.CompleteAdding. The consumer will empty the queue, see that the IsCompleted property is True, and exit.
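For instance, a minimal shutdown sketch under the same setup (consumerTask is a hypothetical handle to the consumer loop above, running on its own task):

// Sketch: shut the pipeline down cleanly.
void Shutdown(BlockingCollection<BrokeredMessage> batchingQueue, Task consumerTask)
{
    // No more items will be added; GetConsumingEnumerable ends once the queue drains.
    batchingQueue.CompleteAdding();

    // Wait for the consumer loop shown above to finish completing the remaining messages.
    consumerTask.Wait();
}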
Using BlockingCollection here is better than ConcurrentBag or ConcurrentQueue, because the BlockingCollection interface is easier to work with. In particular, the use of GetConsumingEnumerable relieves you from having to worry about checking the count or doing busy waits (polling loops). It just works.
Also note that ConcurrentBag has some rather strange removal behavior. In particular, the order in which items are removed differs depending on which thread removes the item. The thread that created the bag removes items in a different order than other threads. See Using the ConcurrentBag Collection for the details.
You haven't said why you want to batch items on input. Unless there's an overriding performance reason to do so, it doesn't seem like a particularly good idea to complicate your code with that batching logic.
If you want to do batch writes to the database, then I would suggest using a simple List<T> to buffer the items. If you have to process the items before they're written to the database, then use the technique I showed above to process them. Then, rather than writing directly to the database, add the item to a list. When the list reaches 1,000 items, or a given amount of time elapses, allocate a new list and start a task to write the old list to the database. Like this:
// at class scope

// Flush every 5 minutes.
private readonly TimeSpan FlushDelay = TimeSpan.FromMinutes(5);
private const int MaxBufferItems = 1000;

// Timer for the buffer flush. Create it in the constructor, because a field
// initializer cannot reference the instance members TimedFlush and FlushDelay, e.g.:
//   _flushTimer = new System.Threading.Timer(TimedFlush, null, (int)FlushDelay.TotalMilliseconds, Timeout.Infinite);
System.Threading.Timer _flushTimer;

// A lock for the list. Unless you're getting hundreds of thousands
// of items per second, this will not be a performance problem.
object _listLock = new Object();

List<BrokeredMessage> _recordBuffer = new List<BrokeredMessage>();
Then, in your consumer:
foreach (var msg in batchingQueue.GetConsumingEnumerable())
{
    // process the message
    Console.WriteLine("Completing message");
    msg.Complete();

    lock (_listLock)
    {
        _recordBuffer.Add(msg);
        if (_recordBuffer.Count >= MaxBufferItems)
        {
            // Stop the timer
            _flushTimer.Change(Timeout.Infinite, Timeout.Infinite);

            // Save the old list and allocate a new one
            var myList = _recordBuffer;
            _recordBuffer = new List<BrokeredMessage>();

            // Start a task to write to the database
            Task.Factory.StartNew(() => FlushBuffer(myList));

            // Restart the timer
            _flushTimer.Change((int)FlushDelay.TotalMilliseconds, Timeout.Infinite);
        }
    }
}
private void TimedFlush(object state)
{
    bool lockTaken = false;
    List<BrokeredMessage> myList = null;

    try
    {
        // TryEnter takes the lock flag by ref and sets it if the lock was acquired.
        Monitor.TryEnter(_listLock, 0, ref lockTaken);
        if (lockTaken)
        {
            // Save the old list and allocate a new one
            myList = _recordBuffer;
            _recordBuffer = new List<BrokeredMessage>();
        }
    }
    finally
    {
        if (lockTaken)
        {
            Monitor.Exit(_listLock);
        }
    }

    if (myList != null)
    {
        FlushBuffer(myList);
    }

    // Restart the timer
    _flushTimer.Change((int)FlushDelay.TotalMilliseconds, Timeout.Infinite);
}
The idea here is that you get the old list out of the way, allocate a new list so that processing can continue, and then write the old list's items to the database. The lock is there to prevent the timer and the record counter from stepping on each other. Without the lock, things would likely appear to work fine for a while, and then you'd get weird crashes at unpredictable times.
I like this design because it eliminates polling by the consumer. The only thing I don't like is that the consumer has to be aware of the timer (i.e. it has to stop and then restart the timer). With a little more thought, I could eliminate that requirement. But it works well the way it's written.
Switching to OnMessageAsync solved the problem for me
_queueClient.OnMessageAsync(async receivedMessage =>
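A rough sketch of that approach, assuming the message is fully processed inside the async callback before it is completed (ProcessAsync is a placeholder for your own work, not part of the Service Bus API):

// Sketch only: process and complete the message inside the async callback.
_queueClient.OnMessageAsync(async receivedMessage =>
{
    await ProcessAsync(receivedMessage); // placeholder for your own processing
    await receivedMessage.CompleteAsync();
}, new OnMessageOptions { AutoComplete = false, MaxConcurrentCalls = 1 });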
I reached out to Microsoft about the BrokeredMessage being disposed issue on MSDN; this is the response:
Very basic rule and I am not sure if this is documented. The received message needs to be processed in the callback function's life time. In your case, messages will be disposed when async callback completes, this is why your complete attempts are failing with ObjectDisposedException in another thread.
I don't really see how queuing messages for further processing helps on the throughput. This will add more burden to client for sure. Try processing the message in the async callback, that should be performant enough.
In my case that means I can't use ServiceBus in the way I wanted to, and I have to re-think how I wanted things to work. Bugger.
I had the same issue when I started to work with the Azure Service Bus service.
I found that the OnMessage method always disposes the BrokeredMessage object. The approach proposed by Jim Mischel didn't help me (but it was very interesting to read - thanks!).
After some investigation I found that the whole approach was wrong. Let me explain the right way to do what you want.
Use the BrokeredMessage.Complete() method only inside the OnMessage method handler.
If you need to process the message outside of this method, you should use the QueueClient.Complete(Guid lockToken) method instead. "LockToken" is a property of the BrokeredMessage object.
Example:
var messageOptions = new OnMessageOptions
{
    AutoComplete = false,
    AutoRenewTimeout = TimeSpan.FromMinutes(5),
    MaxConcurrentCalls = 1
};

var buffer = new Dictionary<string, Guid>();

// get messages from the queue
myQueueClient.OnMessage(
    m => buffer.Add(key: m.GetBody<string>(), value: m.LockToken),
    messageOptions // these options tell Service Bus to keep the message locked in the queue until we complete it
);

foreach (var item in buffer)
{
    try
    {
        Console.WriteLine($"Process item: {item.Key}");
        myQueueClient.Complete(item.Value); // you can also use CompleteBatch(...) to improve performance
    }
    catch
    {
        // "unfreeze" the message in Service Bus; it would be delivered to another listener
        myQueueClient.Defer(item.Value);
    }
}
My solution was to get the message's SequenceNumber, then defer the message and add the SequenceNumber to the BlockingCollection. Once the BlockingCollection picks up a new item, it can receive the deferred message by its SequenceNumber and mark the message as complete. If for some reason the BlockingCollection doesn't process the SequenceNumber, it will remain in the queue as deferred so it can be picked up later when the process is restarted. This protects against losing messages if the process terminates abnormally while there are still items in the BlockingCollection.
BlockingCollection<long> queueSequenceNumbers = new BlockingCollection<long>();

//This finds any deferred/unfinished messages on startup.
BrokeredMessage existingMessage = client.Peek();
while (existingMessage != null)
{
    if (existingMessage.State == MessageState.Deferred)
    {
        queueSequenceNumbers.Add(existingMessage.SequenceNumber);
    }
    existingMessage = client.Peek();
}

//setup the message handler
Action<BrokeredMessage> processMessage = new Action<BrokeredMessage>((message) =>
{
    try
    {
        //skip deferred messages if they are already in the queueSequenceNumbers collection.
        if (message.State != MessageState.Deferred || (message.State == MessageState.Deferred && !queueSequenceNumbers.Any(x => x == message.SequenceNumber)))
        {
            message.Defer();
            queueSequenceNumbers.Add(message.SequenceNumber);
        }
    }
    catch (Exception ex)
    {
        // Indicates a problem, unlock message in queue
        message.Abandon();
    }
});

// Callback to handle newly received messages
client.OnMessage(processMessage, new OnMessageOptions() { AutoComplete = false, MaxConcurrentCalls = 1 });

//start the blocking loop to process messages as they are added to the collection
foreach (var queueSequenceNumber in queueSequenceNumbers.GetConsumingEnumerable())
{
    var message = client.Receive(queueSequenceNumber);

    //mark the message as complete so it's removed from the queue
    message.Complete();

    //do something with the message
}

WPF verify call not made from GUI thread

I've tried various methods from other SO questions over the last week and this is the best approach I've found yet, but I do not know how to unit test it with NUnit.
private void VerifyDispatcherThread()
{
    var context = SynchronizationContext.Current;
    if (context != null && context is DispatcherSynchronizationContext)
    {
        Log.For(this).Info("WARNING! - Method Should Not Be Called From Dispatcher Thread.");
        if (DispatcherThreadSafetyEnabled)
        {
            throw new ThreadStateException("Method Should Not Be Called From Dispatcher Thread.");
        }
    }
}
HERE IS OUR TEST (Not Working)
[Test]
public void Should_throw_exception_when_called_from_dispatcher()
{
    // Arrange
    var called = false;
    Boolean? success = null;
    var httpClient = new HttpClient(HttpTimeOut);

    // This emulates WPF dispatcher thread
    SynchronizationContext.SetSynchronizationContext(new DispatcherSynchronizationContext());
    Dispatcher dispatcher = Dispatcher.CurrentDispatcher;

    // Act
    dispatcher.BeginInvoke(delegate()
    {
        try
        {
            ConfigurationManager.AppSettings.Set("DispatcherThreadSafetyEnabled", "true");
            httpClient.Post("http://Foo", "content xml");
            success = true;
        }
        catch
        {
            success = false;
        }
        finally
        {
            ConfigurationManager.AppSettings.Set("DispatcherThreadSafetyEnabled", "false");
        }
        called = true;
    });

    // this part is a bit ugly
    // spinlock for a bit to give the other thread a chance
    var timeout = 0;
    while (!called && timeout++ < 1000)
        Thread.Sleep(1);

    // Assert
    Assert.IsNotNull(success, "Method was never invoked");
    Assert.IsFalse(success.Value, "Expected to fail");
}
DispatcherThreadSafetyEnabled is an app.config setting we have so we can toggle between only logging and throwing exceptions.
What we are trying to do is verify that the calling method isn't using the GUI thread when invoked. It appears to be working, but we are having a hard time trying to unit test it.
Here's hoping someone has done this and can help us out with our challenge.
The current SynchronizationContext is set to different implementations by different frameworks (Winforms, WPF, etc).
Quoting from this MSDN article:
WindowsFormsSynchronizationContext (System.Windows.Forms.dll: System.Windows.Forms) - Windows Forms apps will create and install a WindowsFormsSynchronizationContext as the current context for any thread that creates UI controls.
DispatcherSynchronizationContext (WindowsBase.dll: System.Windows.Threading) - WPF and Silverlight applications use a DispatcherSynchronizationContext, which queues delegates to the UI thread's Dispatcher with "Normal" priority. This SynchronizationContext is installed as the current context when a thread begins its Dispatcher loop by calling Dispatcher.Run.
Since you are only testing for not null and the type of the SynchronizationContext, try setting the current SynchronizationContext to an instance of DispatcherSynchronizationContext by calling SynchronizationContext.SetSynchronizationContext from your unit tests. Sample code below:
SynchronizationContext.SetSynchronizationContext(new DispatcherSynchronizationContext(Dispatcher.CurrentDispatcher));
Please note this is not recommended in a WinForms/WPF application itself, though.
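A rough sketch of how the test could then look (HttpClient, HttpTimeOut and the config key are the question's own names; it's assumed that DispatcherThreadSafetyEnabled reads the config value at call time and that the Post call runs VerifyDispatcherThread synchronously):

[Test]
public void Should_throw_exception_when_called_from_dispatcher_context()
{
    // Sketch only: make the current thread look like the WPF dispatcher thread.
    SynchronizationContext.SetSynchronizationContext(
        new DispatcherSynchronizationContext(Dispatcher.CurrentDispatcher));
    try
    {
        ConfigurationManager.AppSettings.Set("DispatcherThreadSafetyEnabled", "true");

        // VerifyDispatcherThread (called inside Post) should now throw.
        Assert.Throws<ThreadStateException>(
            () => new HttpClient(HttpTimeOut).Post("http://Foo", "content xml"));
    }
    finally
    {
        ConfigurationManager.AppSettings.Set("DispatcherThreadSafetyEnabled", "false");
        SynchronizationContext.SetSynchronizationContext(null);
    }
}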

Call method on UI thread from background in .net 2.0

I am using MonoDevelop (.NET 2.0) to develop an iOS and Android app. I use BeginGetResponse and EndGetResponse to asynchronously do a web request on a background thread.
IAsyncResult result = request.BeginGetResponse(new AsyncCallback(onLogin), state);
However, the onLogin callback still seems to be running on a background thread, not allowing me to interact with the UI. How do I solve this?
I can see that there are Android- and iOS-specific solutions, but I want a cross-platform solution.
Edit: From mhutch's answer I've got this far:
IAsyncResult result = request.BeginGetResponse(o => {
    state.context.Post(() => { onLogin(o); });
}, state);
Where state contains a context variable of type SynchronizationContext set to SynchronizationContext.Current
It complains that Post requires two arguments, the second one being Object state. Inserting state gives the error
Argument `#1' cannot convert `anonymous method' expression to type `System.Threading.SendOrPostCallback' (CS1503) (Core.Droid)
Both Xamarin.iOS and Xamarin.Android set a SynchronizationContext for the GUI thread.
This means you get the SynchronizationContext.Current from the GUI thread and pass it to your callback (e.g via the state object or captured in a lambda). Then you can use the context's Post method to invoke things on the main thread.
For example:
//don't inline this into the callback, we need to get it from the GUI thread
var ctx = SynchronizationContext.Current;
IAsyncResult result = request.BeginGetResponse(o => {
    // calculate stuff on the background thread
    var loginInfo = GetLoginInfo (o);
    // send it to the GUI thread
    ctx.Post (_ => { ShowInGui (loginInfo); }, null);
}, state);
I'm not sure if this works on Mono, but I usually do this in WinForms applications. Let's suppose you want to execute the method X(). Then:
public void ResponseFinished() {
    InvokeSafe(() => X()); //Instead of just X();
}

public void InvokeSafe(MethodInvoker m) {
    if (InvokeRequired) {
        BeginInvoke(m);
    } else {
        m.Invoke();
    }
}
Of course, this is inside a Form class.

IAsyncResult.AsyncWaitHandle.WaitOne() completes ahead of callback

Here is the code:
class LongOp
{
    //The delegate
    Action longOpDelegate = LongOp.DoLongOp;

    //The result
    string longOpResult = null;

    //The Main Method
    public string CallLongOp()
    {
        //Call the asynchronous operation
        IAsyncResult result = longOpDelegate.BeginInvoke(Callback, null);

        //Wait for it to complete
        result.AsyncWaitHandle.WaitOne();

        //return result saved in Callback
        return longOpResult;
    }

    //The long operation
    static void DoLongOp()
    {
        Thread.Sleep(5000);
    }

    //The Callback
    void Callback(IAsyncResult result)
    {
        longOpResult = "Completed";
        this.longOpDelegate.EndInvoke(result);
    }
}
Here is the test case:
[TestMethod]
public void TestBeginInvoke()
{
    var longOp = new LongOp();
    var result = longOp.CallLongOp();

    //This can fail
    Assert.IsNotNull(result);
}
If this is run the test case can fail. Why exactly?
There is very little documentation on how delegate.BeginInvoke works. Does anyone have any insights they would like to share?
Update
This is a subtle race condition that is not well documented on MSDN or elsewhere. The problem, as explained in the accepted answer, is that when the operation completes, the wait handle is signalled and only then is the callback executed. The signal releases the waiting main thread, and now the callback's execution enters the "race". Jeffrey Richter's suggested implementation shows what's happening behind the scenes:
// If the event exists, set it
if (m_AsyncWaitHandle != null) m_AsyncWaitHandle.Set();
// If a callback method was set, call it
if (m_AsyncCallback != null) m_AsyncCallback(this);
For a solution refer to Ben Voigt's answer. That implementation does not incur the additional overhead of a second wait handle.
The AsyncWaitHandle.WaitOne() is signalled when the asynchronous operation completes. At the same time, Callback() is called.
This means that the code after WaitOne() runs on the main thread while the Callback runs on another thread (probably the same one that ran DoLongOp()). This results in a race condition where the value of longOpResult is essentially unknown at the time it is returned.
One could have expected that AsyncWaitHandle.WaitOne() would be signalled when the Callback had finished, but that is just not how it works ;-)
You'll need another ManualResetEvent to have the main thread wait for the Callback to set longOpResult.
As others have said, result.AsyncWaitHandle.WaitOne just means that the target of BeginInvoke has finished, not the callback. So just put the post-processing code into the BeginInvoke delegate.
//Call the asynchronous operation
Action callAndProcess = delegate { longOpDelegate(); Callafter(); };
IAsyncResult result = callAndProcess.BeginInvoke(r => callAndProcess.EndInvoke(r), null);
//Wait for it to complete
result.AsyncWaitHandle.WaitOne();
//return result saved in Callafter
return longOpResult;
What is happening
Since your operation DoLongOp has completed, control resumes within CallLongOp and the function completes before the Callback operation has completed. Assert.IsNotNull(result); then executes before longOpResult = "Completed";.
Why? AsyncWaitHandle.WaitOne() will only wait for your async operation to complete, not your Callback
The callback parameter of BeginInvoke is actually an AsyncCallback delegate, which means your callback is called asynchronously. This is by design, as the purpose is to process the operation results asynchronously (and is the whole purpose of this callback parameter).
Since the BeginInvoke infrastructure is what actually calls your Callback function, the AsyncWaitHandle.WaitOne call waits only for the operation and does not wait for the callback.
See the Microsoft documentation (section Executing a Callback Method When an Asynchronous Call Completes). There is also a good explanation and example.
If the thread that initiates the asynchronous call does not need to be the thread that processes the results, you can execute a callback method when the call completes. The callback method is executed on a ThreadPool thread.
Solution
If you want to wait for both the operation and the callback, you need to handle the signalling yourself. A ManualReset is one way of doing it which certainly gives you the most control (and it's how Microsoft has done it in their docs).
Here is amended code using ManualResetEvent.
public class LongOp
{
    //The delegate
    Action longOpDelegate = LongOp.DoLongOp;

    //The result
    public string longOpResult = null;

    // Declare a manual reset event at class level so it can be
    // handled from both your callback and your called method
    ManualResetEvent waiter;

    //The Main Method
    public string CallLongOp()
    {
        // Set up a manual reset event which you can set within your callback
        waiter = new ManualResetEvent(false);

        //Call the asynchronous operation
        IAsyncResult result = longOpDelegate.BeginInvoke(Callback, null);

        // Wait
        waiter.WaitOne();

        //return result saved in Callback
        return longOpResult;
    }

    //The long operation
    static void DoLongOp()
    {
        Thread.Sleep(5000);
    }

    //The Callback
    void Callback(IAsyncResult result)
    {
        longOpResult = "Completed";
        this.longOpDelegate.EndInvoke(result);
        waiter.Set();
    }
}
For the example you have given, you would be better off not using a callback and instead handling the result in your CallLongOp function, in which case your WaitOne on the operation delegate will work fine.
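A minimal sketch of that variant, reusing the fields from the class above: the caller waits on the operation's handle and then harvests the result itself, so there is no callback to race with.

// Sketch: no callback; the caller waits, then collects the result itself.
public string CallLongOpWithoutCallback()
{
    IAsyncResult result = longOpDelegate.BeginInvoke(null, null);

    // Signalled when DoLongOp itself has finished.
    result.AsyncWaitHandle.WaitOne();

    longOpDelegate.EndInvoke(result);
    longOpResult = "Completed";
    return longOpResult;
}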
The callback is executed after the CallLongOp method. Since you only set the variable value in the callback, it stands to reason that it would be null.
Read this: link text
I had the same issue recently, and I figured out another way to solve it; it worked in my case. Basically, if the timeout doesn't bother you, re-check the IsCompleted flag when the wait handle times out. In my case, the wait handle was signalled before the thread blocked, right after the if condition, so re-checking it after the timeout did the trick.
while (!AsyncResult.IsCompleted)
{
    if (AsyncWaitHandle.WaitOne(10000))
        break;
}
