WinForms message loop not responsive - C#

I'm deliberately abusing the message loop in a Windows Forms application, but my "just for fun" project quickly progressed beyond my level of understanding. While the task is running, the form is unresponsive. Yes, there are lots of other questions like this, but in my case I am deliberately avoiding work on another thread (to win a bet against myself?)
I have a function that runs for (many) short slices of time on the UI thread: get_IsComplete() checks if the task is complete; DoWork() loops from 0 to 1000 (just to keep the CPU warm). The task is started by calling control.BeginInvoke(new Action<Control>(ContinueWith), control); whereupon it (tail-recursively) calls itself until completion, always running a short slice of work on the UI thread.
public void ContinueWith(Control control)
{
    if (!IsComplete)
    {
        DoWork();
        OnNext(control);
        // Queue the next slice of work back onto the UI thread.
        control.BeginInvoke(new Action<Control>(ContinueWith), control);
    }
    else
    {
        OnCompleted(control);
    }
}
I expected the application to process other events (mouse clicks, control repaints, form moves, etc.) but it seems my calls are getting higher priority than I'd like.
Any suggestions?

The control.BeginInvoke() call places the delegate you pass into an internal queue and calls PostMessage() to wake up the message loop and make it pay attention. That's what gets the first BeginInvoke going. Any input events (mouse and keyboard) also go on the message queue; Windows puts them there.
The behavior you didn't count on is in the code that runs when the posted message is retrieved. It doesn't just dequeue one invoke request and execute it; it loops until the entire invoke queue is emptied. The way your code works, that queue is never emptied because invoking ContinueWith() adds another invoke request. So it just keeps looping and processing invoke requests and never gets around to retrieving more messages from the message queue. Or to put it another way: it is pumping the invoke queue, not the message queue.
The input messages stay in the message queue until your code stops adding more invoke requests and the regular message loop pumping resumes, i.e. after your code stops recursing. Your UI will look frozen while this takes place because Paint events won't be delivered either. They only get generated when the message queue is empty.
It is important that it works the way it does: the PostMessage() call isn't guaranteed to work. Windows doesn't allow more than 10,000 messages in the message queue. But Control.BeginInvoke() has no such limit. By emptying the invoke queue completely, a lost PostMessage message doesn't cause any problem. This behavior does cause other problems though. A classic one is calling BackgroundWorker.ReportProgress() too often. Same behavior: the UI thread is just flooded with invoke requests and doesn't get around to its normal duties anymore. Frown upside down on anybody that runs into this: "I'm using BackgroundWorker but my UI still freezes".
Anyhoo, your experiment is an abysmal failure. Calling Application.DoEvents() would be required to force the message queue to be emptied. There are lots of caveats with that; check this answer for details. The upcoming support for the async keyword will provide another way to do this. I'm not so sure it treats the message priority any differently; I rather doubt it, Control.BeginInvoke() is pretty core. One hack around the problem is using a Timer with a very short Interval. Timer messages also go on the message queue (sort of) but they have a very low priority; input events get processed first. Or a lower-level hack: calling PostMessage with your own message and overriding WndProc to detect it. That's getting a bit off the straight and narrow. The Application.Idle event is useful to do processing after any input events are retrieved.
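For illustration, here is a minimal sketch of the Timer workaround mentioned above (not the asker's actual code): the work is sliced by a System.Windows.Forms.Timer instead of recursive BeginInvoke calls, and IsComplete, DoWork(), OnNext() and OnCompleted() are assumed to be the existing members from the question. Because WM_TIMER is a low-priority message, input and paint messages get through between slices.
private System.Windows.Forms.Timer _sliceTimer;

public void StartWork(Control control)
{
    _sliceTimer = new System.Windows.Forms.Timer { Interval = 1 };
    _sliceTimer.Tick += (s, e) =>
    {
        if (!IsComplete)
        {
            DoWork();           // one short slice on the UI thread
            OnNext(control);
        }
        else
        {
            _sliceTimer.Stop(); // done; stop queuing further slices
            OnCompleted(control);
        }
    };
    _sliceTimer.Start();
}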

Use the BeginInvoke overload that takes a priority (that overload exists on the WPF Dispatcher; WinForms Control.BeginInvoke does not take one). The 'Normal' priority is higher than input and rendering. You need to choose something like DispatcherPriority.ApplicationIdle.

Related

C# Windows Form Updates Timing [duplicate]

Hey, I have a sequence of code that goes something like this:
label.Text = "update 0";
doWork();
label.Text = "update 1";
doWork2();
label.Text = "update 2";
Basically, the GUI does not update at all until all the code is done executing. How can I overcome this?
An ugly hack is to use Application.DoEvents. While this works, I'd advise against it.
A better solution is to use a BackgroundWorker or a separate thread to perform long-running tasks. Don't use the GUI thread, because this will cause it to block.
An important thing to be aware of is that changes to the GUI must be made on the GUI thread, so you need to transfer control back to the GUI thread for your label updates. This is done using Invoke. If you use a BackgroundWorker you can use ReportProgress - this will automatically handle calling Invoke for you.
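A rough sketch of that approach, reusing the names from the question (label, doWork(), doWork2()): the work runs on a BackgroundWorker thread and each status string is pushed back through ReportProgress, whose ProgressChanged event is raised on the GUI thread.
var worker = new BackgroundWorker { WorkerReportsProgress = true };
worker.DoWork += (s, e) =>
{
    // Runs on a thread-pool thread; never touch the label here.
    worker.ReportProgress(0, "update 0");
    doWork();
    worker.ReportProgress(0, "update 1");
    doWork2();
    worker.ReportProgress(0, "update 2");
};
worker.ProgressChanged += (s, e) =>
{
    label.Text = (string)e.UserState;   // marshalled to the GUI thread for us
};
worker.RunWorkerAsync();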
The UI updates when it gets the WM_PAINT message to repaint the screen. While your code is executing, the message handling routine is not executed.
So you can do the following to allow the message handler to run:
Use a BackgroundWorker
Call Application.DoEvents()
Application.DoEvents calls the message handler and then returns. It is not ideal for large jobs, but for small procedures it can be a much simpler solution than introducing threading.
Currently, all of your processing is being performed on the main (UI) thread, so all processing must complete before the UI thread then has free cycles to repaint the UI.
You have 2 ways of overcoming this. The first way, which is not recommended, is to use
Application.DoEvents(); Run this whenever you want the Windows message queue to get processed.
The other, recommended, way: Create another thread to do the processing, and use a delegate to pass the UI updates back to the UI thread. If you are new to multithreaded development, then give the BackgroundWorker a try.
The GUI cannot update while your code is running that way. GUIs on Windows depend on message processing, and message processing is halted while you are in your code - no matter what you do with the labels, buttons and such, they will all be updated AFTER your code exits and the main message loop of the form runs again.
You have several options here:
move processing to another thread and deal with the Invoke() situations that follow (see the sketch after this list)
call DoEvents() and allow the GUI to refresh between your DoWork calls
do all the work and update later
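As a sketch of the first option (hypothetical, reusing label, doWork() and doWork2() from the question): run the work on a plain Thread and marshal each label update back with Control.Invoke.
var worker = new Thread(() =>
{
    label.Invoke(new Action(() => label.Text = "update 0"));
    doWork();
    label.Invoke(new Action(() => label.Text = "update 1"));
    doWork2();
    label.Invoke(new Action(() => label.Text = "update 2"));
});
worker.IsBackground = true;   // don't keep the process alive if the form closes
worker.Start();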

Application.DoEvents Never Exits

I'm working on a legacy application which has sprinklings of Application.DoEvents here and there. I'm fully aware that this is frowned upon due to re-entrancy issues, but re-factoring isn't currently an option.
An issue has started to occur where DoEvents never exits. The UI is responsive (I can see UI thread activity in the user logs) so DoEvents seems to be pumping the messages, but for some reason it never completes. Unfortunately this DoEvents is in the main data-processing component, which means this stops processing server responses as it's stuck on the DoEvents line.
We have a Stopwatch trace which tells how long the DoEvents ran for - staggeringly, I got a log where it said it had been running for 188267770 milliseconds, which is 52 hours (gulp). It seemed to get into this state at about 3am on a Saturday, until the user came in on Monday and shut the app down (not killing the process - I can see the GUI thread trace closing things gracefully), at which point the DoEvents completes and the timer data is logged (so something which happens during shutdown must convince DoEvents to complete).
Of course, this only happens on the production user's machines, and not on my dev box :)
Has anyone ever seen a similar problem to this?
I've decompiled DoEvents and also looked at how Control.BeginInvoke pushes method delegates onto the GUI thread using the Windows message queue, but I cannot see how DoEvents can get stuck like this and still keep the UI responsive.
Source control diff is also not an option, since there have been around 30 versions between the last 'good' version the users had and this new version with the problem - so about 200 files have changed.
Many thanks
Paul
For the loop to keep running there must be messages on the message queue. So I assume that there is a message that, when dispatched, causes another message to be put on the queue, and so on forever.
Do you have any background processing that causes this type of behaviour? Posting another message to continue processing? Is there an event in the system that can occur that when processed could simply occur immediately again?
The other alternative is that one of the messages is itself creating a nested message loop. For example, showing a dialog would cause a nested message loop that does not finish until the dialog is removed. Does your app try and show a dialog that will then not be dismissed for some reason?
Impossible for us to tell you the answer given the number of possibilities.
After much digging I finally found the cause - System.Windows.Forms.Timer.
Basically, two or more Timers can cause a DoEvents() call to never end. While one timer's WM_TIMER message is being processed, another timer can post its WM_TIMER message, which is then processed by DoEvents; as that is being processed the first Timer posts again, and so on.
The app I'm working on has about 8 Timers I've found so far....
But really, DoEvents is the real culprit so I plan to re-factor to get rid of it.
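A hypothetical minimal repro of that interaction (not the original app's code): each Tick handler takes longer than the other timer's interval, so whenever DoEvents() looks at the queue another WM_TIMER is already due and it never sees an empty queue.
var timer1 = new System.Windows.Forms.Timer { Interval = 10 };
var timer2 = new System.Windows.Forms.Timer { Interval = 10 };
timer1.Tick += (s, e) => Thread.Sleep(20);   // stands in for real tick work
timer2.Tick += (s, e) => Thread.Sleep(20);
timer1.Start();
timer2.Start();

// Somewhere else on the UI thread (e.g. the data-processing component):
Application.DoEvents();   // may never return while both timers keep firing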

Best practice for continual running process in C#

I am working on a project in C#.NET using the .NET framework version 3.5.
My project has a class called Focuser.cs which represents a physical device, a telescope focuser, that can communicate with a PC via a serial (RS-232) port. My class (Focuser) has properties such as CurrentPosition, CurrentTemperature, etc., which represent the current conditions of the focuser and can change at any time. So, my Focuser class needs to continually poll the device for these values and update its internal fields. My question is, what is the best way to perform this continual polling sequence? Occasionally, the user will need to switch the device into a different mode, which will require the ability to stop the polling, perform some action, and then resume polling.
My first attempt was to use a timer that ticks every 500 ms and then calls up a background worker which polls for one position and one temperature, then returns. When the timer ticks, if the background worker IsBusy it just returns and tries again 500 ms later. Someone suggested that I get rid of the background worker altogether and just do the poll in the timer tick event. So I set the AutoReset property of the timer to false and then just restart the timer every time a poll finishes. These two techniques seemed to behave the exact same way in my application, so I am not sure if one is better than the other. I also tried creating a new thread every time I want to do a poll operation using a new ThreadStart and all that. This also seemed to work fine.
I should mention one other thing. This class is part of a COM object server which basically means that the class library that is produced will be called upon via COM. I am not sure if this has any influence on the answer but I just thought I should throw it out there.
The reason I am asking all of this is that all of my test harness runs and debug builds work just fine but when I do a release build and try to make calls to my class from another application, that application freezes up and I am having a hard time determining the cause.
Any advice, suggestions, comments would be appreciated.
Thanks, Jordan
Remember that the timer hides its own background worker thread, which basically sleeps for the interval, then fires its Elapsed event. Knowing that, it makes sense just to put the polling in Elapsed. This would be the best practice IMO, rather than starting a thread from a thread. You can start and stop Timers as well, so the code that switches modes can Stop() the Timer, perform the task, then Start() it again, and the Timer doesn't even have to know the telescope IsBusy.
However, what I WOULD keep track of is whether another instance of the Elapsed event handler is still running. You could lock the Elapsed handler's code, or you could set a flag, visible from any thread, that indicates another Elapsed() event handler is still working; Elapsed event handlers that see this flag set can exit immediately, avoiding concurrency problems working with the serial port.
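A sketch of the flag idea with System.Timers.Timer (PollFocuser() is a hypothetical stand-in for the real serial-port query): an interlocked flag lets an Elapsed handler bail out immediately if the previous one is still talking to the port.
private readonly System.Timers.Timer _pollTimer = new System.Timers.Timer(500);
private int _polling;   // 0 = idle, 1 = an Elapsed handler is already working

public void StartPolling()
{
    _pollTimer.Elapsed += (s, e) =>
    {
        // Skip this tick if the previous handler hasn't finished yet.
        if (Interlocked.CompareExchange(ref _polling, 1, 0) != 0)
            return;
        try
        {
            PollFocuser();   // read CurrentPosition, CurrentTemperature, ...
        }
        finally
        {
            Interlocked.Exchange(ref _polling, 0);
        }
    };
    _pollTimer.Start();
}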
So it looks like you have looked at 2 options:
Timer. The Timer is non-blocking while waiting (uses another thread), so the rest of the program can continue running and be responsive. When the timer event kicks off, you simply get/update the current values.
Timer + BackgroundWorker. The background worker is also simply a separate thread. It may take longer to actually start the thread than to simply get the current values. Unless it takes a long time to get the current values and causes your program to become unresponsive, this is unnecessary complexity.
If getting values is fast enough, stick to #1 for simplicity.
If getting values is slow, #2 will work but unnecessarily has a thread start a thread. Instead, do it with only a BackgroundWorker (no Timer). Create the BackgroundWorker once and store it in a variable; there is no need to recreate it every time. Make sure to set WorkerSupportsCancellation to true.
Whenever you want to start checking values, call bgWorker.RunWorkerAsync() on your main program thread. When you want to stop, call bgWorker.CancelAsync(). Inside your DoWork method, have a loop that checks the values and does a Thread.Sleep(500). Since it's a separate thread, it won't make your program unresponsive. In the loop condition, also check whether the polling was cancelled and break out.
You'll probably need a way to get the values back to the main thread. You can use ReportProgress() if an integer is good enough. Otherwise you can create an object to hold the content, but make sure to lock (object) { } before reading and modifying it. This is a quick summary, but if you go this route I would recommend you read: http://www.albahari.com/threading/part3.aspx#_BackgroundWorker
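To make that concrete, here is a hedged sketch (the class and method names FocuserPoller and PollOnce are invented for illustration) of a cancellable BackgroundWorker polling loop:
class FocuserPoller
{
    private readonly BackgroundWorker _poller = new BackgroundWorker
    {
        WorkerSupportsCancellation = true
    };

    public FocuserPoller()
    {
        _poller.DoWork += (s, e) =>
        {
            while (!_poller.CancellationPending)
            {
                PollOnce();          // talk to the device off the UI thread
                Thread.Sleep(500);   // polling interval
            }
            e.Cancel = true;         // signal that the loop ended via CancelAsync
        };
    }

    public void Start() { if (!_poller.IsBusy) _poller.RunWorkerAsync(); }
    public void Stop() { _poller.CancelAsync(); }

    private void PollOnce() { /* hypothetical serial-port read of position/temperature */ }
}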
Does the process of contacting the telescope and getting the current values actually take long enough to warrant background polling? Have you tried dropping the multithreading and just blocking while you get the current value?
To answer your question, however, I would suggest not using a background worker but an actual Thread that updates the properties continuously.
If all these properties are read only (can you set the temp of the telescope?) and there are no dependencies between them (e.g., no transactions are required to update multiple properties at once) you can drop all the blocking code and let your thread update willy-nilly while other threads access the properties.
I suggest a real, dedicated Thread rather than the thread pool just because of a lack of knowledge of what might happen when mixing background threads and COM servers. Also, apartment state might play into this; with a Thread you can try STA but you can't do that with a threadpool thread.
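For example, a dedicated polling thread could be set up roughly like this (UpdateLoop is a hypothetical method containing the polling loop); a real Thread lets you pick the apartment state, which a thread-pool thread does not:
var pollThread = new Thread(UpdateLoop)
{
    IsBackground = true,       // don't keep the process alive on shutdown
    Name = "FocuserPolling"
};
pollThread.SetApartmentState(ApartmentState.STA);   // optional, for the COM concerns above
pollThread.Start();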
You say the app freezes up in a release build?
To eliminate extra variables, I'd take all the timer/multi-threaded code out of the application (just comment it out), and try it with a straightforward blocking method.
i.e. You click a button, it calls a function, that function hits the COM object for data, and then updates the UI. All in a blocking, synchronous fashion. This will tell you for sure whether it's the multi-threading code that's freezing you up, or if it's the COM interaction itself.
How about starting a background thread with ThreadPool? Then enter a loop based on a bool (while (bContinue)) that does your work and calls Thread.Sleep at the end of each iteration. Exiting the program would include setting bContinue to false so the thread stops - perhaps hook that up to the OnStop event in a Windows service.
// Fields assumed on the hosting class (names taken from the snippet):
private volatile bool bContinue;
private int m_iWaitTime_ms = 500;

// Queue the polling loop onto a thread-pool thread (call this from wherever polling should start):
bool bRet = ThreadPool.QueueUserWorkItem(new WaitCallback(ThreadFunc));

private void ThreadFunc(object objState)
{
    // enter loop
    bContinue = true;
    while (bContinue)
    {
        // do stuff

        // sleep
        Thread.Sleep(m_iWaitTime_ms);
    }
}

SynchronizationContext.Post(...) in transport event handler

We have a method which, due to threading in the client application, requires the use of SynchronizationContext.
There is a bit of code which one of my colleagues has written that doesn't "feel" right to me, and a performance profiler is telling me that quite a lot of processing time is being spent in this bit of code.
void transportHelper_SubscriptionMessageReceived(object sender, SubscriptionMessageEventArgs e)
{
    if (SynchronizationContext.Current != synchronizationContext)
    {
        // Not on the UI thread: re-post this same call to the UI context and return.
        synchronizationContext.Post(delegate
        {
            transportHelper_SubscriptionMessageReceived(sender, e);
        }, null);
        return;
    }

    [code removed....]
}
This just doesn't feel right to me, as we are basically posting the same request to the GUI thread's event queue... however, I cannot see anything obviously problematic either, other than the performance of this area of code.
This method is an event handler attached to an event raised by our middle-tier messaging layer helper (transportHelper) and it exists within a service which handles requests from the GUI.
Does this seem like an acceptable way of making sure that we do not get cross-thread errors? If not, is there a better solution?
Thanks
Let's trace what's going on inside this method, and see what that tells us.
The method signature follows that of event handlers, and as the question indicates, we can expect it to be first called in the context of some thread that is not the UI thread.
The first thing the method does is to compare the SynchronizationContext of the thread it's running in with a SynchronizationContext saved in a member variable. We'll assume the saved context is that of the UI thread. (Mike Peretz posted an excellent series of introductory articles to the SynchronizationContext class on CodeProject)
The method will find the contexts not equal, as it is called on a thread different from the UI thread. The calling thread's context is likely to be null, whereas the UI thread's context is pretty much guaranteed to be set to an instance of WindowsFormsSynchronizationContext. It will then issue a Post() on the UI context, passing a delegate to itself and its arguments, and return immediately. This finishes all processing on the background thread.
The Post() call causes the exact same method to be invoked on the UI thread. Tracing the implementation of WindowsFormsSynchronizationContext.Post() reveals that this is implemented by queueing a custom Windows message on the UI thread's message queue. Arguments are passed, but are not "marshaled", in the sense that they aren't copied or converted.
Our event handler method is now called again, as a result of the Post() call, with the exact same arguments. This time around, however, the thread's SynchronizationContext and the saved context are one and the same. The content of the if clause is skipped, and the [code removed] portion is executed.
Is this a good design? It's hard to say without knowing the content of the [code removed] portion. Here are some thoughts:
Superficially, this doesn't seem to be a horrible design. A message is received on a background thread, and is passed on to the UI thread for presentation. The caller returns immediately to do other things, and the receiver gets to continue with the task. This is somewhat similar to the Unix fork() pattern.
The method is recursive, in a unique way. It doesn't call itself on the same thread. Rather, it causes a different thread to invoke it. As with any recursive piece of code, we would be concerned with its termination condition. From reading the code, it appears reasonably safe to assume that it will always be invoked recursively exactly once, when passed to the UI thread. But it's another issue to be aware of. An alternative design might have passed a different method to Post(), perhaps an anonymous one, and avoid the recursion concern altogether.
There doesn't seem to be an obvious reason for a large amount of processing to occur inside the if clause. Reviewing the WindowsFormsSynchronizationContext implementation of Post() with .NET Reflector reveals some moderately long sequences of code in it, but nothing too fancy; it all happens in RAM, and it does not copy large amounts of data. Essentially it just prepares the arguments and queues a Windows message on the receiving thread's message queue.
You should review what is going on inside the [code removed] portion of the method. Code that touches UI controls totally belongs there -- it must execute inside the UI thread. However, if there is code in there that doesn't deal with UI, it might be a better idea to have it execute in the receiving thread. For example, any CPU-intensive parsing would be better hosted in the receiving thread, where it does not impact the UI responsiveness. You could just move that portion of the code above the if clause, and move the remaining code to a separate method -- to ensure neither portion gets executed twice (a sketch of this split appears after this answer).
If both the receiving thread and the UI thread need to remain responsive, e.g. both to further incoming message and to user input, you might need to introduce a third thread to process the messages before passing them to the UI thread.
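As a rough sketch of that split (ParseMessage and UpdateDisplay are hypothetical names, not part of the original code): the non-UI work stays on the receiving thread, and only the UI portion is posted as an anonymous delegate, so the handler never re-enters itself.
void transportHelper_SubscriptionMessageReceived(object sender, SubscriptionMessageEventArgs e)
{
    // CPU-intensive work stays on the transport (receiving) thread.
    var parsed = ParseMessage(e);

    // Only the UI update crosses over to the UI thread.
    synchronizationContext.Post(_ => UpdateDisplay(parsed), null);
}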

How do I Yield to the UI thread to update the UI while doing batch processing in a WinForm app?

I have a WinForms app written in C# with .NET 3.5. It runs a lengthy batch process. I want the app to update status of what the batch process is doing. What is the best way to update the UI?
The BackgroundWorker sounds like the object you want.
The quick and dirty way is to use Application.DoEvents(), but this can cause problems with the order in which events are handled, so it's not recommended.
The problem is probably not that you have to yield to the UI thread, but that you do the processing on the UI thread, blocking it from handling messages. You can use the BackgroundWorker component to do the batch processing on a different thread without blocking the UI thread.
Run the lengthy process on a background thread. The BackgroundWorker class is an easy way of doing this - it provides simple support for sending progress updates and completion events, for which the event handlers are called on the correct thread for you. This keeps the code clean and concise.
To display the updates, progress bars or status bar text are two of the most common approaches.
The key thing to remember is if you are doing things on a background thread, you must switch to the UI thread in order to update windows controls etc.
To flesh out what people are saying about DoEvents, here's a description of what can happen.
Say you have some form with data on it and your long running event is saving it to the database or generating a report based on it. You start saving or generating the report, and then periodically you call DoEvents so that the screen keeps painting.
Unfortunately the screen isn't just painting, it will also react to user actions. This is because DoEvents stops what you're doing now to process all the windows messages waiting to be processed by your Winforms app. These messages include requests to redraw, as well as any user typing, clicking, etc.
So for example, while you're saving the data, the user can do things like making the app show a modal dialog box that's completely unrelated to the long running task (eg Help->About). Now you're reacting to new user actions inside the already running long running task. DoEvents will return when all the events that were waiting when you called it are finished, and then your long running task will continue.
What if the user doesn't close the modal dialog? Your long running task waits forever until this dialog is closed. If you're committing to a database and holding a transaction, now you're holding a transaction open while the user is having a coffee. Either your transaction times out and you lose your persistence work, or the transaction doesn't time out and you potentially deadlock other users of the DB.
What's happening here is that Application.DoEvents makes your code reentrant. See the Wikipedia definition here. Note some points from the top of the article: for code to be reentrant, it:
Must hold no static (or global) non-constant data.
Must work only on the data provided to it by the caller.
Must not rely on locks to singleton resources.
Must not call non-reentrant computer programs or routines.
It's very unlikely that long running code in a WinForms app is working only on data passed to the method by the caller, doesn't hold static data, holds no locks, and calls only other reentrant methods.
As many people here are saying, DoEvents can lead to some very weird scenarios in code. The bugs it can lead to can be very hard to diagnose, and your user is not likely to tell you "Oh, this might have happened because I clicked this unrelated button while I was waiting for it to save".
Use BackgroundWorker, and if you are also trying to update the GUI by handling the ProgressChanged event (for example, for a ProgressBar), be sure to also set WorkerReportsProgress = true, or the thread that is reporting progress will die the first time it tries to call ReportProgress...
An exception is thrown, but you might not see it unless you have 'when thrown' enabled, and the output will just show that the thread exited.
Use the BackgroundWorker component to run your batch processing in a separate thread; it will then not impact the UI thread.
I want to restate what my previous commenters noted: please avoid DoEvents() whenever possible, as this is almost always a form of "hack" and causes maintenance nightmares.
If you go the BackgroundWorker road (which I suggest), you'll have to deal with cross-threading calls to the UI if you want to call any methods or properties of Controls, as these are thread-affine and must be called only from the thread they were created on. Use Control.Invoke() and/or Control.BeginInvoke() as appropriate.
If you are running in a background/worker thread, you can call Control.Invoke on one of your UI controls to run a delegate in the UI thread.
Control.Invoke is synchronous (Waits until the delegate returns). If you don't want to wait you use .BeginInvoke() to only queue the command.
The return value of .BeginInvoke() allows you to check if the method completed or to wait until it has completed.
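For example (statusLabel and the message text are hypothetical, not taken from the question):
// Synchronous: blocks the worker until the UI thread has run the delegate.
statusLabel.Invoke(new Action(() => statusLabel.Text = "Step 1 of 3 complete"));

// Asynchronous: queues the update and lets the batch work carry on immediately.
IAsyncResult pending = statusLabel.BeginInvoke(
    new Action(() => statusLabel.Text = "Step 2 of 3 complete"));

// Later, optionally wait for the queued update to finish.
statusLabel.EndInvoke(pending);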
Application.DoEvents() or possibly run the batch on a separate thread?
DoEvents() was what I was looking for, but I've also voted up the BackgroundWorker answers because that looks like a good solution that I will investigate some more.
