I'm working on a legacy application which has sprinklings of Application.DoEvents here and there. I'm fully aware that this is frowned upon due to re-entrancy issues, but re-factoring isn't currently an option.
An issue has started to occur where DoEvents never exits. The UI is responsive (I can see UI thread activity in the user logs) so DoEvents seems to be pumping the messages, but for some reason it never completes. Unfortunately this DoEvents is in the main data-processing component, which means this stops processing server responses as it's stuck on the DoEvents line.
We have a Stopwatch trace which tells us how long the DoEvents ran for - staggeringly, I got a log where it said it had been running for 188267770 milliseconds, which is 52 hours (gulp). It seemed to get into this state at about 3am on a Saturday, until the user came in on Monday and shut the app down (not killing the process - I can see the GUI thread trace closing things gracefully), at which point the DoEvents completes and the timer data is logged (so something which happens during shutdown must convince DoEvents to complete).
Of course, this only happens on the production user's machines, and not on my dev box :)
Has anyone ever seen a similar problem to this?
I've decompiled DoEvents and also looked at how Control.BeginInvoke pushes method delegates onto the GUI thread using the Windows message queue, but I cannot see how DoEvents can get stuck like this and still keep the UI responsive.
A source control diff is also not an option, since there have been around 30 versions between the last 'good' version the users had and this new version with the problem - so about 200 files have changed.
Many thanks
Paul
For the loop to keep running there must be messages on the message queue. So I assume that there is a message that, when dispatched, causes another message to be put on the queue. And so on, forever.
Do you have any background processing that causes this type of behaviour? Posting another message to continue processing? Is there an event in the system that can occur that when processed could simply occur immediately again?
The other alternative is that one of the messages is itself creating a nested message loop. For example, showing a dialog would cause a nested message loop that does not finish until the dialog is removed. Does your app try and show a dialog that will then not be dismissed for some reason?
Impossible for us to tell you the answer given the number of possibilities.
After much digging I finally found the cause - System.Windows.Forms.Timer.
Basically, two or more Timers can cause a DoEvents() call to never end. Whilst DoEvents is processing one timer's WM_TIMER message, another timer can post its WM_TIMER message, which is then processed by DoEvents; while that is being processed, the first Timer posts again, and so on.
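To illustrate (a minimal repro sketch, not the app's actual code - the intervals and sleeps are made up to force the overlap):

var timer1 = new System.Windows.Forms.Timer { Interval = 10 };
var timer2 = new System.Windows.Forms.Timer { Interval = 10 };
timer1.Tick += (s, e) => System.Threading.Thread.Sleep(15); // by the time this returns, timer2 is due again
timer2.Tick += (s, e) => System.Threading.Thread.Sleep(15); // and vice versa
timer1.Start();
timer2.Start();
Application.DoEvents(); // can keep finding a WM_TIMER to dispatch and never return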
The app I'm working on has about 8 Timers I've found so far....
But really, DoEvents is the real culprit so I plan to re-factor to get rid of it.
I'm deliberately abusing the message loop in a Windows Forms application, but my "just for fun" project quickly progressed beyond my level of understanding. While the task is running the form is unresponsive. Yes, there are lots of other questions like this, but in my case I am deliberately avoiding work on another thread (to win a bet against myself?)
I have a function that runs for (many) short slices of time on the UI thread: get_IsComplete() checks if the task is complete; DoWork() loops from 0 to 1000 (just to keep the CPU warm). The task is started by calling control.BeginInvoke(new Action<Control>(ContinueWith), control); whereupon it (tail recursively) calls itself until completion, always running a short slice of work on the UI thread.
public void ContinueWith(Control control)
{
    if (!IsComplete)
    {
        DoWork();  // one short slice of work, on the UI thread
        OnNext(control);
        // Queue the next slice; Action<Control> matches this method's signature.
        control.BeginInvoke(new Action<Control>(ContinueWith), control);
    }
    else
    {
        OnCompleted(control);
    }
}
I expected the application to process other events (mouse clicks, control repaints, form moves etc.) but it seems my calls are getting more priority than I'd like.
Any suggestions?
The control.BeginInvoke() call places the delegate you pass in an internal queue and calls PostMessage() to wake up the message loop and pay attention. That's what gets the first BeginInvoke going. Any input events (mouse and keyboard) also go on the message queue, Windows puts them there.
The behavior you didn't count on is in the code that runs when the posted message is retrieved. It doesn't just dequeue one invoke request and execute it; it loops until the entire invoke queue is emptied. The way your code works, that queue is never emptied, because invoking ContinueWith() adds another invoke request. So it just keeps looping and processing invoke requests and never gets around to retrieving more messages from the message queue. Or to put it another way: it is pumping the invoke queue, not the message queue.
The input messages stay in the message queue until your code stops adding more invoke requests and the regular message loop pumping resumes, after your code stops recursing. Your UI will look frozen while this takes place because Paint events won't be delivered either; they only get generated when the message queue is empty.
It is important that it works the way it does: the PostMessage() call isn't guaranteed to work. Windows doesn't allow more than 10,000 messages in the message queue. But Control.BeginInvoke() has no such limit. By emptying the invoke queue completely, a lost PostMessage message doesn't cause any problem. This behavior does cause other problems though. A classic one is calling BackgroundWorker.ReportProgress() too often. Same behavior: the UI thread is just flooded with invoke requests and doesn't get around to its normal duties anymore. Frown upside down on anybody that runs into this: "I'm using BackgroundWorker but my UI still freezes".
Anyhoo, your experiment is an abysmal failure. Calling Application.DoEvents() would be required to force the message queue to be emptied. Lots of caveats with that, check this answer for details. The upcoming support for the async keyword will provide another way to do this. Not so sure if it treats the message priority any differently. I rather doubt it, Control.BeginInvoke() is pretty core. One hack around the problem is by using a Timer with a very short Interval. Timer messages also go on the message queue (sort of) but they have a very low priority. Input events get processed first. Or a low level hack: calling PostMessage with your own message yourself and overriding WndProc to detect it. That's getting a bit off the straight and narrow. The Application.Idle event is useful to do processing after any input events are retrieved.
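For what it's worth, a rough sketch of the short-Interval Timer hack mentioned above, reusing the hypothetical IsComplete/DoWork/OnCompleted members and control variable from the question:

var timer = new System.Windows.Forms.Timer { Interval = 1 }; // shortest allowed interval
timer.Tick += (s, e) =>
{
    if (!IsComplete)
    {
        DoWork(); // one slice per WM_TIMER; input messages get retrieved first
    }
    else
    {
        timer.Stop();
        OnCompleted(control);
    }
};
timer.Start();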
Use the BeginInvoke overload that takes a priority (this is WPF's Dispatcher.BeginInvoke; WinForms' Control.BeginInvoke has no priority overload). The 'normal' priority is higher than input and rendering. You need to choose something like DispatcherPriority.ApplicationIdle.
Apologies in advance - I'm not the right person to be tackling this issue but there's a big snow storm today, and only the intern (me) was crazy enough to come in from my team.
Keeping it simple - I've got an application where after repeating a certain task (deserializing a file and making certain calls based on the data) about 115 times, there's a threshold where any of several other tasks will crash the application. All three of these actions that can crash the application involve showing new windows.
My best guess (garnered from staring at the Windows Task Manager thread count as I clicked repeatedly) is that we're not disposing of the threads correctly. The formula seems to be 4 threads spawned that hang around (more are actually created; most go away) each time I load a file. I want to know if there's a way I can step through the code and watch the number of threads as the process proceeds. Right now I really don't even know when or where threads are being started, but if I did I could follow their logic and make sure they aren't continuing to operate needlessly.
Thanks!
You can see all your application threads using IntelliTrace.
Just pause it whenever you want, and you can see the call stack of each running thread.
I think the most likely thing is that you create the new forms or access forms/controls from a background thread.
To debug the issue, subscribe to the following events
AppDomain.UnhandledException and Application.ThreadException:
AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(CurrentDomain_UnhandledException);
Application.ThreadException += new System.Threading.ThreadExceptionEventHandler(Application_ThreadException);
Put a breakpoint in each event handler and look at the stack trace of the exception in the event args.
If you have access to the source of the method(s) being run in each thread, then you can insert some trace statements that append to a List, which you can view in the debugger or dump to a file to get an idea of the thread execution order. Lock around the List so as to preserve the order of execution.
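For example, a minimal sketch of that idea (the class and member names are made up):

using System;
using System.Collections.Generic;
using System.Threading;

static class ThreadTrace
{
    static readonly object Gate = new object();
    public static readonly List<string> Entries = new List<string>();

    public static void Write(string message)
    {
        // Lock so entries from different threads keep their real order.
        lock (Gate)
        {
            Entries.Add(string.Format("{0:HH:mm:ss.fff} [thread {1}] {2}",
                DateTime.Now, Thread.CurrentThread.ManagedThreadId, message));
        }
    }
}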
What I need to know:
I would like to detect when the main thread (process?) terminates so that I can ensure certain actions are performed before it is terminated.
What I have found myself:
I found the events AppDomain.DomainUnload and AppDomain.ProcessExit. AppDomain.DomainUnload seems to work with non-applications like MbUnit. AppDomain.ProcessExit seems to work with applications, but there is a 3 second time limit which I really don't like. Are there more ways to detect when an AppDomain / process terminates?
Background:
I am looking for such an event to ensure my log is persisted to file when the application terminates. The actual logging runs on another thread using a producer-consumer pattern, where it is very likely that log entries might queue up in memory, and I need to ensure this queue is saved to file when the application terminates.
Is there anything else I should be aware of?
Update:
Changed the above to reflect what I have found out myself. I am not happy with the 3 second time limit during ProcessExit. The MSDN documentation does say though that it can be extended:
The total execution time of all ProcessExit event handlers is limited, just as the total execution time of all finalizers is limited at process shutdown. The default is three seconds, which can be overridden by an unmanaged host.
Does anyone know how to override the default?
More ideas are also highly appreciated!
Follow up:
I have posted a follow up question to this.
You should have an entry point for your application. Normally you can do some logging there once all tasks have terminated:
static void Main()
{
    try
    {
        Application.Run( .... );
    }
    finally
    {
        // logging ...
    }
}
What exactly do you want to find out?
When the process terminates? (Just because the AppDomain is unloaded doesn't necessarily mean that the entire process is terminating)
When the main thread terminates? (If there are other non-background threads, the main thread can terminate without the process terminating or the AppDomain unloading.)
So they're not quite the same thing.
Anyway, it is generally dangerous to have log messages buffered in memory at all. What happens if someone turns off the power? Or if I terminate your process through Task Manager? All your log messages are gone. So often, you'll want unbuffered writes in your log, to get messages pushed to disk immediately.
Anyway, another (more robust) approach might be to run the logger itself in a non-background thread. That way, even if the rest of the application terminates, the logger won't, so the process is kept alive. Then you just have to set some flag when the rest of the app terminates, to let the logger know that it too should close once it has written out all pending log messages.
It still won't save you from the case where the system loses power or someone forcibly terminates the process at the OS level, but it will handle all cases where the application closes normally, and gives you unlimited time to perform clean-up actions (since the process isn't actually terminating yet; it still has one live thread).
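A rough sketch of that arrangement, assuming .NET 4's BlockingCollection is available (the class and file names are made up):

using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading;

static class Log
{
    static readonly BlockingCollection<string> Queue = new BlockingCollection<string>();

    public static void Write(string entry) { Queue.Add(entry); }

    public static void Start()
    {
        var logger = new Thread(() =>
        {
            // Blocks until entries arrive; the loop only ends after
            // CompleteAdding() is called and the queue has drained.
            foreach (var entry in Queue.GetConsumingEnumerable())
                File.AppendAllText("app.log", entry + Environment.NewLine);
        });
        logger.IsBackground = false; // a foreground thread keeps the process alive
        logger.Start();
    }

    // The "flag": call this once the rest of the app has shut down.
    public static void Shutdown() { Queue.CompleteAdding(); }
}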
ie. guaranteed to be called and have unlimited time to finish?
Unfortunately, NO option is going to have unlimited time, and be guaranteed. There is no way to enforce this, as many things can happen. Somebody tripping over the power cord or a forced termination of your program will prevent any option from giving you adequate time to handle things.
In general, putting your logic at the end of the Main routine is probably the most reasonable option, since that gives you complete freedom in handling your termination events. You have no time constraints there, and can have the processing take as much time as needed.
There are no guarantees that this will run, though, since a forceful termination of your program may bypass this entirely.
Based on the documentation, it looks like the default application domain (the one your Main method is probably running in) will not receive the DomainUnload event.
I don't know a built-in event that would do what you expect.
You could define your own custom event, have interested parties register with it, and fire off the event just before you return from Main().
I don't know how old this thread is, but I've had a similar problem which was a little tough for me to solve.
I had a WinForms application that was not firing any of the aforementioned events when a user logged out. Wrapping Application.Run() in a try/finally didn't work either.
To get around this you used to have to P/Invoke into Win32 APIs - well, you did prior to .NET 2.0, anyway. Luckily MS introduced a new class called SystemEvents. With this class you can catch the SessionEnded event. This event lets you clean up when the OS wants to terminate your app. There appears to be no .NET time limit on this event, although the OS will eventually kill your app if you take too long. That's a little more than 3 seconds, although 3 seconds should be plenty of time to clean up.
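Roughly like this (a sketch; FlushLog is a hypothetical stand-in for your own cleanup):

using Microsoft.Win32;

// Subscribe once at startup. SystemEvents needs a message pump,
// which a WinForms app already has.
SystemEvents.SessionEnded += (sender, e) =>
{
    FlushLog(); // hypothetical placeholder: persist the queued log entries
};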
Secondly, my other problem was that I wanted my worker thread to terminate the main thread once it had finished its work. With Application.Run() this was hard to achieve. What I ended up doing was calling Application.Run() with a shared ApplicationContext. The worker thread is then able to call ApplicationContext.ExitThread() to force Application.Run to return. This seems to work quite nicely.
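The shared-context arrangement looks roughly like this (MainForm and DoBatchWork are placeholder names):

var context = new ApplicationContext(new MainForm());

var worker = new Thread(() =>
{
    DoBatchWork();        // placeholder for the actual work
    context.ExitThread(); // makes Application.Run(context) below return
});
worker.Start();

Application.Run(context); // returns once ExitThread() has been called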
Hope this helps someone.
Regards
NozFX
I have a WinForms app written in C# with .NET 3.5. It runs a lengthy batch process. I want the app to update status of what the batch process is doing. What is the best way to update the UI?
The BackgroundWorker sounds like the object you want.
The quick and dirty way is using Application.DoEvents(), but this can cause problems with the order in which events are handled, so it's not recommended.
The problem is probably not that you have to yield to the UI thread, but that you are doing the processing on the UI thread, blocking it from handling messages. You can use the BackgroundWorker component to do the batch processing on a different thread without blocking the UI thread.
Run the lengthy process on a background thread. The BackgroundWorker class is an easy way of doing this - it provides simple support for sending progress updates and completion events, for which the event handlers are called on the correct thread for you. This keeps the code clean and concise.
To display the updates, progress bars or status bar text are two of the most common approaches.
The key thing to remember is that if you are doing things on a background thread, you must switch to the UI thread in order to update Windows Forms controls etc.
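A minimal sketch of that pattern using BackgroundWorker (RunBatchStep, progressBar1 and statusLabel are assumed names):

var worker = new BackgroundWorker { WorkerReportsProgress = true };

worker.DoWork += (s, e) =>
{
    // Runs on a thread-pool thread; never touch controls here.
    for (int i = 0; i < 100; i++)
    {
        RunBatchStep(i);              // hypothetical unit of batch work
        worker.ReportProgress(i + 1); // marshalled back to the UI thread
    }
};

worker.ProgressChanged += (s, e) =>
{
    // Runs on the UI thread, so updating controls is safe.
    progressBar1.Value = e.ProgressPercentage;
};

worker.RunWorkerCompleted += (s, e) =>
{
    statusLabel.Text = "Done."; // also raised on the UI thread
};

worker.RunWorkerAsync();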
To beef out what people are saying about DoEvents, here's a description of what can happen.
Say you have some form with data on it and your long running event is saving it to the database or generating a report based on it. You start saving or generating the report, and then periodically you call DoEvents so that the screen keeps painting.
Unfortunately the screen isn't just painting, it will also react to user actions. This is because DoEvents stops what you're doing now to process all the windows messages waiting to be processed by your Winforms app. These messages include requests to redraw, as well as any user typing, clicking, etc.
So for example, while you're saving the data, the user can do things like making the app show a modal dialog box that's completely unrelated to the long running task (eg Help->About). Now you're reacting to new user actions inside the already running long running task. DoEvents will return when all the events that were waiting when you called it are finished, and then your long running task will continue.
What if the user doesn't close the modal dialog? Your long running task waits forever until this dialog is closed. If you're committing to a database and holding a transaction, now you're holding a transaction open while the user is having a coffee. Either your transaction times out and you lose your persistence work, or the transaction doesn't time out and you potentially deadlock other users of the DB.
What's happening here is that Application.DoEvents makes your code reentrant. See the wikipedia definition here. Note some points from the top of the article, that for code to be reentrant, it:
Must hold no static (or global) non-constant data.
Must work only on the data provided to it by the caller.
Must not rely on locks to singleton resources.
Must not call non-reentrant computer programs or routines.
It's very unlikely that long running code in a WinForms app is working only on data passed to the method by the caller, doesn't hold static data, holds no locks, and calls only other reentrant methods.
As many people here are saying, DoEvents can lead to some very weird scenarios in code. The bugs it can lead to can be very hard to diagnose, and your user is not likely to tell you "Oh, this might have happened because I clicked this unrelated button while I was waiting for it to save".
Use BackgroundWorker, and if you are also trying to update the GUI thread by handling the ProgressChanged event (e.g. for a ProgressBar), be sure to also set WorkerReportsProgress = true, or the thread that is reporting progress will die the first time it tries to call ReportProgress...
An exception is thrown, but you might not see it unless you have 'break when thrown' enabled; the output will just show that the thread exited.
Use the BackgroundWorker component to run your batch processing in a separate thread; this will then not impact the UI thread.
I want to restate what my previous commenters noted: please avoid DoEvents() whenever possible, as this is almost always a form of "hack" and causes maintenance nightmares.
If you go the BackgroundWorker road (which I suggest), you'll have to deal with cross-threading calls to the UI if you want to call any methods or properties of Controls, as these are thread-affine and must be called only from the thread they were created on. Use Control.Invoke() and/or Control.BeginInvoke() as appropriate.
If you are running in a background/worker thread, you can call Control.Invoke on one of your UI controls to run a delegate in the UI thread.
Control.Invoke is synchronous (it waits until the delegate returns). If you don't want to wait, use .BeginInvoke() to just queue the command.
The return value of .BeginInvoke() allows you to check whether the method has completed, or to wait until it has.
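For example (a sketch; statusLabel is an assumed control on the form):

// From the background/worker thread:
statusLabel.Invoke(new Action(() =>
{
    statusLabel.Text = "Processing..."; // runs on the UI thread; the caller blocks until done
}));

// Fire-and-forget variant:
IAsyncResult pending = statusLabel.BeginInvoke(new Action(() =>
{
    statusLabel.Text = "Still processing...";
}));
// pending.IsCompleted tells you whether it has run yet;
// statusLabel.EndInvoke(pending) waits for it to finish.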
Application.DoEvents() or possibly run the batch on a separate thread?
DoEvents() was what I was looking for but I've also voted up the backgroundworker answers because that looks like a good solution that I will investigate some more.
I'm writing an application to start and monitor other applications in C#. I'm using the System.Diagnostics.Process class to start applications and then monitor them using the Process.Responding property, polling the state of the application every 100 milliseconds. I use Process.CloseMainWindow to stop the application, or Process.Kill to kill it if it's not responding.
I've noticed a weird behaviour where sometimes the process object gets into a state where the responding property always returns true even when the underlying process hangs in a loop and where it doesn't respond to CloseMainWindow.
One way to reproduce it is to poll the Responding property right after starting the process instance. So for example
_process.Start();
bool responding = _process.Responding;
will reproduce the error state while
_process.Start();
Thread.Sleep(1000);
bool responding = _process.Responding;
will work.
Reducing the sleep period to 500 will introduce the error state again.
Something in calling _process.Responding too soon after starting seems to prevent the object from getting the right Windows message queue handle. I guess I need to wait for _process.Start to finish doing its asynchronous work. Is there a better way to wait for this than calling Thread.Sleep? I'm not too confident that the 1000 ms will always be enough.
Now, I need to check this out later, but I am sure there is a method that tells the thread to wait until it is ready for input. Are you monitoring GUI processes only?
Isn't Process.WaitForInputIdle of any help to you? Or am I missing the point? :)
Update
Following a chit-chat on Twitter (or tweet-tweet?) with Mendelt I thought I should update my answer so the community is fully aware..
WaitForInputIdle will only work on applications that have a GUI.
You specify the time to wait, and the method returns a bool indicating whether the process reached an idle state within that time frame; you can obviously loop on this if required, or handle it as appropriate.
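For example, using the timeout overload in a loop (a sketch; only valid for processes that have a GUI):

_process.Start();
// Give the process up to 5 seconds at a time to start pumping messages.
while (!_process.WaitForInputIdle(5000))
{
    // Still starting up; log, retry, or give up after a few attempts.
}
bool responding = _process.Responding;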
Hope that helps :)
I think it may be better to enhance the check for _process.Responding so that you only try to stop/kill the process if the Responding property returns false for more than 5 seconds (for example).
I think you may find that quite often, applications may be "not responding" for a split second whilst they are doing more intensive processing.
I believe a more lenient approach will work better, allowing a process to be "not responding" for a short amount of time, only taking action if it is repeatedly "not responding" for several seconds (or however long you want).
Further note: the Microsoft documentation indicates that the Responding property specifically relates to the user interface, which is why a newly started process may not have its UI responding immediately.
Thanks for the answers. This
_process.Start();
_process.WaitForInputIdle();
seems to solve the problem. It's still strange, because Responding and WaitForInputIdle should both be using the same Win32 API call under the covers.
Some more background info
GUI applications have a main window with a message queue. Responding and WaitForInputIdle work by checking whether the process still processes messages from this message queue, which is why they only work with GUI apps. Somehow it seems that calling Responding too soon interferes with the Process getting a handle to that message queue. Calling WaitForInputIdle seems to solve that problem.
I'll have to dive into reflector to see if I can make sense of this.
update
It seems that retrieving the window handle associated with the process just after starting is enough to trigger the weird behaviour. Like this:
_process.Start();
IntPtr mainWindow = _process.MainWindowHandle;
I checked with Reflector and this is what Responding does under the covers. It seems that if you get the MainWindowHandle too soon you get the wrong one, and it then uses this wrong handle for the rest of the lifetime of the process, or until you call Refresh().
update
Calling WaitForInputIdle() only solves the problem some of the time. Calling Refresh() every time you read the Responding property seems to work better.
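In other words (sketch):

_process.Refresh();                    // throw away the cached MainWindowHandle
bool responding = _process.Responding; // re-queries using a fresh handle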
I too noticed this in a project about 2 years ago. I called .Refresh() before requesting certain property values. It was a trial-and-error approach to find out when I needed to call .Refresh().