I'm working on a project with some event handling code. Basically, I create a timer and then attach my own event handler to its timeout event. I double-checked by putting a breakpoint there, and yes, the event handler does get added to that event (and yes, I also start the timer). For some reason, though, sometimes the event handler fires and at other times it does not. I'm using multi-threading and have considered that it might somehow be related to that, but I'm unsure.
I'm aware this is a vague question, but I'm hoping that someone has run into something similar.
Thanks,
PM
EDIT: I have looked into the issue a bit further, and I notice that this is indeed a threading issue. The thread responsible for this event is the one handling the network part of my program, and it blocks immediately afterwards because it is waiting for input from another instance of the program on the network. How would I get around this?
Sounds like you either (a) have a System.Timers.Timer with its SynchronizingObject set to a UI control, or (b) are using a System.Windows.Forms.Timer. Either way, you then block the UI thread with the network read, preventing the event from firing.
You have two options: use a different thread for either the network read or the timer event. If you use a System.Timers.Timer, don't set a SynchronizingObject, and it will raise the Elapsed event on a ThreadPool thread. Alternatively, make the network read asynchronous.
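Something along these lines (the interval and ReadFromNetwork are placeholders for your own values and network code):

// Sketch: no SynchronizingObject is set, so Elapsed fires on a ThreadPool thread
// and a blocked UI/network thread can't suppress it.
private readonly System.Timers.Timer _timer = new System.Timers.Timer(1000);

public void Start()
{
    _timer.Elapsed += (s, e) =>
        Console.WriteLine("Tick on thread " + Thread.CurrentThread.ManagedThreadId);
    _timer.Start();

    // Keep the blocking network read off the UI thread.
    new Thread(ReadFromNetwork) { IsBackground = true }.Start();
}

private void ReadFromNetwork()
{
    // placeholder for the blocking read, e.g. NetworkStream.Read(...)
}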
What kind of timer is it, and do you retain a reference to it? Is it possible that the timer is being garbage collected before it's due to fire? There's a warning about this in the docs for System.Threading.Timer.
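For illustration, this is the kind of thing that bites people (hypothetical snippet):

// Sketch of the GC pitfall: a timer referenced only by a local variable
// can be collected before it ever fires.
void StartTimerWrong()
{
    var t = new System.Threading.Timer(_ => Console.WriteLine("tick"), null, 5000, Timeout.Infinite);
    // nothing keeps 't' alive after this method returns
}

// Keeping the timer in a field keeps it alive for as long as its owner is alive.
private System.Threading.Timer _keepAlive;

void StartTimerRight()
{
    _keepAlive = new System.Threading.Timer(_ => Console.WriteLine("tick"), null, 5000, Timeout.Infinite);
}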
I have an application that interacts with HID devices. On slower machines it seems to get hung up on itself when I subscribe to the onReport event raised when a HID report is received, and I suspect it is because a new invocation of the event handler is launched before the previous one has finished its tasks.
Is there a way to ensure that reports are ignored until the previous event handler has finished its tasks? I was thinking of a static variable that the handler could set as its last action, but I'd like to find something built into .NET if it exists.
You can use thread synchronization mechanisms to make sure only one thread can run a particular piece of code at a time. Have a search on Google for thread synchronization.
The simplest solution you could consider is a lock:
// The lock object should be a private field shared by everything that enters this code.
private readonly object lockObject = new object();

lock (lockObject)
{
    // The code you put here can be run by only one thread at a time...
    // ...
}
You can also introduce fields that mark whether particular parts of the code have or haven't been executed already, as in the sketch below.
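If what you want is specifically to ignore reports that arrive while a previous one is still being handled (a lock would make them queue up instead), an Interlocked flag works well; a rough sketch with placeholder names:

// 0 = idle, 1 = a report is currently being processed
private int _busy;

private void OnReport(object sender, EventArgs e)
{
    // Atomically set the flag; if it was already set, skip this report.
    if (Interlocked.CompareExchange(ref _busy, 1, 0) != 0)
        return;

    try
    {
        ProcessReport();   // placeholder for your actual work
    }
    finally
    {
        Interlocked.Exchange(ref _busy, 0);   // mark idle again
    }
}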
I have an event that is raised from a third-party library, which executes on a background thread. This event basically notifies listeners of status updates in the system the library is watching. The handler invokes itself on the UI thread if InvokeRequired is true, and in either case then proceeds to append an entry for the status change to text in a textbox, and pop up a notification in the tray.
Now, the problem is that these status updates can come in very rapidly; the system being monitored can go from its "idle" state through several intermediates to a "ready" state in milliseconds. I need to know that the system has transitioned through all of these intermediate states; however, not all of the state changes are getting to the log. Setting a breakpoint and stepping through the handler shows the oddest behavior; the handler will step through the first couple of lines of code, and will then jump back to the method's entry. It's almost as if either the event or the Windows message pump is aborting the method call because another call to the same method is incoming. Putting the method body in a lock block does not solve it.
I've seen this before in other projects that do not use this third-party library. I wasn't as concerned there, because the rapid-fire event was simply triggering window redraws. If they all happened, great, but if one was short-circuited, there was another in the pipe that would go through. This, however, is a much more application-critical task that must happen every time the event is raised, in order (it doesn't have to happen as fast as the states actually change; definitely not expecting that).
What's the cause of this short-circuiting behavior, and how do I stop it?
What you're seeing is most likely new calls to the event handler from the background thread while the call you first started stepping through is still running.
Rather than doing all of the work synchronously in the thread the event handler fires on, it would likely be beneficial to do the work on another thread.
Just wrap everything you're doing in your current event handler in a Task.Factory.StartNew, or use BeginInvoke to marshal to the UI thread instead of Invoke.
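Roughly like this (logTextBox and the handler name are placeholders):

// Sketch: offload the handler body so the library's thread returns immediately.
private void OnStatusChanged(object sender, EventArgs e)
{
    Task.Factory.StartNew(() =>
    {
        // lengthy non-UI work here...

        // BeginInvoke is fire-and-forget, so this worker isn't blocked
        // waiting for the UI thread the way Invoke would block it.
        logTextBox.BeginInvoke(new Action(() =>
            logTextBox.AppendText("status changed\r\n")));
    });
}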
I can't be sure, as I have no knowledge of that library, but it could be that it's unable to fire more events until it finishes executing all of the event handlers for the previous event.
Another option, either to solve this problem or to keep your UI from drowning in updates, is to take the status changes as they come in and just dump them into a collection, then periodically check that collection and process all of the changes in a batch. This is easier on the event handler for the 3rd-party object, since it just needs to add an item to a collection, and also easier on the UI, since it won't need to update several times in the time it takes the monitor to even render the changes.
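A sketch of that batching approach, assuming WinForms, with statusTextBox, uiTimer, and the handler names as placeholders:

// The 3rd-party handler only enqueues; a System.Windows.Forms.Timer on the
// UI thread drains the queue and updates the UI once per tick.
private readonly ConcurrentQueue<string> _pending = new ConcurrentQueue<string>();

private void OnStatusUpdate(object sender, EventArgs e)   // runs on the background thread
{
    _pending.Enqueue(DateTime.Now + ": status changed");  // cheap, never blocks the library
}

private void uiTimer_Tick(object sender, EventArgs e)     // runs on the UI thread
{
    var batch = new StringBuilder();
    string line;
    while (_pending.TryDequeue(out line))
        batch.AppendLine(line);

    if (batch.Length > 0)
        statusTextBox.AppendText(batch.ToString());        // one UI update per tick
}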
I am working on a Messenger library. The main class has a Login method. When logging in, all contact list data is downloaded and stored until the Login has completed, at which point I raise a UserAdded event for each user that was downloaded.
Currently I raise the events right at the end of the Login method, one by one. This works, but it means if I perform a lengthy operation inside a UserAdded event handler, the library consumer does not get their events in a timely fashion.
One way around this I can see would be to raise each event asynchronously, but this would thrash the threadpool.
Am I doing it the right way currently? Should I simply make a note in the documentation warning against performing lengthy operations inside event handlers?
Perhaps you might want to change your event handler to simply enqueue work items into a thread-safe queue. You can then have a single thread that pumps the queue continuously to actually process the messages. That way the raise happens very quickly, and there is only one thread actually processing the queue of work items.
However, doing this means you now have to deal with the fact that raising your event does not mean it has been processed immediately, which could affect logic you have in your app.
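A minimal sketch of that pump, using BlockingCollection (ProcessItem is a placeholder for the real message processing):

// Handlers enqueue; one long-running consumer thread processes items in order.
private readonly BlockingCollection<string> _work = new BlockingCollection<string>();

public void StartPump()
{
    new Thread(() =>
    {
        // Blocks when the queue is empty; completes after CompleteAdding() is called.
        foreach (var item in _work.GetConsumingEnumerable())
            ProcessItem(item);
    }) { IsBackground = true }.Start();
}

private void OnSomethingHappened(object sender, EventArgs e)
{
    _work.Add("work item");   // returns immediately; the pump thread does the rest
}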
I'm attempting to monitor the status of many HPC jobs running in parallel from a single-threaded program. I'm subscribing to events raised by OnJobState, and when monitoring as few as three jobs, event state changes go missing and the job appears stuck running.
I'm assuming I need a thread per job to catch all the events, but I can't find any information about the limits of event subscription in a single-threaded program.
I would have thought the .NET platform would queue all of this up, but that doesn't appear to be the case.
Events are synchronous by default. That means that the object raising an event continues its execution only after all event handlers have finished their work. The event handlers run on the same thread as the object that raises the event. That leads to the following conclusions:
The .NET framework can't queue anything, because the events are raised one after another.
You should not do heavy computing in event handlers. If the events are fired in rapid succession, even moderate computing should be avoided.
If you want queuing, you need to implement it yourself: in your event handler, add the info about the new event to a thread-safe queue and process that queue from another thread.
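A tiny illustration of that synchronous behaviour:

// The handler runs on the raising thread, and Raise() doesn't return
// until every subscribed handler has finished.
class Publisher
{
    public event EventHandler Something;

    public void Raise()
    {
        Console.WriteLine("Raising on thread " + Thread.CurrentThread.ManagedThreadId);
        EventHandler handler = Something;
        if (handler != null)
            handler(this, EventArgs.Empty);   // blocks here until all handlers return
        Console.WriteLine("All handlers finished");
    }
}

// Subscribing with a slow handler delays Raise() itself:
// publisher.Something += (s, e) => Thread.Sleep(1000);   // runs on the raising thread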
I made this question more general to remove the confusion over HPC. It looks like I have no control over how my event handler is executed, so I need to make it thread-safe.
I'm looking into options for doing asynchronous event dispatching in a component that has many subscribers to its events. In perusing the options, I ran across this example:
public event ValueChangedEvent ValueChanged;
public void FireEventAsync(EventArgs e)
{
    // No subscribers means no invocation list.
    if (ValueChanged == null)
        return;

    Delegate[] delegates = ValueChanged.GetInvocationList();
    foreach (Delegate d in delegates)
    {
        ValueChangedEvent ev = (ValueChangedEvent)d;
        // Note: no callback is supplied and EndInvoke is never called.
        ev.BeginInvoke(e, null, null);
    }
}
Beyond the older syntax (the sample was from .NET 1.1), it looks to me like this is a serious resource leak. There's no completion method, no polling for completion, or any other way that EndInvoke will be called.
My understanding is that every BeginInvoke must have a corresponding EndInvoke. Otherwise there are pending AsyncResult object instances floating around, along with (potentially) exceptions that were raised during the asynchronous events.
I realize that it's easy enough to change that by supplying a callback and doing an EndInvoke, but if I don't need to . . .
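(For reference, the callback version I'm referring to would look something like this, assuming the same ValueChangedEvent delegate as in the sample above:)

// Each BeginInvoke gets a completion callback that calls EndInvoke,
// so results and exceptions are observed.
foreach (Delegate d in ValueChanged.GetInvocationList())
{
    ValueChangedEvent ev = (ValueChangedEvent)d;
    ev.BeginInvoke(e, ar =>
    {
        var target = (ValueChangedEvent)ar.AsyncState;
        try
        {
            target.EndInvoke(ar);   // rethrows any exception from the handler
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex);  // decide how to report handler failures
        }
    }, ev);
}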
Handling the asynchronous exceptions is another matter entirely, and, combined with the need to synchronize with the UI thread (i.e. InvokeRequired, etc.), could very well tank the whole idea of doing these asynchronous notifications.
So, two questions:
Am I correct in believing that every BeginInvoke requires a corresponding EndInvoke?
Beyond what I've noted above, are there other pitfalls to doing asynchronous event notifications in Windows Forms applications?
A call to BeginInvoke() should be paired with an EndInvoke(), but not doing so will not result in a resource leak. The IAsyncResult returned by BeginInvoke() will be garbage collected.
The biggest pitfall in this code is that you are highly exposed to exceptions terminating the application. You might want to wrap the delegate invocation in an exception handler and put some thought into how you want to propagate the exceptions that happen (report the first, produce an aggregate exception, etc.).
Invoking a delegate using BeginInvoke() takes a thread from the thread pool to run the event handler. This means that the handler will never fire on the main UI thread, which might make some event handler scenarios (e.g. updating the UI) harder to handle. Handlers would need to realize they must call SynchronizationContext.Send() or .Post() to synchronize with the primary UI thread. Of course, all other multi-threaded programming pitfalls also apply.
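For example, a handler that may run on a pool thread could look roughly like this (statusLabel is a placeholder UI control, and the context must be captured on the UI thread):

// Capture the UI SynchronizationContext once, on the UI thread.
private readonly SynchronizationContext _ui = SynchronizationContext.Current;

private void OnValueChanged(EventArgs e)   // may run on a ThreadPool thread
{
    try
    {
        // non-UI work here...

        // Marshal the UI update back to the UI thread.
        _ui.Post(_ => statusLabel.Text = "value changed", null);
    }
    catch (Exception ex)
    {
        // An unhandled exception on a pool thread terminates the process,
        // so catch and report it here instead.
        Console.WriteLine(ex);
    }
}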
After thinking about this for a while, I came to the conclusion that it's probably a bad idea to do asynchronous events in Windows Forms controls. Windows Forms events should be raised on the UI thread. Doing otherwise presents an undue burden on clients, and possibly makes a mess with AsyncResult objects and asynchronous exceptions.
It's cleaner to let the clients fire off their own asynchronous processing (using BackgroundWorker or some other technique), or handle the event synchronously.
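For instance, a client that needs lengthy processing can do something like this in its (synchronously raised) handler, with DoLengthyProcessing and resultLabel as placeholders:

// The event arrives on the UI thread; the client offloads the heavy work
// to a BackgroundWorker instead of blocking the handler.
private void control_ValueChanged(object sender, EventArgs e)
{
    var worker = new BackgroundWorker();
    worker.DoWork += (s, args) => DoLengthyProcessing();   // runs on a pool thread
    worker.RunWorkerCompleted += (s, args) =>
        resultLabel.Text = "done";                          // back on the UI thread
    worker.RunWorkerAsync();
}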
There are exceptions, of course. System.Timers.Timer, for example, raises the Elapsed event on a thread pool thread. But then, the initial notification comes in on a pool thread. It looks like the general rule is: raise the events on the same thread that got the initial notification. At least, that's the rule that works best for me. That way there's no question about leaking objects.
No. EndInvoke is only required if a return type is specified. Check out this thread. Also, I posted this thread, which is semi-related.
I really can't help you with that one! :-) Sorry.