Timer and IDisposable - Extra protection in Dispose? - c#

I have a class that creates and uses a System.Threading.Timer, something like this:
using System.Threading;
public class MyClass : IDisposable
{
private List<int> ints = new List<int>();
private Timer t;
public MyClass()
{
//Create timer in disabled state
t = new Timer(this.OnTimer, null, Timeout.Infinite, Timeout.Infinite);
}
private void DisableTimer()
{
if (t == null) return;
t.Change(Timeout.Infinite, Timeout.Infinite);
}
private void EnableTimer()
{
if (t == null) return;
//Fire timer in 1 second and every second thereafter.
EnableTimer(1000);
}
private void EnableTimer(long remainingTime)
{
if (t == null) return;
t.Change(remainingTime, 1000);
}
private void OnTimer(object state)
{
lock (ints) //Added since original post
{
DisableTimer();
DoSomethingWithTheInts();
ints.Clear();
//
//Don't reenable the timer here since ints is empty. No need for timer
//to fire until there is at least one element in the list.
//
}
}
public void Add(int i)
{
lock(ints) //Added since original post
{
DisableTimer();
ints.Add(i);
if (ints.Count > 10)
{
DoSomethingWithTheInts();
ints.Clear();
}
if (ints.Count > 0)
{
EnableTimer(FigureOutHowMuchTimeIsLeft());
}
}
}
bool disposed = false;
public void Dispose()
{
//Should I protect myself from the timer firing here?
Dispose(true);
}
protected virtual void Dispose(bool disposing)
{
//Should I protect myself from the timer firing here?
if (disposed) return;
if (t == null) return;
t.Dispose();
disposed = true;
}
}
EDIT - In my actual code I do have locks on the List in both Add and OnTimer. I accidentally left them out when I simplified my code for posting.
Essentially, I want to accumulate some data, processing it in batches. As I am accumulating, if I get 10 items OR it has been 1 second since I last processed the data, I will process what I have so far and clear my list.
Since Timer is Disposable, I have implemented the Disposable pattern on my class. My question is this: Do I need extra protection in either Dispose method to prevent any side effects of the timer event firing?
I could easily disable the timer in either or both Dispose methods. I understand that this would not necessarily make me 100% safe, as the timer could fire between the time that Dispose is called and the timer is disabled. Leaving that issue aside for the moment, would it be considered a best practice to guard the Dispose method(s), to the best of my ability, against the possibility of the timer executing?
Now that I think about it, I should probably also consider what I should do if the List is not empty in Dispose. In the usage pattern of the object it should be empty by then, but I guess that anything is possible. Let's say that there were items left in the List when Dispose is called; would it be good or bad to go ahead and try to process them? I suppose that I could put in this trusty old comment:
public void Dispose()
{
if (ints != null && ints.Count > 0)
{
//Should never get here. Famous last words!
}
}
Whether or not there are any items in the list is secondary. I am really only interested in finding out what the best practice is for dealing with potentially enabled Timers and Dispose.
If it matters, this code is actually in a Silverlight class library. It does not interact with the UI at all.
EDIT:
I found what looks like a pretty good solution here. One answer, by jsw, suggests protecting the OnTimer event with Monitor.TryEnter/Monitor.Exit, effectively putting the OnTimer code in a critical section.
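A minimal sketch of that TryEnter idea, applied to the class above (the _timerGate field name is mine, not from the linked answer): if a previous callback is still running, the new tick simply returns instead of blocking.
private readonly object _timerGate = new object();
private void OnTimer(object state)
{
    // Skip this tick entirely if a previous callback is still executing.
    if (!Monitor.TryEnter(_timerGate)) return;
    try
    {
        DoSomethingWithTheInts();
        ints.Clear();
    }
    finally
    {
        Monitor.Exit(_timerGate);
    }
}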
Michael Burr posted what seems to be an even better solution, at least for my situation, of using a one-shot timer by setting the due time to the desired interval and setting the period to Timeout.Infinite.
For my work, I only want the timer to fire if at least one item has been added to the List. So, to begin with, my timer is disabled. Upon entering Add, disable the timer so that it does not fire during the Add. When an item is added, process the list (if necessary). Before leaving Add, if there are any items in the list (i.e. if the list has not been processed), enable the timer with the due time set to the remaining interval and the period set to Timeout.Infinite.
With a one-shot timer, it is not even necessary to disable the timer in the OnTimer event since the timer will not fire again anyway. I also don't have to enable the timer when I leave OnTimer as there will not be any items in the list. I will wait until another item is added to the list before enabling the one-shot timer again.
Thanks!
EDIT - I think that this is the final version. It uses a one-shot timer. The timer is enabled when the list goes from zero members to one member. If we hit the count threshold in Add, the items are processed and the timer is disabled. Access to the list of items is guarded by a lock. I am on the fence as to whether a one-shot timer buys me much over a "normal" timer, other than that it will only fire when there are items in the list and only 1 second after the first item was added. If I used a normal timer, then this sequence could happen: add 11 items in rapid succession. Adding the 11th causes the items to be processed and removed from the list. Assume 0.5 seconds is left on the timer. Add 1 more item. The timer will fire in approximately 0.5 seconds and the one item will be processed and removed. With the one-shot timer, the timer is re-enabled when the 1 item is added and will not fire until a full 1 second interval (more or less) has elapsed. Does it matter? Probably not. Anyway, here is a version that I think is reasonably safe and does what I want it to do.
using System.Threading;
public class MyClass : IDisposable
{
private List<int> ints = new List<int>();
private Timer t;
public MyClass()
{
//Create timer in disabled state
t = new Timer(this.OnTimer, null, Timeout.Infinite, Timeout.Infinite);
}
private void DisableTimer()
{
if (t == null) return;
t.Change(Timeout.Infinite, Timeout.Infinite);
}
private void EnableTimer()
{
if (t == null) return;
//Fire event in 1 second but no events thereafter.
t.Change(1000, Timeout.Infinite);
}
private void DoSomethingWithTheInts()
{
foreach (int i in ints)
{
Whatever(i);
}
}
private void OnTimer(object state)
{
lock (ints)
{
if (disposed) return;
DoSomethingWithTheInts();
ints.Clear();
}
}
public void Add(int i)
{
lock(ints)
{
if (disposed) return;
ints.Add(i);
if (ints.Count > 10)
{
DoSomethingWithTheInts();
ints.Clear();
}
if (ints.Count == 0)
{
DisableTimer();
}
else
if (ints.Count == 1)
{
EnableTimer();
}
}
}
bool disposed = false;
public void Dispose()
{
if (disposed) return;
Dispose(true);
}
protected virtual void Dispose(bool disposing)
{
lock(ints)
{
DisableTimer();
if (disposed) return;
if (t == null) return;
t.Dispose();
disposed = true;
}
}
}

I see some serious problems with synchronization in this code. I think what you are trying to solve is a classic reader-writer problem. With the current approach it is highly probable that you are going to run into problems, e.g. what if someone tries to modify the list while it is being processed?
I STRONGLY recommend using the Parallel Extensions for .NET or (when using .NET 3.5 or earlier) classes such as ReaderWriterLock, or even the simple lock keyword.
Also remember that System.Threading.Timer invokes its callback asynchronously, so OnTimer is called from a SEPARATE thread (from the .NET ThreadPool), so YOU definitely need some synchronization (like locking the collection).
I'd have a look at the concurrent collections in .NET 4, or use some synchronization primitives in .NET 3.5 or earlier. Do not disable the timer in the Add method. Write correct code in OnTimer, like:
lock(daObject)
{
if (list.Count > 10)
DoSTHWithList();
}
This code is the simplest (although definitely not optimal) that should work. Similar code should also be added to the Add method (locking the collection).
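For completeness, the Add side might look roughly like this under the same assumptions (daObject is whatever object guards the collection, and the threshold of 10 mirrors the original code):
public void Add(int i)
{
    lock (daObject)
    {
        list.Add(i);
        if (list.Count > 10)
        {
            DoSTHWithList();
            list.Clear();
        }
    }
}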
Hope it helps, if not msg me.
luke

You should not have to disable the timer during OnTimer because you have a lock around the call; hence all the threads wait for the first one to finish...

Related

Am I implementing this buffer using a C# Timer correctly?

I have a service that subscribes to updates to a repository.
When an update message is received, the service needs to reload some data from the repository.
However, many update messages can be received in a short period of time. So I want to create a buffer / time window, meaning only one reload will happen for a period in which many update messages arrived.
I've created a very rough outline:
class TestService
{
private Timer scheduledReloadTimer;
public void AttemptReload()
{
if (scheduledReloadTimer == null)
{
Console.WriteLine("Scheduling reload...");
scheduledReloadTimer = new Timer(Reload, null, 10000, Timeout.Infinite);
}
else
{
Console.WriteLine("Reload already scheduled for this period...");
}
}
private void Reload(object stateInfo)
{
scheduledReloadTimer.Dispose();
scheduledReloadTimer = null;
Console.WriteLine("Doing reload..");
}
}
Is using the null check on the Timer good enough to see if a reload has already been scheduled?
Am I disposing the Timer correctly?
Is there anything else I am missing here, especially around thread safety?
I've seen another stackoverflow answer that suggests using the Reactive Extensions to achieve this: https://stackoverflow.com/a/42887221/67357 but is that overkill?
You do have a potential thread-safety issue here. A quick fix would be to create a thread lock scope around the critical parts of your code, to ensure that while you're inspecting/creating and setting the timer variable, no other thread can get in there and start the same process in parallel:
class TestService
{
private Timer scheduledReloadTimer;
private object timerLock = new object();
public void AttemptReload()
{
lock (timerLock)
{
if (scheduledReloadTimer == null)
{
Console.WriteLine("Scheduling reload...");
scheduledReloadTimer = new Timer(Reload, null, 10000, Timeout.Infinite);
}
else
{
Console.WriteLine("Reload already scheduled for this period...");
}
}
}
private void Reload(object stateInfo)
{
lock (timerLock)
{
scheduledReloadTimer.Dispose();
scheduledReloadTimer = null;
}
Console.WriteLine("Doing reload..");
}
}
Reactive Extensions are a good way to deal with this throttling issue - as the code is already written for you.
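As a rough illustration (assuming the System.Reactive NuGet package and that update notifications are pushed into a Subject), the throttled reload could look something like this. Sample gives at most one reload per 10-second window; Throttle would instead wait for a quiet period - pick whichever matches the requirement.
using System;
using System.Reactive.Linq;
using System.Reactive.Subjects;
class TestService : IDisposable
{
    private readonly Subject<int> updateMessages = new Subject<int>();
    private readonly IDisposable subscription;
    public TestService()
    {
        // Emit at most one reload per 10-second window, no matter how many updates arrive.
        subscription = updateMessages
            .Sample(TimeSpan.FromSeconds(10))
            .Subscribe(_ => Reload());
    }
    public void AttemptReload() => updateMessages.OnNext(0);
    private void Reload() => Console.WriteLine("Doing reload..");
    public void Dispose() => subscription.Dispose();
}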
Another approach might be to modify the AttemptReload call to simply reset the interval on the timer (if scheduledReloadTimer != null), essentially pushing back the invocation of the timer event with each subsequent call to AttemptReload.
That way, the timer will definitely not fire until after the last call to AttemptReload + 10,000 milliseconds.
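A hedged sketch of that variant, reusing the lock from the answer above (10000 ms matches the original due time):
public void AttemptReload()
{
    lock (timerLock)
    {
        if (scheduledReloadTimer == null)
        {
            Console.WriteLine("Scheduling reload...");
            scheduledReloadTimer = new Timer(Reload, null, 10000, Timeout.Infinite);
        }
        else
        {
            // Push the pending reload back another 10 seconds.
            scheduledReloadTimer.Change(10000, Timeout.Infinite);
        }
    }
}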

Synchronization mechanism for an observable object

Let's imagine we have to synchronize read/write access to shared resources. Multiple threads will access that resource for both reading and writing (most of the time for reading, sometimes for writing). Let's also assume that each write will always trigger a read operation (the object is observable).
For this example I'll imagine a class like this (forgive syntax and style, it's just for illustration purposes):
class Container {
public ObservableCollection<Operand> Operands;
public ObservableCollection<Result> Results;
}
I'm tempted to use a ReaderWriterLockSlim for this purpose; moreover, I'd put it at Container level (imagine the object is not so simple and one read/write operation may involve multiple objects):
public ReaderWriterLockSlim Lock;
Implementation of Operand and Result has no meaning for this example.
Now let's imagine some code that observes Operands and will produce a result to put in Results:
void AddNewOperand(Operand operand) {
try {
_container.Lock.EnterWriteLock();
_container.Operands.Add(operand);
}
finally {
_container.Lock.ExitWriteLock();
}
}
Our hypothetical observer will do something similar, but to consume a new element: it'll lock with EnterReadLock() to get operands and then EnterWriteLock() to add a result (let me omit the code for this). This will produce an exception because of recursion, but if I set LockRecursionPolicy.SupportsRecursion then I'll just open my code to deadlocks (from MSDN):
By default, new instances of ReaderWriterLockSlim are created with the LockRecursionPolicy.NoRecursion flag and do not allow recursion. This default policy is recommended for all new development, because recursion introduces unnecessary complications and makes your code more prone to deadlocks.
I repeat relevant part for clarity:
Recursion [...] makes your code more prone to deadlocks.
If I'm not wrong, with LockRecursionPolicy.SupportsRecursion, if from the same thread I ask for, let's say, a read lock and someone else asks for a write lock, then I'll have a deadlock, so what MSDN says makes sense. Moreover, recursion will degrade performance too in a measurable way (and that's not what I want if I'm using ReaderWriterLockSlim instead of ReaderWriterLock or Monitor).
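To make the problem concrete, this is roughly the shape of observer code that gets into trouble (a sketch only, with a hypothetical Compute helper): the CollectionChanged handler still holds the read lock when it tries to take the write lock on the same ReaderWriterLockSlim.
void OnOperandsChanged(object sender, NotifyCollectionChangedEventArgs e)
{
    _container.Lock.EnterReadLock();
    try
    {
        var result = Compute(_container.Operands);
        // Throws LockRecursionException: the write lock cannot be taken while this thread
        // holds a read lock (an upgradeable read lock would be required), and enabling
        // recursion elsewhere opens the door to the deadlocks MSDN warns about.
        _container.Lock.EnterWriteLock();
        try
        {
            _container.Results.Add(result);
        }
        finally { _container.Lock.ExitWriteLock(); }
    }
    finally { _container.Lock.ExitReadLock(); }
}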
Question(s)
Finally my questions are (please note I'm not searching for a discussion about general synchronization mechanisms, I would know what's wrong for this producer/observable/observer scenario):
What's better in this situation? Avoiding ReaderWriterLockSlim in favor of Monitor (even if in real-world code reads will be much more frequent than writes)?
Giving up such coarse synchronization? This may even yield better performance, but it'll make the code much more complicated (of course not in this example, but in real-world code).
Should I just make notifications (from observed collection) asynchronous?
Something else I can't see?
I know that there is no single best synchronization mechanism, so the tool we use must be the right one for our case, but I wonder whether there are some best practices, or whether I'm just missing something very important about threads and observers (imagine using Microsoft Reactive Extensions, but the question is general, not tied to that framework).
Possible solutions?
What I would try is to make events (somehow) deferred:
1st solution
Each change won't fire any CollectionChanged event; it's kept in a queue. When the provider (the object that pushes data) has finished, it'll manually force the queue to be flushed (raising each event in sequence). This may be done in another thread or even in the caller thread (but outside the lock).
It may work, but it'll make everything less "automatic" (each change notification must be manually triggered by the producer itself: more code to write, more bugs all around).
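A minimal sketch of this first solution, under the assumption that the inner collection raises no events by itself and that RaiseCollectionChanged is a hypothetical helper that replays them to subscribers:
var pending = new List<NotifyCollectionChangedEventArgs>();
_container.Lock.EnterWriteLock();
try
{
    _container.Operands.Add(operand);   // assumed not to raise events directly
    pending.Add(new NotifyCollectionChangedEventArgs(
        NotifyCollectionChangedAction.Add, operand));
}
finally
{
    _container.Lock.ExitWriteLock();
}
// Flush the queued notifications outside the lock (same thread or another one).
foreach (var args in pending)
    RaiseCollectionChanged(args);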
2nd solution
Another solution may be to provide a reference to our lock to the observable collection. If I wrap ReaderWriterLockSlim in a custom object (useful to hide it in an easy-to-use IDisposable object), I may add a ManualResetEvent to notify that all locks have been released; in this way the collection itself may raise events (again, in the same thread or in another thread).
3rd solution
Another idea could be to just make events asynchronous. If an event handler needs the lock, it will simply block until it gets its turn. What worries me here is the large number of threads that may be used (especially from the thread pool).
Honestly I don't know if any of these is applicable in a real-world application (personally - from a user's point of view - I prefer the second one, but it implies a custom collection for everything and it makes the collection aware of threading, which I would avoid if possible). I wouldn't like to make the code more complicated than necessary.
This sounds like quite the multi-threading pickle. It's quite challenging to work with recursion in this chain-of-events pattern, whilst still avoiding deadlocks. You might want to consider designing around the problem entirely.
For example, you could make the addition of an operand asynchronous to the raising of the event:
private readonly BlockingCollection<Operand> _additions
= new BlockingCollection<Operand>();
public void AddNewOperand(Operand operand)
{
_additions.Add(operand);
}
And then have the actual addition happen in a background thread:
private void ProcessAdditions()
{
foreach(var operand in _additions.GetConsumingEnumerable())
{
_container.Lock.EnterWriteLock();
_container.Operands.Add(operand);
_container.Lock.ExitWriteLock();
}
}
public void Initialize()
{
var pump = new Thread(ProcessAdditions)
{
Name = "Operand Additions Pump"
};
pump.Start();
}
This separation sacrifices some consistency - code running after the add method won't actually know when the add has actually happened and maybe that's a problem for your code. If so, this could be re-written to subscribe to the observation and use a Task to signal when the add completes:
public Task AddNewOperandAsync(Operand operand)
{
var tcs = new TaskCompletionSource<byte>();
// Compose an event handler for the completion of this task
NotifyCollectionChangedEventHandler onChanged = null;
onChanged = (sender, e) =>
{
// Is this the event for the operand we have added?
if (e.NewItems.Contains(operand))
{
// Complete the task.
tcs.SetResult(0);
// Remove the event-handler.
_container.Operands.CollectionChanged -= onChanged;
}
};
// Hook in the handler.
_container.Operands.CollectionChanged += onChanged;
// Perform the addition.
_additions.Add(operand);
// Return the task to be awaited.
return tcs.Task;
}
The event-handler logic is raised on the background thread pumping the add messages, so there is no possibility of it blocking your foreground threads. If you await the add on the message-pump for the window, the synchronization context is smart enough to schedule the continuation on the message-pump thread as well.
Whether you go down the Task route or not, this strategy means that you can safely add more operands from an observable event without re-entering any locks.
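Usage from the calling side would then be a plain await (a sketch; HandleNewOperandAsync is a made-up wrapper):
private async Task HandleNewOperandAsync(Operand operand)
{
    // Queues the add and resumes once the background pump has actually applied it.
    await AddNewOperandAsync(operand);
    Console.WriteLine("Operand is now in the container.");
}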
I'm not sure if this is exactly the same issue, but when dealing with relatively small amounts of data (2k-3k entries), I have used the code below to facilitate cross-thread read/write access to collections bound to the UI. This code was originally found here.
public class BaseObservableCollection<T> : ObservableCollection<T>
{
// Constructors
public BaseObservableCollection() : base() { }
public BaseObservableCollection(IEnumerable<T> items) : base(items) { }
public BaseObservableCollection(List<T> items) : base(items) { }
// Event
public override event NotifyCollectionChangedEventHandler CollectionChanged;
// Event Handler
protected override void OnCollectionChanged(NotifyCollectionChangedEventArgs e)
{
// Be nice - use BlockReentrancy like MSDN said
using (BlockReentrancy())
{
if (CollectionChanged != null)
{
// Walk thru invocation list
foreach (NotifyCollectionChangedEventHandler handler in CollectionChanged.GetInvocationList())
{
DispatcherObject dispatcherObject = handler.Target as DispatcherObject;
// If the subscriber is a DispatcherObject and different thread
if (dispatcherObject != null && dispatcherObject.CheckAccess() == false)
{
// Invoke handler in the target dispatcher's thread
dispatcherObject.Dispatcher.Invoke(DispatcherPriority.DataBind, handler, this, e);
}
else
{
// Execute handler as is
handler(this, e);
}
}
}
}
}
}
I have also used the code below (which inherits from the above code) to support raising the CollectionChanged event when items inside the collection raise the PropertyChanged.
public class BaseViewableCollection<T> : BaseObservableCollection<T>
where T : INotifyPropertyChanged
{
// Constructors
public BaseViewableCollection() : base() { }
public BaseViewableCollection(IEnumerable<T> items) : base(items) { }
public BaseViewableCollection(List<T> items) : base(items) { }
// Event Handlers
private void ItemPropertyChanged(object sender, PropertyChangedEventArgs e)
{
var arg = new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Replace, sender, sender);
base.OnCollectionChanged(arg);
}
protected override void ClearItems()
{
foreach (T item in Items) { if (item != null) { item.PropertyChanged -= ItemPropertyChanged; } }
base.ClearItems();
}
protected override void InsertItem(int index, T item)
{
if (item != null) { item.PropertyChanged += ItemPropertyChanged; }
base.InsertItem(index, item);
}
protected override void RemoveItem(int index)
{
if (Items[index] != null) { Items[index].PropertyChanged -= ItemPropertyChanged; }
base.RemoveItem(index);
}
protected override void SetItem(int index, T item)
{
if (item != null) { item.PropertyChanged += ItemPropertyChanged; }
base.SetItem(index, item);
}
}
Cross-Thread Collection Synchronization
Binding a ListBox to an ObservableCollection means the ListBox is updated when the data changes, because INotifyCollectionChanged is implemented.
The drawback of ObservableCollection is that the data can only be changed by the thread that created it.
SynchronizedCollection does not have the multi-threading problem, but it does not update the ListBox because it does not implement INotifyCollectionChanged; and even if you implement INotifyCollectionChanged yourself, CollectionChanged(this, e) can only be raised from the thread that created the collection, so it does not work.
Conclusion
- If you want a list that auto-updates but is single-threaded, use ObservableCollection.
- If you want a list that does not auto-update but is multi-threaded, use SynchronizedCollection.
- If you want both, use .NET Framework 4.5, BindingOperations.EnableCollectionSynchronization and ObservableCollection, like this:
// Create the lock object somewhere
private static object _lock = new object();
...
// Enable cross-thread access to this collection elsewhere
BindingOperations.EnableCollectionSynchronization(_persons, _lock);
The Complete Sample
http://10rem.net/blog/2012/01/20/wpf-45-cross-thread-collection-synchronization-redux
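A compact sketch of that pattern, assuming a WPF window with a ListBox named PersonList and a hypothetical Person item type:
private readonly ObservableCollection<Person> _persons = new ObservableCollection<Person>();
private static readonly object _lock = new object();
public MainWindow()
{
    InitializeComponent();
    // Tell WPF which lock protects the collection so bindings can marshal access safely.
    BindingOperations.EnableCollectionSynchronization(_persons, _lock);
    PersonList.ItemsSource = _persons;
}
private void AddFromBackgroundThread(Person p)
{
    // Any thread may now modify the collection, as long as it takes the same lock.
    lock (_lock)
    {
        _persons.Add(p);
    }
}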

C# Wait until the object has been inserted to the list (freeze application)

I have stumbled upon an annoying problem. I have a timer that iterates over a list (generics) every 250 ms (1-5 objects in the list). If an object meets the criteria, it should be added to an "ignore list". The problem is that sometimes the object is not added to the ignore list and is therefore let through (even though it should have been added 250 ms earlier).
So my question is: what is the best way to make sure the loop blocks until the object has been added to the list (currently everything is running on the main thread)?
// this method is called every 250ms
void check()
{
foreach (GetAll g in GetAllInList)
{
InIgnoreList(g);
}
}
// InIgnoreList(g)
void InIgnoreList(GetAll g)
{
foreach(var b in g.List())
{
if(IgnoreList.Exists(x => x.Name == g.Name))
break;
// New method where I add it, but sometimes the IgnoreList.Exists check lets it through
}
}
Thanks :)
It sounds like you're encountering a race condition - it may be that check() is being called before the last call to check() has actually finished.
If you do something like this it would eliminate the race condition...
readonly object lockObject = new Object();
void check()
{
lock (lockObject)
{
foreach (GetAll g in GetAllInList)
{
InIgnoreList(g);
}
}
}
Alternatively, you could use lock in combination with a bool flag to determine if the check() method is still running before calling it when the timer event fires.
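That flag-based variant could look something like this (a sketch; isChecking is a made-up field name, and GuardedCheck is what the timer callback would invoke instead of calling check() directly):
readonly object lockObject = new Object();
bool isChecking;
void GuardedCheck()
{
    lock (lockObject)
    {
        // A previous check() is still running - skip this tick entirely.
        if (isChecking) return;
        isChecking = true;
    }
    try
    {
        check();
    }
    finally
    {
        lock (lockObject) { isChecking = false; }
    }
}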
Disable the timer before calling your check() code (or at the beginning of check(), as it's not clear what the context is here), and enable it when check() returns (or at the end of check()). This stops the timer from being fired again before the code completes.

c# System.Threading.Timer wait for dispose

I have a class which uses a Timer. This class implements IDisposable. I would like to wait in the Dispose method until the timer cannot fire again.
I implement it like this:
private void TimerElapsed(object state)
{
// do not execute the callback if one callback is still executing
if (Interlocked.Exchange(ref _timerIsExecuting, 1) == 1)
return;
try
{
_callback();
}
finally
{
Interlocked.Exchange(ref _timerIsExecuting, 0);
}
}
public void Dispose()
{
if (Interlocked.Exchange(ref _isDisposing, 1) == 1)
return;
_timer.Dispose();
// wait until the callback is not executing anymore, if it was
while (_timerIsExecuting == 1)
{ }
_callback = null;
}
Is this implementation correct? I think it mainly depends on whether reading _timerIsExecuting is an atomic operation. Or would I have to use a WaitHandle? To me it seems that would make the code unnecessarily complicated...
I am not an expert in multi-threading, so would be happy about any advice.
Unless you have a reason not to use System.Threading.Timer: it has a Dispose method that takes a WaitHandle, and you can do something like this:
private readonly Timer Timer;
private readonly ManualResetEvent TimerDisposed;
public Constructor()
{
Timer = ....;
TimerDisposed = new ManualResetEvent(false);
}
public void Dispose()
{
Timer.Dispose(TimerDisposed);
TimerDisposed.WaitOne();
TimerDisposed.Dispose();
}
Generally one can use the Timer.Dispose(WaitHandle) method, but there are a few pitfalls:
Pitfalls
Support for multiple-disposal (see here)
If an object's Dispose method is called more than once, the object must ignore all calls after the first one. The object must not throw an exception if its Dispose method is called multiple times. Instance methods other than Dispose can throw an ObjectDisposedException when resources are already disposed.
Timer.Dispose(WaitHandle) can return false. It does so if the timer has already been disposed (I had to look at the source code). In that case it won't set the WaitHandle - so don't wait on it! (Note: multiple disposal should be supported.)
Not handling a WaitHandle timeout. Seriously - what are you waiting for if you're not interested in a timeout?
A concurrency issue, as mentioned here on MSDN, where an ObjectDisposedException can occur during (not after) disposal.
Timer.Dispose(WaitHandle) does not work properly with the -Slim wait handles, or at least not as one would expect. For example, the following does not work (it blocks forever):
using(var manualResetEventSlim = new ManualResetEventSlim())
{
timer.Dispose(manualResetEventSlim.WaitHandle);
manualResetEventSlim.Wait();
}
Solution
Well, the title is a bit "bold" I guess, but below is my attempt to deal with the issue - a wrapper which handles double disposal, timeouts, and ObjectDisposedException. It does not provide all of the methods on Timer, though - feel free to add them.
internal class Timer
{
private readonly TimeSpan _disposalTimeout;
private readonly System.Threading.Timer _timer;
private bool _disposeEnded;
public Timer(TimeSpan disposalTimeout)
{
_disposalTimeout = disposalTimeout;
_timer = new System.Threading.Timer(HandleTimerElapsed);
}
public event Signal Elapsed;
public void TriggerOnceIn(TimeSpan time)
{
try
{
_timer.Change(time, Timeout.InfiniteTimeSpan);
}
catch (ObjectDisposedException)
{
// race condition with Dispose can cause trigger to be called when underlying
// timer is being disposed - and a change will fail in this case.
// see
// https://msdn.microsoft.com/en-us/library/b97tkt95(v=vs.110).aspx#Anchor_2
if (_disposeEnded)
{
// we still want to throw the exception in case someone really tries
// to change the timer after disposal has finished
// of course there's a slight race condition here where we might not
// throw even though disposal is already done.
// since the offending code would most likely already be "failing"
// unreliably i personally can live with increasing the
// "unreliable failure" time-window slightly
throw;
}
}
}
private void HandleTimerElapsed(object state)
{
Elapsed.SafeInvoke();
}
public void Dispose()
{
using (var waitHandle = new ManualResetEvent(false))
{
// returns false on second dispose
if (_timer.Dispose(waitHandle))
{
if (!waitHandle.WaitOne(_disposalTimeout))
{
throw new TimeoutException(
"Timeout waiting for timer to stop. (...)");
}
_disposeEnded = true;
}
}
}
}
Why do you need to dispose of the Timer manually? Isn't there any other solution? As a rule of thumb, you're better off leaving this job to the GC. – LMB
I am developing an ASP.NET application. The timer is disposed in the Dispose call of the HttpApplication. The reason: a callback could access the logging system, so I have to ensure that the timer is disposed before the logging system is. – SACO
It looks like you have a Producer/Consumer pattern, using the timer as the producer.
What I'd do in this case is create a ConcurrentQueue and have the timer enqueue jobs to the queue, and then use another thread to safely read and execute the jobs.
This would prevent one job from overlapping another, which seems to be a requirement in your code, and it also solves the timer-disposal problem, since you could check whether yourQueue == null before adding jobs.
This is the best design.
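A rough sketch of that shape (names are made up; BlockingCollection is used here as a convenient wrapper around a ConcurrentQueue so the consumer thread can simply block until work arrives):
private readonly BlockingCollection<Action> _jobs =
    new BlockingCollection<Action>(new ConcurrentQueue<Action>());
// Producer: the timer callback only enqueues work, it never runs it.
private void TimerElapsed(object state)
{
    if (!_jobs.IsAddingCompleted)
        _jobs.Add(() => _callback());
}
// Consumer: a single dedicated thread runs jobs one at a time, so they never overlap.
private void ConsumeJobs()
{
    foreach (var job in _jobs.GetConsumingEnumerable())
        job();
}
public void Dispose()
{
    _timer.Dispose();
    _jobs.CompleteAdding();   // lets ConsumeJobs drain the queue and exit
}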
Another simple, but not robust, solution is to run the callbacks in a try block. I don't recommend disposing of the Timer manually.

Reentrant Timer in Windows Service

I want to build a Windows service which should execute different methods at different times. It's not about accuracy at all.
I'm using a System.Timers.Timer and regulate the different methods to be executed within the event handler method with counters. That's working all right so far.
All of the methods access a COM port, making it necessary to grant access to only one method at a time. But since the methods can take some time to finish, the timer might tick again and want to execute another method while the COM port is still occupied. In this case, the event can and should just be dismissed.
Simplified down to one method, my ElapsedEventHandler method looks something like the following (try-catch and the different methods excluded here).
Note: While this runs perfectly on my Win7 x64, it struggles on a Win7 x86 machine with pretty much the very same software installed whenever the method to be executed takes a long time. The timer won't tick any more, and no exception is thrown. Nothing! My question now is: am I doing the access-control and timer part right, so that I can focus on other things? I'm just not that familiar with timers and especially threading.
private static int m_synchPoint=0;
private System.Timers.Timer timerForData = null;
public MyNewService()
{
timerForData = new System.Timers.Timer();
timerForData.Interval = 3000;
timerForData.Elapsed += new ElapsedEventHandler(Timer_tick);
}
//Initialize all the timers, and start them
protected override void OnStart(string[] args)
{
timerForData.AutoReset = true;
timerForData.Enabled = true;
timerForData.Start();
}
//Event-handled method
private void Timer_tick(object sender, System.Timers.ElapsedEventArgs e)
{
////safe to perform event - no other thread is running the event?
if (System.Threading.Interlocked.CompareExchange(ref m_synchPoint, 1, 0) == 0)
{
//via different else-ifs basically always this is happening here, except switching aMethod,bMethod...
processedevent++;
Thread workerThread = new Thread(aMethod);
workerThread.Start();
workerThread.Join();
m_synchPoint=0;
}
else
{
//Just dismiss the event
skippedevent++;
}
}
Thank you very much in advance!
Any help is greatly appreciated!
I would recommend using System.Threading.Timer for this functionality. You can disable the timer when it executes, process your data, then re-enable the timer.
EDIT:
I think it makes more sense to use System.Threading.Timer because there isn't really a reason you need to drop the timer on a design surface, which is pretty much the only reason to use System.Timers.Timer. I really wish MS would remove it anyway; it just wraps System.Threading.Timer, which isn't all that difficult to use in the first place.
Yes, you do risk a problem with re-entrancy, which is why I specified changing the timeout to Timeout.Infinite. You won't have this re-entrancy problem if you construct the timer with Timeout.Infinite.
public class MyClass
{
private System.Threading.Timer _MyTimer;
public MyClass()
{
_MyTimer = new Timer(OnElapsed, null, 0, Timeout.Infinite);
}
public void OnElapsed(object state)
{
_MyTimer.Change(Timeout.Infinite, Timeout.Infinite);
Console.WriteLine("I'm working");
_MyTimer.Change(1000, Timeout.Infinite);
}
}
If you just want to skip the method invocation while the previous invocation hasn't finished, use Monitor.TryEnter(lockObject) before calling your method.
EDIT:
Here's an example -
public class OneCallAtATimeClass
{
private object syncObject;
public OneCallAtATimeClass()
{
syncObject = new object();
}
public void CalledFromTimer()
{
if (Monitor.TryEnter(syncObject))
{
try
{
InternalImplementation();
}
finally
{
Monitor.Exit(syncObject);
}
}
}
private void InternalImplementation()
{
//Do some logic here
}
}
You can try this:
When the timer fires, disable the timer.
When the task is complete, re-enable the timer...possibly in the Finally clause.
You correctly use CompareExchange to test and set the m_synchPoint field when doing the initial check. You incorrectly use direct assignment to reset the value to 0 at the end of the method. You should use Interlocked.Exchange instead to reset the value to 0. As a side note, you should also change m_synchPoint to an instance field -- it should not be static.
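Putting those suggestions together, the body of the handler might look like this (a sketch based on the original code, with m_synchPoint now an instance field):
if (System.Threading.Interlocked.CompareExchange(ref m_synchPoint, 1, 0) == 0)
{
    try
    {
        processedevent++;
        Thread workerThread = new Thread(aMethod);
        workerThread.Start();
        workerThread.Join();
    }
    finally
    {
        // Reset with Interlocked.Exchange rather than a plain assignment.
        System.Threading.Interlocked.Exchange(ref m_synchPoint, 0);
    }
}
else
{
    // Just dismiss the event
    skippedevent++;
}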
