Let's imagine we have to synchronize read/write access to a shared resource. Multiple threads will access that resource for both reading and writing (mostly reads, occasionally writes). Let's also assume that each write always triggers a read operation (the object is observable).
For this example I'll imagine a class like this (forgive the syntax and style; it's just for illustration):
class Container {
public ObservableCollection<Operand> Operands;
public ObservableCollection<Result> Results;
}
I'm tempted to use a ReaderWriterLockSlim for this purpose; moreover, I'd put it at the Container level (imagine the object is not so simple, and one read/write operation may involve multiple objects):
public ReaderWriterLockSlim Lock;
Implementation of Operand and Result has no meaning for this example.
Now let's imagine some code that observes Operands and will produce a result to put in Results:
void AddNewOperand(Operand operand) {
    _container.Lock.EnterWriteLock();
    try {
        _container.Operands.Add(operand);
    }
    finally {
        _container.Lock.ExitWriteLock();
    }
}
Our hypothetical observer will do something similar, but to consume a new element: it'll call EnterReadLock() to get the operands and then EnterWriteLock() to add the result (let me omit the code for this). This will produce an exception because of recursion, but if I set LockRecursionPolicy.SupportsRecursion then I'll just open my code to deadlocks (from MSDN):
By default, new instances of ReaderWriterLockSlim are created with the LockRecursionPolicy.NoRecursion flag and do not allow recursion. This default policy is recommended for all new development, because recursion introduces unnecessary complications and makes your code more prone to deadlocks.
I repeat relevant part for clarity:
Recursion [...] makes your code more prone to deadlocks.
If I'm not wrong, with LockRecursionPolicy.SupportsRecursion, if from the same thread I ask for, let's say, a read lock and then someone else asks for a write lock, I'll get a deadlock, so what MSDN says makes sense. Moreover, recursion will measurably degrade performance too (and that's not what I want if I'm using ReaderWriterLockSlim instead of ReaderWriterLock or Monitor).
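One way to avoid recursion entirely in the read-then-write case is an upgradeable read lock. Below is a minimal sketch only; ReadNextOperand, Compute, and AddResult are hypothetical placeholders for the observer's actual work, not names from the original code:

```csharp
// A single thread may hold an upgradeable read lock and promote it to a write
// lock without tripping LockRecursionPolicy.NoRecursion. Only one thread at a
// time may hold the upgradeable lock, so use it sparingly.
private readonly ReaderWriterLockSlim _lock =
    new ReaderWriterLockSlim(LockRecursionPolicy.NoRecursion);

void ConsumeOperandAndAddResult()
{
    _lock.EnterUpgradeableReadLock();
    try
    {
        Operand operand = ReadNextOperand();   // hypothetical read step
        _lock.EnterWriteLock();                // legal upgrade, not recursion
        try
        {
            AddResult(Compute(operand));       // hypothetical write step
        }
        finally
        {
            _lock.ExitWriteLock();
        }
    }
    finally
    {
        _lock.ExitUpgradeableReadLock();
    }
}
```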
Question(s)
Finally, my questions are (please note I'm not looking for a discussion about general synchronization mechanisms; I want to know what's wrong in this producer/observable/observer scenario):
What's better in this situation? Avoiding ReaderWriterLockSlim in favor of Monitor (even though in real-world code reads will greatly outnumber writes)?
Giving up such coarse synchronization? This may even yield better performance, but it'll make the code much more complicated (not in this example, of course, but in the real world).
Should I just make notifications (from observed collection) asynchronous?
Something else I can't see?
I know there is no single best synchronization mechanism, so the tool we use must be the right one for our case, but I wonder if there are some best practices, or whether I'm just missing something very important between threads and observers (imagine using Microsoft Reactive Extensions, but the question is general, not tied to that framework).
Possible solutions?
What I would try is to make events (somehow) deferred:
1st solution
Each change won't fire any CollectionChanged event; it's kept in a queue. When the provider (the object that pushes data) has finished, it'll manually force the queue to be flushed (raising each event in sequence). This may be done in another thread or even in the caller thread (but outside the lock).
It may work, but it'll make everything less "automatic" (each change notification must be manually triggered by the producer itself; more code to write, more bugs all around).
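A minimal sketch of this "deferred events" idea follows. The names (_pending, RaiseCollectionChanged) are illustrative, and it assumes the collection has been changed so that adding items no longer raises events directly:

```csharp
// Changes are recorded under the write lock; notifications are raised later,
// outside the lock, so handlers may freely take read or write locks.
private readonly Queue<NotifyCollectionChangedEventArgs> _pending =
    new Queue<NotifyCollectionChangedEventArgs>();

public void AddOperandDeferred(Operand operand)
{
    _container.Lock.EnterWriteLock();
    try
    {
        _container.Operands.Add(operand);   // assumed not to raise directly
        _pending.Enqueue(new NotifyCollectionChangedEventArgs(
            NotifyCollectionChangedAction.Add, operand));
    }
    finally
    {
        _container.Lock.ExitWriteLock();
    }
}

public void FlushNotifications()
{
    // Called by the producer when it has finished, outside any lock.
    while (_pending.Count > 0)
        RaiseCollectionChanged(_pending.Dequeue());  // hypothetical raiser
}
```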
2nd solution
Another solution may be to provide a reference to our lock to the observable collection. If I wrap ReaderWriterLockSlim in a custom object (useful to hide it in an easy-to-use IDisposable object), I may add a ManualResetEvent to notify that all locks have been released; this way the collection itself may raise events (again, in the same thread or in another one).
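One possible shape for that IDisposable wrapper is sketched below. The class name, the ManualResetEventSlim choice, and the write-lock-only scope are all assumptions for illustration, not the original design:

```csharp
// Acquires the write lock on construction; on Dispose it releases the lock
// and signals an event the collection can wait on before raising its events.
public sealed class WriteLockToken : IDisposable
{
    private readonly ReaderWriterLockSlim _lock;
    private readonly ManualResetEventSlim _released;

    public WriteLockToken(ReaderWriterLockSlim rwLock, ManualResetEventSlim released)
    {
        _lock = rwLock;
        _released = released;
        _released.Reset();
        _lock.EnterWriteLock();
    }

    public void Dispose()
    {
        _lock.ExitWriteLock();
        _released.Set();   // collection now knows it is safe to raise events
    }
}
```

Usage would then be `using (new WriteLockToken(rwLock, releasedEvent)) { /* mutate */ }`, keeping lock release and notification tied together.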
3rd solution
Another idea could be simply to make events asynchronous. If an event handler needs a lock, it'll be suspended until its turn comes. What worries me here is the large number of threads that may be used (especially if they come from the thread pool).
Honestly, I don't know if any of these is applicable in a real-world application (personally, from a user's point of view, I prefer the second one, but it implies a custom collection for everything and makes the collection aware of threading, which I'd avoid if possible). I wouldn't like to make the code more complicated than necessary.
This sounds like quite the multi-threading pickle: working with recursion in this chain-of-events pattern while still avoiding deadlocks is challenging. You might want to consider designing around the problem entirely.
For example, you could make the addition of an operand asynchronous to the raising of the event:
private readonly BlockingCollection<Operand> _additions
= new BlockingCollection<Operand>();
public void AddNewOperand(Operand operand)
{
_additions.Add(operand);
}
And then have the actual addition happen in a background thread:
private void ProcessAdditions()
{
foreach(var operand in _additions.GetConsumingEnumerable())
{
_container.Lock.EnterWriteLock();
_container.Operands.Add(operand);
_container.Lock.ExitWriteLock();
}
}
public void Initialize()
{
var pump = new Thread(ProcessAdditions)
{
Name = "Operand Additions Pump"
};
pump.Start();
}
This separation sacrifices some consistency - code running after the add method won't know when the add has actually happened, and maybe that's a problem for your code. If so, this could be re-written to subscribe to the observation and use a Task to signal when the add completes:
public Task AddNewOperandAsync(Operand operand)
{
var tcs = new TaskCompletionSource<byte>();
// Compose an event handler for the completion of this task
NotifyCollectionChangedEventHandler onChanged = null;
onChanged = (sender, e) =>
{
// Is this the event for the operand we have added?
if (e.NewItems != null && e.NewItems.Contains(operand))
{
// Complete the task.
tcs.SetResult(0);
// Remove the event-handler.
_container.Operands.CollectionChanged -= onChanged;
}
};
// Hook in the handler.
_container.Operands.CollectionChanged += onChanged;
// Perform the addition.
_additions.Add(operand);
// Return the task to be awaited.
return tcs.Task;
}
The event-handler logic is raised on the background thread pumping the add messages, so there is no possibility of it blocking your foreground threads. If you await the add on the message-pump for the window, the synchronization context is smart enough to schedule the continuation on the message-pump thread as well.
Whether you go down the Task route or not, this strategy means that you can safely add more operands from an observable event without re-entering any locks.
I'm not sure if this is exactly the same issue, but when dealing with relatively small amounts of data (2k-3k entries), I have used the code below to facilitate cross-thread read/write access to collections bound to the UI. This code was originally found here.
public class BaseObservableCollection<T> : ObservableCollection<T>
{
// Constructors
public BaseObservableCollection() : base() { }
public BaseObservableCollection(IEnumerable<T> items) : base(items) { }
public BaseObservableCollection(List<T> items) : base(items) { }
// Event
public override event NotifyCollectionChangedEventHandler CollectionChanged;
// Event Handler
protected override void OnCollectionChanged(NotifyCollectionChangedEventArgs e)
{
// Be nice - use BlockReentrancy like MSDN said
using (BlockReentrancy())
{
if (CollectionChanged != null)
{
// Walk thru invocation list
foreach (NotifyCollectionChangedEventHandler handler in CollectionChanged.GetInvocationList())
{
DispatcherObject dispatcherObject = handler.Target as DispatcherObject;
// If the subscriber is a DispatcherObject and different thread
if (dispatcherObject != null && dispatcherObject.CheckAccess() == false)
{
// Invoke handler in the target dispatcher's thread
dispatcherObject.Dispatcher.Invoke(DispatcherPriority.DataBind, handler, this, e);
}
else
{
// Execute handler as is
handler(this, e);
}
}
}
}
}
}
I have also used the code below (which inherits from the above code) to support raising the CollectionChanged event when items inside the collection raise PropertyChanged.
public class BaseViewableCollection<T> : BaseObservableCollection<T>
where T : INotifyPropertyChanged
{
// Constructors
public BaseViewableCollection() : base() { }
public BaseViewableCollection(IEnumerable<T> items) : base(items) { }
public BaseViewableCollection(List<T> items) : base(items) { }
// Event Handlers
private void ItemPropertyChanged(object sender, PropertyChangedEventArgs e)
{
var arg = new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Replace, sender, sender);
base.OnCollectionChanged(arg);
}
protected override void ClearItems()
{
foreach (T item in Items) { if (item != null) { item.PropertyChanged -= ItemPropertyChanged; } }
base.ClearItems();
}
protected override void InsertItem(int index, T item)
{
if (item != null) { item.PropertyChanged += ItemPropertyChanged; }
base.InsertItem(index, item);
}
protected override void RemoveItem(int index)
{
if (Items[index] != null) { Items[index].PropertyChanged -= ItemPropertyChanged; }
base.RemoveItem(index);
}
protected override void SetItem(int index, T item)
{
if (item != null) { item.PropertyChanged += ItemPropertyChanged; }
base.SetItem(index, item);
}
}
Cross-Thread Collection Synchronization
Binding a ListBox to an ObservableCollection means the ListBox updates when the data changes, because INotifyCollectionChanged is implemented.
The defect of ObservableCollection is that the data can be changed only by the thread that created it.
SynchronizedCollection does not have the multi-threading problem, but it does not update the ListBox because it does not implement INotifyCollectionChanged; and even if you implement INotifyCollectionChanged yourself, CollectionChanged(this, e) can only be called from the thread that created the collection, so it does not work.
Conclusion
-If you want a list that is auto-updated but single-threaded, use ObservableCollection
-If you want a list that is not auto-updated but multi-threaded, use SynchronizedCollection
-If you want both, use .NET Framework 4.5's BindingOperations.EnableCollectionSynchronization with ObservableCollection, in this way:
// Create the lock object somewhere
private static object _lock = new object();
...
// Enable cross-thread access to this collection elsewhere
BindingOperations.EnableCollectionSynchronization(_persons, _lock);
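A slightly fuller sketch of that setup (WPF 4.5+) might look like the following. The Person type, window class, and method names are illustrative assumptions; only the EnableCollectionSynchronization call itself is from the snippet above:

```csharp
// EnableCollectionSynchronization must be called on the UI thread; afterwards,
// any thread may mutate the collection as long as it takes the same lock.
private readonly ObservableCollection<Person> _persons =
    new ObservableCollection<Person>();
private static readonly object _lock = new object();

public MainWindow()
{
    InitializeComponent();
    BindingOperations.EnableCollectionSynchronization(_persons, _lock);
    DataContext = this;
}

// Called from a background/worker thread:
private void AddFromWorker(Person p)
{
    lock (_lock)
    {
        _persons.Add(p);   // WPF takes the same lock when reading for binding
    }
}
```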
The Complete Sample
http://10rem.net/blog/2012/01/20/wpf-45-cross-thread-collection-synchronization-redux
Related
In my application, some value can change at any time. A lot of components need to do something when this value changes. The number of components changes during the usage of the application (depending on what the user opens). Currently this is implemented with events: when value changes, an event is raised, on which all interested components are subscribed.
Of course, to avoid memory leaks, these components unsubscribe on dispose.
When we stress-test our application with lots of components (a few million), one of the biggest bottlenecks (>90% CPU time during high loads) is these subscriptions: MyApp.ValueChanged -= TheValueChanged; takes very long (multiple seconds).
I assume this is, because there are a lot of subscribers, and this unsubscribe needs to find the correct subscriber in the list of subscribers (worst case: a million time searching in a list of a million items).
Now, my question: What would be a good way to improve this?
I was thinking about weak event handlers, so unsubscribing isn't necessary, but there may be better solutions.
GC is the culprit. Period.
First of all, the default implementation of strong event handlers is not optimized for many listeners. To get decent performance, consider implementing the event handler explicitly. Using a hash set we can improve on the default implementation which internally uses a flat list.
The fix
For your event, implement add and remove using a HashSet. Note that this implementation is not thread-safe. You might want to add a locking mechanism if multiple threads will be using the event.
class Publisher : INotifyPropertyChanged
{
private HashSet<PropertyChangedEventHandler> propertyChangedHandlers =
new HashSet<PropertyChangedEventHandler>();
public event PropertyChangedEventHandler PropertyChanged
{
add => propertyChangedHandlers.Add(value);
remove => propertyChangedHandlers.Remove(value);
}
public void Signal()
{
var args = new PropertyChangedEventArgs(null);
foreach (var handler in propertyChangedHandlers)
{
handler(this, args);
}
}
public void RemoveSubscribers(IEnumerable<Listener> listeners)
{
foreach (var listener in listeners)
{
listener.Unsubscribe(this);
}
}
}
This will perform a lot better. But why? Sure, removing items from a hash set is a lot faster than traversing a flat list, but the list isn't that slow either. The extreme CPU time actually comes from GC. When unsubscribing an event, the list of handlers is re-allocated. The old list will then, quite soon, be collected by the garbage collector.
Analysis
In your case, you have 1,000,000 event handlers. So each time you unsubscribe an event handler, a list of N items (where N is near 1,000,000) is unreferenced and a new list of N-1 handlers is allocated. Do this 100 times, and the GC will have lots of memory to collect:
100 * 1000000 * 8 bytes = ~800 MB
We can easily prove this using the GC.TryStartNoGCRegion API. The sample below tries to unsubscribe 20 event handlers, but since this operation will indeed allocate more than the requested 128 MB, it will fail:
Unhandled Exception: System.InvalidOperationException: Allocated
memory exceeds specified memory for NoGCRegion mode at
System.GC.EndNoGCRegionWorker()
class Publisher : INotifyPropertyChanged
{
public event PropertyChangedEventHandler PropertyChanged;
protected virtual void OnPropertyChanged(string propertyName = null)
{
PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
}
}
class Listener
{
public void Subscribe(Publisher publisher)
{
publisher.PropertyChanged += OnPropertyChanged;
}
public void Unsubscribe(Publisher publisher)
{
publisher.PropertyChanged -= OnPropertyChanged;
}
private static void OnPropertyChanged(object sender, PropertyChangedEventArgs e)
{
Console.WriteLine($"OnPropertyChanged called for {nameof(Listener)} {sender}");
}
}
class Program
{
static void Main(string[] args)
{
var publisher = new Publisher();
var listeners = Enumerable.Range(0, 1000000)
.Select(p => new Listener())
.ToList();
foreach (var listener in listeners)
{
listener.Subscribe(publisher);
}
var watch = new System.Diagnostics.Stopwatch();
watch.Start();
const int toRemove = 20;
if (GC.TryStartNoGCRegion(128L * 1024L * 1024L, true))
{
try
{
for (int i = 0; i < toRemove; i++)
{
listeners[i].Unsubscribe(publisher);
}
}
finally
{
watch.Stop();
var time = watch.ElapsedMilliseconds;
Console.WriteLine($"Removing {toRemove} handlers: {time} ms");
GC.EndNoGCRegion();
}
}
}
}
If we run a memory profiler, we can see that removing event handlers does indeed cause allocations and GC.
See MulticastDelegate.cs(360) for more details.
I found this answer, which suggest another possible improvement: https://stackoverflow.com/a/24239037/1851717
However, some experimentation led me to the decision to go with my first thought: use a weak event manager, to skip unsubscribing altogether. I've written a custom weak event manager, as there are some extra optimizations I could do, making use of some application-specific characteristics.
So in general, summarizing the comments, and solutions, when your event unsubscribing is a bottleneck, check these things:
Is it really the bottleneck? Check if GC'ing could be the issue.
Can a refactor avoid the event/unsubscribe?
Check https://stackoverflow.com/a/24239037/1851717
Consider using a weak event manager, to avoid unsubscribing
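For the last point, a hedged sketch using WPF's built-in generic WeakEventManager (available in .NET 4.5+; the Publisher class is the one from the snippets above) might look like this:

```csharp
// Subscribe through a weak event manager instead of += / -=. The manager holds
// only a weak reference to the listener, so no explicit unsubscribe is needed
// and no large invocation list has to be re-allocated on removal.
WeakEventManager<Publisher, PropertyChangedEventArgs>.AddHandler(
    publisher,
    nameof(Publisher.PropertyChanged),
    OnPropertyChanged);

void OnPropertyChanged(object sender, PropertyChangedEventArgs e)
{
    // react to the change
}
```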
Consider the following sample class:
class MyClass
{
private object syncRoot = new object();
private int value;
public event Action<int> SomethingOccurred;
public void UpdateSomething()
{
int newValue;
lock (syncRoot)
{
// ... Do some stuff that modifies some state of the object.
newValue = ++value;
}
// How to ensure that raising these events are done in the correct order?
SomethingOccurred?.Invoke(newValue);
}
}
In the class above, the events may apparently not occur in the same order that the value was updated, since the invocation is done outside of the lock statement. The question is, what would be the best way to raise these events outside of the lock statement, but ensuring that they are raised in the correct order (i.e. in this case producing the sequence 1, 2, 3, 4...)?
The best idea I've come up with is to essentially have a ConcurrentQueue or similar to which the values are added, and have a separate thread raise the events based on the values in the queue. But I would prefer not to have a separate thread allocated just for raising these events. Is there a smarter way to accomplish this?
Edit:
My first idea was to have a concurrent queue, and use the following code for raising the event:
int result;
while (m_events.TryDequeue(out result))
SomethingOccurred?.Invoke(result);
The problem with that of course is that it does not guarantee the order either, since multiple threads would dequeue stuff concurrently and the same problem as before persists basically.
I could place another lock around the event-raising, but this would cause the same undesired blocking as raising the events from inside the lock in the first place.
So is there a lock-free way to guarantee only a single thread is dequeueing and raising events in this case? Or is there another way that is better altogether?
Edit 2:
To illustrate a usage, I want to guarantee that the following code would output the sequence 1 through 20 in order:
MyClass myClass = new MyClass();
myClass.SomethingOccurred += (i) =>
{
Thread.Sleep(100); Console.WriteLine(i);
};
Parallel.ForEach(Enumerable.Range(1, 20), i =>
myClass.UpdateSomething());
I don't care if the event handler is called from different threads, but it must not be called concurrently, and it must be called with in the correct order.
The best solution I have so far would be the following which is likely not very efficient use of threading resources:
class MyClass
{
private object syncRoot = new object();
private int value;
private readonly ConcurrentQueue<int> m_events = new ConcurrentQueue<int>();
private object eventRaiserLock = new object();
public event Action<int> SomethingOccurred;
public void UpdateSomething()
{
int newValue;
lock (syncRoot)
{
// ... Do some stuff that modifies some state of the object.
newValue = ++value;
m_events.Enqueue(newValue);
}
// How to ensure that raising these events are done in the correct order?
RaiseEvents();
}
private void RaiseEvents()
{
Task.Run(() =>
{
lock (eventRaiserLock)
{
int result;
while (m_events.TryDequeue(out result))
SomethingOccurred?.Invoke(result);
}
});
}
}
If you need ordering, you need synchronization - it's that simple.
It's not entirely obvious what you're trying to do here - the event you're raising is effectively raised on some random thread. Obviously, that's not going to preserve any ordering, since it's perfectly possible for the events to be running concurrently (since UpdateSomething is called from multiple threads).
A queue is a simple solution, and you don't need to waste any extra threads either - however, you might want to think about the ordering of the UpdateSomething calls anyway - are you sure the items are going to be queued in the proper order in the first place?
Now, ConcurrentQueue is a bit tricky in that it doesn't give you a nice, awaitable interface. One option is to use the Dataflow library - a BufferBlock does pretty much what you want. Otherwise, you can write your own asynchronous concurrent queue - though again, doing this well is quite complicated. You could use something like this as a starting point:
async Task Main()
{
var queue = new AsyncConcurrentQueue<int>();
var task = DequeueAllAsync(queue, i => Console.WriteLine(i));
queue.Enqueue(1);
queue.Enqueue(2);
queue.Enqueue(3);
queue.Enqueue(4);
queue.Finish();
await task;
}
private async Task DequeueAllAsync<T>(AsyncConcurrentQueue<T> queue, Action<T> action)
{
try
{
while (true)
{
var value = await queue.TakeAsync(CancellationToken.None);
action(value);
}
}
catch (OperationCanceledException) { }
}
public class AsyncConcurrentQueue<T>
{
private readonly ConcurrentQueue<T> _internalQueue;
private readonly SemaphoreSlim _newItem;
private int _isFinished;
public AsyncConcurrentQueue()
{
_internalQueue = new ConcurrentQueue<T>();
_newItem = new SemaphoreSlim(0);
}
public void Enqueue(T value)
{
_internalQueue.Enqueue(value);
_newItem.Release();
}
public void Finish()
{
Interlocked.Exchange(ref _isFinished, 1);
_newItem.Release();
}
public async Task<T> TakeAsync(CancellationToken token)
{
while (!token.IsCancellationRequested)
{
await _newItem.WaitAsync(token);
token.ThrowIfCancellationRequested();
T result;
if (_internalQueue.TryDequeue(out result))
{
return result;
}
Interlocked.MemoryBarrier();
if (_isFinished == 1) throw new OperationCanceledException();
}
throw new OperationCanceledException(token);
}
}
This ensures that you have a queue with a global ordering that you can keep filling, and which is emptied continually whenever there are any items. The removal (and execution of the action) is in order of adding, and it happens on a single worker thread. When there are no items to dequeue, that thread is returned to the thread pool, so you're not wasting a thread blocking.
Again, this is still a relatively naïve solution. You want to add more error handling at the very least (according to your needs - e.g. perhaps the action(value) call should be in a try-catch so that a failed action doesn't stop your dequeue loop?).
In my program I use a background worker thread to open files. The main structure of my program is a data-bound TreeView. During the file read-in process, TreeView nodes are added dynamically to the TreeView as they are read in from the file. These TreeView nodes are bound to containers called UICollections (a custom class inherited from ObservableCollection<T>). I've created the UICollection<T> class to make sure that CollectionViews of this type never have their SourceCollections changed from the background worker thread. I do this by changing a property in UICollection called IsNotifying to false.
My UICollection<T> class:
public class UICollection<T> : ObservableCollection<T>
{
public UICollection()
{
IsNotifying = true;
}
public UICollection(IEnumerable<T> source)
{
this.Load(source);
}
public bool IsNotifying { get; set; }
protected override void OnPropertyChanged(PropertyChangedEventArgs e)
{
if (IsNotifying)
base.OnPropertyChanged(e);
}
//Does not raise unless IsNotifying = true
protected override void OnCollectionChanged(NotifyCollectionChangedEventArgs e)
{
if (IsNotifying)
base.OnCollectionChanged(e);
}
//Used whenever I re-order a collection
public virtual void Load(IEnumerable<T> items)
{
if (items == null)
throw new ArgumentNullException("items");
this.IsNotifying = false;
foreach (var item in items)
this.Add(item);
//ERROR created on this line because IsNotifying is always set to true
this.IsNotifying = true;
this.Refresh();
}
public Action<T> OnSelectedItemChanged { get; set; }
public Func<T, bool> GetDefaultItem { get; set; }
public void Refresh()
{
OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Reset));
}
}
With that being said, I am having problems implementing this UICollection<T> with my control structure, which involves adding UICollections from a background worker thread.
For clarity, my program moves as follows:
Is a file being opened?: YES -> go into Background worker thread
In background worker thread: Do we need to create new UICollections?: YES -> go to method in UIThread that does so (iterate as needed)
Close thread.
The main concept that needs to be understood is that UICollection.IsNotifying has to be set to false if the background worker thread is open. I have no problem doing this for collections that are already known about, but for the dynamic ones I run into problems.
Sample of what my Background worker thread does:
private void openFile()
{
//Turn off CollectionChanged for known Collections
KnownCollections1.IsNotifying = false;
KnownCollections2.IsNotifying = false; //... and so on
//Do we need to create new collections? YES -> Call to AddCollection in UIThread
//Refresh known collections
App.Current.Dispatcher.BeginInvoke((Action)(() => KnownCollections1.Refresh()));
App.Current.Dispatcher.BeginInvoke((Action)(() => KnownCollections2.Refresh())); //... and so on
//If new collections exist find them and refresh them...
}
Method in UIThread that adds collections to TreeView:
public void AddCollection(string displayName, int locationValue)
{
node.Children.Add(CreateLocationNode(displayName, locationValue)); //Add to parent node
for (int i = 0; i < node.Children.Count(); i++)
{
//make sure IsNotifying = false for newly added collection
if (node.Children[i].locationValue == locationValue)
node.Children[i].Children.IsNotifying = false;
}
//Order the collection based on numerical value
var ordered = node.Children.OrderBy(n => n.TreeView_LocValue).ToList();
node.Children.Clear();
node.Children.Load(ordered); //Pass to UICollection class -- *RUNS INTO ERROR*
}
With all of that, one of two things will happen... if I comment out the line this.IsNotifying = true;, an exception will blow up in OnCollectionChanged because it gets raised while the background thread is open. If I leave the line as is, the collection will never reflect in the view because OnCollectionChanged never gets raised, notifying the view. What do I need to do to allow the creation of these collections without running into these errors? I'm guessing the problem is either in my AddCollection() function, or in the UICollection<T> class.
If I understand you correctly, you are manipulating a collection (or nested collection) on a background thread while the same collection (or a 'parent' collection) is being used as an items source in your UI. This isn't safe, even if you disable change notifications. There are other things, such as user-initiated sorting, expanding a tree node, container recycling due to virtualization, etc., that can cause a collection to be requeried. If that happens while you are updating the collection on another thread, the behavior is undefined. For example, you could trigger a collection to be iterated at the same time an insertion on another thread causes the underlying list to be resized, potentially resulting in null or duplicate entries being read. Whenever you share mutable data between two threads, you need to synchronize reads and writes, and since you don't control the WPF internals doing the reading, you can't assume it's safe to do concurrent writing of any kind. That includes modifying objects within a UI-bound collection from another thread.
If you need to manipulate a collection on a background thread, take a snapshot of the original, perform whatever modifications you need, then marshal yourself back onto the UI thread to commit the changes (either by replacing the original entirely, or clearing and repopulating the collection). I use this technique to safely perform background sorting, grouping, and filtering on grid views with large data sets. But if you do this, be careful to avoid modifying items contained within the collection, as they may still be referenced by your UI. It may also be necessary to detect any changes that occur on the UI thread which may invalidate your background updates, in which case you will need to discard your changes when you marshal yourself back to the UI thread, take another snapshot, and begin again (or come up with a more elaborate way to reconcile the two sets of changes).
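The snapshot-and-commit pattern described above can be sketched as follows. This is a minimal illustration under assumptions: _items is a UI-bound ObservableCollection, the method starts on the dispatcher thread (so the continuation after await resumes there), and the Name sort key is hypothetical:

```csharp
// 1. Snapshot on the UI thread; 2. do the heavy work off-thread against the
// snapshot only; 3. commit the result back on the UI thread.
public async Task ResortInBackgroundAsync()
{
    var snapshot = _items.ToList();                       // UI thread

    var sorted = await Task.Run(() =>                     // background thread
        snapshot.OrderBy(i => i.Name).ToList());

    // Back on the dispatcher thread after the await: safe to touch _items.
    _items.Clear();
    foreach (var item in sorted)
        _items.Add(item);
}
```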
I have a class that creates and uses a System.Threading.Timer, something like this:
using System.Threading;
public class MyClass : IDisposable
{
private List<int> ints = new List<int>();
private Timer t;
public MyClass()
{
//Create timer in disabled state
t = new Timer(this.OnTimer, null, Timeout.Infinite, Timeout.Infinite);
}
private void DisableTimer()
{
if (t == null) return;
t.Change(Timeout.Infinite, Timeout.Infinite);
}
private void EnableTimer()
{
if (t == null) return;
//Fire timer in 1 second and every second thereafter.
t.Change(1000, 1000);
}
private void EnableTimer(long remainingTime)
{
if (t == null) return;
t.Change(remainingTime, 1000);
}
private void OnTimer(object state)
{
lock (ints) //Added since original post
{
DisableTimer();
DoSomethingWithTheInts();
ints.Clear();
//
//Don't reenable the timer here since ints is empty. No need for timer
//to fire until there is at least one element in the list.
//
}
}
public void Add(int i)
{
lock(ints) //Added since original post
{
DisableTimer();
ints.Add(i);
if (ints.Count > 10)
{
DoSomethingWithTheInts();
ints.Clear();
}
if (ints.Count > 0)
{
EnableTimer(FigureOutHowMuchTimeIsLeft());
}
}
}
bool disposed = false;
public void Dispose()
{
//Should I protect myself from the timer firing here?
Dispose(true);
}
protected virtual void Dispose(bool disposing)
{
//Should I protect myself from the timer firing here?
if (disposed) return;
if (t == null) return;
t.Dispose();
disposed = true;
}
}
EDIT - In my actual code I do have locks on the List in both Add and OnTimer. I accidentally left them out when I simplified my code for posting.
Essentially, I want to accumulate some data, processing it in batches. As I am accumulating, if I get 10 items OR it has been 1 second since I last processed the data, I will process what I have so far and clear my list.
Since Timer is Disposable, I have implemented the Disposable pattern on my class. My question is this: Do I need extra protection in either Dispose method to prevent any side effects of the timer event firing?
I could easily disable the timer in either or both Dispose methods. I understand that this would not necessarily make me 100% safe, as the timer could fire between the time that Dispose is called and the timer is disabled. Leaving that issue aside for the moment, would it be considered a best practice to guard the Dispose method(s), to the best of my ability, against the possibility of the timer executing?
Now that I think about it, I should probably also consider what I should do if the List is not empty in Dispose. In the usage pattern of the object is should be empty by then, but I guess that anything is possible. Let's say that there were items left in the List when Dispose is called, would it be good or bad to go ahead and try to process them? I suppose that I could put in this trusty old comment:
public void Dispose()
{
if (ints != null && ints.Count > 0)
{
//Should never get here. Famous last words!
}
}
Whether or not there are any items in the list is secondary. I am really only interested in finding out what the best practice is for dealing with potentially enabled Timers and Dispose.
If it matters, this code is actually in a Silverlight class library. It does not interact with the UI at all.
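One common way to harden Dispose against an in-flight callback, assuming the platform exposes the Timer.Dispose(WaitHandle) overload (it exists on the full framework; Silverlight may not have it), is sketched below. That overload signals the wait handle only after all queued callbacks have completed:

```csharp
// Block Dispose until any running timer callback has finished, so no callback
// can observe a half-disposed object afterwards.
protected virtual void Dispose(bool disposing)
{
    if (disposed) return;
    if (disposing && t != null)
    {
        using (var callbacksDone = new ManualResetEvent(false))
        {
            if (t.Dispose(callbacksDone))  // returns false if already disposed
                callbacksDone.WaitOne();   // wait for in-flight callbacks
        }
        t = null;
    }
    disposed = true;
}
```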
EDIT:
I found what looks like a pretty good solution here. One answer, by jsw, suggests protecting the OnTimer event with Monitor.TryEnter/Monitor.Exit, effectively putting the OnTimer code in a critical section.
Michael Burr posted what seems to be an even better solution, at least for my situation, of using a one-shot timer by setting the due time to the desired interval and setting the period to Timeout.Infinite.
For my work, I only want the timer to fire if at least one item has been added to the List. So, to begin with, my timer is disabled. Upon entering Add, disable the timer so that it does not fire during the Add. When an item is added, process the list (if necessary). Before leaving Add, if there are any items in the list (i.e. if the list has not been processed), enable the timer with the due time set to the remaining interval and the period set to Timeout.Infinite.
With a one-shot timer, it is not even necessary to disable the timer in the OnTimer event since the timer will not fire again anyway. I also don't have to enable the timer when I leave OnTimer as there will not be any items in the list. I will wait until another item is added to the list before enabling the one-shot timer again.
Thanks!
EDIT - I think that this is the final version. It uses a one-shot timer. The timer is enabled when the list goes from zero members to one member. If we hit the count threshold in Add, the items are processed and the timer is disabled. Access to the list of items is guarded by a lock. I am on the fence as to whether a one-shot timer buys me much over a "normal" timer, other than that it will only fire when there are items in the list and only 1 second after the first item was added. If I use a normal timer, then this sequence could happen: Add 11 items in rapid succession. Adding the 11th causes the items to be processed and removed from the list. Assume 0.5 seconds is left on the timer. Add 1 more item. The timer will fire in approximately 0.5 seconds and the one item will be processed and removed. With a one-shot timer, the timer is re-enabled when the 1 item is added and will not fire until a full 1 second interval (more or less) has elapsed. Does it matter? Probably not. Anyway, here is a version that I think is reasonably safe and does what I want it to do.
using System;
using System.Collections.Generic;
using System.Threading;

public class MyClass : IDisposable
{
    private List<int> ints = new List<int>();
    private Timer t;
    private bool disposed = false;

    public MyClass()
    {
        //Create timer in disabled state
        t = new Timer(this.OnTimer, null, Timeout.Infinite, Timeout.Infinite);
    }

    private void DisableTimer()
    {
        if (t == null) return;
        t.Change(Timeout.Infinite, Timeout.Infinite);
    }

    private void EnableTimer()
    {
        if (t == null) return;
        //Fire one event in 1 second but no events thereafter (one-shot).
        t.Change(1000, Timeout.Infinite);
    }

    private void DoSomethingWithTheInts()
    {
        foreach (int i in ints)
        {
            Whatever(i); //Placeholder for the real per-item processing
        }
    }

    private void OnTimer(object state)
    {
        lock (ints)
        {
            if (disposed) return;
            DoSomethingWithTheInts();
            ints.Clear();
        }
    }

    public void Add(int i)
    {
        lock (ints)
        {
            if (disposed) return;
            ints.Add(i);
            if (ints.Count > 10)
            {
                DoSomethingWithTheInts();
                ints.Clear();
            }
            if (ints.Count == 0)
            {
                DisableTimer();
            }
            else if (ints.Count == 1)
            {
                EnableTimer();
            }
        }
    }

    public void Dispose()
    {
        if (disposed) return;
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        lock (ints)
        {
            DisableTimer();
            if (disposed) return;
            if (t == null) return;
            t.Dispose();
            disposed = true;
        }
    }
}
I see some serious problems with synchronization in this code. I think what you are trying to solve is a classic reader-writer problem. With the current approach it is highly probable that you will run into problems, such as: what happens if someone tries to modify the list while it is being processed?
I STRONGLY recommend using the parallel extensions for .NET or (when using .NET 3.5 or earlier) classes such as ReaderWriterLock or even the simple lock keyword.
Also remember that System.Threading.Timer makes an asynchronous call, so OnTimer is called from a SEPARATE thread (from the .NET ThreadPool), so you definitely need some synchronization (such as locking the collection).
I'd have a look at the concurrent collections in .NET 4, or use some synchronization primitives in .NET 3.5 or earlier. Do not disable the timer in the Add method. Write correct code in OnTimer, like:
lock (daObject)
{
    if (list.Count > 10)
        DoSTHWithList();
}
This code is the simplest (although definitely not optimal) that should work. Also, similar code should be added to the Add method (locking the collection).
Hope it helps, if not msg me.
luke
You should not have to disable the timer during OnTimer because you have a lock around the call; hence all the threads are waiting for the first one to finish...
I'm working on a Silverlight 3.0 project that is completely filled with async calls to a web service. At the main page it fetches some needed data, using 3 async calls, in order for the application to work correctly. Right now the application does not disable any controls while executing those calls, meaning a user can interact with it without that needed data. I need to disable the whole grid, and only after all 3 async calls are finished enable the grid again.
What's the best practice for doing this.
These are my calls:
client.GetAllAsync();
client.GetAllCompleted += new EventHandler<GetAllCompletedEventArgs>(client_GetAllCompleted);
client.GetActualAsync();
client.GetActualCompleted += new EventHandler<GetActualCompletedEventArgs>(client_GetActualCompleted);
client.GetSomeAsync();
client.GetSomeCompleted += new EventHandler<GetSomeCompletedEventArgs>(client_GetSomeCompleted);
Seems like a lot of work just to queue up some user interactivity.
I typically provide an application-wide interface:
public interface IAppGlobalState
{
    void BeginAsync();
    void EndAsync();
}
In the implementation, I'll do this:
public partial class MainShell : UserControl, IAppGlobalState
{
    private int _queue;
    private object _mutex = new Object();

    public void BeginAsync()
    {
        Monitor.Enter(_mutex);
        try
        {
            if (_queue++ == 0)
            {
                VisualStateManager.GoToState(this, "BusyState", true);
            }
        }
        finally
        {
            Monitor.Exit(_mutex);
        }
    }

    public void EndAsync()
    {
        Monitor.Enter(_mutex);
        try
        {
            if (--_queue == 0)
            {
                VisualStateManager.GoToState(this, "IdleState", true);
            }
        }
        finally
        {
            Monitor.Exit(_mutex);
        }
    }
}
Of course, if you have heavy multi-threading then you'll use Interlocked on those, but most of the time you'll be invoking it from the same thread. Then, you simply do:
GlobalState.BeginAsync();
client.FirstAsyncCall();
GlobalState.BeginAsync();
client.SecondAsyncCall();
And then on the return, just:
GlobalState.EndAsync();
If you have nested calls, no problem, this method unwinds them.
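For the heavily multi-threaded case mentioned above, an Interlocked-based variant of the counter might look like this. This is a sketch under my own assumptions: `GoToBusy` and `GoToIdle` are hypothetical stand-ins for the `VisualStateManager.GoToState` calls, which in Silverlight must ultimately run on the UI thread:

```csharp
using System.Threading;

private int _queue;

public void BeginAsync()
{
    // Only the 0 -> 1 transition switches the UI to the busy state;
    // nested calls just bump the counter.
    if (Interlocked.Increment(ref _queue) == 1)
        GoToBusy(); // hypothetical: dispatch the "BusyState" transition
}

public void EndAsync()
{
    // Only the 1 -> 0 transition switches back to idle.
    if (Interlocked.Decrement(ref _queue) == 0)
        GoToIdle(); // hypothetical: dispatch the "IdleState" transition
}
```

The counter semantics are the same as the Monitor version: the visual state only changes on the first Begin and the last matching End.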
Make a unit of work item in your Silverlight application.
This UOW raises an event when finished.
Bind that event to your GUI code.
Use signaling or a simple counter in your unit of work to raise the 'finished' event.
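The counter flavor of those steps could be sketched like this (all names here are illustrative, not from an existing library; the count of pending calls is passed in up front):

```csharp
using System;
using System.Threading;

public class UnitOfWork
{
    private int _pending;

    // Raised exactly once, when the last pending call completes.
    public event EventHandler Finished;

    public UnitOfWork(int pendingCalls)
    {
        _pending = pendingCalls;
    }

    // Call this from each Async*Completed handler.
    public void CallCompleted()
    {
        if (Interlocked.Decrement(ref _pending) == 0)
        {
            var handler = Finished;
            if (handler != null)
                handler(this, EventArgs.Empty);
        }
    }
}
```

Usage for the question's three service calls might be: create `new UnitOfWork(3)`, subscribe `Finished` to a handler that re-enables the root control, and invoke `CallCompleted()` at the end of each of the three completed handlers. Note that `Finished` fires on whichever thread the last completion arrives on, so the handler may need to marshal back to the UI thread.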