Suggest data structure/synchronization method - C#

I have a data source that generates ~1 million events per second from 15-20 threads.
The event callback handler implements a caching strategy to record changes to objects from the events (it is guaranteed that updates for individual objects always originate from the same thread).
Every 100ms I want to pause/lock the event handler and publish a snapshot of the latest state of all modified objects.
A mock implementation of what I currently have looks like:
private static void OnHandleManyEvents(FeedHandlerSource feedHandlerSource, MyObject myObject, ChangeFlags flags)
{
    if (objectsWithChangeFlags[myObject.ID] == ChangeFlags.None)
    {
        // First change since the last pump run: register the object for publication.
        UpdateStorage updateStorage = feedHandlerSourceToUpdateStorage[(int)feedHandlerSource];
        lock (updateStorage.MyObjectUpdateLock)
        {
            objectsWithChangeFlags[myObject.ID] |= flags;
            updateStorage.MyUpdateObjects.Add(myObject);
        }
    }
    else
    {
        // Already registered: just accumulate the flags without taking the lock.
        objectsWithChangeFlags[myObject.ID] |= flags;
    }
}
// runs on a separate thread
private static void MyObjectPump()
{
    while (true)
    {
        foreach (UpdateStorage updateStorage in feedHandlerSourceToUpdateStorage)
        {
            lock (updateStorage.MyObjectUpdateLock)
            {
                if (updateStorage.MyUpdateObjects.Count == 0)
                    continue;
                foreach (MyObject myObject in updateStorage.MyUpdateObjects)
                {
                    // do some stuff
                    objectsWithChangeFlags[myObject.ID] = ChangeFlags.None;
                }
                updateStorage.MyUpdateObjects.Clear();
            }
        }
        Thread.Sleep(100);
    }
}
The problem with this code, although it performs well, is a potential race condition.
Specifically, it is possible for the pump thread to set an object's ChangeFlags to None while an event callback concurrently sets them back to a changed state without taking the lock; in that case the object would never be added to the MyUpdateObjects list and would remain stale forever.
The alternative is to lock on every event callback, which induces too much of a performance hit.
How would you solve this problem?
--- UPDATE ---
I believe I have solved this problem now by introducing a "CacheItem" that is stored in the objectsWithChangeFlags array and tracks whether an object is currently "Enqueued".
I've also tested ConcurrentQueue for enqueuing/dequeuing as Holger suggested below, but it shows slightly lower throughput than just using a lock (I'm guessing because the contention rate is not very high and the overhead of an uncontended lock is very low).
private class CacheItem
{
    public ChangeFlags Flags;
    public bool IsEnqueued;
}

private static void OnHandleManyEvents(MyObject myObject, ChangeFlags flags)
{
    Interlocked.Increment(ref _countTotalEvents);
    Interlocked.Increment(ref _countTotalEventsForInterval);
    CacheItem f = objectsWithChangeFlags[myObject.Id];
    if (!f.IsEnqueued)
    {
        Interlocked.Increment(ref _countEnqueue);
        f.Flags |= flags;
        f.IsEnqueued = true;
        lock (updateStorage.MyObjectUpdateLock)
            updateStorage.MyObjectUpdates.Add(myObject);
    }
    else
    {
        Interlocked.Increment(ref _countCacheHits);
        f.Flags |= flags;
    }
}
private static void QuotePump()
{
    while (true)
    {
        lock (updateStorage.MyObjectUpdateLock)
        {
            foreach (var obj in updateStorage.MyObjectUpdates)
            {
                Interlocked.Increment(ref _countDequeue);
                CacheItem f = objectsWithChangeFlags[obj.Id];
                f.Flags = ChangeFlags.None;
                f.IsEnqueued = false;
            }
            updateStorage.MyObjectUpdates.Clear();
        }
        _countQuotePumpRuns++;
        Thread.Sleep(75);
    }
}

In similar scenarios (a logging thread) I used the following strategy:
The events were enqueued to a ConcurrentQueue. The snapshot thread checks once in a while whether the queue is non-empty. If it is not empty, it reads everything out of it until it is empty, executes the changes and then takes the snapshot. After that it can either sleep for a while, or immediately check whether there is more to process and sleep only if there is not.
With this approach your events are executed in batches and your snapshot is taken after every batch.
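A minimal sketch of that batching loop, with illustrative names (MyEvent, Apply and TakeSnapshot are placeholders rather than anything from the question):
using System;
using System.Collections.Concurrent;
using System.Threading;

class SnapshotPump
{
    private readonly ConcurrentQueue<MyEvent> _queue = new ConcurrentQueue<MyEvent>();

    // Called from the 15-20 feed threads; ConcurrentQueue needs no extra lock here.
    public void OnEvent(MyEvent e) => _queue.Enqueue(e);

    // Runs on the dedicated snapshot thread.
    public void PumpLoop()
    {
        while (true)
        {
            if (_queue.IsEmpty)
            {
                Thread.Sleep(100); // nothing arrived; sleep and re-check
                continue;
            }
            // Drain the current batch, apply all changes, then snapshot once.
            while (_queue.TryDequeue(out MyEvent e))
                Apply(e);
            TakeSnapshot();
        }
    }

    private void Apply(MyEvent e) { /* record the change */ }
    private void TakeSnapshot() { /* publish the modified objects */ }
}

class MyEvent { }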
About caching:
I could imagine a (Concurrent)Dictionary in which you look up the object in the event handler. If it is not found, it is loaded (from wherever it comes from). AFTER event processing it is added (even if it was already found in there). The snapshot method removes all objects it snapshots from the dictionary BEFORE it snapshots them. Then either the event's change will be in the snapshot, or the object will still be in the dictionary after the event.
This should work with your premise that all changes to one object come from the same thread. The dictionary will only contain the objects that have changed since the last snapshot run.
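A sketch of that hand-off under the stated premise; ChangeFlags and MyObject mirror the question (the flag values are invented), while Load and Publish are placeholders:
using System.Collections.Concurrent;

[System.Flags] enum ChangeFlags { None = 0, Price = 1, Size = 2 } // illustrative values
class MyObject { public ChangeFlags Flags; }

class ChangeCache
{
    private readonly ConcurrentDictionary<int, MyObject> _dirty =
        new ConcurrentDictionary<int, MyObject>();

    // Event handler: look up, mutate, then (re-)add AFTER processing.
    public void OnEvent(int id, ChangeFlags flags)
    {
        if (!_dirty.TryGetValue(id, out MyObject obj))
            obj = Load(id);      // wherever the object comes from
        obj.Flags |= flags;      // apply the change first
        _dirty[id] = obj;        // then add, even if it was already present
    }

    // Snapshot thread: remove BEFORE snapshotting, so a concurrent event
    // either made it into this snapshot or re-adds the object for the next run.
    public void Snapshot()
    {
        foreach (int id in _dirty.Keys)
        {
            if (_dirty.TryRemove(id, out MyObject obj))
                Publish(obj);
        }
    }

    private MyObject Load(int id) => new MyObject();
    private void Publish(MyObject obj) { /* emit the snapshot */ }
}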

Could you have two objectsWithChangeFlags collections, and switch the reference every 100ms? That way you wouldn't have to lock anything, as the pump thread would be working on an "offline" collection.
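A rough sketch of the swap idea (illustrative only; note the caveat that a writer which read the old reference just before the swap may still append to it, so the pump needs some grace period or quiescence scheme before treating the offline collection as exclusively its own):
using System.Collections.Concurrent;
using System.Threading;

class MyObject { public int ID; } // stand-in for the question's MyObject

class DoubleBufferPump
{
    // The "online" collection that the event threads write to.
    private ConcurrentBag<MyObject> _active = new ConcurrentBag<MyObject>();

    // Called from the event threads.
    public void Record(MyObject obj) => _active.Add(obj);

    // Called every 100 ms from the pump thread.
    public void Pump()
    {
        // Atomically swap in a fresh collection; the old one goes "offline".
        ConcurrentBag<MyObject> offline =
            Interlocked.Exchange(ref _active, new ConcurrentBag<MyObject>());
        // Caveat: a writer that captured the old reference just before the
        // swap may still be adding to "offline" at this point.
        foreach (MyObject obj in offline)
        {
            // publish the snapshot for obj ...
        }
    }
}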


Ensure events raised in correct order from outside a critical section

Consider the following sample class:
class MyClass
{
    private object syncRoot = new object();
    private int value;

    public event Action<int> SomethingOccurred;

    public void UpdateSomething()
    {
        int newValue;
        lock (syncRoot)
        {
            // ... Do some stuff that modifies some state of the object.
            newValue = ++value;
        }
        // How to ensure that raising these events is done in the correct order?
        SomethingOccurred?.Invoke(newValue);
    }
}
In the class above, the events may not be raised in the same order that the value was updated, since the raise happens outside of the lock statement. The question is, what would be the best way to raise these events outside of the lock statement while ensuring that they are raised in the correct order (i.e. in this case producing the sequence 1, 2, 3, 4...)?
The best idea I've come up with is essentially to have a ConcurrentQueue or similar to which the values are added, and to have a separate thread raise the events based on the values in the queue. But I would prefer not to have a separate thread allocated just for raising these events. Is there a smarter way to accomplish this?
Edit:
My first idea was to have a concurrent queue, and use the following code for raising the event:
int result;
while (m_events.TryDequeue(out result))
    SomethingOccurred?.Invoke(result);
The problem with that, of course, is that it does not guarantee the order either, since multiple threads would dequeue items concurrently and basically the same problem as before persists.
I could place another lock around the event-raising, but this would cause the same undesired blocking as raising the events from inside the lock in the first place.
So is there a lock-free way to guarantee that only a single thread is dequeueing and raising events in this case? Or is there another way that is better altogether?
Edit 2:
To illustrate a usage, I want to guarantee that the following code would output the sequence 1 through 20 in order:
MyClass myClass = new MyClass();
myClass.SomethingOccurred += i =>
{
    Thread.Sleep(100);
    Console.WriteLine(i);
};
Parallel.ForEach(Enumerable.Range(1, 20), i =>
    myClass.UpdateSomething());
I don't care if the event handler is called from different threads, but it must not be called concurrently, and it must be called in the correct order.
The best solution I have so far would be the following, which is likely not a very efficient use of threading resources:
class MyClass
{
    private object syncRoot = new object();
    private int value;
    private readonly ConcurrentQueue<int> m_events = new ConcurrentQueue<int>();
    private object eventRaiserLock = new object();

    public event Action<int> SomethingOccurred;

    public void UpdateSomething()
    {
        int newValue;
        lock (syncRoot)
        {
            // ... Do some stuff that modifies some state of the object.
            newValue = ++value;
            m_events.Enqueue(newValue);
        }
        // How to ensure that raising these events is done in the correct order?
        RaiseEvents();
    }

    private void RaiseEvents()
    {
        Task.Run(() =>
        {
            lock (eventRaiserLock)
            {
                int result;
                while (m_events.TryDequeue(out result))
                    SomethingOccurred?.Invoke(result);
            }
        });
    }
}
If you need ordering, you need synchronization - it's that simple.
It's not entirely obvious what you're trying to do here - the event you're raising is effectively raised on some random thread. Obviously, that's not going to preserve any ordering, since it's perfectly possible for the events to be running concurrently (since UpdateSomething is called from multiple threads).
A queue is a simple solution, and you don't need to waste any extra threads either - however, you might want to think about the ordering of the UpdateSomething calls anyway - are you sure the items are going to be queued in the proper order in the first place?
Now, ConcurrentQueue is a bit tricky in that it doesn't give you a nice, awaitable interface. One option is to use the Dataflow library - a BufferBlock does pretty much what you want. Otherwise, you can write your own asynchronous concurrent queue - though again, doing this well is quite complicated. You could use something like this as a starting point:
async Task Main()
{
    var queue = new AsyncConcurrentQueue<int>();
    var task = DequeueAllAsync(queue, i => Console.WriteLine(i));
    queue.Enqueue(1);
    queue.Enqueue(2);
    queue.Enqueue(3);
    queue.Enqueue(4);
    queue.Finish();
    await task;
}

private async Task DequeueAllAsync<T>(AsyncConcurrentQueue<T> queue, Action<T> action)
{
    try
    {
        while (true)
        {
            var value = await queue.TakeAsync(CancellationToken.None);
            action(value);
        }
    }
    catch (OperationCanceledException) { }
}

public class AsyncConcurrentQueue<T>
{
    private readonly ConcurrentQueue<T> _internalQueue;
    private readonly SemaphoreSlim _newItem;
    private int _isFinished;

    public AsyncConcurrentQueue()
    {
        _internalQueue = new ConcurrentQueue<T>();
        _newItem = new SemaphoreSlim(0);
    }

    public void Enqueue(T value)
    {
        _internalQueue.Enqueue(value);
        _newItem.Release();
    }

    public void Finish()
    {
        Interlocked.Exchange(ref _isFinished, 1);
        _newItem.Release();
    }

    public async Task<T> TakeAsync(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            await _newItem.WaitAsync(token);
            token.ThrowIfCancellationRequested();

            T result;
            if (_internalQueue.TryDequeue(out result))
            {
                return result;
            }

            Interlocked.MemoryBarrier();
            if (_isFinished == 1) throw new OperationCanceledException();
        }
        throw new OperationCanceledException(token);
    }
}
This ensures that you have a queue with a global ordering that you can keep filling, and which is emptied continually whenever there are any items. The removal (and execution of the action) is in order of adding, and it happens on a single worker thread. When there are no items to dequeue, that thread is returned to the thread pool, so you're not wasting a thread blocking.
Again, this is still a relatively naïve solution. You want to add more error handling at the very least (according to your needs - e.g. perhaps the action(value) call should be in a try-catch so that a failed action doesn't stop your dequeue loop?).
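As a concrete illustration of the BufferBlock route mentioned above - a minimal sketch, assuming the System.Threading.Tasks.Dataflow NuGet package is referenced (not the answer's own code):
using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow; // System.Threading.Tasks.Dataflow package

class BufferBlockExample
{
    static async Task Main()
    {
        var buffer = new BufferBlock<int>();

        // Single consumer task: FIFO order, handlers never run concurrently.
        var consumer = Task.Run(async () =>
        {
            while (await buffer.OutputAvailableAsync())
            {
                int value = await buffer.ReceiveAsync();
                Console.WriteLine(value);
            }
        });

        for (int i = 1; i <= 20; i++)
            buffer.Post(i); // producers may post from any thread

        buffer.Complete(); // signal that no more items are coming
        await consumer;    // consumer drains the rest, then the loop exits
    }
}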

"If two threads are using Pulse and Wait to interact, this could result in a deadlock."

Basically, Load() is for the producer (there is one and only one dispatcher thread that loads _tickQueue) and Unload() is for the consumer (there is one and only one dedicated thread executing the function). _tickQueue is a regular Queue protected by a lock (I am using the queue itself as the argument to lock()). Surprisingly, it caused a deadlock.
public void Load(Tick tick)
{
    lock (_tickQueue)
    {
        while (_tickQueue.Count >= CapSize)
        {
            Monitor.Wait(_tickQueue);
        }
        _tickQueue.Enqueue(tick);
        if (!_receivedTickCounts.ContainsKey(tick.Underlier))
        {
            _receivedTickCounts.Add(tick.Underlier, 0);
        }
        Console.WriteLine("Received {1} ticks for {0}", tick.Underlier, ++_receivedTickCounts[tick.Underlier]);
        Monitor.Pulse(_tickQueue);
    }
}
private void Unload()
{
    while (true)
    {
        try
        {
            Tick tick;
            lock (_tickQueue)
            {
                while (_tickQueue.Count == 0)
                {
                    Monitor.Wait(_tickQueue);
                }
                tick = _tickQueue.Dequeue();
                Monitor.Pulse(_tickQueue);
            }
            Persist(tick);
        }
        catch (Exception e)
        {
            Console.WriteLine(e);
        }
    }
}
The comment in the title was found here:
https://msdn.microsoft.com/en-us/library/system.threading.monitor.pulse%28v=vs.110%29.aspx
My understanding of the "Important" paragraph is: the Monitor class not maintaining state (the way a ResetEvent does) implies deadlock. A specific example is given: when two threads interact using Pulse and Wait, if one thread pulses while the other thread is not yet on the wait queue, the pulse is lost and deadlock happens.
Can someone SPECIFICALLY (e.g. by giving a scenario under which deadlock happens) point out where I went wrong in my program? I don't see any scenario that could possibly lead to deadlock.
Thanks.
===================EDIT====================
Specifically, I'm interested to know why the following coding pattern for monitors suddenly doesn't work - it must be related to the monitor implementation in .NET?
lock
    while (wait condition is met)
    {
        wait()
    }
    // critical section: doing work
    signal(); // or broadcast()
unlock
I suspect you are imposing an unending wait on both methods. You surround your Monitor calls with while loops that continually check a condition; for certain values of CapSize and _tickQueue.Count, both of your Load() and Unload() methods will wait forever. What isn't evident here is the value of CapSize - is it constant, or does it change? Is _tickQueue thread-safe?
What if we hit an error on tick = _tickQueue.Dequeue(); in Unload() while _tickQueue.Count is 0 and the Load() method is Wait()ing? Load() will wait forever.
I would avoid having your consumer method Pulse to notify the producer method that it is ready for more work. Your consumer should only wait when there is no more work for it to do (the queue is empty). Your producer is better suited to controlling its own work schedule, and pulsing the consumer when new work has been queued. Why not put the producer on a Timer?
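To make that concrete, a hypothetical sketch (ReadPendingTicks is invented for illustration; Tick and Load are from the question):
private Timer _producerTimer;

private void StartProducer()
{
    // Poll the feed every 50 ms on a thread-pool thread and enqueue whatever
    // arrived; Load() then pulses the waiting consumer.
    _producerTimer = new Timer(_ =>
    {
        foreach (Tick tick in ReadPendingTicks()) // hypothetical source
            Load(tick);
    }, null, 0, 50);
}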
In the end, I believe the supplied code simply has too many points of failure. Could I suggest an alternate implementation? This uses the thread-safe ConcurrentQueue collection and eliminates the issues discussed.
public class StackOverflowMonitorExample
{
    ConcurrentQueue<Tick> _tickQueue = new ConcurrentQueue<Tick>();
    object locker = new object();
    bool stopCondition = false;

    public void Load(Tick tick)
    {
        _tickQueue.Enqueue(tick);
        lock (locker)
        {
            Monitor.Pulse(locker);
        }
    }

    private void Unload()
    {
        while (!stopCondition)
        {
            try
            {
                Tick nextWorkItem = null;
                _tickQueue.TryDequeue(out nextWorkItem);
                if (nextWorkItem != null)
                {
                    Persist(nextWorkItem);
                }
                else
                {
                    lock (locker)
                    {
                        Monitor.Wait(locker);
                    }
                }
            }
            catch (Exception e)
            {
                Console.WriteLine(e);
            }
        }
    }
}
This eliminates the large locking sections, and removes most of the signals between the consumer and producer. The producer will only ever add new items to the queue and Pulse() to notify that new work is available. The consumer will loop and continue to work as long as items remain in the queue and the stop condition has not been met. If the queue count reaches 0, the consumer will wait for new queue entries.
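Hypothetical wiring for this class, assuming Unload is exposed through a public Start() wrapper (public void Start() => Unload(); is not part of the code above):
var example = new StackOverflowMonitorExample();

// Consumer: one dedicated background thread running the Unload loop.
var consumer = new Thread(example.Start) { IsBackground = true };
consumer.Start();

// Producer: the single dispatcher thread just enqueues; Load() pulses the consumer.
example.Load(tick); // "tick" is whatever the dispatcher received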

.NET GC Accessing a synchronised object from a finalizer

I recently read this article Safe Thread Synchronization as I was curious about the thread safety of calls made from a finaliser. I wrote the following code to test access to a static thread safe collection from a finaliser.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace GCThreadTest
{
    class Program
    {
        static class FinaliserCollection
        {
            private static Queue<int> s_ItemQueue = new Queue<int>();
            private static System.Object s_Lock = new System.Object();

            public static void AddItem(int itemValue)
            {
                lock (s_Lock)
                {
                    s_ItemQueue.Enqueue(itemValue);
                }
            }

            public static bool TryGetItem(out int item)
            {
                lock (s_Lock)
                {
                    if (s_ItemQueue.Count <= 0)
                    {
                        item = -1;
                        return false;
                    }
                    item = s_ItemQueue.Dequeue();
                    return true;
                }
            }
        }

        class FinaliserObject
        {
            private int m_ItemValue;

            public FinaliserObject(int itemValue)
            {
                m_ItemValue = itemValue;
            }

            ~FinaliserObject()
            {
                FinaliserCollection.AddItem(m_ItemValue);
            }
        }

        static void Main(string[] args)
        {
            int itemValueIn = 0;
            int itemValueOut = 0;
            while (itemValueOut < 10000)
            {
                System.Threading.ThreadPool.QueueUserWorkItem
                    (delegate(object value)
                    {
                        new FinaliserObject((int)value);
                        System.Threading.Thread.Sleep(5);
                    }, itemValueIn);
                itemValueIn = itemValueIn + 1;
                // This seems to stop the finaliser from
                // being called?
                // System.Threading.Thread.Sleep(5);
                int tempItemValueOut = -1;
                if (FinaliserCollection.TryGetItem(out tempItemValueOut))
                    itemValueOut = tempItemValueOut;
            }
            System.Console.WriteLine("Finished after {0} items created", itemValueOut);
            System.Console.ReadLine();
        }
    }
}
Without the 'Sleep' call in the while loop this code seems to run fine, but is it really safe from deadlocking? Would it ever be possible for a finaliser call to be made while a queued thread-pool item is accessing the static collection? Why does adding the 'Sleep' to the main thread's while loop appear to stop all finalisers from being called?
Wow. What the... This is the most bizarre piece of code I've ever seen. #.#
First of all, what finalizer call are you referring to? The only finalizer I see is the finalizer for the FinaliserObject, which will be called 10,000 times, and can be called independently of whatever's going on with the static collection. I.e., yes, those objects can be destroyed while other objects are being dequeued from the collection. This isn't an issue.
The static collection itself won't be cleaned up until the app itself exits.
Keep in mind that there's absolutely no guarantee when or if those finalizers will be called before the app itself exits. Your static collection could be completely empty when you exit.
Worse, you're assigning itemValueOut to whatever the last value you pull out of the queue is... which is NOT the number of items created, as you imply in your WriteLine(). Because those destructors are called in any possible order, you could theoretically add to the queue 10,000, 9,999, 9,998, ... 2, 1, in that order.
Which is further an issue, because you're removing from the queue 10,000 times, but on the last loop, it's very possible there won't be an object to dequeue, in which case you're guaranteed to get -1 for the number of items returned (even if the other 9,999 items worked successfully).
To answer your question, this code cannot deadlock. A deadlock would happen if AddItem() called TryGetItem(), but those locks are pretty much guaranteed to keep each other out of the static collection while adding or removing items.
Where you're tempting fate is that you can exit your app without all of the FinaliserObjects having added themselves to the queue. Meaning one of the finalizers could fire and try to add to the FinaliserCollection, but the FinaliserCollection has already been disposed. What you're doing in the finaliser is terrible.
But yes, a finalizer call can happen while you're calling FinaliserCollection.TryGetItem(). The finalizer will block and wait until TryGetItem() emerges from the lock(), at which point it will add another item. This is not an issue.
As for the sleep() command, you're probably just throwing the timing of the garbage collection off. Remember, your objects won't be collected/finalized until the GC decides it needs the resources.
Sorry for being so emphatic... I know you're just trying to test a concept but I really don't understand why you would want to do what you're trying to do in the finalizer. If there's really a legitimate goal here, doing it in the finalizer is not the correct answer.
Edit
From what I'm reading and what Sasha is saying, no, you will not have a deadlock. The finalizer thread may be blocked waiting for the lock, but the GC will not wait for the finalizer, and will thus unsuspend the threads, allowing the locks to be released.
In any case, this is a very strong argument for why you shouldn't be making calls like this in a finalizer... the finalizer is only for releasing unmanaged resources. Anything else is playing roulette.

Timer and IDisposable - Extra protection in Dispose?

I have a class that creates and uses a System.Threading.Timer, something like this:
using System.Collections.Generic;
using System.Threading;

public class MyClass : IDisposable
{
    private List<int> ints = new List<int>();
    private Timer t;

    public MyClass()
    {
        //Create timer in disabled state
        t = new Timer(this.OnTimer, null, Timeout.Infinite, Timeout.Infinite);
    }

    private void DisableTimer()
    {
        if (t == null) return;
        t.Change(Timeout.Infinite, Timeout.Infinite);
    }

    private void EnableTimer()
    {
        if (t == null) return;
        //Fire timer in 1 second and every second thereafter.
        t.Change(1000, 1000);
    }

    private void EnableTimer(long remainingTime)
    {
        if (t == null) return;
        t.Change(remainingTime, 1000);
    }

    // DoSomethingWithTheInts() and FigureOutHowMuchTimeIsLeft() omitted for brevity.

    private void OnTimer(object state)
    {
        lock (ints) //Added since original post
        {
            DisableTimer();
            DoSomethingWithTheInts();
            ints.Clear();
            //
            //Don't reenable the timer here since ints is empty. No need for timer
            //to fire until there is at least one element in the list.
            //
        }
    }

    public void Add(int i)
    {
        lock (ints) //Added since original post
        {
            DisableTimer();
            ints.Add(i);
            if (ints.Count > 10)
            {
                DoSomethingWithTheInts();
                ints.Clear();
            }
            if (ints.Count > 0)
            {
                EnableTimer(FigureOutHowMuchTimeIsLeft());
            }
        }
    }

    bool disposed = false;

    public void Dispose()
    {
        //Should I protect myself from the timer firing here?
        Dispose(true);
    }

    protected virtual void Dispose(bool disposing)
    {
        //Should I protect myself from the timer firing here?
        if (disposed) return;
        if (t == null) return;
        t.Dispose();
        disposed = true;
    }
}
EDIT - In my actual code I do have locks on the List in both Add and OnTimer. I accidentally left them out when I simplified my code for posting.
Essentially, I want to accumulate some data, processing it in batches. As I am accumulating, if I get 10 items OR it has been 1 second since I last processed the data, I will process what I have so far and clear my list.
Since Timer is Disposable, I have implemented the Disposable pattern on my class. My question is this: Do I need extra protection in either Dispose method to prevent any side effects of the timer event firing?
I could easily disable the timer in either or both Dispose methods. I understand that this would not necessarily make me 100% safe, as the timer could fire between the time that Dispose is called and the timer is disabled. Leaving that issue aside for the moment, would it be considered a best practice to guard the Dispose method(s), to the best of my ability, against the possibility of the timer executing?
Now that I think about it, I should probably also consider what I should do if the List is not empty in Dispose. In the usage pattern of the object it should be empty by then, but I guess that anything is possible. Let's say that there were items left in the List when Dispose is called; would it be good or bad to go ahead and try to process them? I suppose that I could put in this trusty old comment:
public void Dispose()
{
    if (ints != null && ints.Count > 0)
    {
        //Should never get here. Famous last words!
    }
}
Whether or not there are any items in the list is secondary. I am really only interested in finding out what the best practice is for dealing with potentially enabled Timers and Dispose.
If it matters, this code is actually in a Silverlight class library. It does not interact with the UI at all.
EDIT:
I found what looks like a pretty good solution here. One answer, by jsw, suggests protecting the OnTimer event with Monitor.TryEnter/Monitor.Exit, effectively putting the OnTimer code in a critical section.
Michael Burr posted what seems to be an even better solution, at least for my situation, of using a one-shot timer by setting the due time to the desired interval and setting the period to Timeout.Infinite.
For my work, I only want the timer to fire if at least one item has been added to the List. So, to begin with, my timer is disabled. Upon entering Add, disable the timer so that it does not fire during the Add. When an item is added, process the list (if necessary). Before leaving Add, if there are any items in the list (i.e. if the list has not been processed), enable the timer with the due time set to the remaining interval and the period set to Timeout.Infinite.
With a one-shot timer, it is not even necessary to disable the timer in the OnTimer event since the timer will not fire again anyway. I also don't have to enable the timer when I leave OnTimer as there will not be any items in the list. I will wait until another item is added to the list before enabling the one-shot timer again.
Thanks!
EDIT - I think that this is the final version. It uses a one-shot timer. The timer is enabled when the list goes from zero members to one member. If we hit the count threshold in Add, the items are processed and the timer is disabled. Access to the list of items is guarded by a lock.
I am on the fence as to whether a one-shot timer buys me much over a "normal" timer, other than that it will only fire when there are items in the list and only 1 second after the first item was added. If I use a normal timer, then this sequence could happen: add 11 items in rapid succession; adding the 11th causes the items to be processed and removed from the list; assume 0.5 seconds is left on the timer; add 1 more item; the timer will fire in approximately 0.5 seconds and the one item will be processed and removed. With a one-shot timer, the timer is re-enabled when the 1 item is added and will not fire until a full 1 second interval (more or less) has elapsed. Does it matter? Probably not. Anyway, here is a version that I think is reasonably safe and does what I want it to do.
using System.Collections.Generic;
using System.Threading;

public class MyClass : IDisposable
{
    private List<int> ints = new List<int>();
    private Timer t;

    public MyClass()
    {
        //Create timer in disabled state
        t = new Timer(this.OnTimer, null, Timeout.Infinite, Timeout.Infinite);
    }

    private void DisableTimer()
    {
        if (t == null) return;
        t.Change(Timeout.Infinite, Timeout.Infinite);
    }

    private void EnableTimer()
    {
        if (t == null) return;
        //Fire event in 1 second but no events thereafter (one-shot).
        t.Change(1000, Timeout.Infinite);
    }

    private void DoSomethingWithTheInts()
    {
        foreach (int i in ints)
        {
            Whatever(i); // illustrative processing
        }
    }

    private void OnTimer(object state)
    {
        lock (ints)
        {
            if (disposed) return;
            DoSomethingWithTheInts();
            ints.Clear();
        }
    }

    public void Add(int i)
    {
        lock (ints)
        {
            if (disposed) return;
            ints.Add(i);
            if (ints.Count > 10)
            {
                DoSomethingWithTheInts();
                ints.Clear();
            }
            if (ints.Count == 0)
            {
                DisableTimer();
            }
            else if (ints.Count == 1)
            {
                EnableTimer();
            }
        }
    }

    bool disposed = false;

    public void Dispose()
    {
        if (disposed) return;
        Dispose(true);
    }

    protected virtual void Dispose(bool disposing)
    {
        lock (ints)
        {
            DisableTimer();
            if (disposed) return;
            if (t == null) return;
            t.Dispose();
            disposed = true;
        }
    }
}
I see some serious problems with synchronization in this code. I think what you are trying to solve is a classic reader-writer problem. With the current approach you are quite likely to run into problems, such as: what if someone tries to modify the list while it is being processed?
I STRONGLY recommend using the parallel extensions for .NET or (when using .NET 3.5 or earlier) classes such as ReaderWriterLock, or even the simple lock keyword.
Also remember that System.Threading.Timer makes an asynchronous call, so OnTimer is called from a SEPARATE thread (from the .NET ThreadPool), so you definitely need some synchronization (such as locking the collection).
I'd have a look at the concurrent collections in .NET 4, or use some synchronization primitives in .NET 3.5 or earlier. Do not disable the timer in the Add method. Write correct code in OnTimer, like:
lock (daObject)
{
    if (list.Count > 10)
        DoSTHWithList();
}
This code is the simplest (although definitely not optimal) that should work. Similar code should also be added to the Add method (locking the collection).
Hope it helps; if not, msg me.
luke
You should not have to disable the timer during OnTimer, because you have a lock around the call; hence all the threads are waiting for the first one to finish...

Explain the code: C# locking feature and threads

I used this pattern in a few projects (this snippet of code is from CodeCampServer). I understand what it does, but I'm really interested in an explanation of this pattern. Specifically:
Why the double check of _dependenciesRegistered?
Why use lock (Lock) {}?
Thanks.
public class DependencyRegistrarModule : IHttpModule
{
    private static bool _dependenciesRegistered;
    private static readonly object Lock = new object();

    public void Init(HttpApplication context)
    {
        context.BeginRequest += context_BeginRequest;
    }

    public void Dispose() { }

    private static void context_BeginRequest(object sender, EventArgs e)
    {
        EnsureDependenciesRegistered();
    }

    private static void EnsureDependenciesRegistered()
    {
        if (!_dependenciesRegistered)
        {
            lock (Lock)
            {
                if (!_dependenciesRegistered)
                {
                    new DependencyRegistrar().ConfigureOnStartup();
                    _dependenciesRegistered = true;
                }
            }
        }
    }
}
This is the Double-checked locking pattern.
The lock statement ensures that the code inside the block will not run on two threads simultaneously.
Since a lock statement is somewhat expensive, the code checks whether it's already been initialized before entering the lock.
However, because a different thread might have initialized it just after the outer check, it needs to check again inside the lock.
Note that this is not the best way to do it.
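For reference, one commonly suggested alternative is to let Lazy<T> perform the thread-safe once-only initialization instead of hand-rolling the double check. A minimal sketch reusing the question's DependencyRegistrar (the wrapper shape is assumed, not from the original code):
private static readonly Lazy<object> _registration = new Lazy<object>(() =>
{
    new DependencyRegistrar().ConfigureOnStartup();
    return null; // only the side effect matters
});

private static void EnsureDependenciesRegistered()
{
    object _ = _registration.Value; // the factory runs exactly once, thread-safely
}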
The double-check is because two threads could hit EnsureDependenciesRegistered at the same time, both find it isn't registered, and thus both attempt to get the lock.
lock(Lock) is essentially a form of mutex; only one thread can have the lock - the other must wait until the lock is released (at the end of the lock(...) {...} statement).
So in this scenario, a thread might (although unlikely) have been the second thread into the lock - so each must double-check in case it was the second, and the work has already been done.
It's a matter of performance.
The initial test lets it bail out quickly if the job is already done. At this point it does the potentially expensive lock but it has to check it again as another thread could have already registered it.
The double checked locking pattern is roughly:
you have an operation that you want to conditionally perform once
if (needsToDoSomething) {
    DoSomething();
    needsToDoSomething = false;
}
however, if you're running on two threads, both threads might check the flag, and perform the action, before they both set the flag to false. Therefore, you add a lock.
lock (Lock) {
    if (needsToDoSomething) {
        DoSomething();
        needsToDoSomething = false;
    }
}
however, taking a lock every time you run this code might be slow, so you decide, lets only try to take the lock when we actually need to.
if (needsToDoSomething)
    lock (Lock) {
        if (needsToDoSomething) {
            DoSomething();
            needsToDoSomething = false;
        }
    }
You can't remove the inner check, because once again, you have the problem that any check performed outside of a lock can possibly turn out to be true twice on two different threads.
The lock prevents two threads from running ConfigureOnStartup(). Between the if (!_dependenciesRegistered) and the point that ConfigureOnStartup() sets _dependenciesRegistered = true, another thread could check if it's registered. In other words:
Thread 1: _dependenciesRegistered == false
Thread 2: _dependenciesRegistered == false
Thread 1: ConfigureOnStartup() / _dependenciesRegistered = true;
Thread 2: Doesn't "see" that it's already registered, so runs ConfigureOnStartup() again.
