Shared Object Pool without Thread.Sleep? - C#

I have developed an "object pool" and cannot seem to implement it without using Thread.Sleep(), which I believe is "bad practice".
This relates to my other question "Is there a standard way of implementing a proprietary connection pool in .net?". The idea behind the object pool is similar to the one behind the connection pool used for database connections. However, in my case I am using it to share a limited resource in a standard ASP.NET Web Service (running in IIS6). This means that many threads will be requesting access to this limited resource. The pool dishes out these objects (a "Get"), and once all the available pool objects have been used, the next thread requesting one simply waits a set amount of time for one of these objects to become available again (a thread does a "Put" once done with its object). If an object does not become available in this set time, a timeout error occurs.
Here is the code:
public class SimpleObjectPool
{
private const int cMaxGetTimeToWaitInMs = 60000;
private const int cMaxGetSleepWaitInMs = 10;
private object fSyncRoot = new object();
private Queue<object> fQueue = new Queue<object>();
private SimpleObjectPool()
{
}
private static readonly SimpleObjectPool instance = new SimpleObjectPool();
public static SimpleObjectPool Instance
{
get
{
return instance;
}
}
public object Get()
{
object aObject = null;
for (int i = 0; i < (cMaxGetTimeToWaitInMs / cMaxGetSleepWaitInMs); i++)
{
lock (fSyncRoot)
{
if (fQueue.Count > 0)
{
aObject = fQueue.Dequeue();
break;
}
}
System.Threading.Thread.Sleep(cMaxGetSleepWaitInMs);
}
if (aObject == null)
throw new Exception("Timeout waiting for object from pool");
return aObject;
}
public void Put(object aObject)
{
lock (fSyncRoot)
{
fQueue.Enqueue(aObject);
}
}
}
To use it, one would do the following:
public void ExampleUse()
{
PoolObject lTestObject = (PoolObject)SimpleObjectPool.Instance.Get();
try
{
// Do something...
}
finally
{
SimpleObjectPool.Instance.Put(lTestObject);
}
}
Now the question I have is: How do I write this so I get rid of the Thread.Sleep()?
(Why I want to do this is because I suspect that it is responsible for the "false" timeouts I am getting in my testing. My test application has an object pool with 3 objects in it. It spins up 12 threads, and each thread gets an object from the pool 100 times. If the thread gets an object from the pool, it holds on to it for 2,000 ms; if it does not, it goes to the next iteration. Now logic dictates that 9 threads will be waiting for an object at any point in time. 9 x 2,000 ms is 18,000 ms, which is the maximum time any thread should have to wait for an object. My Get timeout is set to 60,000 ms, so no thread should ever time out. However, some do, so something is wrong, and I suspect it's the Thread.Sleep.)
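For reference, the test described corresponds roughly to the following sketch (illustrative only, not the actual test code):
public static void RunPoolTest()
{
    var threads = new List<Thread>();
    for (int t = 0; t < 12; t++)
    {
        var thread = new Thread(() =>
        {
            for (int i = 0; i < 100; i++)
            {
                try
                {
                    object o = SimpleObjectPool.Instance.Get();
                    try
                    {
                        Thread.Sleep(2000); // hold the object for 2,000 ms
                    }
                    finally
                    {
                        SimpleObjectPool.Instance.Put(o);
                    }
                }
                catch (Exception ex)
                {
                    Console.WriteLine("Timeout: " + ex.Message); // the "false" timeouts
                }
            }
        });
        threads.Add(thread);
        thread.Start();
    }
    foreach (var thread in threads)
        thread.Join();
}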

Since you are already using lock, consider using Monitor.Wait and Monitor.Pulse
In Get():
lock (fSyncRoot)
{
while (fQueue.Count < 1)
Monitor.Wait(fSyncRoot);
aObject = fQueue.Dequeue();
}
And in Put():
lock (fSyncRoot)
{
fQueue.Enqueue(aObject);
if (fQueue.Count == 1)
Monitor.Pulse(fSyncRoot);
}
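Putting the two together while keeping the original timeout behaviour, Get() can wait on a deadline instead of polling. A rough sketch (untested), reusing the fields and constants from the question:
public object Get()
{
    DateTime deadline = DateTime.UtcNow.AddMilliseconds(cMaxGetTimeToWaitInMs);
    lock (fSyncRoot)
    {
        while (fQueue.Count == 0)
        {
            TimeSpan remaining = deadline - DateTime.UtcNow;
            // Monitor.Wait returns false if the timeout elapsed without a pulse
            if (remaining <= TimeSpan.Zero || !Monitor.Wait(fSyncRoot, remaining))
                throw new TimeoutException("Timeout waiting for object from pool");
        }
        return fQueue.Dequeue();
    }
}

public void Put(object aObject)
{
    lock (fSyncRoot)
    {
        fQueue.Enqueue(aObject);
        Monitor.Pulse(fSyncRoot); // wake one waiter, if any
    }
}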

You should be using a semaphore.
http://msdn.microsoft.com/en-us/library/system.threading.semaphore.aspx
UPDATE:
Semaphores are one of the basic constructs of multi-threaded programming.
A semaphore can be used different ways, but the basic idea is when you have a limited resource and many clients who want to use that resource, you can limit the number of clients that can access the resource at any given time.
Below is a very crude example. I didn't add any error checking or try/finally blocks, but you should.
You can also check:
http://en.wikipedia.org/wiki/Semaphore_(programming)
Say you have 10 buckets and 100 people who want to use those buckets.
We can represent the buckets in a queue.
At the start, add all of your buckets to the queue:
Queue<Bucket> B = new Queue<Bucket>();
object B_Lock = new object(); // guards access to B
for (int i = 0; i < 10; i++)
{
B.Enqueue(new Bucket());
}
Now create a semaphore to guard your bucket queue. This semaphore is created with all 10 counts initially available, matching the 10 buckets in the queue:
Semaphore s = new Semaphore(10, 10);
All clients should check the semaphore before accessing the queue. You might have 100 threads running the thread method below. The first 10 will pass the semaphore. All others will wait.
void MyThread()
{
while(true)
{
// thread will wait until the semaphore is triggered once
// there are other ways to call this which allow you to pass a timeout
s.WaitOne();
// after being triggered once, thread is clear to get an item from the queue
Bucket b = null;
// you still need to lock because more than one thread can pass the semaphore at the same time.
lock(B_Lock)
{
b = B.Dequeue();
}
b.UseBucket();
// after you finish using the item, add it back to the queue
// DO NOT keep the queue locked while you are using the item or no other thread will be able to get anything out of it
lock(B_Lock)
{
B.Enqueue(b);
}
// after adding the item back to the queue, trigger the semaphore and allow
// another thread to enter
s.Release();
}
}
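Mapped back onto the pool from the question, the semaphore version might look roughly like this (a sketch, not production code; the timeout mirrors the original 60,000 ms constant):
public class SemaphoreObjectPool
{
    private readonly Queue<object> fQueue = new Queue<object>();
    private readonly object fSyncRoot = new object();
    private readonly Semaphore fSemaphore;

    public SemaphoreObjectPool(IEnumerable<object> items, int capacity)
    {
        foreach (object item in items)
            fQueue.Enqueue(item);
        // one semaphore count per pooled object
        fSemaphore = new Semaphore(fQueue.Count, capacity);
    }

    public object Get()
    {
        // blocks until a count is available or the timeout elapses
        if (!fSemaphore.WaitOne(60000))
            throw new TimeoutException("Timeout waiting for object from pool");
        lock (fSyncRoot)
            return fQueue.Dequeue();
    }

    public void Put(object aObject)
    {
        lock (fSyncRoot)
            fQueue.Enqueue(aObject);
        fSemaphore.Release();
    }
}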

Related

Can a second thread enter the same critical section just because the first thread called Monitor.Wait using the same sync lock?

Please tell me if I am thinking about this right.
1. A different thread cannot enter the same critical section using the same lock just because the first thread called Monitor.Wait, right? The Wait method only allows a different thread to acquire the same monitor, i.e. the same synchronization lock, but only for a different critical section and never for the same critical section.
Is my understanding correct?
Because if the Wait method meant that anyone could now enter this same critical section using this same lock, then that would defeat the whole purpose of synchronization, right?
So, in the code below (written in Notepad, so please forgive any typos), ThreadProc2 can only use syncLock to enter the code in ThreadProc2, and not in ThreadProc1, while a previous thread that held and subsequently relinquished the lock was executing ThreadProc1, right?
2. Two or more threads can use the same synchronization lock to run different pieces of code at the same time, right? Same question as above, basically, but just confirming for the sake of symmetry with point 3 below.
3. Two or more threads can use a different synchronization lock to run the same piece of code, i.e. to enter the same critical section.
class Foo
{
private static object syncLock = new object();
public void ThreadProc1()
{
try
{
Monitor.Enter(syncLock);
Monitor.Wait(syncLock);
Thread.Sleep(1000);
}
finally
{
if (Monitor.IsEntered(syncLock)) // Monitor.IsLocked does not exist; IsEntered checks whether the current thread holds the lock
{
Monitor.Exit(syncLock);
}
}
}
public void ThreadProc2()
{
bool acquired = false;
try
{
// Calling TryEnter instead of
// Enter just for the sake of variety
Monitor.TryEnter(syncLock, ref acquired);
if (acquired)
{
Thread.Sleep(200);
Monitor.Pulse(syncLock);
}
}
finally
{
if (acquired)
{
Monitor.Exit(syncLock);
}
}
}
}
Update
The following illustration confirms that #3 is correct, although I don't think it would be a nice thing to do.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
namespace DifferentSyncLockSameCriticalSection
{
class Program
{
static void Main(string[] args)
{
var sathyaish = new Person { Name = "Sathyaish Chakravarthy" };
var superman = new Person { Name = "Superman" };
var tasks = new List<Task>();
// Must not lock on string so I am using
// an object of the Person class as a lock
tasks.Add(Task.Run( () => { Proc1(sathyaish); } ));
tasks.Add(Task.Run(() => { Proc1(superman); }));
Task.WhenAll(tasks).Wait(); // block until both tasks complete
Console.WriteLine("Press any key to exit.");
Console.ReadKey();
}
static void Proc1(object state)
{
// Although this would be a very bad practice
lock(state)
{
try
{
Console.WriteLine((state.ToString()).Length);
}
catch(Exception ex)
{
Console.WriteLine(ex.Message);
}
}
}
}
class Person
{
public string Name { get; set; }
public override string ToString()
{
return Name;
}
}
}
When a thread calls Monitor.Wait it is suspended and the lock released. This will allow another thread to acquire the lock, update some state, and then call Monitor.Pulse in order to communicate to other threads that something has happened. You must have acquired the lock in order to call Pulse. Before Monitor.Wait returns the framework will reacquire the lock for the thread that called Wait.
In order for two threads to communicate with each other they need to use the same synchronization primitive. In your example you've used a monitor, but you usually need to combine this with some kind of test that the Wait returned in response to a Pulse. This is because it is technically possible for Wait to return even if Pulse wasn't called (although this doesn't happen in practice).
It's also worth remembering that a call to Pulse isn't "sticky", so if nobody is waiting on the monitor then Pulse does nothing and a subsequent call to Wait will miss the fact that Pulse was called. This is another reason why you tend to record the fact that something has been done before calling Pulse (see the example below).
It's perfectly valid for two different threads to use the same lock to run different bits of code - in fact this is the typical use-case. For example, one thread acquires the lock to write some data and another thread acquires the lock to read the data. However, it's important to realize that they don't run at the same time. The act of acquiring the lock prevents another thread from acquiring the same lock, so any thread attempting to acquire the lock when it is already locked will block until the other thread releases the lock.
In point 3 you ask:
Two or more threads can use a different synchronization lock to run
the same piece of code, i.e. to enter the same critical section.
However, if two threads are using different locks then they are not entering the same critical section. The critical section is denoted by the lock that protects it - if they're different locks then they are different sections that just happen to access some common data within the section. You should avoid doing this as it can lead to some difficult-to-debug data races.
Your code is a bit over-complicated for what you're trying to accomplish. For example, let's say we've got 2 threads, and one will signal when there is data available for another to process:
class Foo
{
private readonly object syncLock = new object();
private bool dataAvailable = false;
public void ThreadProc1()
{
lock(syncLock)
{
while(!dataAvailable)
{
// Release the lock and suspend
Monitor.Wait(syncLock);
}
// Now process the data
}
}
public void ThreadProc2()
{
LoadData();
lock(syncLock)
{
dataAvailable = true;
Monitor.Pulse(syncLock);
}
}
private void LoadData()
{
// Gets some data
}
}

Condition Variables C#/.NET

In my quest to build a condition variable class I stumbled on a trivially simple way of doing it and I'd like to share this with the Stack Overflow community. I was googling for the better part of an hour and was unable to actually find a good tutorial or .NET-ish example that felt right; hopefully this can be of use to other people out there.
It's actually incredibly simple, once you know about the semantics of lock and Monitor.
But first, you do need an object reference. You can use this, but remember that this is public, in the sense that anyone with a reference to your class can lock on that reference. If you are uncomfortable with this, you can create a new private reference, like this:
readonly object syncPrimitive = new object(); // this is legal
Somewhere in your code where you'd like to be able to provide notifications, it can be accomplished like this:
void Notify()
{
lock (syncPrimitive)
{
Monitor.Pulse(syncPrimitive);
}
}
And the place where you'd do the actual work is a simple looping construct, like this:
void RunLoop()
{
lock (syncPrimitive)
{
for (;;)
{
// do work here...
Monitor.Wait(syncPrimitive);
}
}
}
Yes, this looks incredibly deadlock-ish, but the locking protocol for Monitor is such that it will release the lock during the Monitor.Wait. In fact, it's a requirement that you have obtained the lock before you call either Monitor.Pulse, Monitor.PulseAll or Monitor.Wait.
There's one caveat with this approach that you should know about. Since the lock is required to be held before calling the communication methods of Monitor you should really only hang on to the lock for an as short duration as possible. A variation of the RunLoop that's more friendly towards long running background tasks would look like this:
void RunLoop()
{
for (;;)
{
// do work here...
lock (syncPrimitive)
{
Monitor.Wait(syncPrimitive);
}
}
}
But now we've changed the problem a bit, because the lock is no longer protecting the shared resource throughout the processing. So, if some of your code in the do work here... bit needs to access a shared resource, you'll need a separate lock managing access to that.
We can leverage the above to create a simple thread-safe producer-consumer collection (although .NET already provides an excellent ConcurrentQueue<T> implementation; this is just to illustrate the simplicity of using Monitor to implement such mechanisms).
class BlockingQueue<T>
{
// We base our queue on the (non-thread safe) .NET 2.0 Queue collection
readonly Queue<T> q = new Queue<T>();
public void Enqueue(T item)
{
lock (q)
{
q.Enqueue(item);
System.Threading.Monitor.Pulse(q);
}
}
public T Dequeue()
{
lock (q)
{
for (;;)
{
if (q.Count > 0)
{
return q.Dequeue();
}
System.Threading.Monitor.Wait(q);
}
}
}
}
Now, the point here is not to build a blocking collection; that is also available in the .NET Framework (see BlockingCollection). The point is to illustrate how simple it is to build an event-driven message system using the Monitor class in .NET to implement a condition variable. Hope you find this useful.
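For comparison, roughly the same consumer-blocking behaviour using the framework's BlockingCollection<T> (available from .NET 4.0 onwards):
// requires System.Collections.Concurrent
var queue = new BlockingCollection<string>();

// producer side, safe from any thread
queue.Add("hello");

// consumer side: Take() blocks until an item is available
string item = queue.Take();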
Use ManualResetEvent
The class that is similar to a condition variable is ManualResetEvent; the method names are just slightly different.
The notify_one() in C++ would be named Set() in C#.
The wait() in C++ would be named WaitOne() in C#.
Moreover, ManualResetEvent also provides a Reset() method to set the state of the event to non-signaled.
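A minimal sketch of that mapping (illustrative only; note that an event carries no predicate, so you should still re-check your condition after waking):
var signal = new ManualResetEvent(false);

// waiting side (wait() in C++)
signal.WaitOne();

// notifying side (notify_one()/notify_all() in C++)
signal.Set();

// return the event to the non-signalled state
signal.Reset();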
The accepted answer is not a good one.
According to the Dequeue() code, Wait() gets called in each loop, which causes unnecessary waiting and thus excessive context switches. The correct paradigm is that Wait() is called only when the waiting condition is met. In this case, the waiting condition is q.Count == 0.
Here's a better pattern to follow when it comes to using a Monitor.
https://msdn.microsoft.com/en-us/library/windows/desktop/ms682052%28v=vs.85%29.aspx
Another comment on the C# Monitor: it does not make use of condition variables (a pulse will essentially wake up threads waiting on that lock regardless of the condition for which they went to wait; consequently, some threads may grab the lock and immediately return to sleep when they find the waiting condition hasn't changed). It does not provide you with as fine-grained threading control as pthreads. But it's .NET anyway, so this is not completely unexpected.
============= Upon the request of John, here's an improved version =============
class BlockingQueue<T>
{
readonly Queue<T> q = new Queue<T>();
public void Enqueue(T item)
{
lock (q)
{
while (false) // condition predicate(s) for producer; can be omitted in this particular case
{
System.Threading.Monitor.Wait(q);
}
// critical section
q.Enqueue(item);
// in pthreads it is generally better to signal outside the lock scope, but .NET's
// Monitor.Pulse must be called while the lock is held, otherwise it throws
// SynchronizationLockException
System.Threading.Monitor.Pulse(q);
}
}
public T Dequeue()
{
T t;
lock (q)
{
while (q.Count == 0) // condition predicate(s) for consumer
{
System.Threading.Monitor.Wait(q);
}
// critical section
t = q.Dequeue();
// this can be omitted in this particular case, but not if there's a waiting condition
// for the producer, as the producer needs to be woken up; and here's the problem caused
// by the missing condition variable in the C# Monitor: all threads stay on the same
// waiting queue of the shared resource/lock. (Note: in .NET, Pulse must be called while
// holding the lock, so it stays inside the lock scope here.)
System.Threading.Monitor.Pulse(q);
}
return t;
}
}
A few things I'd like to point out:
1, I think my solution captures the requirements and definitions more precisely than yours. Specifically, the consumer should be forced to wait if and only if there's nothing left in the queue; otherwise it's up to the OS/.NET runtime to schedule threads. In your solution, however, the consumer is forced to wait in each loop, regardless of whether it has actually consumed anything or not - this is the excessive waiting/context switching I was talking about.
2, My solution is symmetric in the sense that both the consumer and the producer code share the same pattern, while yours is not. If you did know the pattern and just omitted it for this particular case, then I take back this point.
3, Ideally the signal would happen outside the lock scope, as pthreads allows; .NET's Monitor requires the lock to be held when calling Pulse, so the best we can do is pulse at the very end of the lock scope. Please refer to this answer as to why holding the lock any longer than necessary while signalling is worse.
why should we signal outside the lock scope
I was talking about the flaw of missing condition variables in the C# Monitor, and here's its impact: there's simply no way for C# to implement the solution of moving the waiting thread from the condition queue to the lock queue. Therefore, the excessive context switching is doomed to take place in the three-thread scenario proposed by the answer in the link.
Also, the lack of condition variables makes it impossible to distinguish between the various cases where threads wait on the same shared resource/lock, but for different reasons. All waiting threads are placed on one big waiting queue for that shared resource, which undermines efficiency.
"But it's .Net anyway, so not completely unexpected" --- it's understandable that .NET does not pursue efficiency as aggressively as C++. But that does not imply programmers should not know the differences and their impacts.
Go to deadlockempire.github.io/. They have an amazing tutorial that will help you understand condition variables as well as locks, and will certainly help you write your desired class.
You can step through the following code at deadlockempire.github.io and trace it. Here is the code snippet (two consumer threads run the first loop and a producer runs the second; note the if around Monitor.Wait, which the tutorial uses to show why the check should really be a while):
// Consumer thread 1
while (true) {
Monitor.Enter(mutex);
if (queue.Count == 0) {
Monitor.Wait(mutex);
}
queue.Dequeue();
Monitor.Exit(mutex);
}
// Consumer thread 2
while (true) {
Monitor.Enter(mutex);
if (queue.Count == 0) {
Monitor.Wait(mutex);
}
queue.Dequeue();
Monitor.Exit(mutex);
}
// Producer thread
while (true) {
Monitor.Enter(mutex);
queue.Enqueue(42);
Monitor.PulseAll(mutex);
Monitor.Exit(mutex);
}
As has been pointed out by h9uest's answer and the comments, the Monitor's Wait interface does not allow for proper condition variables (i.e. it does not allow for waiting on multiple conditions per shared lock).
The good news is that the other synchronization primitives (e.g. SemaphoreSlim, lock keyword, Monitor.Enter/Exit) in .NET can be used to implement a proper condition variable.
The following ConditionVariable class will allow you to wait on multiple conditions using a shared lock.
class ConditionVariable
{
private int waiters = 0;
private object waitersLock = new object();
private SemaphoreSlim sema = new SemaphoreSlim(0, Int32.MaxValue);
public ConditionVariable() {
}
public void Pulse() {
bool release;
lock (waitersLock)
{
release = waiters > 0;
}
if (release) {
sema.Release();
}
}
public void Wait(object cs) {
lock (waitersLock) {
++waiters;
}
Monitor.Exit(cs);
sema.Wait();
lock (waitersLock) {
--waiters;
}
Monitor.Enter(cs);
}
}
All you need to do is create an instance of the ConditionVariable class for each condition you want to be able to wait on.
object queueLock = new object();
private ConditionVariable notFullCondition = new ConditionVariable();
private ConditionVariable notEmptyCondition = new ConditionVariable();
And then just like in the Monitor class, the ConditionVariable's Pulse and Wait methods must be invoked from within a synchronized block of code.
T Take() {
lock(queueLock) {
while(queue.Count == 0) {
// wait for queue to be not empty
notEmptyCondition.Wait(queueLock);
}
T item = queue.Dequeue();
if(queue.Count < 100) {
// notify producer queue not full anymore
notFullCondition.Pulse();
}
return item;
}
}
void Add(T item) {
lock(queueLock) {
while(queue.Count >= 100) {
// wait for queue to be not full
notFullCondition.Wait(queueLock);
}
queue.Enqueue(item);
// notify consumer queue not empty anymore
notEmptyCondition.Pulse();
}
}
Below is a link to the full source code of a proper Condition Variable class using 100% managed code in C#.
https://github.com/CodeExMachina/ConditionVariable
I think I found "The WAY" for the typical problem of a
List<string> log;
used by multiple threads (one that fills it, another processing it, and another emptying it), while avoiding a busy-wait like:
while(true){
//stuff
Thread.Sleep(100)
}
Variables used in Program:
public static readonly List<string> logList = new List<string>();
public static EventWaitHandle evtLogListFilled = new AutoResetEvent(false);
The processor thread works like this:
private void bw_DoWorkLog(object sender, DoWorkEventArgs e)
{
StringBuilder toFile = new StringBuilder();
while (true)
{
try
{
//waiting for a signal
Program.evtLogListFilled.WaitOne();
try
{
//critical section
Monitor.Enter(Program.logList);
int max = Program.logList.Count;
for (int i = 0; i < max; i++)
{
SetText(Program.logList[0]);
toFile.Append(Program.logList[0]);
toFile.Append("\r\n");
Program.logList.RemoveAt(0);
}
}
finally
{
Monitor.Exit(Program.logList);
// end critical section
}
try
{
if (toFile.Length > 0)
{
Logger.Log(toFile.ToString().Substring(0, toFile.Length - 2));
toFile.Clear();
}
}
catch
{
}
}
catch (Exception ex)
{
Logger.Log(System.Reflection.MethodBase.GetCurrentMethod(), ex);
}
Thread.Sleep(100);
}
}
On the filler thread we have
public static void logList_add(string str)
{
try
{
try
{
//critical section
Monitor.Enter(Program.logList);
Program.logList.Add(str);
}
finally
{
Monitor.Exit(Program.logList);
//end critical section
}
//set start
Program.evtLogListFilled.Set();
}
catch{}
}
This solution is fully tested. The instruction Program.evtLogListFilled.Set() releases the thread blocked on Program.evtLogListFilled.WaitOne(), and, because it is an AutoResetEvent, it also lets one future WaitOne() call pass if no thread is currently waiting.
I think this is the simplest way.

How to check the state of a semaphore

I want to check the state of a Semaphore to see if it is signalled or not (so that if it is signalled, I can release it). How can I do this?
EDIT1:
I have two threads: one waits on the semaphore and the other should release it. The problem is that the second thread may call Release() several times when the first thread is not waiting on it. So the second thread should be able to detect whether calling Release() will generate an error (it generates an error if you try to release a semaphore when nobody is waiting on it). How can I do this? I know that I can use a flag to do this, but it is ugly. Is there any better way?
You can check to see if a Semaphore is signaled by calling WaitOne and passing a timeout value of 0 as a parameter. This will cause WaitOne to return immediately with a true or false value indicating whether the semaphore was signaled. This, of course, could change the state of the semaphore which makes it cumbersome to use.
Another reason why this trick will not help you is because a semaphore is said to be signaled when at least one count is available. It sounds like you want to know when the semaphore has all counts available. The Semaphore class does not have that exact ability. You can use the return value from Release to infer what the count is, but that causes the semaphore to change its state and, of course, it will still throw an exception if the semaphore already had all counts available prior to making the call.
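To make the Release-return-value trick concrete, a minimal illustration (it mutates the semaphore's state, and Release still throws SemaphoreFullException if the semaphore is already full):
Semaphore sem = new Semaphore(2, 3);
int previousCount = sem.Release(); // returns the count before the call: 2 here
sem.WaitOne();                     // undo the release we just made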
What we need is a semaphore with a release operation that does not throw. This is not terribly difficult. The TryRelease method below will return true if a count became available or false if the semaphore was already at the maximumCount. Either way it will never throw an exception.
public class Semaphore
{
private int count = 0;
private int limit = 0;
private object locker = new object();
public Semaphore(int initialCount, int maximumCount)
{
count = initialCount;
limit = maximumCount;
}
public void Wait()
{
lock (locker)
{
while (count == 0)
{
Monitor.Wait(locker);
}
count--;
}
}
public bool TryRelease()
{
lock (locker)
{
if (count < limit)
{
count++;
Monitor.PulseAll(locker);
return true;
}
return false;
}
}
}
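A hypothetical usage sketch of the class above (this Semaphore is the custom class, not System.Threading.Semaphore; the second TryRelease may return either value depending on thread timing):
var gate = new Semaphore(0, 1);

var waiter = new Thread(() =>
{
    gate.Wait(); // blocks until TryRelease makes a count available
    Console.WriteLine("acquired");
});
waiter.Start();

bool released = gate.TryRelease(); // true: a count became available
bool again = gate.TryRelease();    // false if the count is still at the limit
waiter.Join();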
Looks like you need another synchronization object, because Semaphore does not provide the functionality to check whether it is signalled or not at a specific moment in time.
Semaphore allows automatic triggering of code that is awaiting the signalled state, using the WaitOne()/Release() methods, for instance.
You can take a look at the new .NET 4 class SemaphoreSlim, which exposes a CurrentCount property; perhaps you can leverage it.
CurrentCount
Gets the number of threads that will be allowed to enter
the SemaphoreSlim.
EDIT: Updated due to updated question
As a quick solution you can wrap semaphore.Release() in try/catch and handle SemaphoreFullException; does it work as you expected?
Using SemaphoreSlim you can check CurrentCount in such way:
int maxCount = 5;
SemaphoreSlim slim = new SemaphoreSlim(5, maxCount);
if (slim.CurrentCount == maxCount)
{
// generate error
}
else
{
slim.Release();
}
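The try/catch variant mentioned above for the classic Semaphore, as a sketch (semaphore is assumed to be a System.Threading.Semaphore instance):
try
{
    semaphore.Release();
}
catch (SemaphoreFullException)
{
    // the semaphore was already at its maximum count
}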
The way to implement a semaphore using signalling is as follows. It doesn't make sense to be able to query the state outside of this, as it wouldn't be thread-safe.
Create an instance with maxThreads slots, initially all available:
var threadLimit = new Semaphore(maxThreads, maxThreads);
Use the following to wait (block) for a spare slot (in case maxThreads have already been taken):
threadLimit.WaitOne();
Use the following to release a slot:
threadLimit.Release(1);
There's a full example here.
Knowing when all counts are available in a semaphore is useful. I have used the following logic/solution. I am sharing it here because I haven't seen any other solutions addressing this.
//List to add a variable number of handles
private List<WaitHandle> waitHandles;
//Using a mutex to make sure that only one thread/process enters this section
using (new Mutex(....))
{
waitHandles = new List<WaitHandle>();
int x = [Maximum number of slots available in the semaphore];
//In this for loop we spin a thread per each slot of the semaphore
//The idea is to consume all the slots in this process
//not allowing anything else to enter the code protected by the semaphore
for (int i = 0; i < x; i++)
{
Thread t = new Thread(new ParameterizedThreadStart(TWorker));
ManualResetEvent mre = new ManualResetEvent(false);
waitHandles.Add(mre);
t.Start(mre);
}
WaitHandle.WaitAll(waitHandles.ToArray());
[... do stuff here, all semaphore slots are blocked now ...]
//Release all slots
semaphore.Release(x);
}
private void TWorker(object sObject)
{
ManualResetEvent mre = (ManualResetEvent)sObject;
//This is an static Semaphore declared and instantiated somewhere else
semaphore.WaitOne();
mre.Set();
}

Threadpool/WaitHandle resource leak/crash

I think I may need to re-think my design. I'm having a hard time narrowing down a bug that is causing my computer to completely hang, sometimes throwing an HRESULT 0x8007000E from VS 2010.
I have a console application (that I will later convert to a service) that handles transferring files based on a database queue.
I am throttling the threads allowed to transfer. This is because some systems we are connecting to can only contain a certain number of connections from certain accounts.
For example, System A can only accept 3 simultaneous connections (which means 3 separate threads). Each one of these threads has their own unique connection object, so we shouldn't run in to any synchronization problems since they aren't sharing a connection.
We want to process the files from those systems in cycles. So, for example, we will allow 3 connections that can transfer up to 100 files per connection. This means that, to move 1000 files from System A, we can only process 300 files per cycle, since 3 threads are allowed with 100 files each. Therefore, over the lifetime of this transfer, we will have 10 threads. We can only run 3 at a time. So, there will be 4 cycles, and the last cycle will only use 1 thread to transfer the last 100 files. (3 threads x 100 files = 300 files per cycle)
The current architecture by example is:
A System.Threading.Timer checks the queue every 5 seconds for something to do by calling GetScheduledTask()
If there's nothing to do, GetScheduledTask() simply does nothing
If there is work, create a ThreadPool thread to process the work [Work Thread A]
Work Thread A sees that there are 1000 files to transfer
Work Thread A sees that it can only have 3 threads running to the system it is getting files from
Work Thread A starts three new work threads [B,C,D] and transfers
Work Thread A waits for B,C,D [WaitHandle.WaitAll(transfersArray)]
Work Thread A sees that there are still more files in the queue (should be 700 now)
Work Thread A creates a new array to wait on [transfersArray = new TransferArray[3], which is the max for System A, but could vary by system]
Work Thread A starts three new work threads [B,C,D] and waits for them [WaitHandle.WaitAll(transfersArray)]
The process repeats until there are no more files to move.
Work Thread A signals that it is done
I am using ManualResetEvent to handle the signaling.
My questions are:
Is there any glaring circumstance which would cause a resource leak or problem that I am experiencing?
Should I loop through the array after every WaitHandle.WaitAll(array) and call array[index].Dispose()?
The Handle count under the Task Manager for this process slowly creeps up
I am calling the initial creation of Worker Thread A from a System.Threading.Timer. Is there going to be any problems with this? The code for that timer is:
(Some class code for scheduling)
private ManualResetEvent _ResetEvent;
private void Start()
{
_IsAlive = true;
ManualResetEvent transferResetEvent = new ManualResetEvent(false);
//Set the scheduler timer to 5 second intervals
_ScheduledTasks = new Timer(new TimerCallback(ScheduledTasks_Tick), transferResetEvent, 200, 5000);
}
private void ScheduledTasks_Tick(object state)
{
ManualResetEvent resetEvent = null;
try
{
resetEvent = (ManualResetEvent)state;
//Block timer until GetScheduledTasks() finishes
_ScheduledTasks.Change(Timeout.Infinite, Timeout.Infinite);
GetScheduledTasks();
}
finally
{
_ScheduledTasks.Change(5000, 5000);
Console.WriteLine("{0} [Main] GetScheduledTasks() finished", DateTime.Now.ToString("MMddyy HH:mm:ss:fff"));
resetEvent.Set();
}
}
private void GetScheduledTask()
{
try
{
//Check to see if the database connection is still up
if (!_IsAlive)
{
//Handle
_ConnectionLostNotification = true;
return;
}
//Get scheduled records from the database
ISchedulerTask task = null;
using (DataTable dt = FastSql.ExecuteDataTable(
_ConnectionString, "hidden for security", System.Data.CommandType.StoredProcedure,
new List<FastSqlParam>() { new FastSqlParam(ParameterDirection.Input, SqlDbType.VarChar, "@ProcessMachineName", Environment.MachineName) })) //call to static class
{
if (dt != null)
{
if (dt.Rows.Count == 1)
{ //Only 1 row is allowed
DataRow dr = dt.Rows[0];
//Get task information
TransferParam.TaskType taskType = (TransferParam.TaskType)Enum.Parse(typeof(TransferParam.TaskType), dr["TaskTypeId"].ToString());
task = ScheduledTaskFactory.CreateScheduledTask(taskType);
task.Description = dr["Description"].ToString();
task.IsEnabled = (bool)dr["IsEnabled"];
task.IsProcessing = (bool)dr["IsProcessing"];
task.IsManualLaunch = (bool)dr["IsManualLaunch"];
task.ProcessMachineName = dr["ProcessMachineName"].ToString();
task.NextRun = (DateTime)dr["NextRun"];
task.PostProcessNotification = (bool)dr["NotifyPostProcess"];
task.PreProcessNotification = (bool)dr["NotifyPreProcess"];
task.Priority = (TransferParam.Priority)Enum.Parse(typeof(TransferParam.Priority), dr["PriorityId"].ToString());
task.SleepMinutes = (int)dr["SleepMinutes"];
task.ScheduleId = (int)dr["ScheduleId"];
task.CurrentRuns = (int)dr["CurrentRuns"];
task.TotalRuns = (int)dr["TotalRuns"];
SchedulerTask scheduledTask = new SchedulerTask(new ManualResetEvent(false), task);
//Queue up task to worker thread and start
ThreadPool.QueueUserWorkItem(new WaitCallback(this.ThreadProc), scheduledTask);
}
}
}
}
catch (Exception ex)
{
//Handle
}
}
private void ThreadProc(object taskObject)
{
SchedulerTask task = (SchedulerTask)taskObject;
ScheduledTaskEngine engine = null;
try
{
engine = SchedulerTaskEngineFactory.CreateTaskEngine(task.Task, _ConnectionString);
engine.StartTask(task.Task);
}
catch (Exception ex)
{
//Handle
}
finally
{
task.TaskResetEvent.Set();
task.TaskResetEvent.Dispose();
}
}
0x8007000E is an out-of-memory error. That and the handle count seem to point to a resource leak. Ensure you're disposing of every object that implements IDisposable. This includes the arrays of ManualResetEvents you're using.
If you have time, you may also want to convert to using the .NET 4.0 Task class; it was designed to handle complex scenarios like this much more cleanly. By defining child Task objects, you can reduce your overall thread count (threads are quite expensive not only because of scheduling but also because of their stack space).
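A rough sketch of that Task-based shape (TransferFiles is a hypothetical stand-in for the per-connection transfer work):
// the parent task completes only when all attached children complete,
// replacing the manual WaitHandle.WaitAll bookkeeping
Task parent = Task.Factory.StartNew(() =>
{
    for (int i = 0; i < 3; i++) // 3 = connection limit for System A
    {
        Task.Factory.StartNew(() => TransferFiles(),
            TaskCreationOptions.AttachedToParent);
    }
});
parent.Wait(); // returns once the parent and all attached children finish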
I'm looking for answers to a similar problem (Handles Count increasing over time).
I took a look at your application architecture and like to suggest you something that could help you out:
Have you heard about IOCP (I/O Completion Ports)?
I'm not sure how difficult it is to implement this using C#, but in C/C++ it is a piece of cake.
By using this you create a unique thread pool (the number of threads in that pool is in general defined as 2 x the number of processors or processor cores in the PC or server).
You associate this pool with an IOCP handle and the pool does the work.
See the help for these functions:
CreateIoCompletionPort();
PostQueuedCompletionStatus();
GetQueuedCompletionStatus();
In general, creating and destroying threads on the fly can be time consuming and lead to performance penalties and memory fragmentation.
There is plenty of literature about IOCP on MSDN and via Google.
I think you should reconsider your architecture altogether. The fact that you can only have 3 simultaneously connections is almost begging you to use 1 thread to generate the list of files and 3 threads to process them. Your producer thread would insert all files into a queue and the 3 consumer threads will dequeue and continue processing as items arrive in the queue. A blocking queue can significantly simplify the code. If you are using .NET 4.0 then you can take advantage of the BlockingCollection class.
public class Example
{
private BlockingCollection<string> m_Queue = new BlockingCollection<string>();
public void Start()
{
var threads = new Thread[]
{
new Thread(Producer),
new Thread(Consumer),
new Thread(Consumer),
new Thread(Consumer)
};
foreach (Thread thread in threads)
{
thread.Start();
}
}
private void Producer()
{
while (true)
{
Thread.Sleep(TimeSpan.FromSeconds(5));
ScheduledTask task = GetScheduledTask();
if (task != null)
{
foreach (string file in task.Files)
{
m_Queue.Add(file);
}
}
}
}
private void Consumer()
{
// Make a connection to the resource that is assigned to this thread only.
while (true)
{
string file = m_Queue.Take();
// Process the file.
}
}
}
I have definitely oversimplified things in the example above, but I hope you get the general idea. Notice how this is much simpler, as there is not much in the way of thread synchronization (most of it is embedded in the blocking queue) and of course there is no use of WaitHandle objects. Obviously you would have to add the correct mechanisms to shut down the threads gracefully, but that should be fairly easy - one option is sketched below.
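For instance, those shutdown mechanics might look something like this (a sketch built on the m_Queue field from the example above):
// the producer signals completion; consumers drain what's left and exit
public void Stop()
{
    m_Queue.CompleteAdding();
}

private void Consumer()
{
    // GetConsumingEnumerable blocks while the queue is empty and ends
    // naturally once CompleteAdding() has been called and the queue drains
    foreach (string file in m_Queue.GetConsumingEnumerable())
    {
        // Process the file.
    }
}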
It turns out the source of this strange problem was not related to the architecture but rather to converting the solution from 3.5 to 4.0. I re-created the solution, performing no code changes, and the problem never occurred again.

How to effectively log asynchronously?

I am using Enterprise Library 4 on one of my projects for logging (and other purposes). I've noticed that there is some cost to the logging that I am doing that I can mitigate by doing the logging on a separate thread.
The way I am doing this now is that I create a LogEntry object and then I call BeginInvoke on a delegate that calls Logger.Write.
new Action<LogEntry>(Logger.Write).BeginInvoke(le, null, null);
What I'd really like to do is add the log message to a queue and then have a single thread pulling LogEntry instances off the queue and performing the log operation. The benefit of this would be that logging does not interfere with the executing operation, and not every logging operation results in a job getting thrown on the thread pool.
How can I create a shared queue that supports many writers and one reader in a thread-safe way? Some examples of a queue implementation that is designed to support many writers (without causing synchronization/blocking) and a single reader would be really appreciated.
Recommendations regarding alternative approaches would also be appreciated; I am not interested in changing logging frameworks though.
I wrote this code a while back, feel free to use it.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
namespace MediaBrowser.Library.Logging {
public abstract class ThreadedLogger : LoggerBase {
Queue<Action> queue = new Queue<Action>();
AutoResetEvent hasNewItems = new AutoResetEvent(false);
volatile bool waiting = false;
public ThreadedLogger() : base() {
Thread loggingThread = new Thread(new ThreadStart(ProcessQueue));
loggingThread.IsBackground = true;
loggingThread.Start();
}
void ProcessQueue() {
while (true) {
waiting = true;
hasNewItems.WaitOne(10000,true);
waiting = false;
Queue<Action> queueCopy;
lock (queue) {
queueCopy = new Queue<Action>(queue);
queue.Clear();
}
foreach (var log in queueCopy) {
log();
}
}
}
public override void LogMessage(LogRow row) {
lock (queue) {
queue.Enqueue(() => AsyncLogMessage(row));
}
hasNewItems.Set();
}
protected abstract void AsyncLogMessage(LogRow row);
public override void Flush() {
while (!waiting) {
Thread.Sleep(1);
}
}
}
}
Some advantages:
It keeps the background logger alive, so it does not need to spin up and spin down threads.
It uses a single thread to service the queue, which means there will never be a situation where 100 threads are servicing the queue.
It copies the queues to ensure the queue is not blocked while the log operation is performed
It uses an AutoResetEvent to ensure the bg thread is in a wait state
It is, IMHO, very easy to follow
Here is a slightly improved version. Keep in mind I performed very little testing on it, but it does address a few minor issues.
public abstract class ThreadedLogger : IDisposable {
Queue<Action> queue = new Queue<Action>();
ManualResetEvent hasNewItems = new ManualResetEvent(false);
ManualResetEvent terminate = new ManualResetEvent(false);
ManualResetEvent waiting = new ManualResetEvent(false);
Thread loggingThread;
public ThreadedLogger() {
loggingThread = new Thread(new ThreadStart(ProcessQueue));
loggingThread.IsBackground = true;
// this is performed from a bg thread, to ensure the queue is serviced from a single thread
loggingThread.Start();
}
void ProcessQueue() {
while (true) {
waiting.Set();
int i = WaitHandle.WaitAny(new WaitHandle[] { hasNewItems, terminate });
// terminate was signaled
if (i == 1) return;
hasNewItems.Reset();
waiting.Reset();
Queue<Action> queueCopy;
lock (queue) {
queueCopy = new Queue<Action>(queue);
queue.Clear();
}
foreach (var log in queueCopy) {
log();
}
}
}
public void LogMessage(LogRow row) {
lock (queue) {
queue.Enqueue(() => AsyncLogMessage(row));
}
hasNewItems.Set();
}
protected abstract void AsyncLogMessage(LogRow row);
public void Flush() {
waiting.WaitOne();
}
public void Dispose() {
terminate.Set();
loggingThread.Join();
}
}
Advantages over the original:
It's disposable, so you can get rid of the async logger
The flush semantics are improved
It will respond slightly better to a burst followed by silence
Yes, you need a producer/consumer queue. I have one example of this in my threading tutorial - if you look at my "deadlocks / monitor methods" page you'll find the code in the second half.
There are plenty of other examples online, of course - and .NET 4.0 will ship with one in the framework too (rather more fully featured than mine!). In .NET 4.0 you'd probably wrap a ConcurrentQueue<T> in a BlockingCollection<T>.
The version on that page is non-generic (it was written a long time ago) but you'd probably want to make it generic - it would be trivial to do.
You would call Produce from each "normal" thread, and Consume from one thread, just looping round and logging whatever it consumes. It's probably easiest just to make the consumer thread a background thread, so you don't need to worry about "stopping" the queue when your app exits. That does mean there's a remote possibility of missing the final log entry though (if the app exits while the consumer is halfway through writing it) - or even more entries if you're producing faster than the consumer can consume/log.
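In .NET 4.0 terms, that consumer shape might look roughly like this (LogEntry and Logger.Write stand in for whatever your logging framework uses):
var logQueue = new BlockingCollection<LogEntry>(new ConcurrentQueue<LogEntry>());

// single consumer; background thread so it won't keep the app alive on exit
var consumer = new Thread(() =>
{
    foreach (LogEntry entry in logQueue.GetConsumingEnumerable())
    {
        Logger.Write(entry);
    }
});
consumer.IsBackground = true;
consumer.Start();

// producers, from any thread:
logQueue.Add(new LogEntry());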
Here is what I came up with... also see Sam Saffron's answer. This answer is community wiki in case there are any problems that people see in the code and want to update.
/// <summary>
/// A singleton queue that manages writing log entries to the different logging sources (Enterprise Library Logging) off the executing thread.
/// This queue ensures that log entries are written in the order that they were executed and that logging is only utilizing one thread (backgroundworker) at any given time.
/// </summary>
public class AsyncLoggerQueue
{
//create singleton instance of logger queue
public static AsyncLoggerQueue Current = new AsyncLoggerQueue();
private static readonly object logEntryQueueLock = new object();
private Queue<LogEntry> _LogEntryQueue = new Queue<LogEntry>();
private BackgroundWorker _Logger = new BackgroundWorker();
private AsyncLoggerQueue()
{
//configure background worker
_Logger.WorkerSupportsCancellation = false;
_Logger.DoWork += new DoWorkEventHandler(_Logger_DoWork);
}
public void Enqueue(LogEntry le)
{
//lock during write
lock (logEntryQueueLock)
{
_LogEntryQueue.Enqueue(le);
//while locked check to see if the BW is running, if not start it
if (!_Logger.IsBusy)
_Logger.RunWorkerAsync();
}
}
private void _Logger_DoWork(object sender, DoWorkEventArgs e)
{
while (true)
{
LogEntry le = null;
bool skipEmptyCheck = false;
lock (logEntryQueueLock)
{
if (_LogEntryQueue.Count <= 0) //if queue is empty then the BW is done
return;
else if (_LogEntryQueue.Count > 1) //if greater than 1 we can skip checking to see if anything has been enqueued during the logging operation
skipEmptyCheck = true;
//dequeue the LogEntry that will be written to the log
le = _LogEntryQueue.Dequeue();
}
//pass LogEntry to Enterprise Library
Logger.Write(le);
if (skipEmptyCheck) //if LogEntryQueue.Count was > 1 before we wrote the last LogEntry we know to continue without double checking
{
lock (logEntryQueueLock)
{
if (_LogEntryQueue.Count <= 0) //if queue is still empty then the BW is done
return;
}
}
}
}
}
I suggest starting by measuring the actual performance impact of logging on the overall system (i.e. by running a profiler) and optionally switching to something faster like log4net (I personally migrated to it from EntLib logging a long time ago).
If this does not work, you can try using this simple method from .NET Framework:
ThreadPool.QueueUserWorkItem
Queues a method for execution. The method executes when a thread pool thread becomes available.
MSDN Details
If this does not work either, then you can resort to something like Jon Skeet has offered and actually code the async logging framework yourself.
In response to Sam Saffron's post, I wanted to call flush and make sure everything was really finished writing. In my case, I am writing to a database in the queue thread, and all my log events were getting queued up, but sometimes the application stopped before everything was finished writing, which is not acceptable in my situation. I changed several chunks of your code, but the main thing I wanted to share was the flush:
public static void FlushLogs()
{
bool queueHasValues = true;
while (queueHasValues)
{
//wait for the current iteration to complete
m_waitingThreadEvent.WaitOne();
lock (m_loggerQueueSync)
{
queueHasValues = m_loggerQueue.Count > 0;
}
}
//force MEL to flush all its listeners
foreach (MEL.LogSource logSource in MEL.Logger.Writer.TraceSources.Values)
{
foreach (TraceListener listener in logSource.Listeners)
{
listener.Flush();
}
}
}
I hope that saves someone some frustration. It is especially apparent in parallel processes logging lots of data.
Thanks for sharing your solution, it set me into a good direction!
--Johnny S
I wanted to say that my previous post was kind of useless. You can simply set AutoFlush to true and you will not have to loop through all the listeners. However, I still had a crazy problem with parallel threads trying to flush the logger. I had to create another boolean that was set to true during the copying of the queue and the execution of the LogEntry writes, and then in the flush routine I had to check that boolean to make sure something was not already in the queue and that nothing was getting processed before returning.
Now multiple threads in parallel can hit this thing and when I call flush I know it is really flushed.
public static void FlushLogs()
{
int queueCount;
bool isProcessingLogs;
while (true)
{
//wait for the current iteration to complete
m_waitingThreadEvent.WaitOne();
//check to see if we are currently processing logs
lock (m_isProcessingLogsSync)
{
isProcessingLogs = m_isProcessingLogs;
}
//check to see if more events were added while the logger was processing the last batch
lock (m_loggerQueueSync)
{
queueCount = m_loggerQueue.Count;
}
if (queueCount == 0 && !isProcessingLogs)
break;
//something is still in the queue or being processed; back off briefly before checking again
Thread.Sleep(400);
}
}
Just an update:
Using Enterprise Library 5.0 with .NET 4.0, it can easily be done by:
static public void LogMessageAsync(LogEntry logEntry)
{
Task.Factory.StartNew(() => LogMessage(logEntry));
}
See:
http://randypaulo.wordpress.com/2011/07/28/c-enterprise-library-asynchronous-logging/
An extra level of indirection may help here.
Your first async method call can put messages onto a synchronized Queue and set an event -- so the locks are happening in the thread pool, not on your worker threads -- and then have yet another thread pulling messages off the queue when the event is raised.
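A sketch of that indirection (the queue and event names are illustrative; LogEntry and Logger are the EntLib types from the question):
static readonly Queue<LogEntry> messages = new Queue<LogEntry>();
static readonly AutoResetEvent messageReady = new AutoResetEvent(false);

public static void LogAsync(LogEntry entry)
{
    lock (messages)
        messages.Enqueue(entry);
    messageReady.Set(); // wake the dedicated logging thread
}

static void ConsumerLoop()
{
    while (true)
    {
        messageReady.WaitOne();
        // drain everything queued since the last wake-up
        while (true)
        {
            LogEntry entry;
            lock (messages)
            {
                if (messages.Count == 0)
                    break;
                entry = messages.Dequeue();
            }
            Logger.Write(entry);
        }
    }
}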
If you log something on a separate thread, the message may not be written if the application crashes, which makes it rather useless.
That is why you should always flush after every written entry.
If what you have in mind is a SHARED queue, then I think you are going to have to synchronize the writes to it, the pushes and the pops.
But, I still think it's worth aiming at the shared queue design. In comparison to the IO of logging and probably in comparison to the other work your app is doing, the brief amount of blocking for the pushes and the pops will probably not be significant.
