BlockingCollection that doesn't try again within 10 seconds - C#

I am using a BlockingCollection as a FIFO queue, but the consumers do a lot of file operations and can easily hit a file lock, so what I have done is create a simple try/catch in which the consumer re-queues the work item. In a long FIFO queue with lots of other items this is enough of a pause, but in an empty or very short FIFO queue it means the consumer perpetually hammers the queue with repeated attempts at an item that is probably still file-locked.
i.e.
consumer busy -> requeue -> consumer busy -> requeue (ad infinitum)
Is there a way to get the BlockingCollection to not attempt the re-queued item if it is less than 10 seconds old? i.e. potentially get the next one in the queue and carry on, and only take an item if its createdDateTime is null (the default for a first attempt) or if it is more than 10 seconds old?

There's nothing built-in to help with that. Store with each work item the DateTime when it was last attempted (null if this is the first attempt). Then, in your processing function, wait for TimeSpan.FromSeconds(10) - (DateTime.UtcNow - lastAttemptDateTime) before making the next attempt.
Consider switching to a priority queue that stores items in the order of earliest next attempt datetime.
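For illustration, a minimal sketch of the store-and-wait idea (the WorkItem wrapper, its field names, and the queue variable are assumptions, not code from the question):

using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading;

// Hypothetical wrapper pairing the payload with its last attempt time.
class WorkItem
{
    public string FilePath;           // assumed payload: the file to process
    public DateTime? LastAttemptUtc;  // null on the first attempt
}

BlockingCollection<WorkItem> queue = new BlockingCollection<WorkItem>();

void Process(WorkItem item)
{
    if (item.LastAttemptUtc.HasValue)
    {
        // Wait out whatever remains of the 10-second back-off.
        TimeSpan wait = TimeSpan.FromSeconds(10) - (DateTime.UtcNow - item.LastAttemptUtc.Value);
        if (wait > TimeSpan.Zero)
            Thread.Sleep(wait);
    }

    try
    {
        // ... attempt the file operation ...
    }
    catch (IOException)
    {
        item.LastAttemptUtc = DateTime.UtcNow;
        queue.Add(item); // re-queue for a later retry
    }
}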

You could keep two blocking collections: the main one and a "delayed" one. One worker thread would work only on the delayed collection, re-adding items to the main collection. The signature of the delayed collection would be something like:
BlockingCollection<Tuple<DateTime, YourObject>>
Now... if the time is fixed at 10 seconds, the delayed collection will be nearly DateTime-sorted (for items added at almost the same moment this may not strictly hold, but we are talking about a difference of milliseconds... not a problem).
public class MainClass
{
    // The "main" BlockingCollection
    // (the one you are already using)
    BlockingCollection<Work> Works = new BlockingCollection<Work>();

    // The "delayed" BlockingCollection
    BlockingCollection<Tuple<DateTime, Work>> Delayed = new BlockingCollection<Tuple<DateTime, Work>>();

    // This is a single worker that will work on the Delayed collection
    // in a separate thread
    public void DelayedWorker()
    {
        Tuple<DateTime, Work> tuple;

        while (Delayed.TryTake(out tuple, -1))
        {
            var dt = DateTime.Now;

            if (tuple.Item1 > dt)
            {
                Thread.Sleep(tuple.Item1 - dt);
            }

            Works.Add(tuple.Item2);
        }
    }
}
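To tie it together, a consumer that hits a file lock would push the item into Delayed with a release time 10 seconds ahead (a sketch; the Work type, the IOException choice, and the consumer loop are assumptions):

// Start the single delayed worker once, e.g. at start-up.
new Thread(DelayedWorker) { IsBackground = true }.Start();

// A consumer: on a file lock, schedule a retry 10 seconds from now.
public void Consumer()
{
    foreach (Work work in Works.GetConsumingEnumerable())
    {
        try
        {
            // ... process the file ...
        }
        catch (IOException)
        {
            Delayed.Add(Tuple.Create(DateTime.Now.AddSeconds(10), work));
        }
    }
}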

Related

C# Blocking collection data handoff time intermittent

Data handoff in blocking collection is taking too much time sometimes...
Example code:
Producer:
BlockingCollection<byte[]> collection = new BlockingCollection<byte[]>(5000);

while (condition)
{
    byte[] data = new byte[10240];
    // fill data here.. Read from external source
    collection.Add(data);
}
collection.CompleteAdding();
Consumer:
while (!collection.IsCompleted)
{
    byte[] data = collection.Take();
    // write data to disk..
}
Both producer and consumer run on different tasks. It runs perfectly, but sometimes adding an array to the collection takes around 50 milliseconds, which is a deal breaker; usually the handoff takes less than 1 millisecond. In theory the producer should not block when adding, as writing to disk happens on the separate consumer thread.
It's the boundedCapacity value you're passing to the constructor:
BlockingCollection<byte[]> collection =
    new BlockingCollection<byte[]>(5000 /* <--- boundedCapacity */);
You are initializing a blocking collection whose queue is limited to 5000 items. When there are 5000 items in the queue, any producer will get blocked until there's an empty slot again. That bound is what keeps the queue from growing without limit (see Little's Law). You'll need to analyse your system to find the optimal bound value, or you can leave the queue unbounded and write some unit tests to make sure it doesn't overflow.
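As an aside, consuming via GetConsumingEnumerable sidesteps the completion check entirely: the loop blocks while the queue is empty and ends once CompleteAdding has been called and the queue has drained. A sketch of that standard pattern, not the asker's exact code:

foreach (byte[] data in collection.GetConsumingEnumerable())
{
    // write data to disk..
}
// The loop exits automatically after CompleteAdding() once the queue is empty.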

Timer event fires before the job completes in a Windows service - c#.net

I have written a Windows service. Its job is to read files from one folder, send their content to a database, and move the processed files to another folder.
The service has a timer event whose interval is set to 10000, i.e. ten seconds.
When processing 100 - 1000 files it finishes within the ten seconds and produces the correct output. But when processing 6000 - 9000 files the service cannot finish the job within the interval, so the timer fires while the service is still in the middle of the job and interrupts it; in that scenario the job that is already running should be allowed to complete.
Kindly give some suggestions; it would be appreciated.
Different approaches that can work:
Have the method called by the timer safe for re-entrance and then not worry about it. This tends to be either very easy to do (the task done is inherently re-entrant) or pretty tricky (if it's not naturally re-entrant you have to consider the effects of multiple threads upon every single thing hit during the task).
Have a lock in the operation, so different threads from different timer events just wait on each other. Note that you must know there will not be several tasks in a row that take longer than the time interval, as otherwise you will have an ever-growing queue of threads waiting for their chance to run, and the amount of resources consumed just by waiting will grow with it.
Have the timer not set to have a recurring interval, but rather re-set it at the end of each task, so the next task will happen X seconds after the current one finishes.
Have a lock and obtain it only if you don't have to block. If you would have to block, then a current task is still running, and we just let this time slot go by to stay out of its way.
After all, there'll be another timer event along in 10 seconds:
private static readonly object LockObject = new object();

private static void TimerHandler(object state)
{
    if (!Monitor.TryEnter(LockObject))
        return; // last timer still running

    try
    {
        // do useful stuff here.
    }
    finally
    {
        Monitor.Exit(LockObject);
    }
}
Use a static boolean variable named something like IsProcessing. Set it to true when you start working on the files, and back to false when you finish. When the timer next fires, check the flag: if processing is still in progress, do nothing. A sketch of this idea follows.
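A plain bool checked and set in two separate steps is racy when timer callbacks overlap, so this sketch uses Interlocked.CompareExchange to test and set the flag atomically (the Interlocked variant is my addition, not part of the answer above):

private static int isProcessing; // 0 = idle, 1 = processing

private static void TimerHandler(object state)
{
    // Atomically flip 0 -> 1; if it was already 1, a run is still in progress.
    if (Interlocked.CompareExchange(ref isProcessing, 1, 0) != 0)
        return;

    try
    {
        // ... process the files ...
    }
    finally
    {
        Interlocked.Exchange(ref isProcessing, 0);
    }
}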

Are there any implementations of concurrent lock-free blocking queues?

I'm aware of blocking-queues and lock-free queues, a great example of those implementations being provided by Scott et al., but are there any implementations of a lock-free-blocking-queue?
In a lock-free-blocking-queue the dequeue requires no locking, but if there are no items in the queue it blocks the consumer. Are there any implementations of such a beast? I'd prefer C# implementations, but any implementation would technically work.
Update:
I think I end up with a race condition on line D14.1:
initialize(Q: pointer to queue_t)
    node = new_node()               // Allocate a free node
    node->next.ptr = NULL           // Make it the only node in the linked list
    Q->Head = Q->Tail = node        // Both Head and Tail point to it
    signal = new ManualResetEvent() // create a manual reset event

enqueue(Q: pointer to queue_t, value: data_type)
E1:     node = new_node()           // Allocate a new node from the free list
E2:     node->value = value         // Copy enqueued value into node
E3:     node->next.ptr = NULL       // Set next pointer of node to NULL
E4:     loop                        // Keep trying until Enqueue is done
E5:         tail = Q->Tail          // Read Tail.ptr and Tail.count together
E6:         next = tail.ptr->next   // Read next ptr and count fields together
E7:         if tail == Q->Tail      // Are tail and next consistent?
E8:             if next.ptr == NULL // Was Tail pointing to the last node?
E9:                 if CAS(&tail.ptr->next, next, <node, next.count+1>) // Try to link node at the end of the linked list
E10.1:                  signal.Set()    // Signal to the blocking dequeues
E10.2:                  break           // Enqueue is done. Exit loop
E11:                endif
E12:            else                // Tail was not pointing to the last node
E13:                CAS(&Q->Tail, tail, <next.ptr, tail.count+1>) // Try to swing Tail to the next node
E14:            endif
E15:        endif
E16:    endloop
E17:    CAS(&Q->Tail, tail, <node, tail.count+1>) // Enqueue is done. Try to swing Tail to the inserted node

dequeue(Q: pointer to queue_t, pvalue: pointer to data_type): boolean
D1:     loop                        // Keep trying until Dequeue is done
D2:         head = Q->Head          // Read Head
D3:         tail = Q->Tail          // Read Tail
D4:         next = head->next       // Read Head.ptr->next
D5:         if head == Q->Head      // Are head, tail, and next consistent?
D6:             if head.ptr == tail.ptr // Is queue empty or Tail falling behind?
D7:                 if next.ptr == NULL // Is queue empty?
D8.1:                   signal.WaitOne() // Block until an enqueue
D8.X:                   // remove the return --- return FALSE // Queue is empty, couldn't dequeue
D9:                 endif
D10:                CAS(&Q->Tail, tail, <next.ptr, tail.count+1>) // Tail is falling behind. Try to advance it
D11:            else                // No need to deal with Tail
                    // Read value before CAS, otherwise another dequeue might free the next node
D12:                *pvalue = next.ptr->value
D13:                if CAS(&Q->Head, head, <next.ptr, head.count+1>) // Try to swing Head to the next node
D14.1:                  if head.ptr == tail.ptr && next.ptr == NULL // Is queue empty? <--- POSSIBLE RACE CONDITION???
D14.2:                      signal.Reset()
D14.3:                  break       // Dequeue is done. Exit loop
D15:                endif
D16:            endif
D17:        endif
D18:    endloop
D19:    free(head.ptr)              // It is safe now to free the old dummy node
D20:    return TRUE                 // Queue was not empty, dequeue succeeded
EDIT:
SIMPLER:
I suggest you don't need a head and tail for your queue. Just have a head. If the head = NULL, the list is empty. Add items to head. Remove items from head. Simpler, fewer CAS ops.
HELPER:
I suggested in the comments that you need to think of a helper scheme to handle the race. In my version of what "lock free" means, it's OK to have rare race conditions as long as they don't cause problems. I like the extra performance versus having an idle thread sleep a couple of milliseconds too long.
Helper ideas. When a consumer grabs work it could check to see if there is a thread in a coma. When a producer adds work, it could look for threads in comas.
So track sleepers. Use a linked list of sleepers. When a thread decides there is no work, it marks itself as !awake and CAS's itself to the head of the sleeper list. When a signal is received to wake up, the thread marks itself as awake. The newly awakened thread then cleans up the sleeper list.
To clean up a concurrent singly linked list, you have to be careful: you can only CAS to the head. So while the head of the sleeper list is marked awake, you can CAS the head off. If the head is not awake, continue to scan the list and "lazy unlink" (I made that term up) the remaining awake items. Lazy unlink is simple... just set the next ptr of the previous item over the awake item. A concurrent scan will still make it to the end of the list even if it reaches items that are !awake, and subsequent scans see a shorter list.
Finally, any time you add work or pull off work, scan the sleeper list for !awake items. If a consumer notices work remains after grabbing some work (.next work != NULL), the consumer can scan the sleeper list and signal the first thread that is !awake. After a producer adds work, the producer can scan the sleeper list and do the same.
If you have a broadcast scenario and can't signal a single thread, then just keep a count of asleep threads. While that count is still > 0, a consumer noticing remaining work and a producer adding work would broadcast the signal to wake up.
In our environment, we have 1 thread per SMT, so the sleeper list can never be that large (well, unless I get my hands on one of those new 128-concurrent-thread machines!). We generate work items early in a transaction. In the first second we might generate 10,000 work items, and this production rapidly tapers off. Threads then work for a couple of seconds on those items, so we rarely have a thread on the idle pool.
YOU CAN STILL USE LOCKS
If you have only 1 thread and generate work rarely... this won't work for you. In that case the performance of mutexes is of no concern and you should just use them. Use a lock on the sleeper queue in this scenario. Think of lock-free as meaning "no locks where it counts".
PREVIOUS POST:
Are you saying:
There is a queue of work.
There are many consumer threads.
A consumer needs to pull off work and do it if there is any work
A consumer thread needs to sleep until there is work.
If you are, we do this using only atomic operations this way:
The queue of work is a linked list. There is also a linked list of sleeping threads.
To add work: CAS the head of the list to the new work. When work is added, we check to see if there are any threads on the sleeper list. If there are, before adding the work, we CAS a sleeper off the sleeper list, set its work = the new work, and then signal the sleeper to wake up. Then we add the work to the work queue.
To consume work: CAS the head of the list to head->next. If the head of the work list is NULL, we CAS the thread to a list of sleepers.
Once a thread has a work item, the thread must CAS the work item's state to WORK_INPROGRESS or some such. If that fails, it means the work is being performed by another, so the consumer thread goes back to search for work. If a thread wakes up and has a work item, it still has to CAS the state.
So if work is added, a sleeping consumer is always woken up and handed the work. pthread_kill() always wakes a thread at sigwait(), because even if the thread gets to sigwait after the signal, the signal is received. This solves the problem of a thread putting itself on the sleeper list but getting signaled before going to sleep. All that happens is the thread tries to own its ->work if there is one. Failing to own work, or not having work, sends the thread back to consume-start. If a thread fails to CAS itself onto the sleeper list, it means that either another thread beat it, or the producer pulled off a sleeper. For safety, we have the thread act as if it had just been woken up.
We get no race conditions doing this and have multiple producers and consumers. We also have been able to expand this to allow threads to sleep on individual work items as well.
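A minimal C# rendering of the CAS-on-head scheme described above (a Treiber-style list; the Work type is assumed, and the sleeper list is omitted):

using System.Threading;

class Work { /* assumed work-item type */ }

class Node
{
    public Work Item;
    public Node Next;
}

class LockFreeWorkList
{
    private Node head;

    // "CAS the head of the list to the new work."
    public void Push(Work item)
    {
        var node = new Node { Item = item };
        Node oldHead;
        do
        {
            oldHead = head;
            node.Next = oldHead;
        }
        while (Interlocked.CompareExchange(ref head, node, oldHead) != oldHead);
    }

    // "CAS the head of the list to head->next."
    public bool TryPop(out Work item)
    {
        Node oldHead;
        do
        {
            oldHead = head;
            if (oldHead == null)
            {
                item = null;
                return false; // empty: the caller would park on the sleeper list
            }
        }
        while (Interlocked.CompareExchange(ref head, oldHead.Next, oldHead) != oldHead);

        item = oldHead.Item;
        return true;
    }
}

Note this is LIFO, matching the post's "add to head, remove from head" simplification rather than a FIFO queue.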
.NET parallel extensions (built in, for .NET 4.0+):
http://blogs.msdn.com/b/pfxteam/archive/2010/01/26/9953725.aspx
Someone from StackOverflow's implementation:
Lock free constructs in .net
Response to clarification in comments:
If the blocking when empty is not busy (waits for signal), then it seems like you need a counting-semaphore to wait on.
An alternative approach could be using a regular queue together with atomic compare-and-exchange or a spin lock to prevent simultaneous access.
Then, if a consumer thread tries to enter when the queue is empty, it locks a binary semaphore;
if a producer thread adds to the empty queue, it unlocks the binary semaphore to awaken all sleeping consumers (and return them to the spin lock, so that multiple threads can only proceed if there are enough items in the queue for them).
E.g. (pseudocode):

/// Non-blocking lock (busy wait)
void SpinLock()
{
    while (CompareAndExchange(myIntegerLock, -1, 0) != 0)
    {
        // wait
    }
}

void UnSpinLock()
{
    Exchange(myIntegerLock, 0);
}

void AddItem(item)
{
    // Use CAS for synchronization
    SpinLock(); // Non-blocking lock (busy wait)

    queue.Push(item);

    // Unblock any blocked consumers
    if (queue.Count() == 1)
    {
        semaphore.Increase();
    }

    // End of CAS synchronization block
    UnSpinLock();
}

Item RemoveItem()
{
    // Use CAS for synchronization
    SpinLock(); // Non-blocking lock (busy wait)

    // If empty then block
    if (queue.Count() == 0)
    {
        // End of CAS synchronization block
        UnSpinLock();

        // Block until queue is not empty
        semaphore.Decrease();

        // Try again (may fail again if there is more than one consumer)
        return RemoveItem();
    }

    result = queue.Pop();

    // End of CAS synchronization block
    UnSpinLock();

    return result;
}
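For reference, a rough C# translation of that pseudocode, using Interlocked.CompareExchange for the spin lock and SemaphoreSlim for the blocking semaphore (a sketch under those assumptions, not production code):

using System.Collections.Generic;
using System.Threading;

class SpinBlockingQueue<T>
{
    private readonly Queue<T> queue = new Queue<T>();
    private readonly SemaphoreSlim itemsAvailable = new SemaphoreSlim(0);
    private int myIntegerLock; // 0 = free, -1 = held

    private void SpinLock()
    {
        // Busy-wait until we swap 0 -> -1.
        while (Interlocked.CompareExchange(ref myIntegerLock, -1, 0) != 0) { }
    }

    private void SpinUnlock()
    {
        Interlocked.Exchange(ref myIntegerLock, 0);
    }

    public void AddItem(T item)
    {
        SpinLock();
        queue.Enqueue(item);
        SpinUnlock();

        itemsAvailable.Release(); // wake one blocked consumer
    }

    public T RemoveItem()
    {
        while (true)
        {
            itemsAvailable.Wait(); // block until an item has been enqueued

            SpinLock();
            if (queue.Count > 0)
            {
                T result = queue.Dequeue();
                SpinUnlock();
                return result;
            }
            SpinUnlock();
            // Defensive retry; with one Release per enqueue this is rare.
        }
    }
}

Unlike the pseudocode, this releases the semaphore once per item rather than only when the queue transitions from empty, which removes the need for the recursive retry.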

Output individual data from structure based on timer

I'm creating a "man-in-the-middle" style application that applies network latency to transmissions; not for malicious use, I should declare.
However, I'm having difficulty with the correct output mechanism for the data structure (LinkedList<string> buffer = new LinkedList<string>();).
What should happen:
Read data into structure from clientA.
if (buffer.First != null && buffer.Last != null)
{
    buffer.AddAfter(buffer.Last, ServerRead.ReadLine());
}
else
{
    buffer.AddFirst(ServerRead.ReadLine());
}
Using an individual or overall timer to track when to release the data to ClientB. (adjustable timer to adjust latency)
Timer on item in structure triggers, thus releasing the packet to clientB.
Clean up free data structure node
if (buffer.First != null)
{
    clientWrite.WriteLine(buffer.First.Value);
    clientWrite.Flush();
    buffer.RemoveFirst();
}
I have been trying to use System.Windows.Forms.Timer to create a global timer that triggers a thread which handles the data output to clientB, but I'm finding this technique too slow, even with myTimer.Interval = 1. It also creates a concurrency problem between cleaning up the list and adding to it; my temporary solution is to lock the resource, but I suspect this adds to the slow output performance.
Question:
I need some ideas on a solution that can store data into a data structure and apply a timer (like an egg timer effect) on the data stored and when that timer runs out it will be sent on its way to the other clients.
Regards, House.
The linked list will work, and it's unlikely that locking it (if done properly) is the cause of poor performance. Still, you'd probably be much better off using ConcurrentQueue<T>. It's thread-safe, so you don't have to do any explicit locking.
I would suggest using System.Threading.Timer rather than the Windows Forms timer. Note, though, that you're still going to be limited to about 15 ms resolution. That is, even with a timer interval of 1, your effective delay times will be in the range of 15 to 25 ms rather than 1 ms. It's just the way the timers are implemented.
Also, since you want to delay each item for a specified period of time (which I assume is constant), you need some notion of "current time." I don't recommend using DateTime.Now or any of its variants, because the time can change. Rather, I use Stopwatch to get an application-specific time.
Also, you'll need some way to keep track of release times for the items: a class to hold the item and the time it will be sent. Something like:
class BufferItem
{
    public string Data { get; private set; }
    public TimeSpan ReleaseTime { get; private set; }

    public BufferItem(string d, TimeSpan ts)
    {
        Data = d;
        ReleaseTime = ts;
    }
}
Okay. Let's put it all together.
// the application clock
Stopwatch AppTime = Stopwatch.StartNew();

// Amount of time to delay an item
TimeSpan DelayTime = TimeSpan.FromSeconds(1.0);

ConcurrentQueue<BufferItem> ItemQueue = new ConcurrentQueue<BufferItem>();

// Timer will check items for release every 15 ms.
System.Threading.Timer ReleaseTimer = new System.Threading.Timer(CheckRelease, null, 15, 15);
Receiving an item:
// When an item is received:
// Compute release time and add item to buffer.
var item = new BufferItem(data, AppTime.Elapsed + DelayTime);
ItemQueue.Enqueue(item);
The timer proc.
void CheckRelease(object state)
{
    BufferItem item;
    // Dequeue every item whose release time has passed.
    while (ItemQueue.TryPeek(out item) && item.ReleaseTime <= AppTime.Elapsed)
    {
        if (ItemQueue.TryDequeue(out item))
        {
            // send the item
        }
    }
}
That should perform well and you shouldn't have any concurrency issues.
If you don't like that 15 ms timer ticking all the time even when there aren't any items, you could make the timer a one-shot and have the CheckRelease method re-initialize it with the next release time after dequeuing items. Of course, you'll also have to make the receive code initialize it the first time, or when there aren't any items in the queue. You'll need a lock to synchronize access when updating the timer.
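A sketch of that one-shot variant, using Timer.Change to arm the timer for the next due item (the timerLock field and ScheduleNextCheck helper are my names, not the answer's):

private readonly object timerLock = new object();

void ScheduleNextCheck()
{
    lock (timerLock)
    {
        BufferItem item;
        if (ItemQueue.TryPeek(out item))
        {
            // Fire when the head item is due (immediately if overdue).
            TimeSpan due = item.ReleaseTime - AppTime.Elapsed;
            int ms = Math.Max(0, (int)due.TotalMilliseconds);
            ReleaseTimer.Change(ms, Timeout.Infinite); // one-shot
        }
        // If the queue is empty, leave the timer unarmed; the receive
        // code calls ScheduleNextCheck() after enqueuing an item.
    }
}

void CheckRelease(object state)
{
    BufferItem item;
    while (ItemQueue.TryPeek(out item) && item.ReleaseTime <= AppTime.Elapsed)
    {
        if (ItemQueue.TryDequeue(out item))
        {
            // send the item
        }
    }
    ScheduleNextCheck();
}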

C# Threading and Queues

This isn't about the different methods I could or should be using to utilize the queues in the best manner; rather, it's about something I have seen happening that makes no sense to me.
void Runner()
{
    // member variable
    queue = Queue.Synchronized(new Queue());

    while (true)
    {
        if (0 < queue.Count)
        {
            queue.Dequeue();
        }
    }
}
This is run in a single thread:
var t = new Thread(Runner);
t.IsBackground = true;
t.Start();
Other events are enqueuing elsewhere. What I've seen happen is that, over a period of time, the Dequeue will actually throw an InvalidOperationException: queue empty. This should be impossible, seeing as how the Count check guarantees there is something there, and I'm positive that nothing else is dequeuing.
The question(s):
Is it possible that the Enqueue actually increases the count before the item is fully on the queue (whatever that means...)?
Is it possible that the thread is somehow restarting (expiring, reseting...) at the Dequeue statement, but immediately after it already removed an item?
Edit (clarification):
These code pieces are part of a Wrapper class that implements the background helper thread. The Dequeue here is the only Dequeue, and all Enqueue/Dequeue are on the Synchronized member variable (queue).
Using Reflector, you can see that no, the count does not get increased until after the item is added.
As Ben points out, it does seem as if you have multiple callers of Dequeue.
You say you are positive that nothing else is calling dequeue. Is that because you only have the one thread calling dequeue? Is dequeue called anywhere else at all?
EDIT:
I wrote a little sample code, but could not get the problem to reproduce. It just kept running and running without any exceptions.
How long was it running before you got errors? Maybe you can share a bit more of the code.
class Program
{
    static Queue q = Queue.Synchronized(new Queue());
    static bool running = true;

    static void Main()
    {
        Thread producer1 = new Thread(() =>
        {
            while (running)
            {
                q.Enqueue(Guid.NewGuid());
                Thread.Sleep(100);
            }
        });

        Thread producer2 = new Thread(() =>
        {
            while (running)
            {
                q.Enqueue(Guid.NewGuid());
                Thread.Sleep(25);
            }
        });

        Thread consumer = new Thread(() =>
        {
            while (running)
            {
                if (q.Count > 0)
                {
                    Guid g = (Guid)q.Dequeue();
                    Console.Write(g.ToString() + " ");
                }
                else
                {
                    Console.Write(" . ");
                }

                Thread.Sleep(1);
            }
        });

        consumer.IsBackground = true;
        consumer.Start();
        producer1.Start();
        producer2.Start();

        Console.ReadLine();
        running = false;
    }
}
Here is what I think the problematic sequence is:
(0 < queue.Count) evaluates to true, the queue is not empty.
This thread gets preempted and another thread runs.
The other thread removes an item from the queue, emptying it.
This thread resumes execution, but is now within the if block, and attempts to dequeue an empty queue.
However, you say nothing else is dequeuing...
Try outputting the count inside the if block. If you see the count jump downwards, someone else is dequeuing.
Here's a possible answer from the MSDN page on this topic:
Enumerating through a collection is intrinsically not a thread-safe procedure. Even when a collection is synchronized, other threads can still modify the collection, which causes the enumerator to throw an exception. To guarantee thread safety during enumeration, you can either lock the collection during the entire enumeration or catch the exceptions resulting from changes made by other threads.
My guess is that you're correct - at some point, there's a race condition happening, and you end up dequeuing something that isn't there.
A Mutex or a Monitor lock is probably appropriate here.
Good luck!
Are the other areas that are "Enqueuing" data also using the same synchronized queue object? In order for the Queue.Synchronized to be thread-safe, all Enqueue and Dequeue operations must use the same synchronized queue object.
From MSDN:
To guarantee the thread safety of the Queue, all operations must be done through this wrapper only.
Edited:
If you are looping over many items that involve heavy computation, or if you are using a long-running thread loop (communications, etc.), you should consider having a wait in the loop, such as System.Threading.Thread.Sleep, System.Threading.WaitHandle.WaitOne, System.Threading.WaitHandle.WaitAll, or System.Threading.WaitHandle.WaitAny; otherwise it might kill system performance.
question 1: If you're using a synchronized queue, then no, you're safe! But you'll need to use the synchronized instance on both sides, the producer and the consumer.
question 2: Terminating your worker thread when there is no work to do is a simple job. However, either way you need a monitoring thread, or you have the queue start a background worker thread whenever the queue has something to do. The latter sounds more like the ActiveObject pattern than a simple queue (whose single responsibility says it should only do queueing).
In addition, I'd go for a blocking queue instead of your code above. The way your code works, it consumes CPU even when there is no work to do. A blocking queue lets your worker thread sleep whenever there is nothing to do; you can have multiple sleeping threads without burning CPU.
C# doesn't come with a blocking queue implementation, but there are many out there; see this example and this one. A sketch of the idea follows below.
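Here's a minimal blocking queue built on Monitor.Wait/Pulse (the classic pattern, written from scratch here rather than taken from either linked example):

using System.Collections.Generic;
using System.Threading;

class SimpleBlockingQueue<T>
{
    private readonly Queue<T> queue = new Queue<T>();
    private readonly object sync = new object();

    public void Enqueue(T item)
    {
        lock (sync)
        {
            queue.Enqueue(item);
            Monitor.Pulse(sync); // wake one waiting consumer
        }
    }

    public T Dequeue()
    {
        lock (sync)
        {
            // Sleep (releasing the lock) until an item arrives.
            while (queue.Count == 0)
                Monitor.Wait(sync);
            return queue.Dequeue();
        }
    }
}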
Another option for making thread-safe use of queues is the ConcurrentQueue<T> class, introduced in .NET 4 (this question dates from 2009). It may help you avoid writing your own synchronization code, or at least make it much simpler.
From .NET Framework 4.6 onward, ConcurrentQueue<T> also implements the interface IReadOnlyCollection<T>.
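Applied to the original Runner loop, ConcurrentQueue<T>.TryDequeue folds the count check and the removal into one atomic step, so the empty-queue exception cannot occur (a sketch; the Sleep is just to avoid spinning flat out):

using System.Collections.Concurrent;
using System.Threading;

ConcurrentQueue<object> queue = new ConcurrentQueue<object>();

void Runner()
{
    while (true)
    {
        object item;
        if (queue.TryDequeue(out item))
        {
            // process item
        }
        else
        {
            Thread.Sleep(1); // nothing to do; yield briefly
        }
    }
}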
