There is a decimal parameter. Suppose it is equal to 100. There is a Task that reduces it by 0.1 every 100 ms. As soon as the parameter becomes equal to 1, the task should end and the parameter should not decrease any more. It works without problems if there is only one Task. But if there are 2, 3, 100... then the parameter will eventually become less than 1. I tried to use a CancellationToken to end all tasks, but the result is still the same. My code:
class Program
{
    static decimal param = 100;
    static CancellationTokenSource cancelTokenSource = new CancellationTokenSource();
    static CancellationToken token;

    static void Main(string[] args)
    {
        int tasksCount = 16;
        token = cancelTokenSource.Token;
        Console.WriteLine("Start Param = {0}", param);
        Console.WriteLine("Tasks Count = {0}", tasksCount);
        var tasksList = new List<Task>();
        for (var i = 0; i < tasksCount; i++)
        {
            Task task = new Task(Decrementation, token);
            tasksList.Add(task);
        }
        tasksList.ForEach(x => x.Start());
        Task.WaitAny(tasksList.ToArray());
        Console.WriteLine("Result = {0}", param);
        Console.Read();
    }

    private static void Decrementation()
    {
        while (true)
        {
            if (token.IsCancellationRequested)
            {
                break;
            }
            if (CanTakeMore())
            {
                Task.Delay(100);
                param = param - 0.1m;
            }
            else
            {
                cancelTokenSource.Cancel();
                return;
            }
        }
    }

    private static bool CanTakeMore()
    {
        if (param > 1)
        {
            return true;
        }
        else
        {
            return false;
        }
    }
}
The output varies from run to run, but it always ends up less than 1. How can I fix this?
Your tasks are checking and modifying the same shared value in parallel, to whatever degree your CPU architecture and operating system allow.
Any number of your tasks can encounter a CanTakeMore() result of true "at the same time" (when they call it with the shared value being 1.1m), and then all of them that received true from that call will proceed to decrease the shared value.
This problem can usually be avoided by using a lock statement:
private static object _lockObj = new object();

private static void Decrementation()
{
    while (true)
    {
        if (token.IsCancellationRequested)
        {
            break;
        }
        lock (_lockObj)
        {
            if (CanTakeMore())
            {
                Task.Delay(100); // Note: this needs an `await`, but the method we're in is NOT `async`...!
                param = param - 0.1m;
            }
            else
            {
                cancelTokenSource.Cancel();
                return;
            }
        }
    }
}
Here is a working .NET Fiddle: https://dotnetfiddle.net/BA6tHX
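If you also want the delay to actually pause each task, one option is to make the method async and await the delay outside the lock (you cannot await inside a lock statement). A minimal sketch of that variant, assuming the tasks are then started with Task.Run instead of the Task constructor:

private static async Task DecrementationAsync()
{
    while (!token.IsCancellationRequested)
    {
        lock (_lockObj)
        {
            if (CanTakeMore())
            {
                param = param - 0.1m;
            }
            else
            {
                cancelTokenSource.Cancel();
                return;
            }
        }
        await Task.Delay(100); // awaited outside the lock
    }
}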
While your code has other issues (such as the fact that you must await Task.Delay), the fundamental problem is that you must either lock your entire read/write operation, or modify your implementation to enable atomic reads and writes.
One option is to take your incoming decimal and convert it to a 32-bit integer, multiplying the value by 10 for each decimal place of precision you need. In this case param becomes 100 * 10 = 1000, since you have 1 place of precision.
This enables you to use Thread.VolatileRead in conjunction with Interlocked.CompareExchange to produce the behavior you are looking for (working example).
void Main()
{
    int tasksCount = 16;
    token = cancelTokenSource.Token;
    Console.WriteLine("Start Param = {0}", param);
    Console.WriteLine("Tasks Count = {0}", tasksCount);
    var tasksList = new List<Task>();
    for (var i = 0; i < tasksCount; i++)
    {
        Task task = new Task(Decrementation, token);
        tasksList.Add(task);
    }
    tasksList.ForEach(x => x.Start());
    Task.WaitAny(tasksList.ToArray());
    Console.WriteLine("Result = {0}", param);
    Console.Read();
}
static int param = 1000; // 1000 represents 100.0, scaled by 10
static CancellationTokenSource cancelTokenSource = new CancellationTokenSource();
static CancellationToken token;

private static void Decrementation()
{
    while (true)
    {
        if (token.IsCancellationRequested)
        {
            break;
        }
        int temp = Thread.VolatileRead(ref param);
        if (temp == 10) // 10 represents 1.0 in the scaled representation
        {
            cancelTokenSource.Cancel();
            return;
        }
        int updatedValue = temp - 1;
        if (Interlocked.CompareExchange(ref param, updatedValue, temp) == temp)
        {
            // the update was successful. Delay (or do additional work)
            // this still does nothing
            // You might want to make your method async or switch to a timer
            Task.Delay(100);
        }
    }
}
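Note that param now holds a scaled integer. To report the result as a decimal again, divide by the scaling factor (a small illustrative addition, not part of the original answer):

decimal result = param / 10m; // 10 is the scaling factor: one decimal place
Console.WriteLine("Result = {0}", result); // prints 1.0 when the loop stops at 10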
The advantage of Thread.VolatileRead + Interlocked.CompareExchange over a straight lock is that if there is any significant work being done in the lock, this approach will perform significantly better. When benchmarked against the following reasonable locking implementation using decimal subtraction:
private static object _locker = new object();
private static decimal param2 = 100.0m;

private static void DecrementationLock()
{
    while (true)
    {
        if (token.IsCancellationRequested)
        {
            break;
        }
        lock (_locker)
        {
            if (param2 > 1)
            {
                param2 = param2 - 0.1m;
                Task.Delay(100);
            }
            else
            {
                cancelTokenSource.Cancel();
                return;
            }
        }
    }
}
even though the Task.Delay is not awaited in either case, the lock code is over 2.5x slower. That said, in cases where no work is being done, there is essentially no execution time difference between the two approaches.
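For reference, a rough way to reproduce such a measurement yourself (an illustrative sketch, not the benchmark that produced the numbers above; it assumes param, token, and cancelTokenSource are re-initialized before each run):

// Measures how long it takes until the first task observes the floor and cancels.
// Requires System.Linq and System.Diagnostics.
var sw = Stopwatch.StartNew();
Task[] tasks = Enumerable.Range(0, 16)
    .Select(_ => Task.Run(Decrementation)) // swap in DecrementationLock to compare
    .ToArray();
Task.WaitAny(tasks);
sw.Stop();
Console.WriteLine("Elapsed: {0} ms", sw.ElapsedMilliseconds);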
Related
I have some code that runs thousands of URLs through a third-party library. Occasionally a method in the library hangs, which takes up a thread. After a while all threads are taken up by processes doing nothing, and everything grinds to a halt.
I am using a SemaphoreSlim to control adding new threads so I can have an optimal number of tasks running. I need a way to identify tasks that have been running too long, kill them, and also release a slot in the SemaphoreSlim so a new task can be created.
I am struggling with the approach here, so I made some test code that imitates what I am doing. It creates tasks that have a 10% chance of hanging, so very quickly all threads have hung.
How should I be checking for these and killing them off?
Here is the code:
class Program
{
    public static SemaphoreSlim semaphore;
    public static List<Task> taskList;

    static void Main(string[] args)
    {
        List<string> urlList = new List<string>();
        Console.WriteLine("Generating list");
        for (int i = 0; i < 1000; i++)
        {
            //adding random strings to simulate a large list of URLs to process
            urlList.Add(Path.GetRandomFileName());
        }
        Console.WriteLine("Queueing tasks");
        semaphore = new SemaphoreSlim(10, 10);
        Task.Run(() => QueueTasks(urlList));
        Console.ReadLine();
    }

    static void QueueTasks(List<string> urlList)
    {
        taskList = new List<Task>();
        foreach (var url in urlList)
        {
            Console.WriteLine("{0} tasks can enter the semaphore.",
                semaphore.CurrentCount);
            semaphore.Wait();
            taskList.Add(DoTheThing(url));
        }
    }

    static async Task DoTheThing(string url)
    {
        Random rand = new Random();
        // simulate the IO process
        await Task.Delay(rand.Next(2000, 10000));
        // add a 10% chance that the thread will hang simulating what happens occasionally with http request
        int chance = rand.Next(1, 100);
        if (chance <= 10)
        {
            while (true)
            {
                await Task.Delay(1000000);
            }
        }
        semaphore.Release();
        Console.WriteLine(url);
    }
}
As people have already pointed out, aborting threads is bad in general, and there is no guaranteed way of doing it in C#. Using a separate process to do the work and then killing it is a slightly better idea than attempting Thread.Abort, but still not the best way to go. Ideally, you want co-operative threads/processes, which use IPC to decide when to bail out themselves. That way the cleanup is done properly.
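For example, a minimal sketch of the kill-a-process variant (WorkerApp.exe is a hypothetical worker executable I am inventing for illustration; it would do the risky library call):

using System.Diagnostics;

string url = "http://example.com"; // stand-in for the work item
var psi = new ProcessStartInfo("WorkerApp.exe", url); // hypothetical worker process
using (var proc = Process.Start(psi))
{
    if (!proc.WaitForExit(10000)) // 10-second timeout
    {
        // Unlike Thread.Abort, killing a process lets the OS reclaim all its resources.
        proc.Kill();
    }
}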
With all that said, you can use code like the below to do what you intend. I have written it assuming your task will be done in a thread; with slight changes, you can use the same logic to do your task in a process.
The code is by no means bullet-proof and is meant to be illustrative. The concurrent code is not really well tested: locks are held for longer than needed, and in some places I am not locking at all (like the Log function).
class TaskInfo {
    public Thread Task;
    public DateTime StartTime;

    public TaskInfo(ParameterizedThreadStart startInfo, object startArg) {
        Task = new Thread(startInfo);
        Task.Start(startArg);
        StartTime = DateTime.Now;
    }
}

class Program {
    const int MAX_THREADS = 1;
    const int TASK_TIMEOUT = 6; // in seconds
    const int CLEANUP_INTERVAL = TASK_TIMEOUT; // in seconds

    public static SemaphoreSlim semaphore;
    public static List<TaskInfo> TaskList;
    public static object TaskListLock = new object();
    public static Timer CleanupTimer;

    static void Main(string[] args) {
        List<string> urlList = new List<string>();
        Log("Generating list");
        for (int i = 0; i < 2; i++) {
            //adding random strings to simulate a large list of URLs to process
            urlList.Add(Path.GetRandomFileName());
        }
        Log("Queueing tasks");
        semaphore = new SemaphoreSlim(MAX_THREADS, MAX_THREADS);
        Task.Run(() => QueueTasks(urlList));
        CleanupTimer = new Timer(CleanupTasks, null, CLEANUP_INTERVAL * 1000, CLEANUP_INTERVAL * 1000);
        Console.ReadLine();
    }

    // TODO: Guard against re-entrancy
    static void CleanupTasks(object state) {
        Log("CleanupTasks started");
        lock (TaskListLock) {
            var now = DateTime.Now;
            int n = TaskList.Count;
            for (int i = n - 1; i >= 0; --i) {
                var task = TaskList[i];
                Log($"Checking task with ID {task.Task.ManagedThreadId}");
                // kill processes running for longer than anticipated
                if (task.Task.IsAlive && now.Subtract(task.StartTime).TotalSeconds >= TASK_TIMEOUT) {
                    Log("Cleaning up hung task");
                    task.Task.Abort();
                }
                // remove task if it is not alive
                if (!task.Task.IsAlive) {
                    Log("Removing dead task from list");
                    TaskList.RemoveAt(i);
                    continue;
                }
            }
            if (TaskList.Count == 0) {
                Log("Disposing cleanup thread");
                CleanupTimer.Dispose();
            }
        }
        Log("CleanupTasks done");
    }

    static void QueueTasks(List<string> urlList) {
        TaskList = new List<TaskInfo>();
        foreach (var url in urlList) {
            Log($"Trying to schedule url = {url}");
            semaphore.Wait();
            Log("Semaphore acquired");
            ParameterizedThreadStart taskRoutine = obj => {
                try {
                    DoTheThing((string)obj);
                } finally {
                    Log("Releasing semaphore");
                    semaphore.Release();
                }
            };
            var task = new TaskInfo(taskRoutine, url);
            lock (TaskListLock)
                TaskList.Add(task);
        }
        Log("All tasks queued");
    }

    // simulate all processes get hung
    static void DoTheThing(string url) {
        while (true)
            Thread.Sleep(5000);
    }

    static void Log(string msg) {
        Console.WriteLine("{0:HH:mm:ss.fff} Thread {1,2} {2}", DateTime.Now, Thread.CurrentThread.ManagedThreadId.ToString(), msg);
    }
}
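If the co-operative route is viable for you, a simpler timeout pattern avoids Thread.Abort entirely: race each task against Task.Delay and abandon (rather than kill) the ones that overrun, releasing the semaphore either way. A minimal sketch, assuming the question's async DoTheThing is changed so that it no longer releases the semaphore itself; the hung task still leaks, but the pipeline keeps moving:

static async Task RunWithTimeout(string url, TimeSpan timeout)
{
    await semaphore.WaitAsync();
    try
    {
        Task work = DoTheThing(url); // the possibly-hanging operation
        Task winner = await Task.WhenAny(work, Task.Delay(timeout));
        if (winner != work)
            Console.WriteLine("Abandoning hung task for {0}", url);
    }
    finally
    {
        semaphore.Release(); // always free the slot, even when the work overran
    }
}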
I would like to write a method which accepts several parameters, including an action and a retry amount, and invokes the action.
So I have this code:
public static IEnumerable<Task> RunWithRetries<T>(List<T> source, int threads, Func<T, Task<bool>> action, int retries, string method)
{
    object lockObj = new object();
    int index = 0;
    return new Action(async () =>
    {
        while (true)
        {
            T item;
            lock (lockObj)
            {
                if (index < source.Count)
                {
                    item = source[index];
                    index++;
                }
                else
                    break;
            }
            int retry = retries;
            while (retry > 0)
            {
                try
                {
                    bool res = await action(item);
                    if (res)
                        retry = -1;
                    else
                        //sleep if not success..
                        Thread.Sleep(200);
                }
                catch (Exception e)
                {
                    LoggerAgent.LogException(e, method);
                }
                finally
                {
                    retry--;
                }
            }
        }
    }).RunParallel(threads);
}
RunParallel is an extension method for Action; it looks like this:
public static IEnumerable<Task> RunParallel(this Action action, int amount)
{
    List<Task> tasks = new List<Task>();
    for (int i = 0; i < amount; i++)
    {
        Task task = Task.Factory.StartNew(action);
        tasks.Add(task);
    }
    return tasks;
}
Now, the issue: the threads just disappear or collapse without waiting for the action to finish.
I wrote this example code:
private static async Task ex()
{
    List<int> ints = new List<int>();
    for (int i = 0; i < 1000; i++)
    {
        ints.Add(i);
    }
    var tasks = RetryComponent.RunWithRetries(ints, 100, async (num) =>
    {
        try
        {
            List<string> test = await fetchSmthFromDb();
            Console.WriteLine("#" + num + " " + test[0]);
            return test[0] == "test";
        }
        catch (Exception e)
        {
            Console.WriteLine(e.StackTrace);
            return false;
        }
    }, 5, "test");
    await Task.WhenAll(tasks);
}
The fetchSmthFromDb is a simple method returning Task<List<string>>, which fetches something from the db and works perfectly fine when invoked outside of this example.
Whenever the List<string> test = await fetchSmthFromDb(); row is invoked, the thread seems to close: the Console.WriteLine("#" + num + " " + test[0]); line is never reached, and when debugging the breakpoint is never hit.
The Final Working Code
private static async Task DoWithRetries(Func<Task> action, int retryCount, string method)
{
    while (true)
    {
        try
        {
            await action();
            break;
        }
        catch (Exception e)
        {
            LoggerAgent.LogException(e, method);
        }
        if (retryCount <= 0)
            break;
        retryCount--;
        await Task.Delay(200);
    }
}
public static async Task RunWithRetries<T>(List<T> source, int threads, Func<T, Task<bool>> action, int retries, string method)
{
    Func<T, Task> newAction = async (item) =>
    {
        await DoWithRetries(async () =>
        {
            await action(item);
        }, retries, method);
    };
    await source.ParallelForEachAsync(newAction, threads);
}
The problem is in this line:
return new Action(async () => ...
You start an async operation with the async lambda, but don't return a task to await on. I.e. it runs on worker threads, but you'll never find out when it's done, and your program terminates before the async operation is complete; that's why you don't see any output.
It needs to be:
return new Func<Task>(async () => ...
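To see why this matters: an async lambda assigned to Action compiles to an async void method, so there is nothing to await (a small illustrative sketch):

Action fireAndForget = async () => await Task.Delay(100); // async void: no Task to observe
Func<Task> awaitable = async () => await Task.Delay(100); // async Task: returns a Task
await awaitable(); // inside an async method, this one can be awaited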
UPDATE
First, you need to split responsibilities of methods, so you don't mix retry policy (which should not be hardcoded to a check of a boolean result) with running tasks in parallel.
Then, as previously mentioned, you run your while (true) loop 100 times instead of doing things in parallel.
As #MachineLearning pointed out, use Task.Delay instead of Thread.Sleep.
Overall, your solution looks like this:
using System.Collections.Async;
static async Task DoWithRetries(Func<Task> action, int retryCount, string method)
{
    while (true)
    {
        try
        {
            await action();
            break;
        }
        catch (Exception e)
        {
            LoggerAgent.LogException(e, method);
        }
        if (retryCount <= 0)
            break;
        retryCount--;
        await Task.Delay(millisecondsDelay: 200);
    }
}
static async Task Example()
{
    List<int> ints = new List<int>();
    for (int i = 0; i < 1000; i++)
        ints.Add(i);

    Func<int, Task> actionOnItem =
        async item =>
        {
            await DoWithRetries(async () =>
            {
                List<string> test = await fetchSmthFromDb();
                Console.WriteLine("#" + item + " " + test[0]);
                if (test[0] != "test")
                    throw new InvalidOperationException("unexpected result"); // will be re-tried
            },
            retryCount: 5,
            method: "test");
        };

    await ints.ParallelForEachAsync(actionOnItem, maxDegreeOfParalellism: 100);
}
You need to use the AsyncEnumerator NuGet Package in order to use the ParallelForEachAsync extension method from the System.Collections.Async namespace.
Besides the final complete reengineering, I think it's very important to underline what was really wrong with the original code.
0) First of all, as #Serge Semenov immediately pointed out, Action has to be replaced with
Func<Task>
But there are still two other essential changes.
1) With an async delegate as argument, it is necessary to use the more recent Task.Run instead of the older Task.Factory.StartNew pattern (or otherwise you have to add Unwrap() explicitly); see the sketch after the link below.
2) Moreover, the ex() method can't be async: Task.WhenAll must be waited on with Wait() and without await.
At that point, even though there are logical errors that need reengineering, from a pure technical standpoint it does work and the output is produced.
A test is available online: http://rextester.com/HMMI93124
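To illustrate point 1), both of the following produce a task that completes only when the async delegate finishes (an illustrative sketch, not taken from the rextester sample):

Func<Task> work = async () => await Task.Delay(100);

Task t1 = Task.Run(work); // Task.Run unwraps the inner task automatically
Task t2 = Task.Factory.StartNew(work).Unwrap(); // StartNew returns Task<Task>, so Unwrap is required
t1.Wait();
t2.Wait();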
I have this "simple" test code... (Don't bother the strange use of the Class methods...)
I am trying to grasp the Task<> intricacies... I think I have a little understanding of Task<>.Start()/Task<>.Result pattern (maybe as it resembles more the 'old' Thread.Start()?) but as soon as it seems to me to grasp something (and so I throw in the await keyword)... then all entangles again :-(
Why my code returns immediately after the first task completes? Why it doesn't wait on the Task.WhenAll()?
static BigInteger Factorial(BigInteger factor)
{
    BigInteger factorial = 1;
    for (BigInteger i = 1; i <= factor; i++)
    {
        factorial *= i;
    }
    return factorial;
}

private class ChancesToWin
{
    private int _n, _r;

    public ChancesToWin(int n, int r)
    {
        _n = n;
        _r = r;
    }

    private Task<BigInteger> CalculateFactAsync(int value)
    {
        return Task.Factory.StartNew<BigInteger>(() => Factorial(value));
    }

    public async Task<BigInteger> getFactN()
    {
        BigInteger result = await CalculateFactAsync(_n);
        return result;
    }

    public async Task<BigInteger> getFactN_R()
    {
        BigInteger result = await CalculateFactAsync(_n - _r);
        return result;
    }

    public async Task<BigInteger> getFactR()
    {
        BigInteger result = await CalculateFactAsync(_r);
        return result;
    }
}

private async static void TaskBasedChancesToWin_UseClass()
{
    int n = 69000;
    int r = 600;
    List<Task<BigInteger>> tasks = new List<Task<BigInteger>>();
    ChancesToWin ctw = new ChancesToWin(n, r);
    tasks.Add(ctw.getFactN());
    tasks.Add(ctw.getFactN_R());
    tasks.Add(ctw.getFactR());
    // The getFactR() returns first of the other two tasks... and the code exit!
    BigInteger[] results = await Task.WhenAll(tasks);
    // I don't get here !!!!
    BigInteger chances = results[0] / results[1] * results[2];
    //Debug.WriteLine(chances);
}

static void Main(string[] args)
{
    TaskBasedChancesToWin_UseClass();
}
Async methods run synchronously until the first await when they return control to the calling method, usually returning a task representing the rest of the asynchronous operation. TaskBasedChancesToWin_UseClass doesn't return a task so the caller can't wait for it to complete. That's why you shouldn't use async void outside of event handlers.
Since Main doesn't wait for the operation your application ends before the operation had a chance to complete.
You would usually wait with await, but since Main can't be an async method you can block synchronously with Wait on the task returned from TaskBasedChancesToWin_UseClass:
async static Task TaskBasedChancesToWin_UseClass()
{
    // ...
}

static void Main()
{
    TaskBasedChancesToWin_UseClass().Wait();
}
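As a side note, on C# 7.1 or later, Main itself can be declared async, which removes the need for Wait():

static async Task Main()
{
    await TaskBasedChancesToWin_UseClass();
}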
In my producer-consumer scenario, I have multiple consumers, and each of the consumers sends an action to external hardware, which may take some time. My pipeline looks somewhat like this:
BatchBlock --> TransformBlock --> BufferBlock --> (Several) ActionBlocks
I have set the BoundedCapacity of my ActionBlocks to 1.
What I want, in theory, is to trigger the BatchBlock to send a group of items to the TransformBlock only when one of my ActionBlocks is available for operation. Until then the BatchBlock should just keep buffering elements and not pass them on to the TransformBlock. My batch sizes are variable. Since a batch size is mandatory, I do have a really high upper limit for the BatchBlock batch size; however, I really don't wish to reach that limit. I would like to trigger my batches depending upon the availability of the ActionBlocks performing the said task.
I have achieved this with the help of the TriggerBatch() method. I am calling BatchBlock.TriggerBatch() as the last action in my ActionBlock. However, interestingly, after several days of working properly the pipeline has come to a halt. Upon checking, I found out that sometimes the inputs to the BatchBlock come in after the ActionBlocks are done with their work. In this case the ActionBlocks do actually call TriggerBatch at the end of their work; however, since at this point there is no input to the BatchBlock at all, the call to TriggerBatch is fruitless. And after a while, when inputs do flow into the BatchBlock, there is no one left to call TriggerBatch and restart the pipeline. I was looking for a way to just check whether something is in fact present in the input buffer of the BatchBlock, but there is no such feature available; I could also not find a way to check whether a call to TriggerBatch was fruitful.
Could anyone suggest a possible solution to my problem? Unfortunately, using a Timer to trigger batches is not an option for me. Except for the start of the pipeline, the throttling should be governed only by the availability of one of the ActionBlocks.
The example code is here:
static BatchBlock<int> _groupReadTags;

static void Main(string[] args)
{
    _groupReadTags = new BatchBlock<int>(1000);
    var bufferOptions = new DataflowBlockOptions { BoundedCapacity = 2 };
    BufferBlock<int> _frameBuffer = new BufferBlock<int>(bufferOptions);
    var consumerOptions = new ExecutionDataflowBlockOptions { BoundedCapacity = 1 };
    int batchNo = 1;

    TransformBlock<int[], int> _workingBlock = new TransformBlock<int[], int>(list =>
    {
        Console.WriteLine("\n\nWorking on Batch Number {0}", batchNo);
        //_groupReadTags.TriggerBatch();
        int sum = 0;
        foreach (int item in list)
        {
            Console.WriteLine("Elements in batch {0} :: {1}", batchNo, item);
            sum += item;
        }
        batchNo++;
        return sum;
    });

    ActionBlock<int> _worker1 = new ActionBlock<int>(async x =>
    {
        Console.WriteLine("Number from ONE :{0}", x);
        await Task.Delay(500);
        Console.WriteLine("BatchBlock Output Count : {0}", _groupReadTags.OutputCount);
        _groupReadTags.TriggerBatch();
    }, consumerOptions);

    ActionBlock<int> _worker2 = new ActionBlock<int>(async x =>
    {
        Console.WriteLine("Number from TWO :{0}", x);
        await Task.Delay(2000);
        _groupReadTags.TriggerBatch();
    }, consumerOptions);

    _groupReadTags.LinkTo(_workingBlock);
    _workingBlock.LinkTo(_frameBuffer);
    _frameBuffer.LinkTo(_worker1);
    _frameBuffer.LinkTo(_worker2);

    _groupReadTags.Post(10);
    _groupReadTags.Post(20);
    _groupReadTags.TriggerBatch();

    Task postingTask = new Task(() => PostStuff());
    postingTask.Start();

    Console.ReadLine();
}

static void PostStuff()
{
    for (int i = 0; i < 10; i++)
    {
        _groupReadTags.Post(i);
        Thread.Sleep(100);
    }

    Parallel.Invoke(
        () => _groupReadTags.Post(100),
        () => _groupReadTags.Post(200),
        () => _groupReadTags.Post(300),
        () => _groupReadTags.Post(400),
        () => _groupReadTags.Post(500),
        () => _groupReadTags.Post(600),
        () => _groupReadTags.Post(700),
        () => _groupReadTags.Post(800)
    );
}
Here is an alternative BatchBlock implementation with some extra features. It includes a TriggerBatch method with this signature:
public int TriggerBatch(int nextMinBatchSizeIfEmpty);
Invoking this method will either trigger a batch immediately, if the input queue is not empty, or set a temporary MinBatchSize that will affect only the next batch. You could invoke this method with a small value for nextMinBatchSizeIfEmpty to ensure that, in case a batch cannot be produced right now, the next batch will occur sooner than the BatchSize configured in the block's constructor would allow.
This method returns the size of the batch produced. It returns 0 in case that the input queue is empty, or the output queue is full, or the block has completed.
public class BatchBlockEx<T> : ITargetBlock<T>, ISourceBlock<T[]>
{
    private readonly ITargetBlock<T> _input;
    private readonly IPropagatorBlock<T[], T[]> _output;
    private readonly Queue<T> _queue;
    private readonly object _locker = new object();
    private int _nextMinBatchSize = Int32.MaxValue;

    public Task Completion { get; }
    public int InputCount { get { lock (_locker) return _queue.Count; } }
    public int OutputCount => ((BufferBlock<T[]>)_output).Count;
    public int BatchSize { get; }

    public BatchBlockEx(int batchSize, DataflowBlockOptions dataflowBlockOptions = null)
    {
        if (batchSize < 1) throw new ArgumentOutOfRangeException(nameof(batchSize));
        dataflowBlockOptions = dataflowBlockOptions ?? new DataflowBlockOptions();
        if (dataflowBlockOptions.BoundedCapacity != DataflowBlockOptions.Unbounded &&
            dataflowBlockOptions.BoundedCapacity < batchSize)
            throw new ArgumentOutOfRangeException(nameof(batchSize),
                "Number must be no greater than the value specified in BoundedCapacity.");

        this.BatchSize = batchSize;
        _output = new BufferBlock<T[]>(dataflowBlockOptions);
        _queue = new Queue<T>(batchSize);

        _input = new ActionBlock<T>(async item =>
        {
            T[] batch = null;
            lock (_locker)
            {
                _queue.Enqueue(item);
                if (_queue.Count == batchSize || _queue.Count >= _nextMinBatchSize)
                {
                    batch = _queue.ToArray(); _queue.Clear();
                    _nextMinBatchSize = Int32.MaxValue;
                }
            }
            if (batch != null) await _output.SendAsync(batch).ConfigureAwait(false);
        }, new ExecutionDataflowBlockOptions()
        {
            BoundedCapacity = 1,
            CancellationToken = dataflowBlockOptions.CancellationToken
        });

        var inputContinuation = _input.Completion.ContinueWith(async t =>
        {
            try
            {
                T[] batch = null;
                lock (_locker)
                {
                    if (_queue.Count > 0)
                    {
                        batch = _queue.ToArray(); _queue.Clear();
                    }
                }
                if (batch != null) await _output.SendAsync(batch).ConfigureAwait(false);
            }
            finally
            {
                if (t.IsFaulted)
                {
                    _output.Fault(t.Exception.InnerException);
                }
                else
                {
                    _output.Complete();
                }
            }
        }, TaskScheduler.Default).Unwrap();

        this.Completion = Task.WhenAll(inputContinuation, _output.Completion);
    }

    public void Complete() => _input.Complete();
    void IDataflowBlock.Fault(Exception ex) => _input.Fault(ex);

    public int TriggerBatch(Func<T[], bool> condition, int nextMinBatchSizeIfEmpty)
    {
        if (nextMinBatchSizeIfEmpty < 1)
            throw new ArgumentOutOfRangeException(nameof(nextMinBatchSizeIfEmpty));
        int count = 0;
        lock (_locker)
        {
            if (_queue.Count > 0)
            {
                T[] batch = _queue.ToArray();
                if (condition == null || condition(batch))
                {
                    bool accepted = _output.Post(batch);
                    if (accepted) { _queue.Clear(); count = batch.Length; }
                }
                _nextMinBatchSize = Int32.MaxValue;
            }
            else
            {
                _nextMinBatchSize = nextMinBatchSizeIfEmpty;
            }
        }
        return count;
    }

    public int TriggerBatch(Func<T[], bool> condition)
        => TriggerBatch(condition, Int32.MaxValue);

    public int TriggerBatch(int nextMinBatchSizeIfEmpty)
        => TriggerBatch(null, nextMinBatchSizeIfEmpty);

    public int TriggerBatch() => TriggerBatch(null, Int32.MaxValue);

    DataflowMessageStatus ITargetBlock<T>.OfferMessage(
        DataflowMessageHeader messageHeader, T messageValue,
        ISourceBlock<T> source, bool consumeToAccept)
    {
        return _input.OfferMessage(messageHeader, messageValue, source,
            consumeToAccept);
    }

    T[] ISourceBlock<T[]>.ConsumeMessage(DataflowMessageHeader messageHeader,
        ITargetBlock<T[]> target, out bool messageConsumed)
    {
        return _output.ConsumeMessage(messageHeader, target, out messageConsumed);
    }

    bool ISourceBlock<T[]>.ReserveMessage(DataflowMessageHeader messageHeader,
        ITargetBlock<T[]> target)
    {
        return _output.ReserveMessage(messageHeader, target);
    }

    void ISourceBlock<T[]>.ReleaseReservation(DataflowMessageHeader messageHeader,
        ITargetBlock<T[]> target)
    {
        _output.ReleaseReservation(messageHeader, target);
    }

    IDisposable ISourceBlock<T[]>.LinkTo(ITargetBlock<T[]> target,
        DataflowLinkOptions linkOptions)
    {
        return _output.LinkTo(target, linkOptions);
    }
}
Another overload of the TriggerBatch method allows you to examine the batch that can currently be produced, and to decide whether it should be triggered or not:
public int TriggerBatch(Func<T[], bool> condition);
The BatchBlockEx class does not support the Greedy and MaxNumberOfGroups options of the built-in BatchBlock.
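For the question's scenario, a minimal usage sketch might look like this (my illustration, not part of the class above): each ActionBlock triggers the next batch when it finishes, and nextMinBatchSizeIfEmpty: 1 guarantees that if the input queue happens to be empty at that moment, the very next item to arrive forms a batch on its own, so the pipeline can no longer stall:

var batchBlock = new BatchBlockEx<int>(1000); // high upper limit, as in the question
var transform = new TransformBlock<int[], int>(batch => batch.Length);
var worker = new ActionBlock<int>(async x =>
{
    await Task.Delay(500); // simulated hardware operation
    batchBlock.TriggerBatch(nextMinBatchSizeIfEmpty: 1);
}, new ExecutionDataflowBlockOptions { BoundedCapacity = 1 });

batchBlock.LinkTo(transform);
transform.LinkTo(worker);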
I have found that using TriggerBatch in this way is unreliable:
_groupReadTags.Post(10);
_groupReadTags.Post(20);
_groupReadTags.TriggerBatch();
Apparently TriggerBatch is intended to be used inside the block, not outside it like this. I have seen this result in odd timing issues, like items from the next batch being included in the current batch, even though TriggerBatch was called first.
Please see my answer to this question for an alternative using DataflowBlock.Encapsulate: BatchBlock produces batch with elements sent after TriggerBatch()
I have the following code, which throws a SemaphoreFullException, and I don't understand why.
If I change _semaphore = new SemaphoreSlim(0, 2) to
_semaphore = new SemaphoreSlim(0, int.MaxValue)
then everything works fine.
Can anyone please find the fault in this code and explain it to me?
class BlockingQueue<T>
{
    private Queue<T> _queue = new Queue<T>();
    private SemaphoreSlim _semaphore = new SemaphoreSlim(0, 2);

    public void Enqueue(T data)
    {
        if (data == null) throw new ArgumentNullException("data");
        lock (_queue)
        {
            _queue.Enqueue(data);
        }
        _semaphore.Release();
    }

    public T Dequeue()
    {
        _semaphore.Wait();
        lock (_queue)
        {
            return _queue.Dequeue();
        }
    }
}
public class Test
{
    private static BlockingQueue<string> _bq = new BlockingQueue<string>();

    public static void Main()
    {
        for (int i = 0; i < 100; i++)
        {
            _bq.Enqueue("item-" + i);
        }
        for (int i = 0; i < 5; i++)
        {
            Thread t = new Thread(Produce);
            t.Start();
        }
        for (int i = 0; i < 100; i++)
        {
            Thread t = new Thread(Consume);
            t.Start();
        }
        Console.ReadLine();
    }

    private static Random _random = new Random();

    private static void Produce()
    {
        while (true)
        {
            _bq.Enqueue("item-" + _random.Next());
            Thread.Sleep(2000);
        }
    }

    private static void Consume()
    {
        while (true)
        {
            Console.WriteLine("Consumed-" + _bq.Dequeue());
            Thread.Sleep(1000);
        }
    }
}
If you want to use the semaphore to control the number of concurrent threads, you're using it wrong. You should acquire the semaphore when you dequeue an item, and release the semaphore when the thread is done processing that item.
What you have right now is a system that allows only two items to be in the queue at any one time. The semaphore is created with a count of 0 and a maximum count of 2. Each time you enqueue an item, Release increments the count. After two items the count has reached its maximum, and the next Release throws a SemaphoreFullException.
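You can reproduce the failure in isolation:

var s = new SemaphoreSlim(0, 2); // initial count 0, maximum count 2
s.Release(); // count = 1
s.Release(); // count = 2 (the maximum)
s.Release(); // throws SemaphoreFullException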
If you really want to do this with a semaphore, you need to remove the Release call from the Enqueue method and add a Release method to the BlockingQueue class. You then would write:
private static void Consume()
{
    while (true)
    {
        Console.WriteLine("Consumed-" + _bq.Dequeue());
        Thread.Sleep(1000);
        _bq.Release(); // signal that this consumer is done processing the item
    }
}
That would make your code work, but it's not a very good solution. A much better solution would be to use BlockingCollection<T> and two persistent consumers. Something like:
private BlockingCollection<int> bq = new BlockingCollection<int>();

void Test()
{
    // create two consumers
    var c1 = new Thread(Consume);
    var c2 = new Thread(Consume);
    c1.Start();
    c2.Start();

    // produce
    for (var i = 0; i < 100; ++i)
    {
        bq.Add(i);
    }
    bq.CompleteAdding();

    c1.Join();
    c2.Join();
}

void Consume()
{
    foreach (var i in bq.GetConsumingEnumerable())
    {
        Console.WriteLine("Consumed-" + i);
        Thread.Sleep(1000);
    }
}
That gives you two persistent threads consuming the items. The benefit is that you avoid the cost of spinning up a new thread (or having the runtime assign a pool thread) for each item. Instead, the threads do non-busy waits on the queue. You also don't have to worry about explicit locking, etc. The code is simpler, more robust, and much less likely to contain a bug.