I have a ConcurrentQueue declared as an IProducerConsumerCollection, i.e.
IProducerConsumerCollection<Job> _queue = new ConcurrentQueue<Job>();
and a producer method which adds jobs to _queue and a consumer method which processes the jobs from _queue. In the consumer method, I'd like to process the jobs concurrently. Below is the code for a sample class with the producer and consumer methods:
public class TestQueue
{
IProducerConsumerCollection<Job> _queue = new ConcurrentQueue<Job>();
private static HttpClient _client = new HttpClient();
public TestQueue()
{
WorkProducerThread();
WorkConsumerThread();
}
public void WorkConsumerThread()
{
if (_queue.Count > 0)
{
//At this point, 4 partitions are created but all records are in 1st partition only; 2,3,4 partition are empty
var partitioner = Partitioner.Create(_queue).GetPartitions(4);
Task t = Task.WhenAll(
from partition in partitioner
select Task.Run(async () =>
{
using (partition)
{
while (partition.MoveNext())
await CreateJobs(partition.Current);
}
}));
t.Wait();
//At this point, queue count is still 20, how to remove item from _queue collection when processed?
}
}
private async Task CreateJobs(Job job)
{
HttpContent bodyContent = null;
await _client.PostAsync("job", bodyContent);
}
public void WorkProducerThread()
{
if (_queue.Count == 0)
{
try
{
for (int i = 0; i < 20; i++)
{
Job job = new Job { Id = i, JobName = "j" + i.ToString(), JobCreated = DateTime.Now };
_queue.TryAdd(job);
}
}
catch (Exception ex)
{
//_Log.Error("Exception while adding jobs to collection", ex);
}
}
}
}
public class Job
{
public int Id { get; set; }
public string JobName { get; set; }
public DateTime JobCreated { get; set; }
}
There are two problems:
Partitioner.Create(_queue).GetPartitions(4) creates 4 partitions, but all records end up in the 1st partition only; partitions 2, 3 and 4 are empty. I can't figure out why this is happening. Ideally, all 4 partitions should have 5 records each (since there are 20 records in the queue in total). I read the MSDN article on partitioning but didn't get any clue, and I also checked the partitioning example from that article.
Also, I want to remove each item from _queue after it is processed in the consumer method, and the only removal method available is _queue.TryTake. I don't know how to remove items while using partitioning.
I can consider any alternate way to achieve the same result.
Thanks in advance.
Partitioner.Create(_queue).GetPartitions(4) creates 4 partitions, but all records end up in the 1st partition only; partitions 2, 3 and 4 are empty.
This is not correct; your queue entries are being partitioned correctly. To verify, change your processing logic slightly to log which partition is doing the work:
Task t = Task.WhenAll(
from partition in partitioner.Select((jobs, i) => new { jobs, i })
select Task.Run(async () =>
{
using (partition.jobs)
{
while (partition.jobs.MoveNext())
{
Console.WriteLine(partition.i);
await CreateJobs(partition.jobs.Current);
}
}
}));
You will notice that Console.WriteLine prints values from 0 to 3, indicating that the entries are being partitioned correctly.
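The confusion usually comes from inspecting the partitions in the debugger: Partitioner.Create over an IEnumerable<T> produces dynamic partitions that lazily pull chunks from the shared source as they are enumerated, so no partition "contains" anything up front. Here is a standalone sketch that makes the chunked hand-out visible (thread ids and chunk sizes will vary per run):
var numbers = Enumerable.Range(0, 20);
var partitions = Partitioner.Create(numbers).GetPartitions(4);
Parallel.ForEach(partitions, partition =>
{
    using (partition)
    {
        // each MoveNext pulls the next item of the partition's current chunk
        while (partition.MoveNext())
            Console.WriteLine($"Thread {Environment.CurrentManagedThreadId}: {partition.Current}");
    }
});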
Also, I want to remove each item from _queue after it is processed in the consumer method, and the only removal method available is _queue.TryTake. I don't know how to remove items while using partitioning.
You can achieve that with a slight rewrite. The main changes are switching to BlockingCollection<Job> and adding this NuGet package, which provides the GetConsumingPartitioner extension method (it comes from the ParallelExtensionsExtras samples).
Give this a try:
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;
namespace Test
{
public class TestQueue
{
BlockingCollection<Job> _queue = new BlockingCollection<Job>();
private static HttpClient _client = new HttpClient();
public TestQueue()
{
WorkProducerThread();
WorkConsumerThread();
}
public void WorkConsumerThread()
{
if (!_queue.IsCompleted)
{
//The consuming partitioner takes items out of the queue as the partitions enumerate them
var partitioner = _queue.GetConsumingPartitioner().GetPartitions(4);
Task t = Task.WhenAll(
from partition in partitioner
select Task.Run(async () =>
{
using (partition)
{
while (partition.MoveNext())
await CreateJobs(partition.Current);
}
}));
t.Wait();
Console.WriteLine(_queue.Count);
}
}
private async Task CreateJobs(Job job)
{
//HttpContent bodyContent = null;
//await _client.PostAsync("job", bodyContent);
await Task.Delay(100);
}
public void WorkProducerThread()
{
if (_queue.Count == 0)
{
try
{
for (int i = 0; i < 20; i++)
{
Job job = new Job { Id = i, JobName = "j" + i.ToString(), JobCreated = DateTime.Now };
_queue.TryAdd(job);
}
_queue.CompleteAdding();
}
catch (Exception ex)
{
//_Log.Error("Exception while adding jobs to collection", ex);
}
}
}
}
public class Job
{
public int Id { get; set; }
public string JobName { get; set; }
public DateTime JobCreated { get; set; }
}
class Program
{
static void Main(string[] args)
{
var g = new TestQueue();
Console.ReadLine();
}
}
}
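If you'd rather avoid the extra NuGet dependency, an alternative (a sketch, reusing the _queue field and CreateJobs method from the class above) is to start a fixed number of consumer tasks that all drain the same BlockingCollection through GetConsumingEnumerable; each item is handed to exactly one consumer:
var consumers = Enumerable.Range(0, 4).Select(_ => Task.Run(async () =>
{
    // GetConsumingEnumerable removes each item from the collection as it is consumed
    foreach (var job in _queue.GetConsumingEnumerable())
        await CreateJobs(job);
})).ToArray();
Task.WaitAll(consumers); // or await Task.WhenAll(consumers) in an async method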
I'm trying to process documents asynchronously. The idea is that the user sends documents to a service (processing takes time, about 20-90 seconds per document) and looks at the results later.
Ideally, I would like to just fill some kind of observable collection that would be emptied by the system as fast as it can. When there is an item, process it and produce the expected output in another object, and when there is no item just do nothing. When the user checks the output collection, he will find the items that are already processed.
Ideally all items would be visible from the start and would have a state (completed, ongoing or in queue), but once I know how to do the first, I should be able to handle the states.
I'm not sure which object to use for that; right now I'm looking at BlockingCollection, but I don't think it's suited for the job, as I can't fill it while it's being emptied from the other end.
private BlockingCollection<IDocument> _jobs = new BlockingCollection<IDocument>();
public ObservableCollection<IExtractedDocument> ExtractedDocuments { get; }
public QueueService()
{
ExtractedDocuments = new ObservableCollection<IExtractedDocument>();
}
public async Task Add(string filePath, List<Extra> extras)
{
if (_jobs.IsAddingCompleted || _jobs.IsCompleted)
_jobs = new BlockingCollection<IDocument>();
var doc = new Document(filePath, extras);
_jobs.Add(doc);
_jobs.CompleteAdding();
await ProcessQueue();
}
private async Task ProcessQueue()
{
foreach (var document in _jobs.GetConsumingEnumerable(CancellationToken.None))
{
var resultDocument = await service.ProcessDocument(document);
ExtractedDocuments.Add(resultDocument);
Debug.WriteLine("Job completed");
}
}
This is how I'm handling it right now. If I remove the CompleteAdding call, it hangs on the second attempt. If I keep that statement, then I can't just keep filling the queue; I have to empty it first, which defeats the purpose.
Is there a way of having what I'm trying to achieve? A collection that I would fill and the system would process asynchronously and autonomously?
To summarize, I need:
A collection that I can fill, that would be processed gradually and asynchronously. A document or series of documents can be added while others are being processed.
An output collection that would be filled after the process is complete
The UI thread and app to still be responsive while everything is running
I don't mind whether documents are processed in parallel or one at a time; whichever is easiest to put in place and maintain will do (it's a small-scale application). I'm assuming one at a time is simpler.
A common pattern here is to have a callback method that executes upon a document state change. With a background task running, it will chew through documents as fast as it can. Call Dispose to shut down the processor.
If you need to process the callback on a GUI thread, you'll need to synchronize the callback to your main thread somehow. Windows Forms has methods to do this if that's what you are using.
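For example, a minimal sketch with SynchronizationContext (an assumption on my part: the class is constructed on the UI thread, and resultsListBox is a hypothetical WinForms control):
// captured once on the UI thread; Post marshals delegates back onto it
private readonly SynchronizationContext _ui = SynchronizationContext.Current;
private Task Callback(IExtractedDocument doc, DocumentProcessor.DocState docState)
{
    _ui.Post(_ => resultsListBox.Items.Add($"{docState}: {doc}"), null);
    return Task.CompletedTask;
}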
This example program implements all the necessary classes and interfaces, and you can fine-tune and tweak things as you need.
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;
namespace ConsoleApp2
{
class Program
{
private static Task Callback(IExtractedDocument doc, DocumentProcessor.DocState docState)
{
Console.WriteLine("Processing doc {0}, state: {1}", doc, docState);
return Task.CompletedTask;
}
public static void Main()
{
using DocumentProcessor docProcessor = new DocumentProcessor(Callback);
Console.WriteLine("Processor started, press any key to end processing");
for (int i = 0; i < 100; i++)
{
if (Console.KeyAvailable)
{
break;
}
else if (i == 5)
{
// make an error
docProcessor.Add(null);
}
else
{
docProcessor.Add(new Document { Text = "Test text " + Guid.NewGuid().ToString() });
}
Thread.Sleep(500);
}
Console.WriteLine("Doc processor shut down, press ENTER to quit");
Console.ReadLine();
}
public interface IDocument
{
public string Text { get; }
}
public class Document : IDocument
{
public string Text { get; set; }
}
public interface IExtractedDocument : IDocument
{
public IDocument OriginalDocument { get; }
public Exception Error { get; }
}
public class ExtractedDocument : IExtractedDocument
{
public override string ToString()
{
return $"Orig text: {OriginalDocument?.Text}, Extracted Text: {Text}, Error: {Error}";
}
public IDocument OriginalDocument { get; set; }
public string Text { get; set; }
public Exception Error { get; set; }
}
public class DocumentProcessor : IDisposable
{
public enum DocState { Processing, Completed, Error }
private readonly BlockingCollection<IDocument> queue = new BlockingCollection<IDocument>();
private readonly Func<IExtractedDocument, DocState, Task> callback;
private CancellationTokenSource cancelToken = new CancellationTokenSource();
public DocumentProcessor(Func<IExtractedDocument, DocState, Task> callback)
{
this.callback = callback;
Task.Run(() => StartQueueProcessor()).GetAwaiter();
}
public void Dispose()
{
if (!cancelToken.IsCancellationRequested)
{
cancelToken.Cancel();
}
}
public void Add(IDocument doc)
{
if (cancelToken.IsCancellationRequested)
{
throw new InvalidOperationException("Processor is disposed");
}
queue.Add(doc);
}
private void ProcessDocument(IDocument doc)
{
try
{
// do processing
DoCallback(new ExtractedDocument { OriginalDocument = doc }, DocState.Processing);
if (doc is null)
{
throw new ArgumentNullException(nameof(doc), "Document to process was null");
}
IExtractedDocument successExtractedDocument = DoSomeDocumentProcessing(doc);
DoCallback(successExtractedDocument, DocState.Completed);
}
catch (Exception ex)
{
DoCallback(new ExtractedDocument { OriginalDocument = doc, Error = ex }, DocState.Error);
}
}
private IExtractedDocument DoSomeDocumentProcessing(IDocument originalDocument)
{
return new ExtractedDocument { OriginalDocument = originalDocument, Text = "Extracted: " + originalDocument.Text };
}
private void DoCallback(IExtractedDocument result, DocState docState)
{
if (callback != null)
{
// send callbacks in background
callback(result, docState).GetAwaiter();
}
}
private void StartQueueProcessor()
{
try
{
while (!cancelToken.Token.IsCancellationRequested)
{
if (queue.TryTake(out IDocument doc, 1000, cancelToken.Token))
{
// can change to Task.Run(() => ProcessDocument(doc)).GetAwaiter() for parallel execution
ProcessDocument(doc);
}
}
}
catch (OperationCanceledException)
{
// ignore, don't need to throw or worry about this
}
while (queue.TryTake(out IDocument doc))
{
DoCallback(new ExtractedDocument { Error = new ObjectDisposedException("Processor was disposed") }, DocState.Error);
}
}
}
}
}
I'm implementing the MailChimp.NET wrapper in both synchronous and asynchronous ways, and calls are going through without a problem, BUT results tend to get lost in the synchronous methods. In other words, if I send 100 members to be added (in batches of 10 due to the simultaneous-connections limit of the MailChimp API), all 100 will indeed be visible in my MC audience, but I'll lose from 5 to 25% of the results on the code side. Here's the relevant bit of my implementation:
public class MailChimpClient : IDisposable
{
private MailChimpManager _mcm;
private string _apiKey;
private bool _isDisposed;
private ConcurrentQueue<MailChimpMember> _updatedMembersQueue;
private ConcurrentQueue<MailChimpBaseException> _exceptionsQueue;
private const int BatchSize = 10;
private const int TaskDelay = 100;
private ConcurrentQueue<MailChimpMember> UpdatedMembersQueue
{
get { return _updatedMembersQueue = _updatedMembersQueue ?? new ConcurrentQueue<MailChimpMember>(); }
set { _updatedMembersQueue = value; }
}
private ConcurrentQueue<MailChimpBaseException> ExceptionsQueue
{
get { return _exceptionsQueue = _exceptionsQueue ?? new ConcurrentQueue<MailChimpBaseException>(); }
set { _exceptionsQueue = value; }
}
public MailChimpClient(string apiKey)
{
_apiKey = apiKey;
_mcm = new MailChimpManager(apiKey);
}
private async Task AddOrUpdateMember(MailChimpMember member, string listId)
{
try
{
var model = member.ToApiMember();
model = await _mcm.Members.AddOrUpdateAsync(listId, model);
UpdatedMembersQueue.Enqueue(new MailChimpMember(model));
await Task.Delay(TaskDelay);
}
catch (Exception ex)
{
var mccex = new MailChimpClientException($"Error adding/updating member \"{(member != null ? member.MailAddress.ToString() : "NULL")}\" to list with ID \"{listId}\".", ex);
ExceptionsQueue.Enqueue(mccex);
}
}
private MailChimpClientResult AddOrUpdateMemberRange(IEnumerable<MailChimpMember> members, string listId)
{
var batches = members.GetBatches(BatchSize);
var result = new MailChimpClientResult();
var i = 0;
foreach (var batch in batches)
{
AddOrUpdateMemberBatch(batch, listId);
i++;
FlushQueues(ref result);
}
return result;
}
private void AddOrUpdateMemberBatch(MailChimpMember[] batch, string listId)
{
Task.WaitAll(batch.Select(async b => await AddOrUpdateMember(b, listId)).ToArray(), -1);
}
private void FlushQueues(ref MailChimpClientResult result)
{
result.UpdatedMembers.FlushQueue(UpdatedMembersQueue);
result.Exceptions.FlushQueue(ExceptionsQueue);
}
public MailChimpClientResult AddOrUpdate(MailChimpMember member, string listId)
{
return AddOrUpdateMemberRange(new MailChimpMember[] { member }, listId);
}
public MailChimpClientResult AddOrUpdate(IEnumerable<MailChimpMember> members, string listId)
{
return AddOrUpdateMemberRange(members, listId);
}
}
public static class CollectionExtensions
{
public static T[][] GetBatches<T>(this IEnumerable<T> items, int batchSize)
{
var result = new List<T[]>();
var batch = new List<T>();
foreach (var t in items)
{
if (batch.Count == batchSize)
{
result.Add(batch.ToArray());
batch.Clear();
}
batch.Add(t);
}
result.Add(batch.ToArray());
batch.Clear();
return result.ToArray();
}
public static void FlushQueue<T>(this IList<T> list, ConcurrentQueue<T> queue)
{
T item;
while (queue.TryDequeue(out item))
list.Add(item);
}
}
MailChimpMember is a public copy of the MailChimp.NET Member. The problem seems to happen in the batch processing method: the Task.WaitAll(...) instruction seems to complete before all calls are finished, and therefore not all results are queued. I tried delaying the execution of each individual operation with Task.Delay(), but with little to no result.
Does anyone have an idea what is failing in my implementation?
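(A side note rather than a confirmed diagnosis: the null-coalescing getters for UpdatedMembersQueue and ExceptionsQueue are not thread-safe. Two tasks in the first batch can both observe null and each create a queue; whatever was enqueued into the instance that loses the assignment race is silently dropped. Eager initialization closes that window:)
// sketch (assumption): create the queues eagerly so every task enqueues into the same instance
private readonly ConcurrentQueue<MailChimpMember> _updatedMembersQueue = new ConcurrentQueue<MailChimpMember>();
private readonly ConcurrentQueue<MailChimpBaseException> _exceptionsQueue = new ConcurrentQueue<MailChimpBaseException>();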
As part of best practices for async and await, it is recommended not to use Task.Run. I have a service which makes multiple calls to a third-party service, and we use async to make those calls. I'm looking for advice on improving the code below.
public interface IRouteService
{
Task<IEnumerable<Route>> GetRoute(Coordinates orign, Coordinates destination);
}
public class RouteProvider
{
private readonly IRouteService _routeService;
public RouteProvider(IRouteService routeService)
{
_routeService = routeService;
}
public async Task<IEnumerable<Route>> GetRoutes(IEnumerable<Coordinates> origns, IEnumerable<Coordinates> destinations)
{
ConcurrentBag<Route> routes = new ConcurrentBag<Route>();
List<Task> tasks = new List<Task>();
foreach (var origin in origns)
{
foreach (var destination in destinations)
{
tasks.Add(Task.Run(async () =>
{
var response= await _routeService.GetRoute(origin, destination);
foreach (var item in response)
{
routes.Add(item);
}
}));
}
}
Task.WaitAll(tasks.ToArray());
return routes;
}
}
public class Route
{
public string Distance { get; set; }
public Coordinates Origin { get; set; }
public object Destination { get; set; }
public string OriginName { get; set; }
public string DestinationName { get; set; }
}
public class Coordinates
{
public float Lat { get; set; }
public float Long { get; set; }
}
For a problem like this it is handy to use LINQ. A LINQ query produces an immutable result, so you avoid concurrency issues and don't need any specialized collections.
In general, using LINQ or similar techniques (i.e. thinking like a functional programmer) will make multithreading much easier.
public async Task<IEnumerable<Route>> GetRoutes(IEnumerable<Coordinates> origins, IEnumerable<Coordinates> destinations)
{
//materialize the query once; re-enumerating the lazy SelectMany would start a second, separate set of GetRoute calls
var tasks = origins
.SelectMany(o => destinations.Select(d => _routeService.GetRoute(o, d)))
.ToArray();
await Task.WhenAll(tasks);
return tasks.SelectMany(task => task.Result);
}
As pointed out in the comments, I would suggest using Task.WhenAll() to await the completion of all tasks and collect their results. To do that, you can update your code as shown below.
public async Task<IEnumerable<Route>> GetRoutes(IEnumerable<Coordinates> origns, IEnumerable<Coordinates> destinations)
{
ConcurrentBag<Route> routes = new ConcurrentBag<Route>();
//the tasks must be typed Task<IEnumerable<Route>> so that WhenAll can return the results
List<Task<IEnumerable<Route>>> tasks = new List<Task<IEnumerable<Route>>>();
foreach (var origin in origns)
{
foreach (var destination in destinations)
{
tasks.Add(_routeService.GetRoute(origin, destination));
}
}
var responses = await Task.WhenAll(tasks);
foreach (var response in responses)
{
foreach (var item in response)
{
routes.Add(item);
}
}
return routes;
}
Since all the calls return the same type, Task.WhenAll can gather every result for you. Also, this way you avoid blocking the calling thread with Task.WaitAll(), and your program stays asynchronous. To see the difference between WhenAll() and WaitAll(), you can check this out.
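A more compact variant of the same idea (just a sketch): awaiting Task.WhenAll over a list of Task<IEnumerable<Route>> yields an IEnumerable<Route>[], which can be flattened directly, making the ConcurrentBag unnecessary:
public async Task<IEnumerable<Route>> GetRoutes(IEnumerable<Coordinates> origns, IEnumerable<Coordinates> destinations)
{
    var tasks = new List<Task<IEnumerable<Route>>>();
    foreach (var origin in origns)
        foreach (var destination in destinations)
            tasks.Add(_routeService.GetRoute(origin, destination));
    // WhenAll returns all results once every task has completed
    var results = await Task.WhenAll(tasks);
    return results.SelectMany(r => r);
}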
Instead of directly creating tasks using the Task.Run method you can use continuations.
foreach (var origin in origns)
{
foreach (var destination in destinations)
{
tasks.Add(
_routeService.GetRoute(origin, destination)
.ContinueWith(response =>
{
foreach (var item in response.Result)
routes.Add(item);
})
);
}
}
Thus, the GetRoute method will execute asynchronously without creating a separate thread, and the result it produces will be processed in a separate continuation task.
However, this is only necessary if the result takes a long time to process. Otherwise, a separate thread is not needed at all.
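For comparison, the same fan-out can be expressed with a small async helper instead of ContinueWith; await resumes on completion without blocking a thread. This is only a sketch and assumes the routes bag and _routeService field from the question's class:
private async Task AddRoutesAsync(ConcurrentBag<Route> routes, Coordinates origin, Coordinates destination)
{
    var response = await _routeService.GetRoute(origin, destination);
    foreach (var item in response)
        routes.Add(item);
}
// in the loops: tasks.Add(AddRoutesAsync(routes, origin, destination));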
Here's what I'm trying to do:
Keep a queue in memory of items that need processed (i.e. IsProcessed = 0)
Every 5 seconds, get unprocessed items from the db, and if they're not already in the queue, add them
Continuously pull items from the queue, process them, and each time an item is processed, update it in the db (IsProcessed = 1)
Do this all "as parallel as possible"
I have a constructor for my service like
public MyService()
{
Ticker.Elapsed += FillQueue;
}
and I start that timer when the service starts like
protected override void OnStart(string[] args)
{
Ticker.Enabled = true;
Task.Run(() => { ConsumeWork(); });
}
and my FillQueue is like
private static async void FillQueue(object source, ElapsedEventArgs e)
{
var items = GetUnprocessedItemsFromDb();
foreach(var item in items)
{
if(!Work.Contains(item))
{
Work.Enqueue(item);
}
}
}
and my ConsumeWork is like
private static void ConsumeWork()
{
while(true)
{
if(Work.Count > 0)
{
var item = Work.Peek();
Process(item);
Work.Dequeue();
}
else
{
Thread.Sleep(500);
}
}
}
However this is probably a naive implementation and I'm wondering whether .NET has any type of class that is exactly what I need for this type of situation.
Though @JSteward's answer is a good start, you can improve it by mixing TPL Dataflow with Rx.NET, as a dataflow block can easily become an observer for your data, and with the Rx timer it will be much less effort for you (Rx.Timer explanation).
We can adapt the MSDN article to your needs, like this:
private const int EventIntervalInSeconds = 5;
private const int DueIntervalInSeconds = 60;
var source =
// sequence of Int64 numbers, starting from 0
// https://msdn.microsoft.com/en-us/library/hh229435.aspx
Observable.Timer(
// fire first event after 1 minute waiting
TimeSpan.FromSeconds(DueIntervalInSeconds),
// fire all next events each 5 seconds
TimeSpan.FromSeconds(EventIntervalInSeconds))
// each number will have a timestamp
.Timestamp()
// each time we select some items to process
.SelectMany(GetItemsFromDB)
// filter already added
.Where(i => !_processedItemIds.ContainsKey(i.Id));
var action = new ActionBlock<Item>(ProcessItem, new ExecutionDataflowBlockOptions
{
// we can start as many item processing as processor count
MaxDegreeOfParallelism = Environment.ProcessorCount,
});
IDisposable subscription = source.Subscribe(action.AsObserver());
Also, your check for an item being already processed isn't quite accurate: an item can be selected as unprocessed from the db right after you've finished processing it but before you've updated it in the database. In that case the item is removed from the Queue<T> and then added there again by the producer. This is why I've added a ConcurrentDictionary<Guid, byte>, used as a thread-safe set, to this solution (HashSet<T> isn't thread-safe, and ConcurrentBag<T> cannot remove a specific item):
private static async Task ProcessItem(Item item)
{
if (_processedItemIds.ContainsKey(item.Id))
{
return;
}
_processedItemIds.TryAdd(item.Id, 0);
// actual work here
// save item as processed in database
// we need to wait to ensure the item doesn't appear in the queue again
await Task.Delay(TimeSpan.FromSeconds(EventIntervalInSeconds * 2));
// clear the processed cache to reduce memory usage
_processedItemIds.TryRemove(item.Id, out _);
}
public class Item
{
public Guid Id { get; set; }
}
// temporary cache for items in process (ConcurrentDictionary used as a thread-safe set)
private static ConcurrentDictionary<Guid, byte> _processedItemIds = new ConcurrentDictionary<Guid, byte>();
private static IEnumerable<Item> GetItemsFromDB(Timestamped<long> time)
{
// log event timing
Console.WriteLine($"Event # {time.Value} at {time.Timestamp}");
// return items from DB
return new[] { new Item { Id = Guid.NewGuid() } };
}
You can implement the cache clean-up in another way, for example by starting a "GC" timer which removes processed items from the cache on a regular basis.
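A minimal sketch of that "GC" timer (an assumption on my part: this variant stores the completion time as the dictionary value so that stale entries can be evicted):
// variant cache: the value records when the item finished processing
private static readonly ConcurrentDictionary<Guid, DateTime> _processedAt = new ConcurrentDictionary<Guid, DateTime>();
// evict entries older than the polling window every 30 seconds
private static readonly System.Threading.Timer _cacheGc = new System.Threading.Timer(_ =>
{
    var cutoff = DateTime.UtcNow - TimeSpan.FromSeconds(EventIntervalInSeconds * 2);
    foreach (var entry in _processedAt)
    {
        if (entry.Value < cutoff)
            _processedAt.TryRemove(entry.Key, out _);
    }
}, null, TimeSpan.Zero, TimeSpan.FromSeconds(30));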
To stop the events and the processing of items you should Dispose the subscription and, if needed, Complete the ActionBlock:
subscription.Dispose();
action.Complete();
You can find more information about Rx.NET in the guidelines on GitHub.
You could use an ActionBlock to do your processing; it has a built-in queue that you can post work to. You can read up on TPL Dataflow here: Intro to TPL-Dataflow, and also Introduction to Dataflow, Part 1. Finally, this is a quick sample to get you going. I've left out a lot, but it should at least get you started.
using System;
using System.Threading;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;
namespace MyWorkProcessor {
public class WorkProcessor {
public WorkProcessor() {
Processor = CreatePipeline();
}
public async Task StartProcessing() {
try {
await Task.Run(() => GetWorkFromDatabase());
} catch (OperationCanceledException) {
//handle cancel
}
}
private CancellationTokenSource cts {
get;
} = new CancellationTokenSource();
private ITargetBlock<WorkItem> Processor {
get;
}
private TimeSpan DatabasePollingFrequency {
get;
} = TimeSpan.FromSeconds(5);
private ITargetBlock<WorkItem> CreatePipeline() {
var options = new ExecutionDataflowBlockOptions() {
BoundedCapacity = 100,
CancellationToken = cts.Token
};
return new ActionBlock<WorkItem>(item => ProcessWork(item), options);
}
private async Task GetWorkFromDatabase() {
while (!cts.IsCancellationRequested) {
var work = await GetWork();
await Processor.SendAsync(work);
await Task.Delay(DatabasePollingFrequency);
}
}
private async Task<WorkItem> GetWork() {
return await Context.GetWork();
}
private void ProcessWork(WorkItem item) {
//do processing
}
}
}
I'm trying to get some help on the .ContinueWith() method.
I know I can write something like this: Task<Task<bool>> t2 = nextTask.ContinueWith(t => T.FirstLevel("GG"));
As you can see below, this is a demo of what I'm trying to do. My first-level task returns a bool that determines which task to continue with, and my second-level task also returns a bool that determines whether to go back and execute the first level again or exit. This is where I'm stuck; I wouldn't want to make it recursive, though.
Can anyone help?
I know one could use an event handler to resolve this, but the actual application code is too complex to change at this moment.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Continuewith
{
class Program
{
static void Main(string[] args)
{
tester();
Console.ReadKey();
}
static async void tester()
{
TaskFunctions T = new TaskFunctions();
List<string> p = new List<string>() { "111"};
List<Task<bool>> CocurrentTasks = new List<Task<bool>>();
foreach (string s in p)
{
CocurrentTasks.Add(T.FirstLevel(s));
}
while (CocurrentTasks.Count > 0)
{
Task<bool> nextTask = await Task.WhenAny(CocurrentTasks);
if(await nextTask)
{
//do second level
}
else
{
//do first level
}
CocurrentTasks.Remove(nextTask);
}
}
}
class TaskFunctions
{
public async Task<bool> FirstLevel(string gg)
{
Console.WriteLine(gg);
Random random = new Random();
int randomNumber = random.Next(0, 10);
await Task.Delay(500 * randomNumber); //other real useful Task Function will be in place to take up the time
if (randomNumber > 5)
return true;
else
return false;
}
public async Task<bool> SecondLevel(string jj)
{
Console.WriteLine(jj);
Random random = new Random();
int randomNumber = random.Next(0, 10);
await Task.Delay(500 * randomNumber); //other real useful Task Function will be in place to take up the time
if (randomNumber > 5)
return true;
else
return false;
}
}
}
Here is my solution; not perfect, but I can use it all right.
static async void tester()
{
TaskFunctions T = new TaskFunctions();
List<string> p = new List<string>() { "111"};
List<Task<bool>> CocurrentTasks = new List<Task<bool>>();
foreach (string s in p)
{
CocurrentTasks.Add(T.FirstLevel(s));
}
while (CocurrentTasks.Count > 0)
{
Task<bool> nextTask = await Task.WhenAny(CocurrentTasks);
if(await nextTask)
{
Task<Task<bool>> t2 = nextTask.ContinueWith(t => T.SecondLevel("GG"));
if(!await t2.Unwrap())
{
CocurrentTasks.Add(T.FirstLevel("111"));
}
}
else
{
CocurrentTasks.Add(T.FirstLevel("111"));
}
CocurrentTasks.Remove(nextTask);
}
}
This, however, requires the first-level task's input ("111") to be available so that the program can execute that specific data again. "111" can be anything one can pass, not just a string.
Why do you need FirstLevel and SecondLevel to be separate tasks? Maybe it would be easier to control those calls as part of one sequential task. Basically, you would want to move the code in your while loop to a method:
static async void tester()
{
TaskFunctions T = new TaskFunctions();
List<string> p = new List<string>() { "111" };
List<Task> CocurrentTasks = new List<Task>();
foreach (string s in p)
{
CocurrentTasks.Add(CallConcurrentTasks(T, s));
}
while (CocurrentTasks.Count > 0)
{
var nextTask = await Task.WhenAny(CocurrentTasks);
CocurrentTasks.Remove(nextTask);
}
}
static async Task CallConcurrentTasks(TaskFunctions t, string arg)
{
do
{
if (await t.FirstLevel(arg))
{
if (!await t.SecondLevel("GG"))
{
return;
}
}
}
while (true); //you should probably make some additional end condition here?
}
I hope I got the refactoring right. Alternatively, you could use a thread-safe collection for CocurrentTasks and keep your while code without any changes (with the collection passed as an argument to the CallConcurrentTasks method).