I have code like this in a method:
ISubject<Message> messages = new ReplaySubject<Message>(messageTimeout);
public void HandleNext(string clientId, Action<object> callback)
{
messages.Where(message => !message.IsHandledBy(clientId))
.Take(1)
.Subscribe(message =>
{
callback(message.Message);
message.MarkAsHandledBy(clientId);
});
}
What is the Rx way to code this, so that no race between MarkAsHandledBy() and IsHandledBy() can happen when HandleNext() is called concurrently?
EDIT:
This is for long polling. HandleNext() is called for each web request. Each request handles exactly one message and then returns to the client; the next request takes the next message, and so forth.
The full code (still a work in progress of course) is this:
public class Queue
{
readonly ISubject<MessageWrapper> messages;
public Queue() : this(TimeSpan.FromSeconds(30)) {}
public Queue(TimeSpan messageTimeout)
{
messages = new ReplaySubject<MessageWrapper>(messageTimeout);
}
public void Send(string channel, object message)
{
messages.OnNext(new MessageWrapper(new List<string> {channel}, message));
}
public void ReceiveNext(string clientId, string channel, Action<object> callback)
{
messages
.Where(message => message.Channels.Contains(channel) && !message.IsReceivedBy(clientId))
.Take(1)
.Subscribe(message =>
{
callback(message.Message);
message.MarkAsReceivedFor(clientId);
});
}
class MessageWrapper
{
readonly List<string> receivers;
public MessageWrapper(List<string> channels, object message)
{
receivers = new List<string>();
Channels = channels;
Message = message;
}
public List<string> Channels { get; private set; }
public object Message { get; private set; }
public void MarkAsReceivedFor(string clientId)
{
receivers.Add(clientId);
}
public bool IsReceivedBy(string clientId)
{
return receivers.Contains(clientId);
}
}
}
EDIT 2:
Right now my code looks like this:
public void ReceiveNext(string clientId, string channel, Action<object> callback)
{
var subscription = Disposable.Empty;
subscription = messages
.Where(message => message.Channels.Contains(channel))
.Subscribe(message =>
{
if (message.TryDispatchTo(clientId, callback))
subscription.Dispose();
});
}
class MessageWrapper
{
readonly object message;
readonly List<string> receivers;
public MessageWrapper(List<string> channels, object message)
{
this.message = message;
receivers = new List<string>();
Channels = channels;
}
public List<string> Channels { get; private set; }
public bool TryDispatchTo(string clientId, Action<object> handler)
{
lock (receivers)
{
if (IsReceivedBy(clientId)) return false;
handler(message);
MarkAsReceivedFor(clientId);
return true;
}
}
void MarkAsReceivedFor(string clientId)
{
receivers.Add(clientId);
}
bool IsReceivedBy(string clientId)
{
return receivers.Contains(clientId);
}
}
It seems to me that you're making an Rx nightmare for yourself. Rx should provide a very easy way to wire up subscribers to your messages.
I like the fact that you've got a self-contained class holding your ReplaySubject - that stops somewhere else in your code from maliciously calling OnCompleted prematurely.
However, the ReceiveNext method doesn't provide any way for you to remove subscribers. That is a memory leak at the very least. Your tracking of client ids in the MessageWrapper is also a potential memory leak.
I'd suggest you try to work with this kind of function rather than ReceiveNext:
public IDisposable RegisterChannel(string channel, Action<object> callback)
{
return messages
.Where(message => message.Channels.Contains(channel))
.Subscribe(message => callback(message.Message));
}
It's very Rx-ish. It's a nice pure query and you can unsubscribe easily.
Since the Action<object> callback is no doubt directly related to the clientId I'd think about putting the logic to prevent duplicate message processing in there.
Right now your code is very procedural and not well suited to Rx. It seems like you haven't quite got your head around how best to work with Rx. It's a good start, but you need to think more functionally (as in functional programming).
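For example, here is a minimal sketch of what pushing the duplicate-message check into the callback side could look like; HandledTracker and its members are illustrative names, not part of your code:
using System;
using System.Collections.Generic;
// Illustrative only: one tracker per client, owned by the caller rather than the queue.
class HandledTracker
{
    readonly HashSet<object> handled = new HashSet<object>();
    public Action<object> Wrap(Action<object> callback)
    {
        return message =>
        {
            bool firstTime;
            lock (handled)
            {
                firstTime = handled.Add(message); // false if this client already saw it
            }
            if (firstTime)
                callback(message);
        };
    }
}
// Usage: the queue stays a pure Rx query, dedup lives with the client.
// var subscription = queue.RegisterChannel("news", tracker.Wrap(callback));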
If you must use your code as-is, I'd suggest some changes.
In Queue do this:
public IDisposable ReceiveNext(
string clientId, string channel, Action<object> callback)
{
return
messages
.Where(message => message.Channels.Contains(channel))
.Take(1)
.Subscribe(message =>
message.TryReceive(clientId, callback));
}
And in MessageWrapper get rid of MarkAsReceivedFor & IsReceivedBy and do this instead:
public bool TryReceive(string clientId, Action<object> callback)
{
lock (receivers)
{
if (!receivers.Contains(clientId))
{
callback(this.Message);
receivers.Add(clientId);
return true;
}
else
return false;
}
}
I really don't see why you have the .Take(1), but these changes may reduce the race condition, depending on its cause.
I'm not sure employing Rx like this is good practice. Rx defines the concept of streams, which requires that there be no concurrent notifications.
That said, to answer your question: to avoid a race condition, put a lock inside the IsReceivedBy and MarkAsReceivedFor methods.
As for a better approach, you could abandon the whole handled-tracking business, use a ConcurrentQueue and TryDequeue a message when a request arrives (you're only doing Take(1), which fits a queue model). Rx can help you give each message a TTL and remove it from the queue, but you could also do that on an incoming request.
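A rough sketch of that queue-based model (illustrative only; expiry handling is simplified, and you would need one queue per client or per channel to keep the original fan-out semantics):
using System;
using System.Collections.Concurrent;
public class MessageQueue
{
    class Entry
    {
        public object Message;
        public DateTime Expires;
    }
    readonly ConcurrentQueue<Entry> queue = new ConcurrentQueue<Entry>();
    readonly TimeSpan ttl;
    public MessageQueue(TimeSpan messageTimeout) { ttl = messageTimeout; }
    public void Send(object message)
    {
        queue.Enqueue(new Entry { Message = message, Expires = DateTime.UtcNow + ttl });
    }
    // Called per incoming long-poll request; returns false if nothing is available.
    public bool TryReceiveNext(out object message)
    {
        Entry entry;
        while (queue.TryDequeue(out entry))
        {
            if (entry.Expires > DateTime.UtcNow) // skip expired messages
            {
                message = entry.Message;
                return true;
            }
        }
        message = null;
        return false;
    }
}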
Purpose: I need to keep track of headers when I redeliver a message.
Configuration:
RabbitMQ 3.7.9
Erlang 21.2
MassTransit 5.1.5
MySql 8.0 for the Quartz database
What I've tried without success:
first attempt:
await context.Redeliver(TimeSpan.FromSeconds(5), (consumeCtx, sendCtx) => {
if (consumeCtx.Headers.TryGetHeader("SenderApp", out object sender))
{
sendCtx.Headers.Set("SenderApp", sender);
}
}).ConfigureAwait(false);
second attempt:
protected Task ScheduleSend(Uri rabbitUri, double delay)
{
return GetBus().ScheduleSend<IProcessOrganisationUpdate>(
rabbitUri,
TimeSpan.FromSeconds(delay),
_Data,
new HeaderPipe(_SenderApp, 0));
}
public class HeaderPipe : IPipe<SendContext>
{
private readonly byte _Priority;
private readonly string _SenderApp;
public HeaderPipe (byte priority)
{
_Priority = priority;
_SenderApp = Assembly.GetEntryAssembly()?.GetName()?.Name ?? "Default";
}
public HeaderPipe (string senderApp, byte priority)
{
_Priority = priority;
_SenderApp = senderApp;
}
public void Probe (ProbeContext context)
{ }
public Task Send (SendContext context)
{
context.Headers.Set("SenderApp", _SenderApp);
context.SetPriority(_Priority);
return Task.CompletedTask;
}
}
Expected: FinQuest.Robot.DBProcess
Result: null
I log my SenderApp in the Consume method. The first time it looks like this:
Initial trigger checking returns true for FinQuest.Robots.OrganisationLinkedinFeed (id: 001ae487-ad3d-4619-8d34-367881ec91ba, sender: FinQuest.Robot.DBProcess, modif: LinkedIn)
and looks like this after the redelivery
Initial trigger checking returns true for FinQuest.Robots.OrganisationLinkedinFeed (id: 001ae487-ad3d-4619-8d34-367881ec91ba, sender: , modif: LinkedIn)
What am I doing wrong? I don't want to use the Retry feature because of its maximum number of retries (I don't want to be limited).
Thanks in advance.
There is a method, used by the redelivery filter, that you might want to use:
https://github.com/MassTransit/MassTransit/blob/develop/src/MassTransit/SendContextExtensions.cs#L90
public static void TransferConsumeContextHeaders(this SendContext sendContext, ConsumeContext consumeContext)
In your code, you would use it:
await context.Redeliver(TimeSpan.FromSeconds(5), (consumeCtx, sendCtx) => {
sendCtx.TransferConsumeContextHeaders(consumeCtx);
});
In the following code I am using the syntactic sugar provided by .NET, the async/await approach, but I have read that this is not a good way of handling asynchronous operations within Akka, and that I should rather use PipeTo().
public class AggregatorActor : ActorBase, IWithUnboundedStash
{
#region Constructor
public AggregatorActor(IActorSystemSettings settings, IAccountComponent component, LogSettings logSettings) : base(settings, logSettings)
{
_accountComponent = component;
_settings = settings;
}
#endregion
#region Public Methods
public override void Listening()
{
ReceiveAsync<ProfilerMessages.ProfilerBase>(async x => await HandleMessage(x));
ReceiveAsync<ProfilerMessages.TimeElasped>(async x => await HandleMessage(x));
}
public override async Task HandleMessage(object msg)
{
msg.Match().With<ProfilerMessages.GetSummary>(async x =>
{
_sender = Context.Sender;
//Become busy. Stash
Become(Busy);
//Handle different request
await HandleSummaryRequest(x.UserId, x.CasinoId, x.GamingServerId, x.AccountNumber, x.GroupName);
});
msg.Match().With<ProfilerMessages.RecurringCheck>(x =>
{
_logger.Info("Recurring Message");
if (IsAllResponsesReceived())
{
BeginAggregate();
}
});
msg.Match().With<ProfilerMessages.TimeElasped>(x =>
{
_logger.Info("Time Elapsed");
BeginAggregate();
});
}
private async Task HandleSummaryRequest(int userId, int casinoId, int gsid, string accountNumber, string groupName)
{
try
{
var accountMsg = new AccountMessages.GetAggregatedData(userId, accountNumber, casinoId, gsid);
//AskPattern.AskAsync<AccountMessages.AccountResponseAll>(Context.Self, _accountActor, accountMsg, _settings.NumberOfMilliSecondsToWaitForResponse, (x) => { return x; });
_accountActor.Tell(accountMsg);
var contactMsg = new ContactMessages.GetAggregatedContactDetails(userId);
//AskPattern.AskAsync<Messages.ContactMessages.ContactResponse>(Context.Self, _contactActor, contactMsg, _settings.NumberOfMilliSecondsToWaitForResponse, (x) => { return x; });
_contactActor.Tell(contactMsg);
var analyticMsg = new AnalyticsMessages.GetAggregatedAnalytics(userId, casinoId, gsid);
//AskPattern.AskAsync<Messages.AnalyticsMessages.AnalyticsResponse>(Context.Self, _analyticsActor, analyticMsg, _settings.NumberOfMilliSecondsToWaitForResponse, (x) => { return x; });
_analyticsActor.Tell(analyticMsg);
var financialMsg = new FinancialMessages.GetAggregatedFinancialDetails(userId.ToString());
//AskPattern.AskAsync<Messages.FinancialMessages.FinancialResponse>(Context.Self, _financialActor, financialMsg, _settings.NumberOfMilliSecondsToWaitForResponse, (x) => { return x; });
_financialActor.Tell(financialMsg);
var verificationMsg = VerificationMessages.GetAggregatedVerification.Instance(groupName, casinoId.ToString(), userId.ToString(), gsid);
_verificationActor.Tell(verificationMsg);
var riskMessage = RiskMessages.GeAggregatedRiskDetails.Instance(userId, accountNumber, groupName, casinoId, gsid);
_riskActor.Tell(riskMessage);
_cancelable = Context.System.Scheduler.ScheduleTellOnceCancelable(TimeSpan.FromMilliseconds(_settings.AggregatorTimeOut), Self, Messages.ProfilerMessages.TimeElasped.Instance(), Self);
_cancelRecurring = Context.System.Scheduler.ScheduleTellRepeatedlyCancelable(_settings.RecurringResponseCheck, _settings.RecurringResponseCheck, Self, Messages.ProfilerMessages.RecurringCheck.Instance(), Self);
}
catch (Exception ex)
{
ExceptionHandler(ex);
}
}
#endregion
}
As you can see in the example code, I am making use of async/await, and using the ReceiveAsync() method provided by Akka.NET.
What is the purpose of ReceiveAsync(), if we cannot use async/await within an actor?
You can use async/await within an actor; however, this requires a little bit of orchestration to suspend/resume the actor's mailbox until the asynchronous task completes. This makes the actor non-reentrant, which means it will not pick up any new messages until the current task is finished. To make use of async/await within an actor you can either:
Use ReceiveAsync which can take async handlers.
Wrap your async method call with ActorTaskScheduler.RunTask. This is usually useful in the context of actor lifecycle methods (like PreStart/PostStop).
Keep in mind that this will work if the default actor message dispatcher is used, but it's not guaranteed to work if an actor is configured to use a different type of dispatcher.
Also, there is a performance downside associated with using async/await inside actors, related to the suspend/resume mechanics and the lack of reentrancy. In many business cases it's not really a problem, but it can be an issue in high-performance/low-latency workflows.
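A minimal sketch of both options (ILookupService and its methods are hypothetical, not part of your actors):
using System.Threading.Tasks;
using Akka.Actor;
using Akka.Dispatch;
public interface ILookupService
{
    Task<string> LookupAsync(string key);
    Task WarmUpAsync();
}
public class LookupActor : ReceiveActor
{
    private readonly ILookupService _service; // hypothetical dependency
    public LookupActor(ILookupService service)
    {
        _service = service;
        // Option 1: ReceiveAsync suspends the mailbox until the awaited task completes.
        ReceiveAsync<string>(async key =>
        {
            var result = await _service.LookupAsync(key);
            Sender.Tell(result);
        });
    }
    protected override void PreStart()
    {
        // Option 2: wrap async work in lifecycle methods with ActorTaskScheduler.RunTask.
        ActorTaskScheduler.RunTask(async () => await _service.WarmUpAsync());
    }
}
The non-blocking alternative the question mentions is PipeTo, e.g. _service.LookupAsync(key).PipeTo(Sender), which lets the actor keep processing other messages while the task runs.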
I'm pushing data updates/changes using IObservable. I have a method, GetLatestElement, that gets the latest data from a database; whenever anyone calls UpdateElement and the data gets updated, a message is distributed over a messaging system.
So I'm creating an observable that emits the latest value, and then re-emits the new value when it receives the update event from the messaging system:
public IObservable<IElement> GetElement(Guid id)
{
return Observable.Create<IElement>(observer =>
{
observer.OnNext(GetLatestElement(id));
// subscribe to internal or external update notifications
Action<object> messageCallback = message =>
{
// new update message recieved,
observer.OnNext(GetLatestElement(id));
};
messageService.SubscribeToTopic(id, messageCallback);
return Disposable.Create(() => Console.WriteLine("Observer Disposed"));
});
}
My problem is that this is indefinite. These updates will potentially happen forever. Since I'm trying to keep the system as stateless as possible, a new Observable is created for each call to GetElement. This means the lifetime is dictated by the subscriber, not the source of the data.
I'll never call OnCompleted() on the Observable; I want it to complete when the Observer/user is done.
However, I need to call messageService.Unsubscribe(messageCallback); at some point in order to unsubscribe from the messages when the Observable is no longer needed.
I could do this when the subscription is disposed, but then I can only subscribe a single time, which seems likely to introduce bugs.
How should this be done with Observables?
It seems there is some misunderstanding about how Observable.Create works. Whenever you call Subscribe on the result of your GetElement(), the body of Observable.Create is executed. So for each subscriber you have a separate subscription to your messageService with a separate callback to execute. If you unsubscribe, you only remove that subscriber's subscription; all others remain active, because they have their own messageCallback. That is assuming, of course, that messageService is implemented properly. Here is a sample application illustrating that:
static IElement GetLatestElement(Guid id) {
return new Element();
}
public class Element : IElement {
}
public interface IElement {
}
class MessageService {
private Dictionary<Guid, Dictionary<Action<IElement>, CancellationTokenSource>> _subs = new Dictionary<Guid, Dictionary<Action<IElement>, CancellationTokenSource>>();
public void SubscribeToTopic(Guid id, Action<IElement> callback) {
var ct = new CancellationTokenSource();
if (!_subs.ContainsKey(id))
_subs[id] = new Dictionary<Action<IElement>, CancellationTokenSource>();
_subs[id].Add(callback, ct);
Task.Run(() =>
{
while (!ct.IsCancellationRequested) {
callback(new Element());
Thread.Sleep(500);
}
});
}
public void Unsubscribe(Guid id, Action<IElement> callback) {
_subs[id][callback].Cancel();
_subs[id].Remove(callback);
}
}
public static IObservable<IElement> GetElement(Guid id)
{
var messageService = new MessageService();
return Observable.Create<IElement>((observer) =>
{
observer.OnNext(GetLatestElement(id));
// subscribe to internal or external update notifications
Action<IElement> messageCallback = (message) =>
{
// new update message recieved,
observer.OnNext(GetLatestElement(id));
};
messageService.SubscribeToTopic(id, messageCallback);
return Disposable.Create(() => {
messageService.Unsubscribe(id, messageCallback);
Console.WriteLine("Observer Disposed");
});
});
}
public static void Main(string[] args) {
var ob = GetElement(Guid.NewGuid());
var sub1 = ob.Subscribe(c =>
{
Console.WriteLine("got element");
});
var sub2 = ob.Subscribe(c =>
{
Console.WriteLine("got element 2");
});
// at this point we see both subscribers receive messages
Console.ReadKey();
sub1.Dispose();
// first one is unsubscribed, but second one is still alive
Console.ReadKey();
}
So, as I said in the comments, I see no reason to complete your observable in this case.
As Evk pointed out, Observable.Create runs then disposes almost immediately. If you want to keep the messageService subscription open though, Rx can help you with that. Look at MessageObservableProvider. The rest is just to make things compile:
public class MessageObservableProvider
{
private MessageService messageService;
private Dictionary<Guid, IObservable<Unit>> _messageNotifications = new Dictionary<Guid, IObservable<Unit>>();
private IObservable<Unit> GetMessageNotifications(Guid id)
{
return Observable.Create<Unit>((observer) =>
{
Action<Message> messageCallback = _ => observer.OnNext(Unit.Default);
messageService.SubscribeToTopic(id, messageCallback);
return Disposable.Create(() =>
{
messageService.Unsubscribe(messageCallback);
Console.WriteLine("Observer Disposed");
});
});
}
public IObservable<IElement> GetElement(Guid id)
{
if(!_messageNotifications.ContainsKey(id))
_messageNotifications[id] = GetMessageNotifications(id).Publish().RefCount();
return _messageNotifications[id]
.Select(_ => GetLatestElement(id))
.StartWith(GetLatestElement(id));
}
private IElement GetLatestElement(Guid id)
{
throw new NotImplementedException();
}
}
public class IElement { }
public class Message { }
public class MessageService
{
public void SubscribeToTopic(Guid id, Action<Message> callback)
{
throw new NotImplementedException();
}
public void Unsubscribe(Action<Message> callback)
{
throw new NotImplementedException();
}
}
Your original Create implementation incorporated the functionality of a StartWith and a Select. I moved those out, so now the Observable.Create just returns a notification when a new message is available.
More importantly though, in GetElement there's now a .Publish().RefCount() call. This will leave the messageService subscription open (by not calling .Dispose()) as long as there's at least one child observable (subscription) hanging around.
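A quick usage sketch of the ref-counting behavior (provider and someId are assumed to be a wired-up MessageObservableProvider instance and an element id):
var element = provider.GetElement(someId);
var sub1 = element.Subscribe(e => Console.WriteLine("subscriber 1"));
var sub2 = element.Subscribe(e => Console.WriteLine("subscriber 2"));
// Only one messageService.SubscribeToTopic call was made; both subscribers share it.
sub1.Dispose(); // topic subscription stays open, sub2 is still attached
sub2.Dispose(); // ref count hits zero, Unsubscribe runs and "Observer Disposed" prints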
I have a component that submits requests to a web-based API, but these requests must be throttled so as not to contravene the API's data limits. This means that all requests must pass through a queue to control the rate at which they are submitted, but they can (and should) execute concurrently to achieve maximum throughput. Each request must return some data to the calling code at some point in the future when it completes.
I'm struggling to create a nice model to handle the return of data.
Using a BlockingCollection I can't just return a Task<TResult> from the Schedule method, because the enqueuing and dequeuing processes are at either end of the buffer. So instead I create a RequestItem<TResult> type that contains a callback of the form Action<Task<TResult>>.
The idea is that once an item has been pulled from the queue the callback can be invoked with the started task, but I've lost the generic type parameters by that point and I'm left using reflection and all kinds of nastiness (if it's even possible).
For example:
public class RequestScheduler
{
private readonly BlockingCollection<IRequestItem> _queue = new BlockingCollection<IRequestItem>();
public RequestScheduler()
{
this.Start();
}
// This can't return Task<TResult>, so returns void.
// Instead RequestItem is generic but this poses problems when adding to the queue
public void Schedule<TResult>(RequestItem<TResult> request)
{
_queue.Add(request);
}
private void Start()
{
Task.Factory.StartNew(() =>
{
foreach (var item in _queue.GetConsumingEnumerable())
{
// I want to be able to use the original type parameters here
// is there a nice way without reflection?
// ProcessItem submits an HttpWebRequest
Task.Factory.StartNew(() => ProcessItem(item))
.ContinueWith(t => { item.Callback(t); });
}
});
}
public void Stop()
{
_queue.CompleteAdding();
}
}
public class RequestItem<TResult> : IRequestItem
{
public IOperation<TResult> Operation { get; set; }
public Action<Task<TResult>> Callback { get; set; }
}
How can I continue to buffer my requests but return a Task<TResult> to the client when the request is pulled from the buffer and submitted to the API?
First, you can return Task<TResult> from Schedule(); you just need to use TaskCompletionSource for that.
Second, to get around the genericity issue, you can hide all of it inside (non-generic) Actions. In Schedule(), create an action using a lambda that does exactly what you need. The consuming loop will then execute that action, it doesn't need to know what's inside.
Third, I don't understand why you are starting a new Task in each iteration of the loop. For one thing, it means you won't actually get any throttling.
With these modifications, the code could look like this:
public class RequestScheduler
{
private readonly BlockingCollection<Action> m_queue = new BlockingCollection<Action>();
public RequestScheduler()
{
this.Start();
}
private void Start()
{
Task.Factory.StartNew(() =>
{
foreach (var action in m_queue.GetConsumingEnumerable())
{
action();
}
}, TaskCreationOptions.LongRunning);
}
public Task<TResult> Schedule<TResult>(IOperation<TResult> operation)
{
var tcs = new TaskCompletionSource<TResult>();
Action action = () =>
{
try
{
tcs.SetResult(ProcessItem(operation));
}
catch (Exception e)
{
tcs.SetException(e);
}
};
m_queue.Add(action);
return tcs.Task;
}
private T ProcessItem<T>(IOperation<T> operation)
{
// submit the HttpWebRequest here and return its result
throw new NotImplementedException();
}
}
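Usage from the caller's side then becomes straightforward (sketch; MyApiOperation is a hypothetical IOperation<string> implementation from the question's domain):
var scheduler = new RequestScheduler();
// The caller gets a Task<TResult> back immediately; the work itself runs
// one item at a time on the scheduler's consuming loop.
Task<string> pending = scheduler.Schedule(new MyApiOperation());
// The task completes (or faults) once the consuming loop has processed the item,
// so the caller can await it, block on it, or attach a continuation:
pending.ContinueWith(t => Console.WriteLine(t.Result));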
I have a method that queues some work to be executed asynchronously. I'd like to return some sort of handle to the caller that can be polled, waited on, or used to fetch the return value from the operation, but I can't find a class or interface that's suitable for the task.
BackgroundWorker comes close, but it's geared to the case where the worker has its own dedicated thread, which isn't true in my case. IAsyncResult looks promising, but the provided AsyncResult implementation is also unusable for me. Should I implement IAsyncResult myself?
Clarification:
I have a class that conceptually looks like this:
class AsyncScheduler
{
private List<object> _workList = new List<object>();
private bool _finished = false;
public SomeHandle QueueAsyncWork(object workObject)
{
// simplified for the sake of example
_workList.Add(workObject);
return SomeHandle;
}
private void WorkThread()
{
// simplified for the sake of example
while (!_finished)
{
foreach (object workObject in _workList)
{
if (!workObject.IsFinished)
{
workObject.DoSomeWork();
}
}
Thread.Sleep(1000);
}
}
}
The QueueAsyncWork function pushes a work item onto the polling list for a dedicated work thread, of which there will only ever be one. My problem is not with writing the QueueAsyncWork function--that's fine. My question is, what do I return to the caller? What should SomeHandle be?
The existing .Net classes for this are geared towards the situation where the asynchronous operation can be encapsulated in a single method call that returns. That's not the case here--all of the work objects do their work on the same thread, and a complete work operation might span multiple calls to workObject.DoSomeWork(). In this case, what's a reasonable approach for offering the caller some handle for progress notification, completion, and getting the final outcome of the operation?
Yes, implement IAsyncResult (or rather, an extended version of it, to provide for progress reporting).
public class WorkObjectHandle : IAsyncResult, IDisposable
{
private int _percentComplete;
private ManualResetEvent _waitHandle;
public int PercentComplete {
get {return _percentComplete;}
set
{
if (value < 0 || value > 100) throw new ArgumentOutOfRangeException("value", "Percent complete should be between 0 and 100");
if (_percentComplete == 100) throw new InvalidOperationException("Already complete");
_percentComplete = value;
if (value == 100 && Complete != null) Complete(this, new CompleteArgs(WorkObject));
}
}
public IWorkObject WorkObject {get; private set;}
public object AsyncState {get {return WorkObject;}}
public bool IsCompleted {get {return _percentComplete == 100;}}
public event EventHandler<CompleteArgs> Complete; // CompleteArgs in a usual pattern
// you may also want to have Progress event
public bool CompletedSynchronously {get {return false;}}
public WaitHandle AsyncWaitHandle
{
get
{
// initialize it lazily
if (_waitHandle == null)
{
ManualResetEvent newWaitHandle = new ManualResetEvent(false);
if (Interlocked.CompareExchange(ref _waitHandle, newWaitHandle, null) != null)
newWaitHandle.Dispose();
}
return _waitHandle;
}
}
public void Dispose()
{
if (_waitHandle != null)
_waitHandle.Dispose();
// dispose _workObject too, if needed
}
public WorkObjectHandle(IWorkObject workObject)
{
WorkObject = workObject;
_percentComplete = 0;
}
}
public class AsyncScheduler
{
private Queue<WorkObjectHandle> _workQueue = new Queue<WorkObjectHandle>();
private bool _finished = false;
public WorkObjectHandle QueueAsyncWork(IWorkObject workObject)
{
var handle = new WorkObjectHandle(workObject);
lock(_workQueue)
{
_workQueue.Enqueue(handle);
}
return handle;
}
private void WorkThread()
{
// simplified for the sake of example
while (!_finished)
{
WorkObjectHandle handle;
lock(_workQueue)
{
if (_workQueue.Count == 0) break;
handle = _workQueue.Dequeue();
}
try
{
var workObject = handle.WorkObject;
// do whatever you want with workObject, set handle.PercentCompleted, etc.
}
finally
{
handle.Dispose();
}
}
}
}
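Calling code could then use the handle for callbacks and polling, for example (sketch; MyWorkObject is a hypothetical IWorkObject implementation, and if you intend to block on AsyncWaitHandle the scheduler should not dispose the handle in its finally block first):
var scheduler = new AsyncScheduler();
var handle = scheduler.QueueAsyncWork(new MyWorkObject());
handle.Complete += (s, e) => Console.WriteLine("work finished"); // completion callback
Console.WriteLine(handle.PercentComplete + "% done");            // polling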
If I understand correctly, you have a collection of work objects (IWorkObject) that each complete a task via multiple calls to a DoSomeWork method. When an IWorkObject has finished its work you'd like to respond to that somehow, and during the process you'd like to respond to any reported progress?
In that case I'd suggest you take a slightly different approach. You could take a look at the Parallel Extension framework (blog). Using the framework, you could write something like this:
public void QueueWork(IWorkObject workObject)
{
Task.Factory.StartNew(() =>
{
while (!workObject.Finished)
{
int progress = workObject.DoSomeWork();
DoSomethingWithReportedProgress(workObject, progress);
}
WorkObjectIsFinished(workObject);
});
}
Some things to note:
QueueWork now returns void. The reason for this is that the actions that occur when progress is reported or when the task completes have become part of the thread that executes the work. You could of course return the Task that the factory creates from the method (to enable polling, for example); see the sketch after this list.
The progress-reporting and finish-handling are now part of the thread because you should always avoid polling when possible. Polling is more expensive because usually you either poll too frequently (too early) or not often enough (too late). There is no reason you can't report on the progress and finishing of the task from within the thread that is running the task.
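A minimal variation that returns the Task (sketch; same illustrative helper methods and IWorkObject members as above):
public Task QueueWork(IWorkObject workObject)
{
    // Returning the Task lets the caller await it, poll Task.IsCompleted,
    // or attach a continuation, while progress is still handled inline.
    return Task.Factory.StartNew(() =>
    {
        while (!workObject.Finished)
        {
            int progress = workObject.DoSomeWork();
            DoSomethingWithReportedProgress(workObject, progress);
        }
        WorkObjectIsFinished(workObject);
    });
}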
The above could also be implemented using the (lower level) ThreadPool.QueueUserWorkItem method.
Using QueueUserWorkItem:
public void QueueWork(IWorkObject workObject)
{
ThreadPool.QueueUserWorkItem(_ =>
{
while (!workObject.Finished)
{
int progress = workObject.DoSomeWork();
DoSomethingWithReportedProgress(workObject, progress);
}
WorkObjectIsFinished(workObject);
});
}
The WorkObject class can contain the properties that need to be tracked.
public class WorkObject
{
public int PercentComplete { get; private set; }
public bool IsFinished { get; private set; }
public void DoSomeWork()
{
// work done here
this.PercentComplete = 50;
// some more work done here
this.PercentComplete = 100;
this.IsFinished = true;
}
}
Then in your example:
Change the collection from a List to a Dictionary that can hold Guid values (or any other means of uniquely identifying the value).
Expose the correct WorkObject's properties by having the caller pass the Guid that it received from QueueAsyncWork.
I'm assuming that you'll start WorkThread asynchronously (albeit, the only asynchronous thread); plus, you'll have to make retrieving the dictionary values and WorkObject properties thread-safe.
private Dictionary<Guid, WorkObject> _workList =
new Dictionary<Guid, WorkObject>();
private bool _finished = false;
public Guid QueueAsyncWork(WorkObject workObject)
{
Guid guid = Guid.NewGuid();
// simplified for the sake of example
_workList.Add(guid, workObject);
return guid;
}
private void WorkThread()
{
// simplified for the sake of example
while (!_finished)
{
foreach (WorkObject workObject in _workList.Values)
{
if (!workObject.IsFinished)
{
workObject.DoSomeWork();
}
}
Thread.Sleep(1000);
}
}
// an example of getting the WorkObject's property
public int GetPercentComplete(Guid guid)
{
WorkObject workObject = null;
if (!_workList.TryGetValue(guid, out workObject))
throw new Exception("Unable to find Guid");
return workObject.PercentComplete;
}
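From the caller's side this might be used like so (sketch; assuming these members live on your scheduler class as described above):
var workObject = new WorkObject();
Guid handle = scheduler.QueueAsyncWork(workObject);
// Later, from any thread, poll progress using the Guid handle.
int percent = scheduler.GetPercentComplete(handle);
Console.WriteLine(percent + "% complete");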
The simplest way to do this is described here. Suppose you have a method string DoSomeWork(int). You then create a delegate of the correct type, for example:
Func<int, string> myDelegate = DoSomeWork;
Then you call the BeginInvoke method on the delegate:
int parameter = 10;
myDelegate.BeginInvoke(parameter, Callback, null);
The Callback delegate will be called once your asynchronous call has completed. You can define this method as follows:
void Callback(IAsyncResult result)
{
// AsyncResult is System.Runtime.Remoting.Messaging.AsyncResult
var asyncResult = (AsyncResult) result;
var @delegate = (Func<int, string>) asyncResult.AsyncDelegate;
string methodReturnValue = @delegate.EndInvoke(result);
}
Using the described scenario, you can also poll for results or wait on them (see the sketch below). Take a look at the URL I provided for more info.
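For completeness, a sketch of polling and waiting on the returned IAsyncResult instead of relying on the callback (same DoSomeWork delegate as above):
Func<int, string> myDelegate = DoSomeWork;
IAsyncResult asyncResult = myDelegate.BeginInvoke(10, null, null);
// Option 1: poll for completion (do useful work instead of sleeping).
while (!asyncResult.IsCompleted)
{
    Thread.Sleep(100);
}
// Option 2: block until the call finishes.
asyncResult.AsyncWaitHandle.WaitOne();
// Either way, collect the result (and any exception) via EndInvoke.
string value = myDelegate.EndInvoke(asyncResult);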
Regards,
Ronald
If you don't want to use async callbacks, you can use an explicit WaitHandle, such as a ManualResetEvent:
public abstract class WorkObject : IDisposable
{
ManualResetEvent _waitHandle = new ManualResetEvent(false);
public void DoSomeWork()
{
try
{
this.DoSomeWorkOverride();
}
finally
{
_waitHandle.Set();
}
}
protected abstract void DoSomeWorkOverride();
public void WaitForCompletion()
{
_waitHandle.WaitOne();
}
public void Dispose()
{
_waitHandle.Dispose();
}
}
And in your code you could say
using (var workObject = new SomeConcreteWorkObject())
{
asyncScheduler.QueueAsyncWork(workObject);
workObject.WaitForCompletion();
}
Don't forget to call Dispose on your workObject though.
You can always use alternate implementations that create a wrapper like this for every work object and that call _waitHandle.Dispose() in WaitForCompletion(); you can lazily instantiate the wait handle (careful: race conditions ahead), etc. (That's pretty much what BeginInvoke does for delegates.)
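A sketch of the lazy-instantiation variant, using Interlocked.CompareExchange to guard against the race mentioned above (the Set side must go through the same accessor):
ManualResetEvent _waitHandle; // created only when someone actually needs it
ManualResetEvent GetOrCreateWaitHandle()
{
    if (_waitHandle == null)
    {
        var candidate = new ManualResetEvent(false);
        // Publish atomically; if another thread won the race, discard our instance.
        if (Interlocked.CompareExchange(ref _waitHandle, candidate, null) != null)
            candidate.Dispose();
    }
    return _waitHandle;
}
public void WaitForCompletion()
{
    GetOrCreateWaitHandle().WaitOne();
}
// DoSomeWork's finally block would then call GetOrCreateWaitHandle().Set()
// so that a waiter who arrives later still observes completion.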