Reactive Extensions timer - C#

I have a HashSet. Occasionally, new values are added to this HashSet. What I am trying to do is have a timer remove each element from the set exactly one minute after it was added.
I am still new to rx but this seems like an ideal occasion to use it.
I tried something like this:
AddItem(string item)
{
_mySet.Add(item);
var timer = Observable.Timer(TimeSpan.FromSeconds(60), _scheduler);
timer
.Take(1)
.Do(item => RemoveItem(item))
.Subscribe(_ => Console.WriteLine("Removed {0}", item));
}
It seems to work ok (passes unit tests).
Does anyone see anything wrong with this approach?

Your lambda in the Do call doesn't look right: Observable.Timer produces long values, but your collection is a HashSet<string>, so this shouldn't compile. I'm guessing it was just a typo.
Do: in general, your work should be done in Subscribe. Do is intended for side effects (I dislike the idea of side effects in a stream, so I avoid it, but it is useful for debugging).
Take: Observable.Timer only produces one value before it terminates, so there is no need for the Take operator.
I would write your function as:
AddItem(string item)
{
_mySet.Add(item);
Observable.Timer(TimeSpan.FromSeconds(60), _scheduler)
.Subscribe(_ => RemoveItem(item));
}

You don't need to create a sequence to do this. You are already being a good citizen and using a scheduler explicitly, so just use that!
You could just have this for your code:
AddItem(string item)
{
_mySet.Add(item);
//Note: Schedule returns an IDisposable, in case you want to cancel the scheduled action.
_scheduler.Schedule(
TimeSpan.FromSeconds(60),
()=>
{
RemoveItem(item);
Console.WriteLine("Removed {0}", item);
});
}
This basically means there is much less work going on under the covers. Consider all the work the Observable.Timer method is doing, when effectively all you want it to do is schedule an OnNext with a value (that you then ignore).
I would also assume that even a user who doesn't know anything about Rx would be able to read this scheduling code, i.e. "After I add this item, schedule this remove action to run in 60 seconds."

If you were using ReactiveUI, a class called ReactiveCollection would definitely help here. You could use it like this:
theCollection.ItemsAdded
.SelectMany(x => Observable.Timer(TimeSpan.FromSeconds(60), _scheduler).Select(_ => x))
.Subscribe(x => theCollection.Remove(x));

Sorry, don't mean to pick on you, but:
ALWAYS DISPOSE IDISPOSABLES!!!!!
(EDIT: Ok, not sure what the heck I put in my coffee this morning, but I answered with a whole mess of nonsense; I'll leave the above only because in general, you do want to make sure to dispose any IDisposable, but in an effort to make up for the babble that follows...)
That call to Subscribe creates a subscription that you are NOT disposing, so multiple calls to this method are just going to queue up more and more crap - now in this specific case, it's not the end of the world since the Timer only fires once, but still...Dispose!
If you really want to use this method (I think a better approach would be to have some running thread/task that "tends" to your values, removing them when it thinks it's necessary), at least try something akin to:
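(For what it's worth, a minimal sketch of that idea: hold the IDisposable that Subscribe returns in a CompositeDisposable so it can be disposed later. The _subscriptions field name is illustrative, not from the original post.)
using System.Reactive.Disposables;

private readonly CompositeDisposable _subscriptions = new CompositeDisposable();

void AddItem(string item)
{
    _mySet.Add(item);
    var subscription = Observable
        .Timer(TimeSpan.FromSeconds(60), _scheduler)
        .Subscribe(_ =>
        {
            RemoveItem(item);
            Console.WriteLine("Removed {0}", item);
        });
    // Track the handle so it can be disposed, e.g. when this object is torn down.
    _subscriptions.Add(subscription);
}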
Ok, ignore all that struck-out crap. The implementation of Observable.Timer is this:
public static IObservable<long> Timer(TimeSpan dueTime)
{
return s_impl.Timer(dueTime);
}
which in turn calls into this:
public virtual IObservable<long> Timer(TimeSpan dueTime)
{
return Timer_(dueTime, SchedulerDefaults.TimeBasedOperations);
}
which calls...
private static IObservable<long> Timer_(TimeSpan dueTime, IScheduler scheduler)
{
return new Timer(dueTime, null, scheduler);
}
And here's where things get fun - Timer is a Producer<long>, where the meaty bits are:
private IDisposable InvokeStart(IScheduler self, object state)
{
this._pendingTickCount = 1;
SingleAssignmentDisposable disposable = new SingleAssignmentDisposable();
this._periodic = disposable;
disposable.Disposable = self.SchedulePeriodic<long>(1L, this._period, new Func<long, long>(this.Tock));
try
{
base._observer.OnNext(0L);
}
catch (Exception exception)
{
disposable.Dispose();
exception.Throw();
}
if (Interlocked.Decrement(ref this._pendingTickCount) > 0)
{
SingleAssignmentDisposable disposable2 = new SingleAssignmentDisposable {
Disposable = self.Schedule<long>(1L, new Action<long, Action<long>>(this.CatchUp))
};
return new CompositeDisposable(2) { disposable, disposable2 };
}
return disposable;
}
Now, the base._observer.OnNext, that's the internal sink set up to trigger on the timer tick, where the Invoke on that is:
private void Invoke()
{
base._observer.OnNext(0L);
base._observer.OnCompleted();
base.Dispose();
}
So yes. It auto-disposes itself - and there won't be any "lingering subscriptions" floating around.
Mmm....crow is tasty. :|

Related

How do I prevent by Rx test from hanging?

I am reproducing my Rx issue with a simplified test case below. The test below hangs. I am sure it is a small, but fundamental, thing that I am missing, but can't put my finger on it.
public class Service
{
private ISubject<double> _subject = new Subject<double>();
public void Reset()
{
_subject.OnNext(0.0);
}
public IObservable<double> GetProgress()
{
return _subject;
}
}
public class ObTest
{
[Fact]
private async Task SimpleTest()
{
var service = new Service();
var result = service.GetProgress().Take(1);
var task = Task.Run(async () =>
{
service.Reset();
});
await result;
}
}
UPDATE
My attempt above was to simplify the problem a little and understand it. In my case GetProgress() is a merge of various observables that publish the download progress; one of these observables is a Subject<double> that publishes 0 every time somebody calls a method to delete the download.
The race condition identified by Enigmativity and Theodor Zoulias may(??) happen in real life. I display a view which attempts to get the progress; however, quick fingers delete it just in time.
What I need to understand a bit more is what happens if the download is started again (a subscription has taken place by now, by virtue of displaying the view, which has already made the subscription) and somebody deletes it again.
public class Service
{
private ISubject<double> _deleteSubject = new Subject<double>();
public void Reset()
{
_deleteSubject.OnNext(0.0);
}
public IObservable<double> GetProgress()
{
return _deleteSubject.Merge(downloadProgress);
}
}
Your code isn't hanging. It's awaiting an observable that sometimes never gets a value.
You have a race condition.
The Task.Run is sometimes executing to completion before the await result creates the subscription to the observable - so it never sees the value.
Try this code instead:
private async Task SimpleTest()
{
var service = new Service();
var result = service.GetProgress().Take(1);
var awaiter = result.GetAwaiter();
var task = Task.Run(() =>
{
service.Reset();
});
await awaiter;
}
The line await result creates a subscription to the observable. The problem is that the notification _subject.OnNext(0.0) may occur before this subscription, in which case the value will pass unobserved, and the await result will continue waiting for a notification forever. In this particular example the notification is always missed, at least on my PC, because the subscription is delayed for around 30 msec (measured with a Stopwatch), which is longer than the time needed for the task that resets the service to complete - probably because the JITer must load and compile some Rx-related assembly. The situation changes when I do a warm-up by calling new Subject<int>().FirstAsync().Subscribe() before running the example. In that case the notification is almost always observed, and the hanging is avoided.
I can think of two robust solutions to this problem.
1. The solution suggested by Enigmativity: create an awaitable subscription before starting the task that resets the service. This can be done with either GetAwaiter or ToTask.
2. Use a ReplaySubject<T> instead of a plain vanilla Subject<T>. From the ReplaySubject<T> documentation: "Represents an object that is both an observable sequence as well as an observer. Each notification is broadcasted to all subscribed and future observers, subject to buffer trimming policies."
The ReplaySubject will cache the value so that it can be observed by the future subscription, eliminating the race condition. You could initialize it with a bufferSize of 1 to minimize the memory footprint of the buffer.
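For illustration, here is a sketch of both fixes applied to the test above (the test name is mine; everything else follows the original example):
// Fix 1: create the awaitable subscription *before* triggering the notification.
[Fact]
private async Task SimpleTest_SubscribeFirst()
{
    var service = new Service();
    var resultTask = service.GetProgress().Take(1).ToTask(); // subscription exists now
    service.Reset();                                         // so this OnNext cannot be missed
    await resultTask;
}

// Fix 2: inside Service, cache the latest value for late subscribers.
// private ISubject<double> _subject = new ReplaySubject<double>(bufferSize: 1);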

How do I implement polling using Observables?

I have a parameterized REST call that should be executed every five seconds with different params:
Observable<TResult> restCall = api.method1(param1);
I need to create an Observable<TResult> which will poll the restCall every 5 seconds with different values for param1. If the API call fails I need to get an error and make the next call in 5 seconds. The interval between calls should be measured only from when restCall finishes (success or error).
I'm currently using RxJava, but a .NET example would also be good.
Introduction
First, an admission: I'm a .NET guy, and I know this approach uses some idioms that have no direct equivalent in Java. But I'm taking you at your word and proceeding on the basis that this is a great question that .NET guys will enjoy, and that hopefully it will lead you down the right path in rx-java, which I have never looked at. This is quite a long answer, but it's mostly explanation - the solution code itself is pretty short!
Use of Either
We will need to sort out some tools first to help with this solution. The first is the Either<TLeft, TRight> type. This is important because each call has two possible outcomes: either a good result or an error. We need to wrap these in a single type - we can't use OnError to send errors back, since that would terminate the result stream. Either looks a bit like a Tuple and makes it easier to deal with this situation. The Rxx library has a very complete implementation of Either, but here is a simple example of usage, followed by a minimal implementation good enough for our purposes:
var goodResult = Either.Right<Exception,int>(1);
var exception = Either.Left<Exception,int>(new Exception());
/* base class for LeftValue and RightValue types */
public abstract class Either<TLeft, TRight>
{
public abstract bool IsLeft { get; }
public bool IsRight { get { return !IsLeft; } }
public abstract TLeft Left { get; }
public abstract TRight Right { get; }
}
public static class Either
{
public sealed class LeftValue<TLeft, TRight> : Either<TLeft, TRight>
{
TLeft _leftValue;
public LeftValue(TLeft leftValue)
{
_leftValue = leftValue;
}
public override TLeft Left { get { return _leftValue; } }
public override TRight Right { get { return default(TRight); } }
public override bool IsLeft { get { return true; } }
}
public sealed class RightValue<TLeft, TRight> : Either<TLeft, TRight>
{
TRight _rightValue;
public RightValue(TRight rightValue)
{
_rightValue = rightValue;
}
public override TLeft Left { get { return default(TLeft); } }
public override TRight Right { get { return _rightValue; } }
public override bool IsLeft { get { return false; } }
}
// Factory functions to create left or right-valued Either instances
public static Either<TLeft, TRight> Left<TLeft, TRight>(TLeft leftValue)
{
return new LeftValue<TLeft, TRight>(leftValue);
}
public static Either<TLeft, TRight> Right<TLeft, TRight>(TRight rightValue)
{
return new RightValue<TLeft, TRight>(rightValue);
}
}
Note that by convention when using Either to model a success or failure, the Right side is used for the successful value, because it's "Right" of course :)
Some Helper Functions
I'm going to simulate two aspects of your problem with some helper functions. First, here is a factory to generate parameters - each time it is called it will return the next integer in the sequence of integers starting with 1:
// An infinite supply of parameters
private static int count = 0;
public int ParameterFactory()
{
return ++count;
}
Next, here is a function that simulates your REST call as an IObservable. This function accepts an integer and:
If the integer is even it returns an Observable that immediately sends an OnError.
If the integer is odd it returns a string concatenating the integer with "-ret", but only after a second has passed. We will use this to check that the polling interval behaves as you requested - as a pause between completed invocations, however long they take, rather than as a regular interval.
Here it is:
// An asynchronous function representing the REST call
public IObservable<string> SomeRestCall(int x)
{
return x % 2 == 0
? Observable.Throw<string>(new Exception())
: Observable.Return(x + "-ret").Delay(TimeSpan.FromSeconds(1));
}
Now The Good Bit
Below is a reasonably generic reusable function I have called Poll. It accepts an asynchronous function that will be polled, a parameter factory for that function, the desired rest (no pun intended!) interval, and finally an IScheduler to use.
The simplest approach I could come up with is to use Observable.Create with a scheduler driving the result stream. ScheduleAsync is a way of scheduling that uses the .NET async/await form. This is a .NET idiom that allows you to write asynchronous code in an imperative fashion. The async keyword introduces an asynchronous function that can then await one or more asynchronous calls in its body and will continue on only when the awaited call completes. I wrote a long explanation of this style of scheduling in this question, which includes the older recursive style that might be easier to implement in an rx-java approach. The code looks like this:
public IObservable<Either<Exception, TResult>> Poll<TResult, TArg>(
Func<TArg, IObservable<TResult>> asyncFunction,
Func<TArg> parameterFactory,
TimeSpan interval,
IScheduler scheduler)
{
return Observable.Create<Either<Exception, TResult>>(observer =>
{
return scheduler.ScheduleAsync(async (ctrl, ct) => {
while(!ct.IsCancellationRequested)
{
try
{
var result = await asyncFunction(parameterFactory());
observer.OnNext(Either.Right<Exception,TResult>(result));
}
catch(Exception ex)
{
observer.OnNext(Either.Left<Exception, TResult>(ex));
}
await ctrl.Sleep(interval, ct);
}
});
});
}
Breaking this down, Observable.Create in general is a factory for creating IObservables that gives you a great deal of control over how results are posted to observers. It's often overlooked in favour of unnecessarily complex composition of primitives.
In this case, we are using it to create a stream of Either<Exception, TResult> so that we can return both the successful and the failed polling results.
The Create function accepts an observer that represents the subscriber, to which we pass results via OnNext/OnError/OnCompleted. We need to return an IDisposable within the Create call - in .NET this is a handle by which the subscriber can cancel their subscription. It's particularly important here because the polling will otherwise go on forever - or at least it will never OnComplete.
The result of ScheduleAsync (or plain Schedule) is such a handle. When disposed, it will cancel any pending event we scheduled - thereby ending the polling loop. In our case, the Sleep we use to manage the interval is the cancellable operation, although the Poll function could easily be modified to accept a cancellable asyncFunction that takes a CancellationToken as well (a sketch of this variation follows below).
The ScheduleAsync method accepts a function that will be called to schedule events. It is passed two arguments, the first ctrl is the scheduler itself. The second ct is a CancellationToken we can use to see if cancellation has been requested (by the Subscriber disposing their subscription handle).
The polling itself is performed via an infinite while loop that terminates only if the CancellationToken indicates cancellation has been requested.
In the loop, we can use the magic of async/await to asynchronously invoke the polling function yet still wrap it in an exception handler. This is so awesome! Assuming no error, we send the result as the right value of an Either to the observer via OnNext. If there was an exception, we send that as the left value of an Either to the observer. Finally, we use the Sleep function on the scheduler to schedule a wake-up call after the rest interval - not to be confused with a Thread.Sleep call, this one typically doesn't block any threads. Note that Sleep accepts the CancellationToken enabling that to be aborted as well!
I think you'll agree this is a pretty cool use of async/await to simplify what would have been an awfully tricky problem!
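As an aside, here is a sketch of the small change mentioned above that would let the polled function observe cancellation too. The PollCancellable name and the two-argument asyncFunction are my variation, not part of the original Poll:
// Variation: pass the CancellationToken on to the polled function as well.
public IObservable<Either<Exception, TResult>> PollCancellable<TResult, TArg>(
    Func<TArg, CancellationToken, IObservable<TResult>> asyncFunction,
    Func<TArg> parameterFactory,
    TimeSpan interval,
    IScheduler scheduler)
{
    return Observable.Create<Either<Exception, TResult>>(observer =>
        scheduler.ScheduleAsync(async (ctrl, ct) =>
        {
            while (!ct.IsCancellationRequested)
            {
                try
                {
                    // The same token that ends the loop lets the REST call abort early.
                    var result = await asyncFunction(parameterFactory(), ct);
                    observer.OnNext(Either.Right<Exception, TResult>(result));
                }
                catch (Exception ex)
                {
                    observer.OnNext(Either.Left<Exception, TResult>(ex));
                }
                await ctrl.Sleep(interval, ct);
            }
        }));
}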
Example Usage
Finally, here is some test code that calls Poll, along with sample output - for LINQPad fans, all of the code in this answer will run together in LINQPad with the Rx 2.1 assemblies referenced:
void Main()
{
var subscription = Poll(SomeRestCall,
ParameterFactory,
TimeSpan.FromSeconds(5),
ThreadPoolScheduler.Instance)
.TimeInterval()
.Subscribe(x => {
Console.Write("Interval: " + x.Interval);
var result = x.Value;
if(result.IsRight)
Console.WriteLine(" Success: " + result.Right);
else
Console.WriteLine(" Error: " + result.Left.Message);
});
Console.ReadLine();
subscription.Dispose();
}
Interval: 00:00:01.0027668 Success: 1-ret
Interval: 00:00:05.0012461 Error: Exception of type 'System.Exception' was thrown.
Interval: 00:00:06.0009684 Success: 3-ret
Interval: 00:00:05.0003127 Error: Exception of type 'System.Exception' was thrown.
Interval: 00:00:06.0113053 Success: 5-ret
Interval: 00:00:05.0013136 Error: Exception of type 'System.Exception' was thrown.
Note the interval between results is either 5 seconds (the polling interval) if an error was immediately returned, or 6 seconds (the polling interval plus the simulated REST call duration) for a successful result.
EDIT - Here is an alternative implementation that doesn't use ScheduleAsync, but uses old style recursive scheduling and no async/await syntax. As you can see, it's a lot messier - but it does also support cancelling the asyncFunction observable.
public IObservable<Either<Exception, TResult>> Poll<TResult, TArg>(
Func<TArg, IObservable<TResult>> asyncFunction,
Func<TArg> parameterFactory,
TimeSpan interval,
IScheduler scheduler)
{
return Observable.Create<Either<Exception, TResult>>(
observer =>
{
var disposable = new CompositeDisposable();
var funcDisposable = new SerialDisposable();
bool cancelRequested = false;
disposable.Add(Disposable.Create(() => { cancelRequested = true; }));
disposable.Add(funcDisposable);
disposable.Add(scheduler.Schedule(interval, self =>
{
funcDisposable.Disposable = asyncFunction(parameterFactory())
.Finally(() =>
{
if (!cancelRequested) self(interval);
})
.Subscribe(
res => observer.OnNext(Either.Right<Exception, TResult>(res)),
ex => observer.OnNext(Either.Left<Exception, TResult>(ex)));
}));
return disposable;
});
}
See my other answer for a different approach that avoids .NET 4.5 async/await features and doesn't use Schedule calls.
I do hope that is some help to the rx-java guys!
I've cleaned up the approach that doesn't use the Schedule call directly - using the Either type from my other answer - it will also work with the same test code and give the same results:
public IObservable<Either<Exception, TResult>> Poll2<TResult, TArg>(
Func<TArg, IObservable<TResult>> asyncFunction,
Func<TArg> parameterFactory,
TimeSpan interval,
IScheduler scheduler)
{
return Observable.Create<Either<Exception, TResult>>(
observer =>
Observable.Defer(() => asyncFunction(parameterFactory()))
.Select(Either.Right<Exception, TResult>)
.Catch<Either<Exception, TResult>, Exception>(
ex => Observable.Return(Either.Left<Exception, TResult>(ex)))
.Concat(Observable.Defer(
() => Observable.Empty<Either<Exception, TResult>>()
.Delay(interval, scheduler)))
.Repeat().Subscribe(observer));
}
This has proper cancellation semantics.
Notes on implementation
The whole construction uses a Repeat to get the looping behaviour.
The initial Defer is used to ensure a different parameter is passed on each iteration.
The Select projects the OnNext result to an Either on the right side.
The Catch projects an OnError result to an Either on the left side - note that the exception terminates the current asyncFunction observable, hence the need for Repeat.
The Concat adds in the interval delay.
My opinion is that the scheduling version is more readable, but this one doesn't use async/await and is therefore .NET 4.0 compatible.

Return an object directly from a method using thread pool

I have a method like this:
public IOrganizationService GetConnection(bool multi)
{
if(!multi)
{
Parallel.For(0, 1, i =>
{
dynamic _serviceobject= InitializeCRMService();
});
}
else
{
ThreadPool.QueueUserWorkItem
(
new WaitCallback
(
(_) =>
{
dynamic _serviceobject= InitializeCRMService();
}
)
);
}
}
I want to return the _serviceobject *directly* from inside the method. Will returning it twice, i.e. once from the if branch and once from the else branch, solve my problem? Please note I am using multithreading via the thread pool. Will the _serviceobjects stay unique in case two threads are running in parallel? I do not want any interaction to happen between my threads.
The code inside of WaitCallback will execute in the thread pool, and will do so probably after GetConnection has returned (that's the point of doing asynchronous operations). So, since it is another thread (with another call stack) and it will potentially execute after GetConnection has returned, you cannot make GetConnection return from inside of WaitCallback. If you really want to do that, then you will have to make GetConnection wait until WaitCallback has completed execution. ManualResetEvent can do the trick:
public IOrganizationService GetConnection(bool multi)
{
var waitHandle = new ManualResetEvent(false);
dynamic result = null;
if(!multi)
{
Parallel.For(0, 1, i =>
{
result = InitializeCRMService();
waitHandle.Set();
});
}
else
{
ThreadPool.QueueUserWorkItem
(
new WaitCallback
(
(_) =>
{
result = InitializeCRMService();
waitHandle.Set();
}
)
);
}
//We wait until the job is done...
waitHandle.WaitOne();
return result as IOrganizationService; //Or use an adequate cast
}
But doing this defeats the point of having asynchronous operations in the first place, since the caller thread will have to wait until the job is done in another thread, sitting there doing nothing... so why not just do it synchronously? In a word: pointless.
The problem is that returning the value directly is a synchronous API. If you want asynchronous operations, you will want an asynchronous API. And if you have an asynchronous API, then you are going to have to change the way the caller works.
Solutions include:
Having a public property to access the result (option 1)
Having a callback (option 2)
Resorting to events
Returning a Task (or using the async keyword if available) - see the sketch after the notes below
Returning an IObservable (using Reactive Extensions if available)
Notes:
Having a public property means you will need to deal with synchronization in the caller.
Having a callback means an odd way to call the method and no explicit way to wait.
Using events has the risk of the caller staying subscribed to the event handler.
Returning a Task seems like overkill since you are already using the thread pool.
Using IObservable without Reactive Extensions is error prone, and much more work compared to the alternatives.
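For comparison, here is a rough sketch of the Task option (assuming .NET 4 and that InitializeCRMService yields something castable to IOrganizationService; the GetConnectionAsync name is made up for this sketch):
public Task<IOrganizationService> GetConnectionAsync()
{
    // The work runs on a pool thread; the Task is the handle the caller keeps.
    return Task.Factory.StartNew(
        () => (IOrganizationService)InitializeCRMService());
}

// Caller:
// var connectionTask = GetConnectionAsync();
// ... do other work ...
// var service = connectionTask.Result; // blocks here only if the work isn't finished yet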
Personally I would go for the callback option:
public void GetConnection(bool multi, Action<IOrganizationService> callback)
{
if (ReferenceEquals(callback, null))
{
throw new ArgumentNullException("callback");
}
if(!multi)
{
Parallel.For(0, 1, i =>
{
callback(InitializeCRMService() as IOrganizationService);
//Or instead of using "as", use an adequate cast
});
}
else
{
ThreadPool.QueueUserWorkItem
(
new WaitCallback
(
(_) =>
{
callback(InitializeCRMService() as IOrganizationService);
//Or instead of using "as", use an adequate cast
}
)
);
}
}
The caller then does something like this:
GetConnection
(
false,
(serviceObject) =>
{
/* do something with serviceObject here */
}
);
//Remember, even after GetConnection completed, serviceObject may not be ready.
// That's because it is asynchronous: you want to say "hey Bob, do this for me"
// and you can go do something else;
// after a while Bob comes back and says:
// "that thing you asked me to do? well, here is the result".
// We call that a callback, and the point is that you didn't have to wait for Bob -
// you just kept doing your stuff...
//So... when is serviceObject ready? I don't know.
//But when serviceObject is ready the callback will run and then you can use it.
You cannot return it from inside the WaitCallback handler because there's no one in your code to return it to. That's just a callback.
You may want to try defining a custom event whose args class (derived from EventArgs) has a dynamic member.
Then you can raise this event from your worker entry point and also send with it the dynamic object.
You can bind to the event where needed (i.e. where you want to use the dynamic object).
EDIT (to also show some code):
In the same class where you have your GetConnection method, also define an event:
internal event EventHandler<SomeEventArgs> OnWorkerFinished = (s, e) => {};
then, define somewhere in your project (close to this class), the SomeEventArgs class:
internal class SomeEventArgs : EventArgs
{
public dynamic WorkerResult { get; private set; }
public SomeEventArgs(dynamic workerResult)
{
WorkerResult = workerResult;
}
}
Next, in the worker:
new WaitCallback
(
(_) =>
{
dynamic _serviceObject = InitializeCRMService();
//Here raise the event
SomeEventArgs e = new SomeEventArgs(_serviceObject);
OnWorkerFinished(this, e);
}
)
I don't know where you want to use the result, but in that place you should subscribe to the OnWorkerFinished event of this class (the one containing the GetConnection method).
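For illustration, the consuming side might look something like this (connectionProvider is just a placeholder name for an instance of the class that exposes GetConnection and the event):
// Subscribe before starting the work so the result isn't missed.
connectionProvider.OnWorkerFinished += (sender, e) =>
{
    dynamic serviceObject = e.WorkerResult;
    // use serviceObject here
};
connectionProvider.GetConnection(true);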

C# equivalent for Java ExecutorService.newSingleThreadExecutor(), or: how to serialize multithreaded access to a resource

I have a couple of situations in my code where various threads can create work items that, for various reasons, shouldn't be done in parallel. I'd like to make sure the work gets done in a FIFO manner, regardless of what thread it comes in from. In Java, I'd put the work items on a single-threaded ExecutorService; is there an equivalent in C#? I've cobbled something together with a Queue and a bunch of lock(){} blocks, but it'd be nice to be able to use something off-the-shelf and tested.
Update: Does anybody have experience with System.Threading.Tasks? Does it have a solution for this sort of thing? I'm writing a Monotouch app so who knows if I could even find a backported version of it that I could get to work, but it'd at least be something to think about for the future.
Update #2 For C# developers unfamiliar with the Java libraries I'm talking about, basically I want something that lets various threads hand off work items such that all those work items will be run on a single thread (which isn't any of the calling threads).
Update, 6/2018: If I was architecting a similar system now, I'd probably use Reactive Extensions as per Matt Craig's answer. I'm leaving Zachary Yates' answer the accepted one, though, because if you're thinking in Rx you probably wouldn't even ask this question, and I think ConcurrentQueue is easier to bodge into a pre-Rx program.
Update: To address the comments on wasting resources (and if you're not using Rx), you can use a BlockingCollection (if you use the default constructor, it wraps a ConcurrentQueue) and just call .GetConsumingEnumerable(). There's an overload that takes a CancellationToken if the work is long-running. See the example below.
You can use ConcurrentQueue (if MonoTouch supports .NET 4?); it's thread safe and I think the implementation is actually lock-free. This works pretty well if you have a long-running task (like in a windows service).
Generally, your problem sounds like you have multiple producers with a single consumer.
var work = new BlockingCollection<Item>();
var producer1 = Task.Factory.StartNew(() => {
work.TryAdd(item); // or whatever your threads are doing
});
var producer2 = Task.Factory.StartNew(() => {
work.TryAdd(item); // etc
});
var consumer = Task.Factory.StartNew(() => {
foreach (var item in work.GetConsumingEnumerable()) {
// do the work
}
});
Task.WaitAll(producer1, producer2, consumer);
You should use BlockingCollection if you have a finite pool of work items. Here's an MSDN page showing all of the new concurrent collection types.
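As mentioned above, GetConsumingEnumerable also has an overload taking a CancellationToken, which lets you stop a long-running consumer. A short sketch, reusing the work collection from the example:
var cts = new CancellationTokenSource();
var cancellableConsumer = Task.Factory.StartNew(() => {
    try {
        foreach (var item in work.GetConsumingEnumerable(cts.Token)) {
            // do the work
        }
    }
    catch (OperationCanceledException) {
        // the consumer was cancelled before CompleteAdding was called
    }
});
// later: cts.Cancel() to stop immediately, or work.CompleteAdding() to let it drain and exit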
I believe this can be done using a SynchronizationContext. However, I have only done this to post back to the UI thread, which already has a synchronization context provided by .NET (if one is installed) -- I don't know how to prepare it for use from a "vanilla" thread though.
Some links I found for "custom SynchronizationContext provider" (I have not had time to review these, do not fully understand the workings/context, nor do I have any additional information):
Looking for an example of a custom SynchronizationContext (Required for unit testing)
http://codeidol.com/csharp/wcf/Concurrency-Management/Custom-Service-Synchronization-Context/
Happy coding.
There is a more contemporary solution now available - the EventLoopScheduler class.
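A minimal sketch of what that looks like (EventLoopScheduler, from System.Reactive, owns one dedicated thread and runs whatever you schedule on it in FIFO order):
using System;
using System.Reactive.Concurrency;

class SerializedWork
{
    static void Main()
    {
        using (var scheduler = new EventLoopScheduler())
        {
            // Any thread may call Schedule; the actions run one at a time on the scheduler's own thread.
            scheduler.Schedule(() => Console.WriteLine("first work item"));
            scheduler.Schedule(() => Console.WriteLine("second work item"));
            Console.ReadLine(); // keep the process alive long enough to observe the output
        }
    }
}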
Not native AFAIK, but look at this:
Serial Task Executor; is this thread safe?
I made an example here https://github.com/embeddedmz/message_passing_on_csharp which makes use of BlockingCollection.
So you will have a class that manages a resource, and you can use the class below, which creates a thread that will be the only one to touch that resource:
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
public class ResourceManagerThread<Resource>
{
private readonly Resource _managedResource;
private readonly BlockingCollection<Action<Resource>> _tasksQueue;
private Task _task;
private readonly object _taskLock = new object();
public ResourceManagerThread(Resource resource)
{
_managedResource = (resource != null) ? resource : throw new ArgumentNullException(nameof(resource));
_tasksQueue = new BlockingCollection<Action<Resource>>();
}
public Task<T> Enqueue<T>(Func<Resource, T> method)
{
var tcs = new TaskCompletionSource<T>();
_tasksQueue.Add(r => tcs.SetResult(method(r)));
return tcs.Task;
}
public void Start()
{
lock (_taskLock)
{
if (_task == null)
{
_task = Task.Run(ThreadMain);
}
}
}
public void Stop()
{
lock (_taskLock)
{
if (_task != null)
{
_tasksQueue.CompleteAdding();
_task.Wait();
_task = null;
}
}
}
public bool HasStarted
{
get
{
lock (_taskLock)
{
if (_task != null)
{
return _task.IsCompleted == false ||
_task.Status == TaskStatus.Running ||
_task.Status == TaskStatus.WaitingToRun ||
_task.Status == TaskStatus.WaitingForActivation;
}
else
{
return false;
}
}
}
}
private void ThreadMain()
{
try
{
foreach (var action in _tasksQueue.GetConsumingEnumerable())
{
try
{
action(_managedResource);
}
catch
{
//...
}
}
}
catch
{
}
}
}
Example :
private readonly DevicesManager _devicesManager;
private readonly ResourceManagerThread<DevicesManager> _devicesManagerThread;
//...
_devicesManagerThread = new ResourceManagerThread<DevicesManager>(_devicesManager);
_devicesManagerThread.Start();
_devicesManagerThread.Enqueue((DevicesManager dm) =>
{
return dm.Initialize();
});
// Enqueue will return a Task. Use the 'Result' property to get the result of the 'message' or 'request' sent to the thread managing the resource
As I wrote in the comments, you discovered yourself that the lock statement can do the work.
If you are interested in a "container" that can simplify the job of managing a queue of work items, look at the ThreadPool class.
I think that, in a well-designed architecture, with these two elements (the ThreadPool class and the lock statement) you can easily and successfully serialize access to resources.
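For what it's worth, here is a bare-bones sketch of the queue-plus-lock idea the question describes, using a single dedicated worker thread (rather than the pool) so items stay strictly serialized in FIFO order; all names are illustrative:
using System;
using System.Collections.Generic;
using System.Threading;

public class SerialWorkQueue
{
    private readonly Queue<Action> _queue = new Queue<Action>();
    private readonly object _gate = new object();

    public SerialWorkQueue()
    {
        // One long-lived worker drains the queue, so items run one at a time, in FIFO order.
        var worker = new Thread(Drain) { IsBackground = true };
        worker.Start();
    }

    public void Enqueue(Action workItem)
    {
        lock (_gate)
        {
            _queue.Enqueue(workItem);
            Monitor.Pulse(_gate); // wake the worker if it is waiting for work
        }
    }

    private void Drain()
    {
        while (true)
        {
            Action next;
            lock (_gate)
            {
                while (_queue.Count == 0)
                    Monitor.Wait(_gate);
                next = _queue.Dequeue();
            }
            next(); // run outside the lock so producers are never blocked by a work item
        }
    }
}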

Strategies for calling synchronous service calls asynchronously in C#

With business logic encapsulated behind synchronous service calls e.g.:
interface IFooService
{
Foo GetFooById(int id);
int SaveFoo(Foo foo);
}
What is the best way to extend/use these service calls in an asynchronous fashion?
At present I've created a simple AsyncUtils class:
public static class AsyncUtils
{
public static void Execute<T>(Func<T> asyncFunc)
{
Execute(asyncFunc, null, null);
}
public static void Execute<T>(Func<T> asyncFunc, Action<T> successCallback)
{
Execute(asyncFunc, successCallback, null);
}
public static void Execute<T>(Func<T> asyncFunc, Action<T> successCallback, Action<Exception> failureCallback)
{
ThreadPool.UnsafeQueueUserWorkItem(state => ExecuteAndHandleError(asyncFunc, successCallback, failureCallback), null);
}
private static void ExecuteAndHandleError<T>(Func<T> asyncFunc, Action<T> successCallback, Action<Exception> failureCallback)
{
try
{
T result = asyncFunc();
if (successCallback != null)
{
successCallback(result);
}
}
catch (Exception e)
{
if (failureCallback != null)
{
failureCallback(e);
}
}
}
}
Which lets me call anything asynchronously:
AsyncUtils.Execute(
() => _fooService.SaveFoo(foo),
id => HandleFooSavedSuccessfully(id),
ex => HandleFooSaveError(ex));
Whilst this works in simple use cases, it quickly gets tricky if other processes need to coordinate on the results; for example, if I need to save three objects asynchronously before the current thread can continue, I'd like a way to wait on/join the worker threads.
Options I've thought of so far include:
having AsyncUtils return a WaitHandle
having AsyncUtils use an AsyncMethodCaller and return an IAsyncResult
rewriting the API to include Begin, End async calls
e.g. something resembling:
interface IFooService
{
Foo GetFooById(int id);
IAsyncResult BeginGetFooById(int id);
Foo EndGetFooById(IAsyncResult result);
int SaveFoo(Foo foo);
IAsyncResult BeginSaveFoo(Foo foo);
int EndSaveFoo(IAsyncResult result);
}
Are there other approaches I should consider? What are the benefits and potential pitfalls of each?
Ideally I'd like to keep the service layer simple/synchronous and provide some easy to use utility methods for calling them asynchronously. I'd be interested in hearing about solutions and ideas applicable to C# 3.5 and C# 4 (we haven't upgraded yet but will do in the relatively near future).
Looking forward to your ideas.
Given your requirement to stay .NET 2.0 only, and not work on 3.5 or 4.0, this is probably the best option.
I do have three remarks on your current implementation.
Is there a specific reason you're using ThreadPool.UnsafeQueueUserWorkItem? Unless it is genuinely required, I would recommend ThreadPool.QueueUserWorkItem instead, especially if you're in a large development team. The Unsafe version can potentially allow security flaws to appear, as you lose the calling stack and, as a result, the ability to control permissions as closely.
The current design of your exception handling, using the failureCallback, will swallow all exceptions and provide no feedback unless a callback is defined. It might be better to propagate the exception and let it bubble up if you're not going to handle it properly. Alternatively, you could push this back onto the calling thread in some fashion, though this would require using something more like IAsyncResult.
You currently have no way to tell if an asynchronous call is completed. This would be the other advantage of using IAsyncResult in your design (though it does add some complexity to the implementation).
Once you upgrade to .NET 4, however, I would recommend just putting this in a Task or Task<T>, as it was designed to handle this very cleanly. Instead of:
AsyncUtils.Execute(
() => _fooService.SaveFoo(foo),
id => HandleFooSavedSuccessfully(id),
ex => HandleFooSaveError(ex));
You can use the built-in tools and just write:
var task = Task.Factory.StartNew(
() => _fooService.SaveFoo(foo));
task.ContinueWith(
t => HandleFooSavedSuccessfully(t.Result),
TaskContinuationOptions.NotOnFaulted);
task.ContinueWith(
t => { try { t.Wait(); } catch (Exception e) { HandleFooSaveError(e); } },
TaskContinuationOptions.OnlyOnFaulted);
Granted, the last line there is a bit odd, but that's mainly because I tried to keep your existing API. If you reworked it a bit, you could simplify it...
Asynchronous interface (based on IAsyncResult) is useful only when you have some non-blocking call under the cover. The main point of the interface is to make it possible to do the call without blocking the caller thread.
This is useful in scenarios when you can make some system call and the system will notify you back when something happens (e.g. when a HTTP response is received or when an event happens).
The price for using an IAsyncResult-based interface is that you have to write code in a somewhat awkward way (by making every call using a callback). Even worse, an asynchronous API makes it impossible to use standard language constructs like while, for, or try..catch.
I don't really see the point of wrapping a synchronous API in an asynchronous interface, because you won't get the benefit (there will always be some thread blocked) and you'll just get a more awkward way of calling it.
Of course, it makes perfect sense to run the synchronous code on a background thread somehow (to avoid blocking the main application thread), either using Task<T> on .NET 4.0 or QueueUserWorkItem on .NET 2.0. However, I'm not sure this should be done automatically in the service - it feels like doing this on the caller side would be easier, because you may need to perform multiple calls to the service. Using an asynchronous API, you'd have to write something like:
svc.BeginGetFooId(ar1 => {
var foo = ar1.Result;
foo.Prop = 123;
svc.BeginSaveFoo(foo, ar2 => {
// etc...
});
});
When using synchronous API, you'd write something like:
ThreadPool.QueueUserWorkItem(_ => {
var foo = svc.GetFooId();
foo.Prop = 123;
svc.SaveFoo(foo);
});
The following is a response to Reed's follow-up question. I'm not suggesting that it's the right way to go.
public static int PerformSlowly(int id)
{
// Addition isn't so hard, but let's pretend.
Thread.Sleep(10000);
return 42 + id;
}
public static Task<int> PerformTask(int id)
{
// Here's the straightforward approach.
return Task.Factory.StartNew(() => PerformSlowly(id));
}
public static Lazy<int> PerformLazily(int id)
{
// Start performing it now, but don't block.
var task = PerformTask(id);
// JIT for the value being checked, block and retrieve.
return new Lazy<int>(() => task.Result);
}
static void Main(string[] args)
{
int i;
// Start calculating the result, using a Lazy<int> as the future value.
var result = PerformLazily(7);
// Do assorted work, then get result.
i = result.Value;
// The alternative is to use the Task as the future value.
var task = PerformTask(7);
// Do assorted work, then get result.
i = task.Result;
}
