I have written a WCF service method that consumes three third-party service methods (call them methodA, methodB and methodC), each belonging to a different service, i.e. serviceA, serviceB and serviceC.
Each method accepts a single input object (not a list of objects), but I have to process multiple objects, so I call these methods in a for loop.
Now the problem: suppose I have 3 objects to process with methodA, 2 objects with methodB and 5 objects with methodC, and each call takes about 1 second; the total time to process everything is then almost 10 seconds. After some googling I found options such as threading and Parallel LINQ. Since I don't know enough about threading and its performance characteristics, I chose to stay away from it. With Parallel LINQ the performance did improve, but it still doesn't meet expectations (and sometimes it throws a timeout exception).
So please advise: what should I try next? Should I dive into threading, or is there something else worth trying?
While the TPL in general or the Parallel class would work too, I suggest you try the TPL Dataflow library, since you have data flowing through your application; your code will be much better structured this way.
You can create three ActionBlock<T> objects, one for each of your services, and post data to them in your loop. You can also add a continuation on each block's Completion task so you are notified once all messages have been consumed by the services. Additionally, you can add a BufferBlock<T> and link it to the other blocks with a filter predicate. The code will look something like this:
void ProducingMethod()
{
    var serviceABlock = new ActionBlock<YourInputObject>(o =>
    {
        serviceA.Call(o);
    });
    serviceABlock.Completion.ContinueWith(t =>
    {
        sendNotifyA();
    });

    var serviceBBlock = new ActionBlock<YourInputObject>(o =>
    {
        serviceB.Call(o);
    });
    serviceBBlock.Completion.ContinueWith(t =>
    {
        sendNotifyB();
    });

    var serviceCBlock = new ActionBlock<YourInputObject>(o =>
    {
        serviceC.Call(o);
    });
    serviceCBlock.Completion.ContinueWith(t =>
    {
        sendNotifyC();
    });

    foreach (var objectToProcess in queue)
    {
        if (SendToA)
        {
            serviceABlock.SendAsync(objectToProcess);
        }
        else if (SendToB)
        {
            serviceBBlock.SendAsync(objectToProcess);
        }
        else if (SendToC)
        {
            serviceCBlock.SendAsync(objectToProcess);
        }
    }

    // Signal that no more messages will arrive; without this the
    // Completion continuations above would never run.
    serviceABlock.Complete();
    serviceBBlock.Complete();
    serviceCBlock.Complete();
}
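For completeness, here is a rough, untested sketch of the BufferBlock<T> + LinkTo variant mentioned above; YourInputObject, serviceA/B/C and the ShouldGoToX routing predicates are placeholders standing in for the question's own types:
async Task ProduceWithBufferAsync(IEnumerable<YourInputObject> queue)
{
    var buffer = new BufferBlock<YourInputObject>();

    var serviceABlock = new ActionBlock<YourInputObject>(o => serviceA.Call(o));
    var serviceBBlock = new ActionBlock<YourInputObject>(o => serviceB.Call(o));
    var serviceCBlock = new ActionBlock<YourInputObject>(o => serviceC.Call(o));

    // PropagateCompletion lets buffer.Complete() flow down to the linked ActionBlocks.
    var linkOptions = new DataflowLinkOptions { PropagateCompletion = true };
    buffer.LinkTo(serviceABlock, linkOptions, o => ShouldGoToA(o));
    buffer.LinkTo(serviceBBlock, linkOptions, o => ShouldGoToB(o));
    buffer.LinkTo(serviceCBlock, linkOptions, o => ShouldGoToC(o));

    // Discard messages that match none of the filters so they don't clog the buffer.
    buffer.LinkTo(DataflowBlock.NullTarget<YourInputObject>());

    foreach (var objectToProcess in queue)
    {
        await buffer.SendAsync(objectToProcess);
    }

    buffer.Complete();

    // All three services have processed their messages once these complete.
    await Task.WhenAll(serviceABlock.Completion, serviceBBlock.Completion, serviceCBlock.Completion);
}
Whether the extra BufferBlock is worth it depends on whether you want a single entry point for all messages; posting directly to the ActionBlocks, as in the first example, works just as well.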
I'm about to start using Hangfire in C# in an ASP.NET MVC web application, and I wonder how to create the right architecture.
We are going to use Hangfire as a message queue, so we can process (store in the database) the user data directly and then, for instance, notify other systems and send email later in a separate process.
So our code currently looks like this:
void Xy(Client newClient)
{
    _repository.save(newClient);
    _crmConnector.notify(newClient);
    mailer.Send(repository.GetMailInfo(), newClient);
}
And now we want to put the last two lines 'on the queue'.
Following the example on the Hangfire site, we could do this:
var client = new BackgroundJobClient();
client.Enqueue(() => _crmConnector.notify(newClient));
client.Enqueue(() => mailer.Send(repository.GetMailInfo(), newClient));
but I was wondering whether that is the right solution.
I once read about putting items on a queue and those were called 'commands', and they were classes especially created to wrap a task/command/thing-to-do and put it on a queue.
So for notifying the CRM connector this would then be:
client.Enqueue(() => new CrmNotifyCommand(newClient).Execute());
The CrmNotifyCommand would then receive the new client and have the knowledge to execute _crmConnector.notify(newClient).
In this case all items that are put on the queue (executed by HangFire) would be wrapped in a 'command'.
Such a command would then be a self-contained class which knows how to execute a piece of business functionality. When the command itself uses more than one other class, it could arguably also be called a facade (see the sketch below).
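To make that concrete, a command wrapper along these lines might look roughly like the following; the ICrmConnector interface and the way the connector is obtained are assumptions made for the sketch, not part of the original code:
// Hypothetical sketch of the command wrapper described above.
public class CrmNotifyCommand
{
    private readonly Client _newClient;
    private readonly ICrmConnector _crmConnector;

    public CrmNotifyCommand(Client newClient)
        : this(newClient, new CrmConnector()) // placeholder: however the connector is normally obtained
    {
    }

    public CrmNotifyCommand(Client newClient, ICrmConnector crmConnector)
    {
        _newClient = newClient;
        _crmConnector = crmConnector;
    }

    // The single piece of business functionality this command knows how to execute.
    public void Execute()
    {
        _crmConnector.notify(_newClient);
    }
}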
What do you think about such an architecture?
I once read about putting items on a queue and those were called 'commands', and they were classes especially created to wrap a task/command/thing-to-do and put it on a queue.
Yes, your intuition is correct.
You should encapsulate all dependencies and explicit functionality in a separate class, and tell Hangfire to simply execute a single method (or command).
Here is an example I derived from Blake Connally's Hangfire demo.
using System;
using System.ComponentModel;
using Hangfire;
using Hangfire.Server;

namespace HangfireDemo.Core.Demo
{
    public interface IDemoService
    {
        void RunDemoTask(PerformContext context);
    }

    public class DemoService : IDemoService
    {
        [DisplayName("Data Gathering Task Confluence Page")]
        public void RunDemoTask(PerformContext context)
        {
            Console.WriteLine("This is a task that ran from the demo service.");
            BackgroundJob.ContinueJobWith(context.BackgroundJob.Id, () => NextJob());
        }

        public void NextJob()
        {
            Console.WriteLine("This is my next task.");
        }
    }
}
And then separately, to schedule that command, you'd write something like the following:
BackgroundJob.Enqueue("demo-job", () => this._demoService.RunDemoTask(null));
If you need further clarification, I encourage you to watch Blake Connally's Hangfire demo.
Update 2
The queueing problem had probably been solved already, as we've been able to run multiple requests concurrently, with the lib nicely reporting progress for each operation. The other concurrency issues we're still facing were likely the reason for the apparent queueing, but that's a design matter. To solve those, however, it would help to know more about the inner workings of classes, modules and variables as used in VB6. One question arises: would encapsulating everything (connections, components etc.) in classes ensure that each created object shares no data with other instances?
Update 1
We've refactored the application a bit more to cope with resource disposal, especially when dealing with OCXs. Apparently that solved the out-of-memory issue. What still bothers me is that I don't understand what is happening beneath the surface. In this regard, is there a way to see which objects are currently in memory and how many references they have? I know the reference-counting model is different from garbage-collector-based systems; still, I would assume the RCWs wrapping our COM objects keep things clean for us. In the model described, is that a safe assumption, or is there something we're missing?
So, I've read the most varied articles and docs on the topic of COM multithreading, but I still cannot work out how it is supposed to behave exactly, especially when interacting with .NET technologies such as ASP.NET MVC. That could be dismissed as mere curiosity, except that we have a rather critical project and we're experiencing severe issues trying to tie everything together. We're getting out-of-memory errors (in VB6), and apparently we misunderstood how objects are created and how data is shared between them in COM. Continue reading to see how the story goes...
How things came to be
Not much to say here. We have a legacy VB6 desktop application made up of a number of ActiveX DLLs. These are configured to use Apartment as the threading model, and all classes are set to MultiUse. All worked well and nicely until we were asked to move the app to the mighty web :O
The problem we faced and how we (thought we) solved it
Since we don't have the resources to design and develop a solution from scratch, we used a third-party java(script)-based framework to quickly build a web app. However, much of the real work is done by the legacy library, so we needed a way to interface the two components. The easiest way we could think of was to build a very basic (no auth, no UI) ASP.NET MVC website to use as the middle layer. It receives requests from the web app and translates them for the COM lib to crunch data.
To this end, and since the libs were never meant to be used as a server, we refactored the whole thing a bit so that most classes can now be used in a standalone manner: this included separating logic from the UI and eliminating module-level and public variables where possible; unfortunately, some UI dependencies are still present, in particular some ComponentOne OCXs used to handle reports and prints. All in all, this seemed to work just fine, until we had to deal with the COM threading model :O
Making sense of nonsense
Long story short, after a lot of digging and headaches we devised the current solution, which is outlined below:
we install the legacy app as usual, so that it registers its DLLs in the registry;
in our MVC solution we use System.Threading.Tasks, one task per request, to start the requested operation asynchronously. We assign the operation an id and return that id to the client. To start the task we call this method:
protected Task<TReturn> StartSTATask<TReturn>(Func<TReturn> function)
{
    var task = Task.Factory.StartNew(
        function,
        System.Threading.CancellationToken.None,
        TaskCreationOptions.None,
        STATaskScheduler // property to store the scheduler instance
    );
    return task;
}
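For context, a hypothetical controller action using this helper might look like the sketch below; the OperationStore and ChiusuraController names are illustrative only, not taken from the original code:
// Illustrative only: start a COM-backed operation on an STA thread and return an id the client can poll.
public ActionResult StartChiusuraMese()
{
    var operationId = Guid.NewGuid();

    var task = StartSTATask(() =>
    {
        var worker = new ChiusuraController(); // hypothetical class wrapping the COM calls
        return worker.ChiusuraMese();
    });

    OperationStore.Add(operationId, task); // hypothetical store used to poll the status later

    return Json(new { operationId }, JsonRequestBehavior.AllowGet);
}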
the task is run using the STATaskScheduler. We modified it so that it spawns a new thread if the number of threads in the pool is set to 0.
/// <summary>Initializes a new instance of the StaTaskScheduler class with the specified concurrency level.</summary>
/// <param name="numberOfThreads">The number of threads that should be created and used by this scheduler.</param>
public StaTaskScheduler(int numberOfThreads)
{
    // Validate arguments
    //if (numberOfThreads < 1) throw new ArgumentOutOfRangeException("concurrencyLevel");

    // Initialize the tasks collection
    _tasks = new BlockingCollection<Task>();

    if (numberOfThreads > 0)
    {
        // Create the threads to be used by this scheduler
        _threads = Enumerable.Range(0, numberOfThreads).Select(i =>
        {
            var thread = new Thread(() =>
            {
                // Continually get the next task and try to execute it.
                // This will continue until the scheduler is disposed and no more tasks remain.
                foreach (var t in _tasks.GetConsumingEnumerable())
                {
                    TryExecuteTask(t);
                }
            });
            thread.Name = "sta_thread_" + i;
            thread.IsBackground = true;
            thread.SetApartmentState(ApartmentState.STA);
            return thread;
        }).ToList();

        // Start all of the threads
        _threads.ForEach(t => t.Start());
    }
}
/// <summary>Queues a Task to be executed by this scheduler.</summary>
/// <param name="task">The task to be executed.</param>
protected override void QueueTask(Task task)
{
    if (_threads != null)
    {
        // Push it into the blocking collection of tasks
        _tasks.Add(task);
    }
    else
    {
        var thread = new Thread(() => TryExecuteTask(task));
        thread.Name = "sta_thread_task_" + task.Id;
        thread.IsBackground = true;
        thread.SetApartmentState(ApartmentState.STA);
        thread.Start();
    }
}
And in our base controller's OnActionExecuting method we initialize it like so:
STATaskScheduler = HttpContext.Application["STATaskScheduler"] as TaskScheduler;
if (null == STATaskScheduler)
{
    STATaskScheduler = new StaTaskScheduler(0);
    HttpContext.Application["STATaskScheduler"] = STATaskScheduler;
}
we use a thin wrapper to instantiate and call our COM libs through reflection:
// Libraries is a Dictionary containing the names of the registered dlls
protected object InitCom(Libraries lib)
{
    return InitCom(lib, true);
}

protected virtual object InitCom(Libraries lib, bool setOperation)
{
    var comObj = GetComInstance(lib);

    var success = SetUpConnection(comObj);
    if (!success)
        throw new LeafOperationException(lib, "Errore durante la connessione: {1}".Printf(connectionString));

    if (setOperation)
        return InitOperation(comObj);

    return comObj;
}

protected object GetComInstance(Libraries lib)
{
    var comType = Type.GetTypeFromProgID(MALib[lib]);
    var comObj = Activator.CreateInstance(comType);
    return comObj;
}

protected virtual bool DisposeCom(object comObj)
{
    var success = CloseConnection(comObj);
    if (!success)
        throw new LeafOperationException("Errore durante la chiusura della connessione: {1}".Printf(connectionString));

    //Marshal.FinalReleaseComObject(comObj);
    //comObj = null;

    return success;
}

protected bool SetUpConnection(object comObj)
{
    var serverName = connectionString.ServerName();
    var catalogName = connectionString.CatalogName();
    return Convert.ToBoolean(comObj.InvokeMethod("Set_ConnectionWeb", serverName, catalogName));
}

protected bool CloseConnection(object comObj)
{
    return Convert.ToBoolean(comObj.InvokeMethod("Close_ConnectionWeb"));
}

protected object InitOperation(object comObj)
{
    comObj.GetType().InvokeMember("OperationID", BindingFlags.SetProperty, null, comObj, new object[] { OperationId });
    comObj.GetType().InvokeMember("OperationHash", BindingFlags.SetProperty, null, comObj, new object[] { OperationHash });
    return comObj;
}
The rationale behind this is that we create a new instance of the class with each request, eventually releasing it when done. Read here to know why we commented out the ReleaseComObject part: basically, we were trading out-of-memory errors for a lot of "COM object that has been separated from its underlying RCW cannot be used" exceptions.
The object is then used like this within methods of various classes:
public bool ChiusuraMese()
{
    try
    {
        PulisciMessaggi();

        var comObj = InitCom(Libraries.Chiusura);

        var byRefArgs = new int[] { 2 };
        var oReturn = comObj.InvokeMethodByRef("ChiusuraMese", byRefArgs, IdDitta, PeriodoGiornaliera, IdDipendenti.PadLeft(), IdGruppoInstallazione, CodGruppoGestione);

        DisposeCom(comObj);

        return Convert.ToInt32(oReturn) == 0;
    }
    catch (Exception ex)
    {
        using (ErrorLog Log = new ErrorLog(System.Reflection.Assembly.GetExecutingAssembly().FullName, ex)) { }
        aErrorMessage = ex.Message;
        return false;
    }
}
where InvokeMethodByRef is an extension method defined this way:
public static object InvokeMethodByRef(this object comObj, string methodName, int[] byRefArgs, params object[] args)
{
    var modifiers = new ParameterModifier(args.Length);
    byRefArgs.ToList().ForEach(index => { modifiers[index] = true; });

    return comObj.GetType().InvokeMember(methodName, BindingFlags.InvokeMethod, null, comObj, args, new ParameterModifier[] { modifiers }, null, null);
}
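The plain InvokeMethod extension used by SetUpConnection and CloseConnection above is not shown in the question; presumably it is something along these lines:
// Presumed counterpart of InvokeMethodByRef for calls with no ByRef arguments.
public static object InvokeMethod(this object comObj, string methodName, params object[] args)
{
    return comObj.GetType().InvokeMember(methodName, BindingFlags.InvokeMethod, null, comObj, args);
}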
Left out of the apartment
From what I understand, this whole apartment business is really hard to get right, with its cross-thread marshalling, message loops, yadda yadda whatnot. Add to that that we're using an old, unsupported technology to develop an application that was never architected for the purpose we're forcing it into. All that said, and taking for granted that the .NET side of things is working correctly, a couple of thoughts still wander in our minds. In particular:
is this the correct way to take advantage of multithreading with COM? Sometimes multiple requests for the same object get stuck as if queued. This makes us wonder whether COM is actually sharing some instances between threads;
are we really creating and disposing of objects with each request, or does COM handle things differently under the hood? Apparently we're getting public vars overwritten, so there is probably some resource contention and re-entrancy somewhere we wouldn't expect;
is the setup correct? Are there alternatives that would be easier to maintain and debug? Please keep in mind that we have neither the time nor the resources to rewrite anything to any great extent. We could probably try something like creating an ActiveX EXE, but I wouldn't count on that;
what is the "least bad" way to use OCXs in a project of this kind (not using them is not an option at the moment)? Should we dispose of them in some particular way? We already checked that we set them to Nothing when finished, but maybe some other thread is still using them;
should we be aware of any particular COM limit related to our out-of-memory issue? We ran into the problem before when a form had more than 256 unique controls displayed. Maybe the same is happening here somehow? The error seems to be especially related to classes using UI components.
Things I've already read (and probably did not understand)
Before you point me to online resources I should read, here are some topics I've already come across, in random order:
About SingleUse/MultiUse
http://www.vb-helper.com/howto_activex_dll.html
https://msdn.microsoft.com/en-us/library/aa242108(v=vs.60).aspx
Not really much choice here, if we want to stick with ActiveX DLLs with forms.
About (apartment) threading
https://msdn.microsoft.com/en-us/library/aa716297(v=vs.60).aspx
https://msdn.microsoft.com/en-us/library/aa716228(v=vs.60).aspx. By the way, this one probably hints that calls to objects are being serialized for access by other threads.
https://msdn.microsoft.com/en-us/library/windows/desktop/ms680112%28v=vs.85%29.aspx?f=255&MSPPError=-2147217396
About debugging
https://msdn.microsoft.com/en-us/library/aa241684(v=vs.60).aspx
https://msdn.microsoft.com/en-us/library/aa716193%28v=vs.60%29.aspx?f=255&MSPPError=-2147217396
Could a stack dump be of any help when we hit the error? I don't even know how to use WinDbg, so I'd at least like to know whether that would be a total waste of time :D
We're kinda stuck here, as we've got no clue as to where or what to look for, so any kind of help would be really appreciated.
Comments
So it has been pointed out that I should read more about COM's threading model. I kind of expected that. Anyhow, to elaborate further, let me add some comments.
First, I don't have any control over CoInitialize or whatever; I'm just instantiating some VB6 DLLs. I guess COM is doing such and such under the hood. The thing is, I could not find anywhere what that is (edit: apparently .NET is already taking care of that for me, see the answer to this question: Do i need to call CoInitialize before interacting with COM in .NET?).
To recap:
I'm using STA threads from the client app
I'm using Activator.CreateInstance, assuming it actually creates a new object every time it is called. The call is made from within a new STA thread.
Let's set aside for a moment questions about thread-safety in the actual DLLs. What I'm mainly interested in understanding here is if the described solution is a correct way (possibly not the best way, I'm aware of that) to exploit multithreading with COM libraries.
To cite some sources, to the best of my current knowledge I should be in the situation depicted in Figure 8.5 here: https://msdn.microsoft.com/en-us/library/aa716228(v=vs.60).aspx
I can't find any reason why this should not work, since as I said I'm supposing each object resides in its own apartment and has its own variables, plus a copy of global vars (see here: https://msdn.microsoft.com/en-us/library/aa261361(v=vs.60).aspx).
I have a loop in my application that iterates over a set of entities in the following fashion:
foreach (var entity in mEntities)
{
    entity.Update();
}
Some of these entities maintain a networking component that calls an Azure Mobile Service in order to update their state on the server. An example is below:
public class TestEntity
{
    public int Index;
    public int PropertyValue;

    public async void Update()
    {
        Task.Run(() =>
        {
            MyAzureMobileServiceClient.Update(Index, PropertyValue);
        });
    }
}
The UI rendering is done by Monogame in a more traditional game loop fashion. Whilst I do not know the inner workings of it, I am fairly certain that it does not have an actual separate thread doing the work. In practice, this shows as the UI freezing every time this update is called.
I want to be able to run it "smoothly" in the background. In the old Windows model this could have easily been done by starting a new Thread that would handle it, but I don't understand the threading well enough in WinRT to understand what is wrong with my approach.
Any ideas?
[update] I also tried this:
Task.Factory.StartNew(async () =>
{
    while (true)
    {
        await Task.Delay(1000);
        MyAzureMobileServiceClient.Update(Index, PropertyValue);
    }
});
Every second, I get a mini-freeze like before.
[update 2] I tried this with a twist. I replaced the Azure Mobile Service client call with a standard HTTP request and it worked splendidly; no mini-freezes. Granted, it wasn't going to the backend yet, but at least I have a workaround by doing the whole thing manually. I would prefer not to, however.
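Roughly, the manual version was something like the following (the endpoint, verb and payload here are just placeholders, not the actual service details):
// Hypothetical manual HTTP call standing in for the Azure Mobile Services client.
// Requires System.Net.Http and System.Text.
private async Task UpdateViaHttpAsync(int index, int propertyValue)
{
    using (var client = new HttpClient())
    {
        // Placeholder endpoint and JSON payload; the real table name, key and verb will differ.
        var json = string.Format("{{\"index\":{0},\"propertyValue\":{1}}}", index, propertyValue);
        var content = new StringContent(json, Encoding.UTF8, "application/json");

        await client.PostAsync("https://yourservice.azure-mobile.net/tables/TestEntity", content);
    }
}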
[update 3] This is getting peculiar. I realize I simplified the code in this question in order to keep it coherent, but that appears to have hidden the true source of the problem. I tried the following things:
I built the HTTP request manually, called it inside Task.Run(), and it worked splendidly with no latency.
I called the Azure Mobile Service client's Update DIRECTLY and there was no latency.
So this brings me to where the problem lies. I basically have a wrapper class for the Azure Mobile Service. The real call path looks roughly like this:
CommunicationClient.UpdateAsync(myObject);

public Task UpdateAsync(MyObjectType obj)
{
    var table = mMobileServiceClient.GetTable<MyObjectType>();
    return table.UpdateAsync(obj);
}
This causes the lag, but if I do this instead, it works with no latency whatsoever:
var client = CommunicationClient.MobileServiceClient;
var table = client.GetTable<MyObjectType>();
table.UpdateAsync(obj);
Soooooo... I should probably refactor the whole question. It's getting tl;dr.
I had a question about how to run things on a background thread and was advised to use the ThreadPool, so I would advise you to look at my question and the answer to it; maybe you can pick up on some things and get it working on your end.
Create Backgroundthread Monogame
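In a WinRT app, the thread-pool approach from that answer boils down to something like the sketch below; MyAzureMobileServiceClient is the placeholder client from the question:
using Windows.System.Threading;

// Queue the update on the WinRT thread pool instead of doing it in the game/update loop.
public void QueueUpdate(int index, int propertyValue)
{
    var workItem = ThreadPool.RunAsync(operation =>
    {
        // Runs on a background thread; handle exceptions here, since nothing awaits this work item.
        MyAzureMobileServiceClient.Update(index, propertyValue);
    });
}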
I'm building a Windows Store app, and I have some code that needs to be posted to the UI thread.
For that, I'd like to retrieve the CoreDispatcher and use it to post the code.
It seems that there are a few ways to do so:
// First way
Windows.ApplicationModel.Core.CoreApplication.GetCurrentView().CoreWindow.Dispatcher;
// Second way
Window.Current.Dispatcher;
I wonder which one is correct? or if both are equivalent?
This is the preferred way:
Windows.ApplicationModel.Core.CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(CoreDispatcherPriority.Normal,
    () =>
    {
        // Your UI update code goes here!
    });
The advantage this has is that it gets the main CoreApplicationView and so is always available. More details here.
There are two alternatives which you could use.
First alternative
Windows.ApplicationModel.Core.CoreApplication.GetCurrentView().CoreWindow.Dispatcher
This gets the active view for the app, but it will give you null if no view has been activated yet. More details here.
Second alternative
Window.Current.Dispatcher
This solution will not work when it's called from another thread as it returns null instead of the UI Dispatcher. More details here.
For anyone using C++/CX
Windows::ApplicationModel::Core::CoreApplication::MainView->CoreWindow->Dispatcher->RunAsync(
    CoreDispatcherPriority::Normal,
    ref new Windows::UI::Core::DispatchedHandler([this]()
    {
        // do stuff
    }));
await CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(
    CoreDispatcherPriority.Normal,
    () =>
    {
        // your code should be here
    });
While this is an old thread, I wanted to draw attention to a possible issue that developers may run across; it impacted me and was extremely difficult to debug in large UWP apps. Back in 2014 I refactored the code below from the suggestions above, but would occasionally be plagued by app freezes that were random in nature.
public static class DispatcherHelper
{
    public static Task RunOnUIThreadAsync(Action action)
    {
        return RunOnUIThreadAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, action);
    }

    public static async Task RunOnUIThreadAsync(Windows.UI.Core.CoreDispatcherPriority priority, Action action)
    {
        try
        {
            await returnDispatcher().RunAsync(priority, () =>
            {
                action();
            });
        }
        catch (Exception ex)
        {
            var noawait = ExceptionHandler.HandleException(ex, false);
        }
    }

    private static Windows.UI.Core.CoreDispatcher returnDispatcher()
    {
        return (Windows.UI.Xaml.Window.Current == null) ?
            CoreApplication.MainView.CoreWindow.Dispatcher :
            CoreApplication.GetCurrentView().CoreWindow.Dispatcher;
    }
}
As shown above, I had used a static class so the dispatcher could be called from anywhere in the application with a single call. 95% of the time everything was fine, even through QA regression, but clients would report an issue every now and then. The solution was to use the call below directly in the actual pages, rather than going through the static helper.
await Windows.ApplicationModel.Core.CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
{
});
This was not the case where I needed to ensure the UI thread was invoked from App.xaml.cs or from my singleton NavigationService, which handled pushing/popping on the stack. The dispatcher apparently was losing track of which UI thread was needed, since each view has its own UI thread, when the stack had a variety of messages triggering from the MessageBus.
Hope this helps others that may be impacted. I also think each platform would do its developers a service by publishing a complete project covering the best practices.
Actually, I would propose something along these lines:
return (Window.Current == null) ?
    CoreApplication.MainView.CoreWindow.Dispatcher :
    CoreApplication.GetCurrentView().CoreWindow.Dispatcher;
That way, should you have opened another view/window, you won't get the dispatchers confused...
This little gem checks whether there is even a Window. If none, use the MainView's Dispatcher. If there is a view, use that one's Dispatcher.
I am refactoring a Silverlight program to consume a portion of its existing business logic from a WCF service. In doing so, I've run into the restriction in Silverlight 3 that only allows asynchronous calls to WCF services, to avoid cases where long-running or unresponsive service calls block the UI thread (SL has an interesting queuing model for invoking WCF services on the UI thread).
As a consequence, writing what was once straightforward is rapidly becoming more complex (see the code examples at the end of my question).
Ideally, I would use coroutines to simplify the implementation, but sadly, C# does not currently support coroutines as a native language facility. However, C# does have the concept of generators (iterators) using the yield return syntax. My idea is to re-purpose the yield keyword to allow me to build a simple coroutine model for the same logic.
I am reluctant to do this, however, because I am worried that there may be some hidden (technical) pitfalls that I'm not anticipating (given my relative inexperience with Silverlight and WCF). I am also worried that the implementation mechanism may not be clear to future developers and may hinder rather than simplify their efforts to maintain or extend the code in the future. I've seen this question on SO about re-purposing iterators to build state machines: implementing a state machine using the "yield" keyword, and while it's not exactly the same thing I'm doing, it does make me pause.
However, I need to do something to hide the complexity of the service calls and manage the effort and potential risk of defects in this type of change. I am open to other ideas or approaches I can use to solve this problem.
The original non-WCF version of the code looks something like this:
void Button_Clicked( object sender, EventArgs e ) {
    using( var bizLogic = new BusinessLogicLayer() ) {
        try {
            var resultFoo = bizLogic.Foo();
            // ... do something with resultFoo and the UI
            var resultBar = bizLogic.Bar(resultFoo);
            // ... do something with resultBar and the UI
            var resultBaz = bizLogic.Baz(resultBar);
            // ... do something with resultFoo, resultBar, resultBaz
        }
        catch( Exception ) {
            // ... exception handling omitted for brevity
        }
    }
}
The re-factored WCF version becomes quite a bit more involved (even without exception handling and pre/post condition testing):
// fields needed to manage distributed/async state
private FooResponse m_ResultFoo;
private BarResponse m_ResultBar;
private BazResponse m_ResultBaz;
private SomeServiceClient m_Service;

void Button_Clicked( object sender, EventArgs e ) {
    this.IsEnabled = false; // disable the UI while processing the async WCF call chain
    m_Service = new SomeServiceClient();
    m_Service.FooCompleted += OnFooCompleted;
    m_Service.BeginFoo();
}

// called asynchronously by SL when the service responds
void OnFooCompleted( FooResponse fr ) {
    m_ResultFoo = fr.Response;
    // do some UI processing with resultFoo
    m_Service.BarCompleted += OnBarCompleted;
    m_Service.BeginBar();
}

void OnBarCompleted( BarResponse br ) {
    m_ResultBar = br.Response;
    // do some processing with resultBar
    m_Service.BazCompleted += OnBazCompleted;
    m_Service.BeginBaz();
}

void OnBazCompleted( BazResponse bz ) {
    m_ResultBaz = bz.Response;
    // ... do some processing with Foo/Bar/Baz results
    m_Service.Dispose();
}
The above code is obviously a simplification: it omits exception handling, null checks, and other practices that would be necessary in production code. Nonetheless, I think it demonstrates the rapid increase in complexity that comes with the asynchronous WCF programming model in Silverlight. Refactoring the original implementation (which didn't use a service layer, but rather had its logic embedded in the SL client) is rapidly looking like a daunting task, and one that is likely to be quite error prone.
The co-routine version of the code would look something like this (I have not tested this yet):
void Button_Clicked( object sender, EventArgs e ) {
    PerformSteps( ButtonClickCoRoutine() );
}

private IEnumerable<Action> ButtonClickCoRoutine() {
    using( var service = new SomeServiceClient() ) {
        FooResponse resultFoo = null;
        BarResponse resultBar = null;
        BazResponse resultBaz = null;

        yield return () => {
            service.FooCompleted += r => NextStep( r, out resultFoo );
            service.BeginFoo();
        };
        yield return () => {
            // do some UI stuff with resultFoo
            service.BarCompleted += r => NextStep( r, out resultBar );
            service.BeginBar();
        };
        yield return () => {
            // do some UI stuff with resultBar
            service.BazCompleted += r => NextStep( r, out resultBaz );
            service.BeginBaz();
        };
        yield return () => {
            // do some processing with resultFoo, resultBar, resultBaz
        };
    }
}

private void NextStep<T>( T result, out T store ) {
    store = result;
    PerformSteps(); // continues iterating steps
}

// enumerator over the remaining steps of the current co-routine
private IEnumerator<Action> m_StepsToPerform;

private void PerformSteps( IEnumerable<Action> steps ) {
    m_StepsToPerform = steps.GetEnumerator();
    PerformSteps();
}

private void PerformSteps() {
    if( m_StepsToPerform == null )
        return; // nothing to do

    if( !m_StepsToPerform.MoveNext() ) {
        m_StepsToPerform.Dispose();
        m_StepsToPerform = null;
        return; // end of steps
    }

    var nextStep = m_StepsToPerform.Current;
    nextStep();
}
There are still things that could be improved in the above code, but the basic premise is to factor out the continuation pattern (creating an interception point for exception handling and various checks) while letting WCF's event-based async model drive when each step is performed, basically when the last async WCF call completes. While on the surface this looks like more code, it's worth mentioning that PerformSteps() and NextStep() are reusable; only the implementation of ButtonClickCoRoutine() would change from one call site to another.
I'm not entirely sure I like this model, and I wouldn't be surprised if a simpler way existed to implement it. But I haven't been able to find one on the "interwebs" or MSDN, or anywhere else. Thanks in advance for the help.
You should definitely look at the Concurrency and Coordination Runtime. It uses iterators for exactly this purpose.
On the other hand, you should also look at Parallel Extensions and its approach to continuations. Parallel Extensions is part of .NET 4.0, whereas the CCR requires separate licensing. I would advise you to go with a framework written by people who eat, breathe and sleep this stuff though. It's just too easy to get details wrong on your own.
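As a rough, untested illustration of that continuation style on full .NET 4 (assuming the proxy exposes conventional Begin/End pairs, which a Silverlight-generated proxy may not in exactly this shape), the chain could be written with Task.Factory.FromAsync and ContinueWith:
var service = new SomeServiceClient();

Task<FooResponse>.Factory.FromAsync(service.BeginFoo, service.EndFoo, null)
    .ContinueWith(t =>
    {
        var resultFoo = t.Result;
        // ... UI work with resultFoo (marshal back to the UI thread as needed)
        return Task<BarResponse>.Factory.FromAsync(service.BeginBar, service.EndBar, resultFoo, null);
    })
    .Unwrap()
    .ContinueWith(t =>
    {
        var resultBar = t.Result;
        return Task<BazResponse>.Factory.FromAsync(service.BeginBaz, service.EndBaz, resultBar, null);
    })
    .Unwrap()
    .ContinueWith(t =>
    {
        var resultBaz = t.Result;
        // ... final processing; earlier results would need to be captured or stored in fields
        ((IDisposable)service).Dispose();
    });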
The Reactive Extensions for .NET provide a much cleaner model for handling this.
They provide extensions which let you write simple delegates against asynchronous events in a much, much cleaner manner. I recommend looking into them, and adapting them to this situation.
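To give a flavour of that (untested, and assuming the usual event-based Silverlight proxy with FooAsync/FooCompleted members and the corresponding *CompletedEventArgs types), each call can be wrapped as an observable and the whole flow chained with SelectMany:
// Sketch: wrap one event-based proxy call as an IObservable, then compose the chain.
Func<SomeServiceClient, IObservable<FooResponse>> fooAsync = service =>
    Observable.Create<FooResponse>(observer =>
    {
        var subscription = Observable
            .FromEventPattern<FooCompletedEventArgs>(
                h => service.FooCompleted += h,
                h => service.FooCompleted -= h)
            .Select(e => e.EventArgs.Result)
            .Take(1)
            .Subscribe(observer);

        service.FooAsync(); // start the call once someone is listening
        return subscription;
    });

// barAsync and bazAsync would be wrapped the same way; then the flow reads linearly:
var client = new SomeServiceClient();
fooAsync(client)
    .SelectMany(resultFoo => barAsync(client, resultFoo),
                (resultFoo, resultBar) => new { resultFoo, resultBar })
    .SelectMany(x => bazAsync(client, x.resultBar),
                (x, resultBaz) => new { x.resultFoo, x.resultBar, resultBaz })
    .ObserveOnDispatcher()
    .Subscribe(r =>
    {
        // ... update the UI with r.resultFoo, r.resultBar, r.resultBaz
    });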
I didn't read your whole thing.
They use this strategy in the CCR for Robotics Studio, and a number of other projects use it as well. An alternative is to use LINQ; see e.g. this blog for a description. The Reactive framework (Rx) is kind of built along these lines.
Luca mentions in his PDC talk that a future version of C#/VB may perhaps add async primitives to the language.
In the meantime, if you can use F#, it is a winning strategy. Right now what you can do with F# here blows everything else right out of the water.
EDIT
To quote the example from my blog, suppose you have a WCF client on which you want to call a couple of methods. The synchronous version might be written as
// a sample client function that runs synchronously
let SumSquares (client : IMyClientContract) =
    (box client :?> IClientChannel).Open()
    let sq1 = client.Square(3)
    let sq2 = client.Square(4)
    (box client :?> IClientChannel).Close()
    sq1 + sq2
and the corresponding async code would be
// async version of our sample client - does not hold threads
// while calling out to network
let SumSquaresAsync (client : IMyClientContract) =
    async { do! (box client :?> IClientChannel).OpenAsync()
            let! sq1 = client.SquareAsync(3)
            let! sq2 = client.SquareAsync(4)
            do! (box client :?> IClientChannel).CloseAsync()
            return sq1 + sq2 }
No crazy callbacks; you can use control constructs like if-then-else, while, try-finally, etc. You write it almost exactly like straight-line code, and everything works, but now it's async. It's very easy to take a given BeginFoo/EndFoo pair of methods and build the corresponding F# async method for use in this model.
You may also want to consider Jeffrey Richter's AsyncEnumerator, which is part of his 'Power Threading' library. He worked together with the CCR team as they developed the CCR. According to Jeffrey, the AsyncEnumerator is more 'lightweight' than the CCR. Personally, I've played about with the AsyncEnumerator but not with the CCR.
I haven't used it in anger though; so far I've found the limitations of using enumerators to implement coroutines too painful. I'm currently learning F# because of, amongst other things, async workflows (if I remember the name correctly), which look like fully fledged coroutines or 'continuations' (I forget the correct name and the exact distinctions between the terms).
Anyway, here's some links:
http://www.wintellect.com/PowerThreading.aspx
Channel 9 video on AsyncEnumerator
MSDN Article