I have a loop in my application that iterates over a set of entities in the following fashion:
foreach(var entity in mEntities)
{
entity.Update();
}
Some of these entities maintain a networking component that calls an Azure Mobile Service to update their state on the server. An example is below:
public class TestEntity {
public int Index;
public int PropertyValue;
public async void Update()
{
Task.Run(() => {
MyAzureMobileServiceClient.Update(Index, PropertyValue);
});
}
}
The UI rendering is done by Monogame in a more traditional game loop fashion. Whilst I do not know the inner workings of it, I am fairly certain that it does not have an actual separate thread doing the work. In practice, this shows as the UI freezing every time this update is called.
I want to be able to run it "smoothly" in the background. In the old Windows model this could have easily been done by starting a new Thread that would handle it, but I don't understand the threading well enough in WinRT to understand what is wrong with my approach.
Any ideas?
[update] I also tried this:
Task.Factory.StartNew(async () =>
{
while(true) {
await Task.Delay(1000);
MyAzureMobileServiceClient.Update(Index, PropertyValue);
}
});
Every second, I get a mini-freeze like before.
[update 2] I tried this with a twist: I replaced the Azure Mobile Service client call with a standard HTTP request, and it worked splendidly; no mini-freezes. Granted, it wasn't hitting the real backend yet, but at least I have a workaround by doing the whole thing manually. I would prefer not to do that, however.
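For reference, the manual version was roughly along these lines (the endpoint URL and JSON shape here are placeholders, not the real backend):
Task.Run(async () =>
{
    // requires System.Net.Http and System.Text
    using (var client = new HttpClient())
    {
        // placeholder endpoint and payload, just to illustrate the shape of the workaround
        var json = "{\"index\":" + Index + ",\"value\":" + PropertyValue + "}";
        var content = new StringContent(json, Encoding.UTF8, "application/json");
        await client.PostAsync("https://myservice.azure-mobile.net/tables/TestEntity", content);
    }
});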
[update 3] This is getting peculiar. I realize I simplified the code in this question to keep it coherent, but that appears to have hidden the true source of the problem. I tried the following things:
I built the HTTP request manually, called it inside Task.Run(), and it worked splendidly with no latency.
I called the Azure Mobile Service client Update DIRECTLY and there was no latency.
So this brings me to where the problem lies. I basically have a wrapper class for the Azure Mobile Service. The real call path looks roughly like this:
CommunicationClient.UpdateAsync(myObject);
public Task UpdateAsync(MyObjectType obj)
{
var table = mMobileServiceClient.GetTable<MyObjectType>();
return table.UpdateAsync(obj);
}
This causes the lag, but if I do this instead, it works with no latency whatsoever:
var client = CommunicationClient.MobileServiceClient;
var table = client.GetTable<MyObjectType>();
table.UpdateAsync(obj);
Soooooo... I should probably refactor the whole question. It's getting tl;dr.
I had a question about how to run things on a background thread, and the advice was to use the ThreadPool. I would suggest looking at my question and the answer to it; maybe you can pick up on some things and get it working on your end:
Create Backgroundthread Monogame
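To illustrate the idea (this is just a sketch reusing the identifiers from your question, not MonoGame-specific code), queuing the call on the ThreadPool keeps the game loop free:
public void Update()
{
    // Hand the network call to a pool thread so Update() returns immediately
    System.Threading.ThreadPool.QueueUserWorkItem(_ =>
    {
        MyAzureMobileServiceClient.Update(Index, PropertyValue);
    });
}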
I have "inherited" a multi-tier ASP.NET Core project that uses AutoMapper and EntityFramework.
It also makes heavy use of async/await.
As soon as there are multiple simultaneous requests, I receive an InvalidOperationException from EntityFramework, informing me that "A second operation started on this context before a previous asynchronous operation completed."
I have already played around with the lifetime settings (everything was set to Scoped) and thought setting everything to Transient had solved the issue as a temporary workaround. But after slightly cleaning up my code, so that I could merge it into the main branch, the problem returned.
Anyway, I figured it would be better to create a sample project to replicate the problem. But that didn't work either. I tried to copy the exact project structure, use the same (outdated) versions of Entity Framework etc., but no luck. Everything works fine, both with "Scoped" and "Transient" lifetime.
How would you try to debug this problem?
I have already added code to log the addresses in memory, like so:
private static string GetAddress(object a)
{
GCHandle handle = GCHandle.Alloc(a, GCHandleType.Weak);
IntPtr pointer = GCHandle.ToIntPtr(handle);
handle.Free();
return "0x" + pointer.ToString("X");
}
public async Task<IActionResult> GetAll()
{
Debug.WriteLine("######## {0:00}: Controller {1}\tManager {2}\tRepository\t\t\tContext\t\t\t(DepartmentController.GetAll)", Thread.CurrentThread.ManagedThreadId, GetAddress(this), GetAddress(_departmentManager));
Thread.Sleep(2000); // To make it easier to reproduce the issue
var departments = await _departmentManager.GetAllItems();
return new ObjectResult(GetJsonObject(departments));
}
But this didn't really help much. With the lifetime set to "Transient", all instances have different addresses on different threads, so it doesn't really look like the threads are accessing each other's DbContexts.
I suspect there's a very subtle problem somewhere, like a missing keyword or so, that is running code on a different thread.
I just can't seem to figure out what it is...
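To give an idea of what I'm hunting for, the kind of missing-await bug that can produce exactly this exception looks something like the sketch below (illustrative only, with a hypothetical _dbContext and entity sets; this is not our actual code):
public async Task<IActionResult> GetAllBroken()
{
    // Illustrative only: the first query is not awaited before the second one starts,
    // so two operations run on the same DbContext concurrently.
    var departmentsTask = _dbContext.Departments.ToListAsync();
    var employees = await _dbContext.Employees.ToListAsync();
    var departments = await departmentsTask;
    return new ObjectResult(new { departments, employees });
}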
Any help will be highly appreciated.
Update 2
The queueing problem was probably solved already, as we've been able to run multiple requests concurrently and the lib nicely reported progress for each operation. Other concurrency issues we're still facing were likely the reason for this apparent behaviour, but that's a design matter. To solve this, however, it would help to know more about the inner workings of classes, modules and variables as used in VB6. One question arises: would encapsulating everything (connections, components etc.) in classes ensure that every created object does not share any data with other instances?
Update 1
We've refactored the application a bit more to handle resource disposal, especially when dealing with OCXs. Apparently that solved the out-of-memory issue. What still bothers me is that I don't understand what is happening beneath the surface. In this regard, is there a way to see what objects are currently in memory and how many references they have? I know the reference-counting model is different from garbage-collector-based systems; still, I would suppose the RCW wrapping our COM objects would keep things clean for us. In the model described, is that a safe assumption, or is there something we're missing?
I've read all manner of articles and docs on the topic of COM multithreading, but I still cannot work out exactly how it's supposed to work, especially when interacting with .NET technologies such as ASP.NET MVC. That could be dismissed as idle curiosity on my part, except that we have this quite critical project and we're experiencing severe issues trying to tie everything together. We're getting out-of-memory errors (in VB6), and apparently we got wrong how objects are created and data is shared between them in COM. Continue reading to find out how the story goes...
How things came to be
Not much to say here. We have a legacy VB6 desktop application made up of a number of ActiveX DLLs. These are configured to use Apartment as the threading model, and all classes are set to MultiUse. All worked well and nice until we were asked to move the app to the mighty web :O
The problem we faced and how we (thought we) solved it
Since we haven't got the resources to design and develop a solution from scratch, we used a third party java(script)-based framework to quickly build a web app. However, much of the real work is done by the legacy library, so we needed a way to interface these two components. The easiest way we could think of was to build a very basic (w/o auth and w/o UI) Asp.Net MVC website to use as the middle layer. This would receive requests from the web app and translate them for the COM lib to crunch data.
To this end, and since the libs were never meant to be used as a server, we refactored the whole thing a bit so that most classes can now be used in a standalone manner: this included separating logic from the UI and eliminating all module-level and public variables where possible. Unfortunately, some UI pieces are still present, in particular some ComponentOne OCXs that handle reports and printing. All in all, this seemed to work just fine, until we had to deal with the COM threading model :O
Making sense of nonsense
Long story short, after a lot of digging and headaches we devised the current solution, which is outlined below:
we install the legacy app as usual, so that it registers its DLLs in the registry;
in our MVC solution, we use System.Threading.Tasks, one per request, to start the requested operation asynchronously. We assign the operation an id and return this id to the client. To start the task we call this method:
protected Task<TReturn> StartSTATask<TReturn>(Func<TReturn> function)
{
var task = Task.Factory.StartNew(
function,
System.Threading.CancellationToken.None,
TaskCreationOptions.None,
STATaskScheduler // property to store the scheduler instance
);
return task;
}
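For context, a simplified controller action using this helper looks something like the following (the wrapper class name is hypothetical and the operation-id bookkeeping is omitted):
public ActionResult StartOperation()
{
    var operationId = Guid.NewGuid().ToString();
    // hypothetical wrapper class around the COM calls shown further below
    StartSTATask(() => new ChiusuraWrapper().ChiusuraMese());
    return Json(new { id = operationId });
}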
the task is run using the STATaskScheduler. We modified it so that it spawns a new thread per task if the number of threads in the pool is set to 0.
/// <summary>Initializes a new instance of the StaTaskScheduler class with the specified concurrency level.</summary>
/// <param name="numberOfThreads">The number of threads that should be created and used by this scheduler.</param>
public StaTaskScheduler(int numberOfThreads)
{
// Validate arguments
//if (numberOfThreads < 1) throw new ArgumentOutOfRangeException("concurrencyLevel");
// Initialize the tasks collection
_tasks = new BlockingCollection<Task>();
if (numberOfThreads > 0)
{
// Create the threads to be used by this scheduler
_threads = Enumerable.Range(0, numberOfThreads).Select(i =>
{
var thread = new Thread(() =>
{
// Continually get the next task and try to execute it.
// This will continue until the scheduler is disposed and no more tasks remain.
foreach (var t in _tasks.GetConsumingEnumerable())
{
TryExecuteTask(t);
}
});
thread.Name = "sta_thread_" + i;
thread.IsBackground = true;
thread.SetApartmentState(ApartmentState.STA);
return thread;
}).ToList();
// Start all of the threads
_threads.ForEach(t => t.Start());
}
}
/// <summary>Queues a Task to be executed by this scheduler.</summary>
/// <param name="task">The task to be executed.</param>
protected override void QueueTask(Task task)
{
if (_threads != null)
// Push it into the blocking collection of tasks
_tasks.Add(task);
else
{
var thread = new Thread(() => TryExecuteTask(task));
thread.Name = "sta_thread_task_" + task.Id;
thread.IsBackground = true;
thread.SetApartmentState(ApartmentState.STA);
thread.Start();
}
}
And in our base controller's OnActionExecuting method we initialize it like so:
STATaskScheduler = HttpContext.Application["STATaskScheduler"] as TaskScheduler;
if (null == STATaskScheduler)
{
STATaskScheduler = new StaTaskScheduler(0);
HttpContext.Application["STATaskScheduler"] = STATaskScheduler;
}
we use a thin wrapper to instantiate and call our COM libs through reflection:
// Libraries is a Dictionary containing the names of the registered dlls
protected object InitCom(Libraries lib)
{
return InitCom(lib, true);
}
protected virtual object InitCom(Libraries lib, bool setOperation)
{
var comObj = GetComInstance(lib);
var success = SetUpConnection(comObj);
if (!success)
throw new LeafOperationException(lib, "Errore durante la connessione: {1}".Printf(connectionString));
if(setOperation)
return InitOperation(comObj);
return comObj;
}
protected object GetComInstance(Libraries lib)
{
var comType = Type.GetTypeFromProgID(MALib[lib]);
var comObj = Activator.CreateInstance(comType);
return comObj;
}
protected virtual bool DisposeCom(object comObj)
{
var success = CloseConnection(comObj);
if(!success)
throw new LeafOperationException("Errore durante la chiusura della connessione: {1}".Printf(connectionString));
//Marshal.FinalReleaseComObject(comObj);
//comObj = null;
return success;
}
protected bool SetUpConnection(object comObj)
{
var serverName = connectionString.ServerName();
var catalogName = connectionString.CatalogName();
return Convert.ToBoolean(comObj.InvokeMethod("Set_ConnectionWeb", serverName, catalogName));
}
protected bool CloseConnection(object comObj)
{
return Convert.ToBoolean(comObj.InvokeMethod("Close_ConnectionWeb"));
}
protected object InitOperation(object comObj)
{
comObj.GetType().InvokeMember("OperationID", BindingFlags.SetProperty, null, comObj, new object[] { OperationId });
comObj.GetType().InvokeMember("OperationHash", BindingFlags.SetProperty, null, comObj, new object[] { OperationHash });
return comObj;
}
The rationale behind this is that we create a new instance of the class with each request, eventually releasing it when done. Read here to know why we commented out the ReleaseComObject part. Basically, we were trading out-of-memory errors for a lot of "COM object that has been separated from its underlying RCW cannot be used" exceptions.
The object is then used like this within methods of various classes:
public bool ChiusuraMese()
{
try
{
PulisciMessaggi();
var comObj = InitCom(Libraries.Chiusura);
var byRefArgs = new int[] { 2 };
var oReturn = comObj.InvokeMethodByRef("ChiusuraMese", byRefArgs, IdDitta, PeriodoGiornaliera, IdDipendenti.PadLeft(), IdGruppoInstallazione, CodGruppoGestione);
DisposeCom(comObj);
return Convert.ToInt32(oReturn) == 0;
}
catch (Exception ex)
{
using (ErrorLog Log = new ErrorLog(System.Reflection.Assembly.GetExecutingAssembly().FullName, ex)) { }
aErrorMessage = ex.Message;
return false;
}
}
where InvokeMethodByRef is an extension method defined this way:
public static object InvokeMethodByRef(this object comObj, string methodName, int[] byRefArgs, params object[] args)
{
var modifiers = new ParameterModifier(args.Length);
byRefArgs.ToList().ForEach(index => { modifiers[index] = true; });
return comObj.GetType().InvokeMember(methodName, BindingFlags.InvokeMethod, null, comObj, args, new ParameterModifier[] { modifiers }, null, null);
}
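For completeness, the plain InvokeMethod extension used in SetUpConnection and CloseConnection above is just the by-value counterpart, along these lines (sketched from memory, the actual code may differ slightly):
public static object InvokeMethod(this object comObj, string methodName, params object[] args)
{
    // by-value counterpart of InvokeMethodByRef
    return comObj.GetType().InvokeMember(methodName, BindingFlags.InvokeMethod, null, comObj, args);
}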
Left out of the apartment
From what I understand, this whole apartment business is really hard to get right, with its cross-thread marshalling, message loops, yadda yadda whatnot. Add to that that we're using an old, unsupported technology to drive an application that was never architected for the purpose we're forcing it into. All that said, and taking for granted that the .NET side of things is working correctly, a couple of thoughts still wander in our minds. In particular:
is this the correct way to get advantage of multithreading with COM? Sometimes, multiple requests for the same object get stuck as if queued. This makes us wonder whether COM is actually sharing some instances between threads;
are we really creating and disposing of objects with each request, or does COM handle things differently under the hood? Apparently, we're getting public vars overwritten, so there's probably some resource contention and re-entrancy somewhere we wouldn't expect;
is the setup correct? Are there alternatives which are easier to maintain and debug? Please keep in mind we have neither the time nor the resources to rewrite anything to a great extent. We could probably try something like creating an ActiveX EXE, but I wouldn't count on that.
what's the "least bad" way to use OCXs in a project of this kind (not using them is not an option at the moment)? Should we dispose of them in some particular way? We already checked that we set them to Nothing when finished, but maybe some other thread is still using them;
should we be aware of any particular COM limit related to our out of memory issue? We encountered the problem before when the form had more than 256 unique controls displayed. Maybe the same is happening here somehow? The error seems to be especially related to classes using UI components.
Things I've already read (and probably did not understand)
Before you point to resources online I should read, I add here some topics I've encountered, in random order:
About SingleUse/MultiUse
http://www.vb-helper.com/howto_activex_dll.html
https://msdn.microsoft.com/en-us/library/aa242108(v=vs.60).aspx
Not really much choice here, if we want to stick with ActiveX DLLs with forms.
About (apartment) threading
https://msdn.microsoft.com/en-us/library/aa716297(v=vs.60).aspx
https://msdn.microsoft.com/en-us/library/aa716228(v=vs.60).aspx. By the way, this one probably hints that calls to objects are being serialized for access by other threads.
https://msdn.microsoft.com/en-us/library/windows/desktop/ms680112%28v=vs.85%29.aspx?f=255&MSPPError=-2147217396
About debugging
https://msdn.microsoft.com/en-us/library/aa241684(v=vs.60).aspx
https://msdn.microsoft.com/en-us/library/aa716193%28v=vs.60%29.aspx?f=255&MSPPError=-2147217396
Could a stack dump be of any help when we face the error? I don't even know how to use WinDbg, so I'd like at least to know if that would be a total waste of time :D
We're kinda stuck here, as we've got no clue as to where or what to look for, so any kind of help would be really appreciated.
Comments
So it has been pointed out to me that I should read more about COM's threading model. I kind of expected that. Anyhow, to elaborate further, let me add some comments.
First, I don't have any control over CoInitialize or the like; I'm just instantiating some VB6 DLLs. I guess COM is doing such and such under the hood; fact is, I could not find anywhere what that is (edit: apparently .NET already takes care of that for me, see the answer to this question: Do I need to call CoInitialize before interacting with COM in .NET?).
To recap:
I'm using STA threads from the client app
I'm using Activator.CreateInstance supposing it is actually creating a new object every time it is called. The call is done within a new STA thread.
Let's set aside for a moment questions about thread-safety in the actual DLLs. What I'm mainly interested in understanding here is if the described solution is a correct way (possibly not the best way, I'm aware of that) to exploit multithreading with COM libraries.
To cite some sources, to the best of my current knowledge I should be in the situation depicted in Figure 8.5 here: https://msdn.microsoft.com/en-us/library/aa716228(v=vs.60).aspx
I can't find any reason why this should not work, since as I said I'm supposing each object resides in its own apartment and has its own variables, plus a copy of global vars (see here: https://msdn.microsoft.com/en-us/library/aa261361(v=vs.60).aspx).
I'm building a Windows Store app, and I have some code that needs to be posted to the UI thread.
For that, I'd like to retrieve the CoreDispatcher and use it to post the code.
It seems that there are a few ways to do so:
// First way
Windows.ApplicationModel.Core.CoreApplication.GetCurrentView().CoreWindow.Dispatcher;
// Second way
Window.Current.Dispatcher;
I wonder which one is correct, or whether both are equivalent?
This is the preferred way:
Windows.ApplicationModel.Core.CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(CoreDispatcherPriority.Normal,
() =>
{
// Your UI update code goes here!
});
The advantage this has is that it gets the main CoreApplicationView and so is always available. More details here.
There are two alternatives which you could use.
First alternative
Windows.ApplicationModel.Core.CoreApplication.GetCurrentView().CoreWindow.Dispatcher
This gets the active view for the app, but it will give you null if no view has been activated. More details here.
Second alternative
Window.Current.Dispatcher
This solution will not work when it's called from another thread as it returns null instead of the UI Dispatcher. More details here.
For anyone using C++/CX
Windows::ApplicationModel::Core::CoreApplication::MainView->CoreWindow->Dispatcher->RunAsync(
CoreDispatcherPriority::Normal,
ref new Windows::UI::Core::DispatchedHandler([this]()
{
// do stuff
}));
await CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(
    CoreDispatcherPriority.Normal,
    () => { /* your code goes here */ });
While this is an old thread, I wanted to draw attention to a possible issue developers may run across, one which impacted me and was extremely difficult to debug in large UWP apps. Back in 2014 I refactored the following code from the suggestions above, but was occasionally plagued by app freezes that were random in nature.
public static class DispatcherHelper
{
public static Task RunOnUIThreadAsync(Action action)
{
return RunOnUIThreadAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, action);
}
public static async Task RunOnUIThreadAsync(Windows.UI.Core.CoreDispatcherPriority priority, Action action)
{
try
{
await returnDispatcher().RunAsync(priority, () =>
{
action();
});
}
catch (Exception ex)
{
var noawait = ExceptionHandler.HandleException(ex, false);
}
}
private static Windows.UI.Core.CoreDispatcher returnDispatcher()
{
return (Windows.UI.Xaml.Window.Current == null) ?
CoreApplication.MainView.CoreWindow.Dispatcher :
CoreApplication.GetCurrentView().CoreWindow.Dispatcher;
}
}
From the above, I had used a static class to allow calling the Dispatcher throughout the application with a single call. For 95% of the time everything was fine, even through QA regression, but clients would report an issue every now and then. The solution was to use the call below directly in the actual pages rather than going through the static helper.
await Windows.ApplicationModel.Core.CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
{
});
This was not the case when I needed to ensure the UI thread was invoked from App.xaml.cs or from my singleton NavigationService, which handled pushing/popping onto the stack. The dispatcher apparently was losing track of which UI thread was wanted, since each page has its own UI thread, when the stack had a variety of Messages triggering from the MessageBus.
Hope this helps others that may be impacted. This is also an area where I think each platform would do its developers a service by publishing a complete sample project covering the best practices.
Actually, I would propose something along the lines of this:
return (Window.Current == null) ?
CoreApplication.MainView.CoreWindow.Dispatcher :
CoreApplication.GetCurrentView().CoreWindow.Dispatcher
That way, should you have opened another View/Window, you won't get the Dispatchers confused...
This little gem checks whether there is even a Window. If none, use the MainView's Dispatcher. If there is a view, use that one's Dispatcher.
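Wrapped up as a helper (the method name here is just an example), it would look something like this:
private static CoreDispatcher GetDispatcher()
{
    // Fall back to the main view's dispatcher when no window is active on the current thread
    return (Window.Current == null)
        ? CoreApplication.MainView.CoreWindow.Dispatcher
        : CoreApplication.GetCurrentView().CoreWindow.Dispatcher;
}
// usage
await GetDispatcher().RunAsync(CoreDispatcherPriority.Normal, () =>
{
    // UI update code goes here
});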
Generally with services, the task you want to complete is repeated, maybe in a loop, maybe on a trigger, or maybe something else.
I'm using Topshelf to complete a repeated task for me, specifically I'm using the Shelf'ing functionality.
The problem I'm having is how to handle the looping of the task.
When bootstrapping the service in Topshelf, you pass it a class (in this case ScheduledQueueService) and indicate which is its Start method and which is its Stop method:
Example:
public class QueueBootstrapper : Bootstrapper<ScheduledQueueService>
{
public void InitializeHostedService(IServiceConfigurator<ScheduledQueueService> cfg)
{
cfg.HowToBuildService(n => new ScheduledQueueService());
cfg.SetServiceName("ScheduledQueueHandler");
cfg.WhenStarted(s => s.StartService());
cfg.WhenStopped(s => s.StopService());
}
}
But in my StartService() method I am using a while loop to repeat the task I'm running, and when I attempt to stop the service through Windows services it fails to stop. I suspect it's because the StartService() method never returned when it was originally called.
Example:
public class ScheduledQueueService
{
bool QueueRunning;
public ScheduledQueueService()
{
QueueRunning = false;
}
public void StartService()
{
QueueRunning = true;
while(QueueRunning){
//do some work
}
}
public void StopService()
{
QueueRunning = false;
}
}
What is a better way of doing this?
I've considered using .NET System.Threading.Tasks to run the work in, and then maybe shutting the task down in StopService().
Maybe using Quartz to repeat the task and then remove it.
Thoughts?
Generally, how I would handle this is to have a Timer event that fires a few moments after StartService() is called. At the end of the event handler, I would check for a stop flag (set in StopService()); if the flag (e.g. your QueueRunning) isn't set, I would register a single event on the Timer to fire again in a few moments.
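A minimal sketch of that pattern with System.Timers.Timer, reusing the names from your question, might look like this:
public class ScheduledQueueService
{
    // single-shot timer, re-armed at the end of each tick
    readonly System.Timers.Timer _timer = new System.Timers.Timer(1000) { AutoReset = false };
    volatile bool _stopRequested;

    public void StartService()
    {
        _timer.Elapsed += (sender, args) =>
        {
            // do some work

            if (!_stopRequested)
                _timer.Start();   // schedule the next single-shot tick
        };
        _timer.Start();           // StartService() returns immediately
    }

    public void StopService()
    {
        _stopRequested = true;
        _timer.Stop();
    }
}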
We do something pretty similar in Topshelf itself, when polling the file system: https://github.com/Topshelf/Topshelf/blob/v2_master/src/Topshelf/FileSystem/PollingFileSystemEventProducer.cs#L80
Now, that uses the internal scheduler type instead of a Timer object, but generally it's the same thing. The fiber basically determines which thread the event is processed on.
If you have future questions, you are also welcomed to join the Topshelf mailing list. We try to be pretty responsive on there. http://groups.google.com/group/topshelf-discuss
I was working on some similar code today when I stumbled on https://stackoverflow.com/a/2033431/981 by accident, and it's been working like a charm for me.
I don't know about Topshelf specifically, but when writing a standard Windows service you want the start and stop events to complete as quickly as possible. If the start call takes too long, Windows assumes the service has failed to start, for example.
To get around this I generally use a System.Timers.Timer. This is set to call a startup method just once with a very short interval (so it runs almost immediately). This then does the bulk of the work.
In your case this could be your method that is looping. Then, at the start of each loop, check a global shutdown variable; if it's true, you quit the loop and the program can stop.
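As a rough sketch of what I mean (plain service code, nothing Topshelf-specific; names are illustrative):
// One-shot timer so the real work starts just after StartService() returns
private readonly System.Timers.Timer _startupTimer =
    new System.Timers.Timer(100) { AutoReset = false };
private volatile bool _shutdownRequested;

public void StartService()
{
    _startupTimer.Elapsed += (sender, args) => RunLoop();  // runs on a ThreadPool thread
    _startupTimer.Start();                                  // StartService returns almost immediately
}

private void RunLoop()
{
    while (!_shutdownRequested)
    {
        // do one iteration of the work here
    }
}

public void StopService()
{
    _shutdownRequested = true;
}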
You may need a bit more (or maybe even less) complexity than this depending on where exactly the error is, but the general principle should hopefully be fine.
Once again, though, I will note that this knowledge is not based on Topshelf, just on general service development.
Apologies for the vague title; it's the best I could think of for the moment.
Basically, I've written a singleton class that loads files into a database. These files are typically large and take hours to process. What I am looking for is a way to have this class keep running, and to be able to call methods on it, even if the application that started it has shut down.
The singleton class is simple. It starts a thread that loads the file into the database, and it has methods to report on the current status. In a nutshell it's a little like this:
public sealed class BulkFileLoader {
    static BulkFileLoader instance = null;
    int currentCount = 0;

    BulkFileLoader() { }

    public static BulkFileLoader Instance
    {
        // Instantiate the instance if necessary, and return it
    }

    public void Go() {
        // kick off the 'ProcessFile' thread
    }

    public int GetCurrentCount() {
        return currentCount;
    }

    private void ProcessFile() {
        while (/* more rows in the import file */) {
            // insert the row into the database
            currentCount++;
        }
    }
}
The idea is that you can get an instance of BulkFileLoader to execute, which will process a file to load, while at any time you can get real-time updates on the number of rows it has done so far using the GetCurrentCount() method.
This works fine, except the calling class needs to stay open the whole time for the processing to continue. As soon as I stop the calling class, the BulkFileLoader instance is removed, and it stops processing the file. What I am after is a solution where it will continue to run independently, regardless of what happens to the calling class.
I then tried another approach. I created a simple console application that kicks off the BulkFileLoader, and launched it as a separate process. This fixes one problem: now when I kick off the process, the file continues to load even if I close the class that called the process. However, the problem now is that I cannot get updates on the current count, since if I try to get the instance of BulkFileLoader (which, as mentioned before, is a singleton), it creates a new instance rather than returning the instance in the executing process. It would appear that singletons don't extend into the scope of other processes running on the machine.
In the end, I want to be able to kick off the BulkFileLoader and, at any time, find out how many rows it has processed, even if I close the application I used to start it.
Can anyone see a solution to my problem?
You could create a Windows Service which exposes, say, a WCF endpoint as its API. Through this API you'll be able to query the service's status and add more files for processing.
You should make your "Bulk Uploader" a service, and have your other processes speak to it via IPC.
You need a service because your upload takes hours. And it sounds like you'd like it to run unattended if necessary, and you'd like it to be detached from the caller. That's what services do well.
You need some form of Inter-Process Communication because you'd like to send information between processes.
For communicating with your service, see NetNamedPipeBinding.
You can then send "Job Start" and "Job Status" commands and queries to your background service whenever you like.
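A minimal sketch of such a contract and host might look like this (type names and the pipe address are illustrative):
using System.ServiceModel;

[ServiceContract]
public interface IBulkLoaderControl
{
    [OperationContract]
    void StartJob(string filePath);   // "Job Start"

    [OperationContract]
    int GetCurrentCount();            // "Job Status"
}

// Inside the Windows Service's OnStart, host the endpoint over a named pipe
// (BulkLoaderControlService would be your implementation of IBulkLoaderControl):
var host = new ServiceHost(typeof(BulkLoaderControlService),
    new Uri("net.pipe://localhost/BulkFileLoader"));
host.AddServiceEndpoint(typeof(IBulkLoaderControl), new NetNamedPipeBinding(), "control");
host.Open();

// Any client process, including one started long after the upload began, can then query progress:
var factory = new ChannelFactory<IBulkLoaderControl>(
    new NetNamedPipeBinding(), "net.pipe://localhost/BulkFileLoader/control");
var proxy = factory.CreateChannel();
int rowsSoFar = proxy.GetCurrentCount();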