COM+ Queued Components in .NET or something similar - C#

I am trying to write an app which would wake up on-demand as the messages are queued in MSMQ, do some processing and go back to sleep. Now I am expecting only ~20 messages per day, so keeping a process alive 24/7 just to watch the queue may not be a good idea.
COM+ Queued Components came to mind, and I am nostalgic now. I could create a DLL, register it with Regsvcs.exe, and set it up under Component Services to be queued. But there are a couple of problems with this.
I don't like the way ".NET Serviced Components" are deployed (Regsvcs.exe)
Microsoft shows the following warning on its page about Queued Components:
This document may not represent best practices for current
development, links to downloads and other resources may no longer be
valid. Current recommended version can be found here.
For those who don't know how Queued Components work, here's a little summary:
You create an instance of a component and call a few methods. The methods aren't actually executed; a "Recorder" records what you did and places the calls in an MSMQ queue. Later, when it's your turn, the "Player" replays your method calls. Your component does not have to be awake 24/7, taking up CPU and memory just to wait for messages.
Is there anything similar in .NET now?

Well, you can implement a queued component in .NET...
https://msdn.microsoft.com/en-US/library/6b617zws(v=vs.80).aspx
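For reference, here is a minimal sketch of what the linked article describes, using System.EnterpriseServices. The interface, class, and ProgID names are made up for illustration; queued interfaces must be one-way (no return values, out, or ref parameters):
using System;
using System.EnterpriseServices;
using System.Runtime.InteropServices;

[assembly: ApplicationName("OrderProcessing")]
[assembly: ApplicationActivation(ActivationOption.Server)]
// Enables the application's queue and the listener that plays back recorded calls.
[assembly: ApplicationQueuing(Enabled = true, QueueListenerEnabled = true)]

[InterfaceQueuing]
public interface IOrderProcessor
{
    void ProcessOrder(string orderXml);
}

public class OrderProcessor : ServicedComponent, IOrderProcessor
{
    public void ProcessOrder(string orderXml)
    {
        // Executed later, when the COM+ "Player" dequeues the recorded call.
    }
}

class Client
{
    static void Main()
    {
        // The queue moniker binds to the "Recorder" rather than the real object;
        // the ProgID after "new:" is illustrative.
        var proc = (IOrderProcessor)Marshal.BindToMoniker(
            "queue:/new:OrderProcessing.OrderProcessor");
        proc.ProcessOrder("<order/>");
    }
}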
If you're looking for a more modern approach, it's not .NET as such, but perhaps check out...
http://gruntjs.com/

Related

ASP.Net WebAPI replacement for CORBA?

I am in need of some guidance on a project I am working on. We are looking for a replacement for a CORBA server setup. In a nutshell, we currently run a CORBA daemon service that hosts 10 instances of a C++ exe that is the entry point into our calculation process. The C++ code hooks into a bunch of different .NET and C++ DLLs and OCXs via COM. We also have another version of the executable that is compiled as a DLL, which we are able to call in a similar fashion, but it is only a single-instance system, so all is well there.
We are now looking to replace the CORBA components with a Web API, so I have put together a basic ASP.NET Web API project that is able to process requests into this C++ dll. Again, this works great when it only needs to handle one request at a time. Things start going sideways when I start testing concurrent requests. The requests come into my handler just fine and I can see the 5 requests (I have logging everywhere tracking what's going on), and each thread creates an instance of the dll, but they run synchronously.
What I have figured out is that even though there are multiple threads going in the ASP.NET handler, the dll is STA-threaded (this is confirmed in the code), so the calls are queued up and only processed one at a time. My guess is that because the threads are all inside the same process, the dll treats all the threads as the same apartment (STAThread), which causes the queueing.
I have tried different async/await and Task.Run code and I can see different threads, but it still comes down to the same process, which makes the dll run synchronously. I did try changing the dll to MTA by changing CoInitializeEx(NULL, 0x2) to CoInitializeEx(NULL, 0x0), but that didn't seem to change anything.
I am now running out of ideas, and I don't think switching to the .exe version and spawning multiple processes is going to work, because the CORBA layer is what allows a return object to be created and communicated back to the calling code. I need to be able to get the objects that are created in the exe back into the response.
Sorry for the long post, hopefully someone will take the time to read this wall of text and have some ideas of what I can try.
Thank you!
I would suggest that the WebAPI architecture is a poor solution to your problem. Typically you do not want to spawn long-running or blocking processes from ASP.NET, because it's quite easy to exhaust the threadpool and prevent the server from being able to handle new requests.
If you do want to continue having a WebAPI endpoint, I would start with taking the requests and putting them in a queue, and having the client poll or subscribe for the completed result.
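As a rough sketch of that queue-and-poll pattern (all names here are made up; JobQueue stands in for whatever durable store you choose), the Web API side could look like this:
using System;
using System.Net;
using System.Web.Http;

public class CalcRequest { public string Payload { get; set; } }
public class Job { public bool IsDone; public string Result; }

// Hypothetical durable queue; in practice this would persist to a DB or MSMQ.
public static class JobQueue
{
    public static string Enqueue(CalcRequest r) { /* persist */ return Guid.NewGuid().ToString(); }
    public static Job Find(string id) { /* look up */ return null; }
}

public class CalcController : ApiController
{
    // POST: enqueue the request and return immediately with a job id.
    public IHttpActionResult Post(CalcRequest request)
    {
        string jobId = JobQueue.Enqueue(request);
        return Content(HttpStatusCode.Accepted, new { jobId });
    }

    // GET: the client polls until the result is ready.
    [HttpGet]
    public IHttpActionResult Status(string jobId)
    {
        Job job = JobQueue.Find(jobId);
        if (job == null) return NotFound();
        return Ok(new { done = job.IsDone, result = job.Result });
    }
}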
You may be interested in looking at what they're doing with gRPC in .NET Core 3.0 - if you want to keep that kind of architecture but update the platform.
You can create multiple app domains. An app domain "can be considered as a lightweight process which is both a container and boundary" (ref). Load your DLLs into those different domains; this way, every app domain you create will load your COM DLLs separately. Create proxies using MarshalByRefObject, as used here. Write an orchestrator that distributes requests to app domains, gets the results back, and sends the responses; keep track of which domains are busy, and create new domains for requests as needed. A sketch of this idea follows.
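Here is a minimal sketch of that idea (CalcWorker and LegacyCalcComWrapper are hypothetical stand-ins for your own types). One caveat worth verifying: COM apartments are per-thread rather than per-AppDomain, so the orchestrator may also need to dispatch each domain's work on its own dedicated thread to escape the single STA:
using System;

// Lives in the worker assembly; crosses the AppDomain boundary by reference.
public class CalcWorker : MarshalByRefObject
{
    public string Run(string request)
    {
        // Calls into the COM dll; each domain keeps its own loaded copy.
        return new LegacyCalcComWrapper().Calculate(request); // hypothetical wrapper
    }
}

class Orchestrator
{
    static string Dispatch(string requestPayload)
    {
        AppDomain domain = AppDomain.CreateDomain("calc-worker");
        try
        {
            var worker = (CalcWorker)domain.CreateInstanceAndUnwrap(
                typeof(CalcWorker).Assembly.FullName,
                typeof(CalcWorker).FullName);
            return worker.Run(requestPayload);
        }
        finally
        {
            AppDomain.Unload(domain); // tear down to release the dll's state
        }
    }
}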
Different methods are also mentioned in this link.

BackgroundWorker vs. Android Service in Xamarin

I'm investigating mobile apps using Mono in Visual Studio .NET.
Currently we have an application we want to port from Windows CE to Android. The original program used small BackgroundWorkers to keep the UI responsive and to keep it updated with the ProgressChanged event. However, I have been reading that in Android there are Services that can replace that functionality.
Reading the pros and cons of services, I understand that they are usually used because they get better priority than threads and, mainly, when the functionality will be used in more than one app.
Other information I have found comparing threads and Services says that Services are better for multiple tasks (like downloading multiple files) and threads for individual tasks (like uploading a single file). I consider this relevant because BackgroundWorker uses threads.
Is there something I am missing? Basically, a service should be for longer tasks because the OS gives it better priority (there is less risk it will be killed), and Threads/BackgroundWorkers are better for short tasks. Are there any more pros/cons to using one or the other?
Thank you in advance!
[Edit]
If you need a very specific question... how about telling me when and why you would use a Service instead of a BackgroundWorker? That would be useful.
Some of the functionality I have to recreate on Android:
- GPS positioning and compass information - this has to be running most of the time, to get the location of the device when certain events are active and to trace its movements on a map.
- A very long process that might even be active for an hour.
The last one is the one I am concerned about. It must be very reliable and responsive, keeping the user informed of what it is doing, but also able to keep working even if the user moves to another activity or function (taking a call, hitting the home button, etc.).
Other than that I believe the other functionality that used BackgroundWorker on WinCE will not have problems with Android.
[Edit 2: 20140225]
However, I would like to know if AsyncTask can help me in the following scenario:
- The app reads and writes information from/to another device. The commands are short in nature and the answer is fast, so for individual commands there is no problem. However, there is a process that can take an hour or so, and during that time it will be polling the device for status. How would you do it?
I think you're misunderstanding what a Service in Android is. See the documentation on Services:
A Service is an application component that can perform long-running operations in the background and does not provide a user interface. Another application component can start a service and it will continue to run in the background even if the user switches to another application.
Also note:
A service runs in the main thread of its hosting process—the service does not create its own thread and does not run in a separate process (unless you specify otherwise).
Using a worker thread and using a Service are not mutually exclusive.
If you are looking to move work off the main thread, then clearly you need to use another thread. A BackgroundWorker or perhaps the TPL will do just fine in many cases, but if you want to interact with the UI (e.g. on completion of the task, or to update progress in the UI), the Android way is to use an AsyncTask (mono docs).
If this work needs to continue outside of the user interaction with your application, then you may want to host this work (including the BackgroundWorker/Thread/AsyncTask/etc.) in a Service. If the work you want to do is only ever relevant while the user is interacting with your application directly, then a Service is not necessary.
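To make that concrete, here is a rough sketch of hosting the work in a Xamarin.Android Service (LongTaskService and DoLongRunningWork are made-up names):
using Android.App;
using Android.Content;
using Android.OS;
using System.Threading;

[Service]
public class LongTaskService : Service
{
    public override StartCommandResult OnStartCommand(Intent intent, StartCommandFlags flags, int startId)
    {
        // A Service runs on the main thread, so the hour-long job still
        // needs its own worker thread.
        ThreadPool.QueueUserWorkItem(_ =>
        {
            DoLongRunningWork(); // hypothetical: poll the device, report status
            StopSelf(startId);   // stop the service once the work is done
        });
        return StartCommandResult.Sticky;
    }

    public override IBinder OnBind(Intent intent)
    {
        return null; // started (not bound) service
    }

    void DoLongRunningWork() { /* ... */ }
}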
Basically, a service is used when something needs to run at the same time as the main app - for example, keeping a position updated on a map. A thread is used for things like consuming a web service or a long-running database call.
The rule of thumb, as far as I can see, is to rather use threads and close them, unless there is something that needs to happen in the background (like navigation updates). This will keep the footprint of the app smaller, which is a large consideration.
I hope this helps at least a little.
Now that you know you don't need a Service, I want to point out how the Xamarin guidelines recommend doing this: create a separate thread using the ThreadPool, and when you want to make changes to the GUI from that thread, call back to the main thread using the RunOnUiThread method.
I'm not sure that by using AsyncTask you can write your code inline in C#, but with the Xamarin recommendation you certainly can, like so:
// do stuff in a background thread
ThreadPool.QueueUserWorkItem((object state) => {
    // do some slow operation

    // call the main thread to update the GUI
    RunOnUiThread(() => {
        // code to update the GUI here
    });

    // do some more slow stuff if you want, then update the GUI again
});
http://developer.xamarin.com/guides/android/advanced_topics/writing_responsive_applications/

Windows service to do job every 6 hours

I've got a Windows service with only two methods: one private method, DoWork(), and an exposed method which calls it. I want to achieve the following:
The Windows service runs the DoWork() method every 6 hours.
An external program can also invoke the exposed method, which calls DoWork(). If the service is already executing DoWork() when that call arrives, DoWork() should be invoked again after the current run ends.
What's the best approach to this problem? Thanks!
An alternative approach would be to use a console application scheduled by the Windows Task Scheduler to run every 6 hours. That way you don't waste resources keeping a Windows service running the entire time, but only consume resources when needed.
For your second requirement: if you take the console app approach, the external program can invoke it using Process.Start, for example.
If the purpose of your application is only to run a specific task every six hours, you might be better off creating a command line application and creating a scheduled task that Windows runs automatically. Obviously, you could then manually start this application.
If you're still convinced you need a service (and honestly, from what I've seen so far, it sounds like you don't), you should look into using a Timer, but choose your timer carefully and read this article to get a better understanding of the timers built into .NET (Hint: Pay close attention to System.Timers.Timer).
To prevent reentry if another method tries to call DoWork() while the process is in the middle of performing its operation, look into using either a Mutex or a Semaphore.
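As an illustration of the timer-plus-lock idea (a sketch only; the class and method names are placeholders), a SemaphoreSlim serializes DoWork() so a call arriving mid-run simply queues up behind the current one:
using System;
using System.Threading;

class Scheduler
{
    static readonly SemaphoreSlim _gate = new SemaphoreSlim(1, 1);
    static readonly System.Timers.Timer _timer =
        new System.Timers.Timer(TimeSpan.FromHours(6).TotalMilliseconds);

    public static void Start()
    {
        _timer.Elapsed += (s, e) => RunDoWork();
        _timer.Start();
    }

    // Called by the timer and by the service's exposed method alike.
    public static void RunDoWork()
    {
        _gate.Wait();            // blocks until the in-progress run finishes
        try { DoWork(); }
        finally { _gate.Release(); }
    }

    static void DoWork() { /* the actual 6-hourly job */ }
}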
There are benefits and drawbacks either way. My inclination, given those options, is to choose the Windows service because it makes your deployment easier. Scheduling things with the Windows Task Scheduler is scriptable and can be automated for deployment to a new machine/environment, but it's still a little more nonstandard than just deploying and installing a Windows service. With Task Scheduler you also need to make sure the task runs under an account that can make the web service call, and that you aren't going to have problems with passwords expiring and your scheduled tasks suddenly not running. With a Windows service, though, you need some sort of monitoring in place to make sure it is always running, and that if it restarts you don't lose the state that tells it when it should run next.
Another option you could consider is NServiceBus sagas. Sagas are really intended for more than just scheduling tasks (they persist state for workflow-type processes that last longer than a single request/message), but they have a nice way of handling periodic or time-based processes (which are a big part of long-running workflows), in that a saga can request a message back from a timeout manager at a time of its choosing. Adopting NServiceBus is a bigger architectural question and probably well beyond the scope of what you are asking here, but sagas have become how I think about periodic processes. They come with the added benefit of managing some persistent state for your process (which may or may not be a concern) and give you a reason to think about some architectural questions that perhaps you haven't considered before.
You can create a console application for this purpose and schedule it to run every 6 hours. The console app's Main method is called on application start; you can call your routine from there. Hope this helps!

Programming a long-running time-based process

I was wondering what the best way to write an application would be. Basically, I have a sports simulation project that is multi-threaded and can execute different game simulations concurrently.
I store my matches in a SQLite database, each with a DateTime attached.
I want to write an application that checks every hour or so to see if any new matches need to be played and spawns those threads off.
I can't rely on the Task Scheduler to execute this every hour, because there are objects that the different instances of that process would share (specifically a tournament object) which I suspect would be overwritten by a newer process when saved back into the DB. So ideally I need to write some sort of long-running process that sleeps between hours.
I've written my object model so that each object is only loaded from the database once, so as long as all simulation threads are spawned from this one application, they shouldn't be overwriting data.
EDIT: More detail on requirements
Basically, multiple matches need to be able to run concurrently. These matches can be of arbitrary length, so it's not necessary that one finishes before the other begins (in fact, in most cases there will be multiple matches executing at the same time).
What I'm envisioning is a program that runs in the background (a service, I guess?) that sleeps for 60 minutes and then checks the database to see if any games should be started. If there are any to be started, it fires off threads to simulate those games and then goes back to sleep. Hence, the simulation threads are running but the "scheduling" thread is sleeping for another 60 minutes.
The reason I can't (I think) use the default OS task-scheduling interface is that it requires the task to be spawned as a new process. I have developed my database object model such that objects are cached by each object class on first load (the in-memory reference), meaning that each object is only loaded from the database once and that reference is used on all saves. When each simulation thread is done and saves off its state, the same reference is used (with the updated state). If a different executable were launched every time, presumably each process would hold its own reference, and one process could save into the DB and overwrite the state written by another.
A service looks like the way to go. Is there a way to make a service just sleep for 60 minutes, then wake up and execute a function? I feel like making this a standard console application would waste memory, but I don't know if there is a more efficient way that I'm not aware of.
If you want to make it really reliable, make it a Service.
But I don't see any problems in making it a normal (Console, WinForms, WPF) application.
Maybe you could expand on the requirements a little.
The reason I can't (I think) use the default OS task-scheduling interface is that it requires the task to be spawned as a new process. I have developed my database object model such that objects are cached by each object class on first load (the in-memory reference), meaning that each object is only loaded from the database once and that reference is used on all saves
If you want everything to remain cached forever, then you do need to have an application that simply runs forever. You can make this a windows service, or a normal windows application.
A Windows service is just a normal exe that conforms to the service manager API. If you want to make one, Visual Studio has a wizard which auto-generates some skeleton code for you. Basically, instead of having a plain Main method you have a Service class with a run method, and everything else is the same.
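Roughly, the skeleton looks like this (simplified from what the wizard generates; the 60-minute loop is sketched in with an AutoResetEvent so OnStop can interrupt the sleep, and CheckForDueMatchesAndSpawnSims is a hypothetical name):
using System;
using System.ServiceProcess;
using System.Threading;

public class SimulationService : ServiceBase
{
    Thread _worker;
    readonly AutoResetEvent _stop = new AutoResetEvent(false);

    protected override void OnStart(string[] args)
    {
        _worker = new Thread(() =>
        {
            do
            {
                CheckForDueMatchesAndSpawnSims(); // query SQLite, spawn sim threads
            }
            while (!_stop.WaitOne(TimeSpan.FromMinutes(60))); // sleeps, wakes on stop
        });
        _worker.Start();
    }

    protected override void OnStop()
    {
        _stop.Set();
        _worker.Join();
    }

    void CheckForDueMatchesAndSpawnSims() { /* ... */ }

    static void Main()
    {
        ServiceBase.Run(new SimulationService());
    }
}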
You could, if you wanted to, use the windows task scheduler to schedule your actions. The way you'd do this is to have your long-running windows service in the background that does nothing. Have it open a TCP socket or named pipe or something and just sit there. Then write a small "stub" exe which just connects to this socket or named pipe and tells the background app to wake up.
This is, of course, a lot harder than just doing a sleep in your background application, but it does let you have a lot more control - you can change the sleep time without restarting the background service, run it on-demand, etc.
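A sketch of that stub-and-pipe arrangement (the pipe name is made up): the background service blocks on a named pipe, and the stub's only job is to connect to it:
using System.IO.Pipes;
using System.Threading;

class BackgroundServiceSide
{
    // Listen forever, signalling the scheduler loop each time the stub connects.
    public static void ListenForWakeups(AutoResetEvent wakeSignal)
    {
        while (true)
        {
            using (var pipe = new NamedPipeServerStream("SimScheduler.Wake"))
            {
                pipe.WaitForConnection();
                wakeSignal.Set(); // interrupts the 60-minute sleep early
            }
        }
    }
}

class StubExe
{
    // The stub exe, in its entirety: connect and exit.
    static void Main()
    {
        using (var pipe = new NamedPipeClientStream(".", "SimScheduler.Wake"))
        {
            pipe.Connect(5000); // fail fast if the service isn't running
        }
    }
}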
I would, however, reconsider your design. The fact that you rely on a long-running service is a large point of failure. If your app needs to run for days and you have a single bug which crashes it, then you have to start again. A much better architecture is to follow the Unix model, where you have small processes which start, do one thing, then finish (in this case, run each game simulation as its own process, so if one dies it doesn't take down the master process or the other simulations).
It seems like the main reason you're trying to have it long-running is to cache your database queries. Do you actually need to do this at all? A lot of the time databases are plenty fast enough (they have their own caches, which are quite smart). A common mistake I've seen programmers make is to assume that something like a database is slow and waste a pile of time optimizing, when in actual fact it would have been fine.

Best way to decouple (for parallel processing) a web application's non-immediate processes?

I'm looking for a good strategy to truly decouple, for parallel processing, my web application's (ASP.NET MVC/C#) non-immediate processes. I define non-immediate as everything that doesn't need to be done right away to render a page or update information.
Those processes include sending email, updating some internal statistics based on database information, fetching outside information from web services which only needs to be done periodically and so forth.
Some communication needs to exist between the main ASP.NET MVC application and those background tasks though; e.g. the MVC application needs to inform the emailing process to send something out.
What is the best strategy to do this? MSMQ? Turn all those non-immediate processes into Windows services? I'm imagining a truly decoupled scenario, but I don't want a trade-off that makes troubleshooting/unit testing much harder or introduces vast amounts of code.
Thank you!
Can't speak for ASP.NET as I work primarily in Python, but...luckily I can answer this one as it's more of a meta-language question.
I've typically done this with a queue-based backend daemon which runs independently. When you need to add something to the queue, you can IPC with a method of your choice (I'm partial to HTTP) and deliver a job. The daemon just knocks through the jobs one by one -- possibly delegating them to worker threads itself. You can bust out of the RESTful side of your application and fire off jobs to the backend, i.e.:
# In the frontend (sorry for Python, should be clear)
...
backend_do_request("http://loadbalancer:7124/ipc", my_job)
...

# In the backend (pseudo-Python)
while 1:
    job = wait_for_request()
    myqueue.append(job)
...

def workerthread():
    job = myqueue.pop()
    do_job(job)
If you later need to check in with the background daemon and ask "is job 2025 done?" you can account for that in your design.
If you want to do that with a Windows Service I would imagine you can. All it needs to do is listen on a port of your choice for whatever IPC you want to do -- I'd stick with network transports, as local IPC will assume same-machine and limit your scalability. Your unit testing shouldn't be that much harder; you can just account for the frontend and the backend as two different projects.
The ThreadPool in .NET is a queue-based worker pool; however, it's used internally by the ASP.NET host process, so if you lean on the ThreadPool heavily you may reduce the performance of the web server.
So you must create your own thread, mark it as a background thread, and let it poll every few seconds for job availability.
The best way to do this is to create a job table in the database, as follows:
Table: JobQueue
- JobID (bigint, autonumber)
- JobType (sendemail, calcstats)
- JobParams (text)
- IsRunning (true/false)
- IsOver (true/false)
- LastError (text)
The JobThread class could look like the following:
class JobThread {
    static Thread bgThread = null;
    static AutoResetEvent arWait = new AutoResetEvent(false);

    public static void ProcessQueue(Job job)
    {
        // insert the job in the database
        job.InsertInDB();

        // start the worker if it isn't running yet, otherwise wake it up
        if (bgThread == null) {
            bgThread = new Thread(new ParameterizedThreadStart(WorkerProcess));
            bgThread.IsBackground = true;
            bgThread.Start();
        }
        else {
            arWait.Set();
        }
    }

    private static void WorkerProcess(object state)
    {
        while (true) {
            // fetch the next job where IsRunning = false and IsOver = false
            Job job = GetAvailableJob();
            if (job == null) {
                arWait.WaitOne(10 * 1000); // wait ten seconds;
                                           // increase the wait time to
                                           // reduce polling overhead
                continue;
            }
            job.IsRunning = true;
            job.UpdateDB();
            try {
                // depending upon the job type, do something...
            }
            catch (Exception ex) {
                job.LastError = ex.ToString(); // important step: this records
                                               // the error in the job table
                                               // for later investigation
                job.UpdateDB();
            }
            job.IsRunning = false;
            job.IsOver = true;
            job.UpdateDB();
        }
    }
}
Note
This implementation is not recommended for tasks with high memory usage; ASP.NET will give lots of memory-unavailability errors for big tasks. For example, we had lots of image uploads and needed to create thumbnails and process them using Bitmap objects, and ASP.NET just wouldn't allow us to use more memory, so we had to create a Windows service of the same kind.
By creating a Windows service you can run the same thread queue and utilize more memory easily, and for communication between ASP.NET and the Windows service you can use WCF or Mutex objects.
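For the WCF route, a minimal sketch might look like this (the contract, names, and pipe address are all made up; error handling omitted):
using System;
using System.ServiceModel;

[ServiceContract]
public interface IJobService
{
    // One-way, so the web request doesn't block on the background work.
    [OperationContract(IsOneWay = true)]
    void Enqueue(string jobType, string jobParams);
}

public class JobService : IJobService
{
    public void Enqueue(string jobType, string jobParams)
    {
        // hand off to the job table / worker thread shown above
    }
}

class WindowsServiceSide
{
    public static ServiceHost Open()
    {
        var host = new ServiceHost(typeof(JobService),
            new Uri("net.pipe://localhost/jobs"));
        host.AddServiceEndpoint(typeof(IJobService), new NetNamedPipeBinding(), "");
        host.Open();
        return host;
    }
}

class AspNetSide
{
    public static void Send(string jobType, string jobParams)
    {
        var factory = new ChannelFactory<IJobService>(
            new NetNamedPipeBinding(), "net.pipe://localhost/jobs");
        factory.CreateChannel().Enqueue(jobType, jobParams);
        factory.Close();
    }
}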
MSMQ
MSMQ is also great, but it increases configuration work, and it can become difficult to trace errors. We avoid MSMQ because we would spend a lot of time looking for the cause of a problem in our code when the MSMQ configuration was actually at fault, and the errors sometimes don't give enough information about where exactly the problem lies. With our custom solution we can create a full debug version with logs to trace errors. That's the biggest advantage of managed programs; in earlier Win32 apps, errors were really difficult to trace.
The simplest way to handle async processing in ASP.NET is to use the ThreadPool to create a worker that you hand your work off to. Be aware that if you have lots of small jobs you are trying to hand off quickly, the default ThreadPool has some annoying lock-contention issues. In that scenario, you either need to use .NET 4.0's new work-stealing ThreadPool, or you can use MindTouch's Dream library, which has a work-stealing threadpool implementation (along with tons of other async helpers) and works with 3.5.
NServiceBus sounds like it might be applicable here, though under the covers it would probably use MSMQ. Essentially it sounds like you're after doing asynchronous work, which .NET has good mechanisms for dealing with.
We've done this with the Workflow API, or, if it's not imperative that it execute immediately, you could use a simple delegate.BeginInvoke to run it on a background thread.
This is a pattern that I tend to think of as 'Offline Services', and I've usually implemented it as a Windows service that can run multiple tasks on their own schedules.
Each task implements a business process such as sending pending emails from a message queue or database table, writing queued log messages to an underlying provider, or performing some batch processing that needs to happen at regular intervals, such as archiving old data or importing data objects from incoming feeds.
The advantage of this approach is that you can build full management capabilities into the task management service, such as tracing, impersonation, remote integration via WCF, and error handling and reporting, all while using your .NET language of choice to implement the tasks themselves.
There are a few scheduling APIs out there, such as Quartz.NET, that can be used as the starting point for this sort of system. In terms of multi-threading, my general approach is to run each task on its own worker thread, but to only allow one instance of a task to be running at a given time. If a task needs parallel execution then that is implemented in the task body as it will be entirely dependent on the work the task needs to do.
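For illustration, scheduling a recurring task with Quartz.NET might look like this (using the 2.x fluent API; ArchiveOldDataJob is a hypothetical job, and details may vary by version):
using Quartz;
using Quartz.Impl;

// A task implemented as a Quartz job; the attribute keeps it to one
// running instance at a time, as described above.
[DisallowConcurrentExecution]
public class ArchiveOldDataJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // archive old data, import incoming feeds, etc.
    }
}

class SchedulerSetup
{
    public static void Start()
    {
        IScheduler scheduler = new StdSchedulerFactory().GetScheduler();
        scheduler.Start();

        IJobDetail job = JobBuilder.Create<ArchiveOldDataJob>()
            .WithIdentity("archive-old-data").Build();
        ITrigger trigger = TriggerBuilder.Create()
            .WithSimpleSchedule(s => s.WithIntervalInHours(6).RepeatForever())
            .StartNow().Build();

        // Quartz runs the job on its own worker threads.
        scheduler.ScheduleJob(job, trigger);
    }
}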
My view is that a web application should not be managing these sorts of tasks at all, as the web application's purpose is to handle requests from your users, not manage intermediate background jobs. It's a lot of work to build a system like this initially, but you'll be able to re-use it on virtually any project.
A Windows service managing these tasks, using a ThreadPool, and communicating with it via MSMQ is certainly my preferred approach. It's nicely scalable as well, thanks to MSMQ's public queue abilities.
If you can develop for .NET 4 Framework then you can decouple by using F# or the Parallel Computing features (http://msdn.microsoft.com/en-us/library/dd460693(VS.100).aspx)
F# is designed to support parallel computing so it may be a better choice than moving code into services.
Though, if you wanted, you could just use WCF and offload everything to web services, but that may not really solve your problem, as it just moves the issues elsewhere.
EDIT: Moving the non-essential work to web services may make the most sense, then. This is standard practice where the web server sits outside the firewall (and is therefore vulnerable), so all the real work is done by other servers and the web server is just responsible for static pages and rendering.
You can use Spring.NET for this, if you don't want to add webservices, but either way you are largely just calling a remote process to do the work.
This is scalable as well, since you can separate the business logic onto several different servers, and because the web server is largely just the view part of MVC it can handle more requests than if all the MVC work were on the web server.
Because it is designed for this, Spring.NET should be easier to test. Web services can also be tested, as you should test each part separately and then do functional tests, but Spring.NET makes it easier to mock out layers.
MSMQ is an awesome way to do this. A web farm can feed requests into one or more queues. The queues can be serviced by one or more processes on one or more servers, giving you scale and redundancy. (Run MSMQ on a cluster if you want to remove the single point of failure.) We did this about 8-9 years back and it was awesome watching it all run :) And even back then MSMQ was dead simple to use (from COM) - I have to imagine things have only gotten better with .NET.
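In .NET, that pattern is a few lines with System.Messaging (the queue path and message type here are invented for the example):
using System.Messaging;

class EmailRequest
{
    public string To { get; set; }
    public string Body { get; set; }
}

class QueueDemo
{
    const string Path = @".\private$\email-requests";

    // Web tier: enqueue and return to the user immediately.
    public static void Enqueue(EmailRequest request)
    {
        if (!MessageQueue.Exists(Path))
            MessageQueue.Create(Path);
        using (var queue = new MessageQueue(Path))
            queue.Send(request); // serialized with the default XML formatter
    }

    // Background process: block until a message arrives, then handle it.
    public static EmailRequest Dequeue()
    {
        using (var queue = new MessageQueue(Path))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(EmailRequest) });
            using (Message msg = queue.Receive())
                return (EmailRequest)msg.Body;
        }
    }
}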
Following sound software engineering principles will keep your unit testing complexity to a minimum. Follow SRP (Single Responsibility Principle). This is especially the case for multi-threaded code which sounds like where you're headed. Robert Martin addresses this in his book "Clean Code".
To answer your question, there are, as you've seen from the array of posts, many ways to solve background processing. MSMQ is a great way to communicate with background processes and is also a great mechanism for addressing reliability (eg., request 5 emails sent, expect 5 emails sent).
A really simple and effective way to run a background process in ASP.NET is using a background worker. You need to understand whether the background worker (a thread) runs in the application's domain or in inetinfo. If it's in the app domain, the trade-off is that you'll lose the thread when the app pool recycles. If you need it to be durable, it should be carved out into its own process (e.g., a Windows service). If you look into WCF, Microsoft addresses WS-Reliability using MSMQ. Better news: you can host WCF services in a Windows service. One-way calls to the service suffice to eliminate blocking on the web server, which effectively gives you a background process.
James Black mentions using Spring.NET. I agree with his recommendation for 2 reasons: 1) Spring.NET's support for services and the web is superior to other frameworks', and 2) Spring.NET forces you to decouple, which also simplifies testing.
Back on track:
1: Background worker - the trade-off is that it's closely tied to the app pool/app domain, so you're not separating effectively. Good for simple one-off jobs (image resizing, etc.). In-memory queues are volatile, which can mean loss of data.
2: Windows service - the trade-off is deployment complexity (although I'd argue this is minimal). If you will have families of background processes with low resource utilization, opt for pluggability and host them all in one Windows service. Use durable storage (MSMQ, DB, file) for job requests and plan for recovery in your design: if there are 100 requests in the queue and the Windows service restarts, it should immediately check the queue for work.
3: WCF hosted in IIS - about the same complexity as (2), as I would expect the Windows service to host WCF and for that to be the communication mechanism between ASP.NET and the service. I don't personally like the "dump and run" design (where ASP.NET writes to a queue) because it reduces clarity and you're ultimately tightly coupled to MSMQ.
