I have an Azure-hosted MVC Web App where a user can request a report to be generated by pressing a button. This report is a compute-intensive, long-running process that I only want to run at night. I have experience using Queue Triggered WebJobs to process background tasks; however, this job will require more resources than my Web App Service plan has, and I don't want to run this compute process alongside my Web App. My hope is that I can write a queue message for each request and then have something check that queue each night to see if it has any messages. If it does, create/start a new Worker Role instance of sufficient power/memory to handle the job, process the queue message(s), then shut down and deallocate the worker to prevent ongoing charges.
I can't figure out the best way to check the queue before starting the Worker Role, so that I only create/start the Worker if there is work to be done. Since it will be a largish instance, I want to minimize uptime to keep costs down.
You can create a triggered WebJob that uses a TimerTrigger set to wake up once a day at some early hour, like 2:00 AM. The method triggered by the TimerTrigger can then peek at the queue to see if a message exists. If one or more messages exist, kick off a worker role that actually dequeues and processes the messages.
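A minimal sketch of that nightly check, assuming the WebJobs SDK with the TimerTrigger extension and the classic Azure Storage client library; the queue name "report-requests" is a placeholder:

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public class Functions
{
    // NCRONTAB expression: fires once a day at 2:00 AM.
    public static void CheckReportQueue([TimerTrigger("0 0 2 * * *")] TimerInfo timer)
    {
        var account = CloudStorageAccount.Parse(
            Environment.GetEnvironmentVariable("AzureWebJobsStorage"));
        var queue = account.CreateCloudQueueClient().GetQueueReference("report-requests");

        // PeekMessage looks at the front of the queue without dequeuing
        // and returns null when the queue is empty.
        if (queue.Exists() && queue.PeekMessage() != null)
        {
            // Work is waiting: start the worker role here, e.g. via the
            // management API or an Automation runbook.
        }
    }
}
```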
You could write a WebJob using a Queue Trigger so it's automatically triggered when a new message arrives in the queue. Then you can host the WebJob in its own App Service Plan, separate from the Web App, so it has its own dedicated resources allocated.
Since you mention that you want to keep costs down, I would actually recommend you use an Azure Function instead. The Azure Function can be set up with a Queue Trigger as well, with the added benefit that, with the "Consumption Plan" pricing option, you only pay for your Azure Function while it is running.
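As a rough sketch (the queue name is a placeholder, and TraceWriter is the logging type the early Functions runtime used), a queue-triggered C# Function looks like this:

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

public static class GenerateReport
{
    // The runtime invokes this automatically whenever a message lands on
    // the "report-requests" queue; you pay only for the execution time.
    public static void Run([QueueTrigger("report-requests")] string message, TraceWriter log)
    {
        log.Info($"Processing report request: {message}");
        // Note: on the Consumption plan there is a maximum execution time,
        // so a genuinely hours-long report may not fit here.
    }
}
```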
Here's a link that outlines how Azure Functions pricing works:
https://buildazure.com/2016/10/11/how-azure-functions-pricing-works/
Well, Rob's comment got me on to the Service Management library, which then took me to Azure Automation and Runbooks. I think that will end up being the solution. I can create a Runbook using PowerShell cmdlets that will peek at the storage queue and, if it finds any messages, create/deploy a new instance of the Worker Role, which will look at the queue on startup and start processing any messages.
The tricky part looks to be shutting down the Worker Role. The self-destructing/suicidal cloud service examples I have found only seem to kill off extra instances, not the last one. Since I will only have one instance running, I don't think I can have it kill itself and end up in a deallocated state. I think the solution is to use Runbooks again: have the Worker Role write a "finished" message to another queue and have a scheduled Runbook watch that queue every x minutes. If it finds a message, stop and deallocate the worker role.
Given that Functions has a hard run-time limit, I can't use that. And cloud services give me more resource options (more fine-tuned VM types) than anything I could get on WebJobs/Web App Service plans.
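The "finished" signal from the worker could be as simple as the sketch below; the control queue name "worker-control" is a placeholder:

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public static class WorkerShutdownSignal
{
    // Called by the worker role after it has drained the request queue.
    public static void SignalFinished(string connectionString)
    {
        var queue = CloudStorageAccount.Parse(connectionString)
            .CreateCloudQueueClient()
            .GetQueueReference("worker-control");
        queue.CreateIfNotExists();
        queue.AddMessage(new CloudQueueMessage("finished"));
        // The scheduled runbook that polls this queue can then stop and
        // deallocate the cloud service instance.
    }
}
```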
Related
I have an ASP.NET MVC application which resides on a server. From this application, I want to start a process which is a long-running and resource-intensive operation.
So what I want to do is have some user agents, say 3, each installed on its own machine, so that each agent uses the resources of its respective machine only.
Like in Hadoop, where there is a cluster in which tasks run on the individual nodes and 1 master node keeps track of all of them.
In Azure, we have virtual machines on which tasks are run, and if required Azure can automatically scale horizontally by spinning up new instances to speed up the task.
So I want to create infrastructure like this, where I can submit my task to the 3 user agents from the MVC application, and my application will keep track of these agents: which agent is free, which is occupied, which is not working, and so on.
I would like to receive progress from each of these user agents and show it in my MVC application.
Is there any framework in .NET with which I can manage these background operations (tracking, start, stop, etc.), or what should the approach be?
Update: I don't want to put lots of servers behind these long-running operations, and moreover I want to keep track of the long-running processes too: what they are doing, where errors occur, etc.
Following are the approaches I am considering, and I don't know which makes more sense:
1) Install a Windows Service, acting as an agent, on 2-3 on-premises computers to take advantage of their respective resources, and keep a TCP/IP connection open with these agents until the long-running process is complete.
2) Use Hangfire to run the long-running process outside the IIS thread, but I guess this will put load on the server.
I would like to know the possible problems of the above approaches and whether there are any better approaches.
Hangfire is really a great solution for processing background tasks, and we have used it extensively in our projects.
We have set up our MVC application on a separate IIS server; it is also a Hangfire client and just enqueues the jobs that need to be executed by the Hangfire servers. Then we have two Hangfire server instances, which are Windows Service applications. So effectively there is no load on the MVC app server from processing the background jobs, as they are processed by the separate Hangfire servers.
One of the extremely helpful features of Hangfire is its out-of-the-box dashboard, which allows you to monitor and control every aspect of background job processing, including statistics, background job history, etc.
Configure Hangfire in the application as well as in the Hangfire servers (OWIN startup class):
using Hangfire;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        GlobalConfiguration.Configuration.UseSqlServerStorage("<connection string or its name>");
        app.UseHangfireDashboard();
        app.UseHangfireServer();
    }
}
Please note that you must use the same connection string across all instances. Use app.UseHangfireServer() only if you want the instance to act as a Hangfire server, so in your case you would omit this line from the application server's configuration and use it only in the Hangfire servers.
Also, use app.UseHangfireDashboard() in the instance that will serve your Hangfire dashboard, which would probably be your MVC application.
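Enqueuing from the MVC app is then a one-liner; ReportService here is a placeholder for whatever job class your Hangfire servers have deployed:

```csharp
using Hangfire;
using System.Web.Mvc;

public class ReportController : Controller
{
    public ActionResult Generate(int reportId)
    {
        // The MVC app only writes the job to the shared SQL Server storage;
        // one of the Hangfire server instances picks it up and executes it.
        BackgroundJob.Enqueue<ReportService>(s => s.Generate(reportId));
        return Content("Report job queued");
    }
}

public class ReportService
{
    public void Generate(int reportId)
    {
        // long-running, resource-intensive work runs here, on a Hangfire server
    }
}
```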
At the time we did it using a Windows Service, but if I had to do it now, I would go with an Azure worker role, or even better now, Azure WebJobs, to host my Hangfire server and manage things like auto-scaling easily.
Refer to the Hangfire overview and documentation for more details.
Push messages to MSMQ from your MVC app and have your Windows Services listen (or loop) for new messages entering the queue.
In your MVC app, create an ID per queued message, and make RESTful API calls from your Windows Services back to the MVC app as you make progress on the job.
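A sketch of the MSMQ side using System.Messaging; the private queue path and the JobMessage type are assumptions:

```csharp
using System;
using System.Messaging;

public class JobMessage
{
    public Guid Id { get; set; }
    public string Description { get; set; }
}

public static class JobQueue
{
    public static Guid Enqueue(string description)
    {
        var jobId = Guid.NewGuid(); // the ID the services report progress against
        using (var queue = new MessageQueue(@".\Private$\jobs"))
        {
            // The default XmlMessageFormatter serializes the message body.
            queue.Send(new JobMessage { Id = jobId, Description = description });
        }
        return jobId;
    }
}
```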
Have a look at Hangfire; it can manage background tasks and works across VMs without conflict. We have replaced Windows Services with it and it works well.
https://www.hangfire.io
Give http://easynetq.com/ a try.
EasyNetQ is a simple to use, opinionated, .NET API for RabbitMQ.
EasyNetQ is a collection of components that provide services on top of the RabbitMQ.Client library. These do things like serialization, error handling, thread marshalling, connection management, etc.
To publish with EasyNetQ:
var bus = RabbitHutch.CreateBus("host=localhost");
var message = new MyMessage { Text = "Hello Rabbit" };
bus.Publish(message);
To subscribe to a message we need to give EasyNetQ an action to perform whenever a message arrives. We do this by passing subscribe a delegate:
bus.Subscribe<MyMessage>("my_subscription_id", msg => Console.WriteLine(msg.Text));
Now every time that an instance of MyMessage is published, EasyNetQ will call our delegate and print the message’s Text property to the console.
The performance of EasyNetQ is directly related to the performance of the RabbitMQ broker, which can vary with network and server performance. In tests on a developer machine with a local instance of RabbitMQ, sustained overnight throughput of around 5,000 2K messages per second was achieved. Memory use for all the EasyNetQ endpoints was stable over the overnight run.
I am developing a Windows Service application for sending bulk emails to customers. In my case, I need to manually start and stop the service from the ASP.NET MVC web application, passing some parameters.
Since this is on demand and there is no schedule for this activity, I want the service to perform a long-running task (sending bulk emails to 100K customers) only once when started, and then stop itself once completed.
Is this approach correct?
If yes, how I could track its progress in MVC application?
Thank You,
This approach is incorrect. A "Windows Service" is a kind of application that waits for some client to call it, or for an event to happen. (It is called a daemon in Unix/Linux jargon.)
In your case, "the service" should be an ordinary program that does batch processing. So just start a separate process with Process.Start(). It doesn't block anything in your ASP.NET MVC application; it just does its job and terminates.
There are two ways to track the progress. You can call the Process object's methods, like WaitForExit() with a timeout, to see whether it has completed its job. Also, your "service" application could write its progress to a file, and the MVC application can read that file at any time to see what's happening.
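A sketch of both halves, assuming a hypothetical BulkMailer.exe that writes its progress to a known file (both paths are placeholders):

```csharp
using System.Diagnostics;
using System.IO;

public static class BulkMailLauncher
{
    public static void Start(string recipientListPath)
    {
        // Returns immediately; the mailer runs as its own OS process,
        // independent of the IIS worker process.
        Process.Start(new ProcessStartInfo
        {
            FileName = @"C:\Tools\BulkMailer.exe",
            Arguments = recipientListPath,
            UseShellExecute = false
        });
    }

    public static string ReadProgress()
    {
        // The mailer overwrites this file, e.g. with "42150/100000", as it goes.
        const string progressFile = @"C:\Tools\BulkMailer.progress";
        return File.Exists(progressFile) ? File.ReadAllText(progressFile) : "not started";
    }
}
```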
We have a data transfer between 2 systems that can take up to a few hours. The code for doing this transfer is written in C#.
We would like to trigger the transfer with a WCF web service call.
Ideally, we would like the web service call to return at once with the message "OK job started", and then have the job run on the server until it is complete.
What is the best way to do this?
The possible problems we see are:
1) the web service timing out before the job finishes
2) the job stopping after the result is returned
Although not entirely similar to your predicament, I had a similar scenario with my MVC application. There are lots of "Jobs" to do that involve importing data, batch emails, financial processes etc.
Basically, I created a Windows Service which had a job manager and, of course, the various jobs that could be done. It also ran a light HTTP server, which allowed the main MVC application to talk to the service: one sends a request to start jobs, and to get the status of all jobs or of a particular job (when a job is started it is given a unique ID).
So if I were implementing it in your case, I'd add a download job which does the actual work and instigate it from the MVC app via a JSON call. The status of the download could be queried at any time using the ID passed back from the "StartJob" JSON call. Your main web request is therefore handled and over immediately.
I'd write a console application that does this job and have the web service call that application.
Running heavy processes within the web server itself is never a good idea.
You could use a background thread in the web service call.
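For completeness, that could look like the sketch below (the service contract and job method are placeholders). Be aware that a background thread inside IIS can be killed by an app-pool recycle, which is why running the work in a separate process, as suggested above, is generally safer:

```csharp
using System.ServiceModel;
using System.Threading.Tasks;

[ServiceContract]
public interface ITransferService
{
    [OperationContract]
    string StartTransfer();
}

public class TransferService : ITransferService
{
    public string StartTransfer()
    {
        // Fire and forget: the WCF call returns immediately while the
        // transfer continues on a thread-pool thread.
        Task.Run(() => RunTransfer());
        return "OK job started";
    }

    private void RunTransfer()
    {
        // the hours-long data transfer runs here
    }
}
```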
My application that runs on Windows Azure processes incoming requests from a user (which are put into an Azure Queue) and assigns them to real-world people.
The people have a certain amount of time to handle the request. If none of the people assigned handle the request, I need to move on to a new set of people. Basically, I want to queue these tasks to be handled at a certain time, and then handle them again. If one of the users handles the task, I need to dequeue it so it isn't handled again by the worker.
You need to use a scheduled task. There are two good libraries out there that you could use:
Quartz.Net and
Castle scheduler.
With a scheduler, such a task becomes easy.
You just create a job that runs when the processing time expires. There you check for any unprocessed requests and, if any are left, you notify the next set of people and schedule another run to fire after the next processing window expires.
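With Quartz.NET, for example (a 2.x-style API sketch; ExpiryCheckJob is a placeholder), that looks roughly like:

```csharp
using System;
using Quartz;
using Quartz.Impl;

public class ExpiryCheckJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // Check for unprocessed requests; if any remain, notify the next
        // set of people and schedule another ExpiryCheckJob for the new deadline.
    }
}

public static class ExpiryScheduler
{
    public static void ScheduleCheck(TimeSpan processingTime)
    {
        IScheduler scheduler = StdSchedulerFactory.GetDefaultScheduler();
        scheduler.Start();

        // One-shot trigger that fires when the processing window expires.
        var job = JobBuilder.Create<ExpiryCheckJob>().Build();
        var trigger = TriggerBuilder.Create()
            .StartAt(DateTimeOffset.UtcNow.Add(processingTime))
            .Build();

        scheduler.ScheduleJob(job, trigger);
    }
}
```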
Let me know if you need further detail.
I've used Quartz.NET in an Azure web role successfully in a production app.
I am developing a web site that consists of a device name list with a related Build button for each device. When a user clicks the Build button, a process runs on the server. When more than ten users click the Build button, more processes are created and the server hangs. How can I send all requests from clients to a single process on the server?
You could set up a WCF Windows service that internally has some kind of count of currently running processes and makes sure that there are never more than X threads each running one process. If you want exactly one, rather than just a limited amount, you don't even need to worry about the threads and can just halt other requests while it's processing one.
It might make it more difficult to give feedback to the users, though, if that's needed: if one or more processes are queued, you can't immediately tell the user that the build has begun, etc.
It sounds like you are spawning a process thread to do the build on the server (I don't recommend this, as spawning threads in an ASP.NET application can be tricky). The easiest way to stop each request from spawning a new thread is to separate the build process from the ASP.NET web application. So:
Move the build action to a Windows Service on the same machine.
Use Windows Communication Foundation to expose the service entry point to the Asp.Net application
Make the WCF instance a singleton, so that only one request can access it at a time. (This is the drawback of using only one thread.)
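The singleton behaviour in the last step above is just two attribute settings on the service class (a sketch; IBuildService is a placeholder contract):

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IBuildService
{
    [OperationContract]
    void Build(string deviceName);
}

// One service instance, one request at a time; concurrent callers queue up.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Single)]
public class BuildService : IBuildService
{
    public void Build(string deviceName)
    {
        // run the build here; the next request waits until this one returns
    }
}
```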
Your other option is to log the requests to a queue and have a process (a Windows Service, maybe) monitor the queue and work through it one request at a time. The drawback is that you won't get immediate results for the user; you'll need some way of notifying them when the process has finished, but you'll most likely need to do the same with the above solution too.