Queue-Based Background Processing in ASP.NET MVC Web Application [closed] - c#

How can I implement background processing queues in my ASP.NET MVC web app? While most data changes, updates etc. need to be visible immediately, there are other updates that don't need real-time processing which I would like to hand off to a lower-priority background process that will take care of them at its own pace.
As an example, take StackOverflow's badge award system. Usually you may take a specific action that would award you a badge, but the actual 'award' happens later (typically between 10 minutes and a couple hours later). I assume this is done via a separate background process as it is not critical for SO's workings to award badges immediately when earned.
So, I'm trying to create some kind of queue system in which I could stuff tasks (say anything implementing ITask interface which will have a Process() method) which will eventually get executed by a separate process.
How would I go about implementing such a system? Ideas/Hint/Sample Code?
Thank you!

Windows Services and MSMQ to communicate with them (if you even need to).
-- Edit
To slightly expand.
You'll create several services, depending on what you want to do, and have each of them run an endless loop on a worker thread, sleeping for an appropriate interval between passes. They will then update the database appropriately, and you won't have to do anything on the client side.
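For a concrete picture, here is a minimal sketch of such a loop, assuming a ServiceBase-derived service; DoPendingWork stands in for your real logic:

private volatile bool running = true;
private Thread worker;

protected override void OnStart(string[] args)
{
    worker = new Thread(() =>
    {
        while (running)
        {
            DoPendingWork(); // e.g. award badges, write results to the database
            Thread.Sleep(TimeSpan.FromMinutes(5)); // the "sleeping level"
        }
    }) { IsBackground = true };
    worker.Start();
}

protected override void OnStop()
{
    running = false;
}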
You may wish to interact with them from an admin point of view, hence you may have an MSMQ queue they listen on for admin commands. Depending on your implementation, you may need to restart them for some reason, or possibly just 'force' a run of whatever they wish to do.
So you'll use an MSMQ Private Queue to do it (System.Messaging namespace). One of the main things to note about MSMQ is that messages need to be under 4 MB. So if you intend to send large object graphs, serialise to a file first and just send the filename.
MSMQ is quite beautiful. You can send based on a 'Correlation ID' if you need to, but, for some amusing reason, the correlation ID must be in the form of:
{guid}\1
Anything else does not work (at least in version 2.0 of the framework, the code may have changed).
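For illustration, a hedged sketch of sending and receiving with a correlation ID in that format (queue as in the example below):

// the correlation ID must be "<guid>\<number>", e.g. "{guid}\1"
Message m = new Message();
m.Body = "payload";
m.CorrelationId = Guid.NewGuid().ToString() + "\\1";
queue.Send(m);

// later: fetch the message matching that ID
Message matched = queue.ReceiveByCorrelationId(m.CorrelationId);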
-- Edit
Example, as requested:
using System.Messaging;
...
// sender side
MessageQueue queue = new MessageQueue(".\\Private$\\yourqueue");
queue.Formatter = new BinaryMessageFormatter();
Message m = new Message();
m.Body = "your serialisable object or just plain string";
queue.Send(m);

// on the other side (a separate process)
MessageQueue queue = new MessageQueue(".\\Private$\\yourqueue");
queue.Formatter = new BinaryMessageFormatter();
Message m = queue.Receive(); // blocks until a message arrives
string s = m.Body as string;
// s contains that string now

Jeff has a great post showing how he achieved this originally for Stack Overflow at https://blog.stackoverflow.com/2008/07/easy-background-tasks-in-aspnet/
While it might not be as reliable as a service, if that's an option to you, it has served me well in the past for non-essential tasks, and I've had it working in a virtual hosting environment.
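For reference, the heart of the trick in that post is a cache item that re-adds itself from its expiration callback, doing the background work each time (simplified from the post):

private static CacheItemRemovedCallback OnCacheRemove = null;

protected void Application_Start(object sender, EventArgs e)
{
    AddTask("DoStuff", 60);
}

private void AddTask(string name, int seconds)
{
    OnCacheRemove = new CacheItemRemovedCallback(CacheItemRemoved);
    HttpRuntime.Cache.Insert(name, seconds, null,
        DateTime.Now.AddSeconds(seconds), Cache.NoSlidingExpiration,
        CacheItemPriority.NotRemovable, OnCacheRemove);
}

public void CacheItemRemoved(string key, object value, CacheItemRemovedReason reason)
{
    // do the background work here, then re-add the item to run again
    AddTask(key, Convert.ToInt32(value));
}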

Just found this question when searching for background processing in ASP.NET MVC. HostingEnvironment.QueueBackgroundWorkItem (available as of .NET 4.5.2) makes this straightforward:
public ActionResult InitiateLongRunningProcess(Emails emails)
{
    if (ModelState.IsValid)
    {
        // queued work is tracked by ASP.NET and given a grace period on app shutdown
        HostingEnvironment.QueueBackgroundWorkItem(ct => LongRunningProcessAsync(emails.Email));
        return RedirectToAction("Index", "Home");
    }
    return View(emails);
}
Note: Personally I wouldn't use the web server to run background tasks. Also, don't reinvent the wheel; I would highly recommend using Hangfire.
Read this great article from Hanselman: How To Run Background Tasks In ASP.NET.
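For example, the action above could hand the work to Hangfire instead (a sketch; SendEmails and SendNightlyReport are hypothetical, and Hangfire's storage must be configured at startup):

BackgroundJob.Enqueue(() => SendEmails(emails.Email)); // persisted, retried on failure
RecurringJob.AddOrUpdate("nightly-report", () => SendNightlyReport(), Cron.Daily());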

Related

Is there any way to keep asp.net web application running system timer in background? [duplicate]

I am building a website using .NET 4. There are lots of MSDN articles, dating from 2003 about using Thread objects and from 2007 about using Asynchronous Pages in .NET 2, but that is all pretty stale. I know .NET 4 brought us the Task class, and some people vaguely caution against its use for this purpose.
So I ask you, what is the "preferred" method circa 2011 for running background/asynchronous work under IIS in ASP.NET 4? What caveats are there about using Thread/Task directly? Is Async=true still in vogue?
EDIT: Ok, ok, from the answers it's clear the opinion is that I should make a service if I can. But the advantages to doing it inside the webapp are significant, especially easier deployment/redeployment. Assuming the process is safe-to-crash, then, if I were to do it inside IIS, what is the best way?
Preferentially, avoid having long tasks executing in such an environment.
Delegate long running tasks out to a stable system service via interoperability, leaving the web application responsive and only required for direct user requests.
Web applications have never been (and still aren't) considered reliable systems - anyone who has ever used a browser has encountered (at least) a time-out, to be sure; and such inconvenience (for both parties) is not limited to this scenario. Of course, any system can crash, but the circumstances surrounding such an event on a system built to be persistent ought to be completely exceptional.
Windows services are designed to be long running, and if something goes wrong you've generally got more to worry about than your individual service.
It's best to be avoided, but if you are forced to, consider Hanselman's thoughts at How to run Background Tasks in ASP.NET.
Among them, and for something quick and easy, I would suggest you look in particular at the QueueBackgroundWorkItem added in 4.5.2.
From personal experience, Task does not cut it. QueueBackgroundWorkItem is much better.
You can create a static ThreadPool (as in this example: http://www.dotnetperls.com/threadpool) with a limited number of threads (for example, only 2) and then queue tasks in it, but this is highly discouraged because web servers are not meant for such tasks.
My preferred method is the same as Robert Harvey proposes in his answer.
You can still use the Task Parallel Library, but spin the task up in a separate process outside of IIS (the reason being that IIS has a limited number of worker threads to hand out and imposes other limitations that can make long running tasks unpredictable).
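A sketch of that hand-off (the worker exe path and arguments are hypothetical):

using System.Diagnostics;
...
var psi = new ProcessStartInfo
{
    FileName = @"C:\Workers\LongTaskRunner.exe",
    Arguments = jobId.ToString(),
    UseShellExecute = false,
    CreateNoWindow = true
};
Process.Start(psi); // the request returns immediately; IIS thread limits don't apply to the worker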
This is a description of a 'once a day' scenario.
If you really want to avoid creating a service, you could start a timer with 1 minute intervals. Each time the timer delegate is invoked, you will have to run something like this (pseudo code):
DateTime lastInvokeDay = LoadLastInvokeDate();
DateTime now = DateTime.Now;

// run at most once per day, at or after the configured time of day
if (lastInvokeDay < now.Date && now.TimeOfDay >= timeOfDayToRun)
{
    try
    {
        runMyTask();
    }
    catch
    {
        // log and swallow, so the timer keeps firing
    }
    finally
    {
        lastInvokeDay = now.Date;
        SaveLastInvokeDay(lastInvokeDay);
    }
}
Keep in mind that lastInvokeDay should be persisted either in a database or in a file...
Now, if you want to enable immediate invocation of the task, you could simply call runMyTask() on demand.
If it's important for you to keep runMyTask from occurring more than once a day, you could create a synchronized block of code inside it (with a lock statement) and move the lastInvokeDay check inside, as sketched below.
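Something like this, reusing the persistence calls from the pseudo code above:

private static readonly object taskLock = new object();

void runMyTask()
{
    lock (taskLock)
    {
        DateTime lastInvokeDay = LoadLastInvokeDate();
        if (lastInvokeDay >= DateTime.Now.Date)
            return; // already ran today
        // ... do the actual work ...
        SaveLastInvokeDay(DateTime.Now.Date);
    }
}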
Does this answer your question?
I can suggest a simple solution which doesn't use Windows Services, yet is able to invoke a task outside of the IIS sandbox.
It can also easily be adapted to any other language or mix of them; in my case that was Python.
Create an event log and source on the IIS server (requires administrative rights), executing from the PowerShell console:
[System.Diagnostics.EventLog]::CreateEventSource('Automations', 'Automations')
If you have no administrative rights, skip this step; you will fall back to using the Windows/Application log.
Create a Task Scheduler task to be executed on the event, for example with ID = 2020, Log = 'Automations' and Source = 'Automations'. There you could invoke whatever you like with all necessary permissions.
Prepare code to send your event while handling a web request. Here is a Python example, but you can adapt it to your language:
import win32evtlog
app_name = "Automations"
event_id = 2020
event_category = 0
event_type = win32evtlog.EVENTLOG_INFORMATION_TYPE
messages = ['Starting automation']
# Logs event into the custom Automations log if it exists,
# otherwise logs event into Windows/Application log
handle = win32evtlog.OpenEventLog("localhost", app_name)
win32evtlog.ReportEvent(handle, event_type, event_category, event_id, None, messages, None)
Profit

Design question about background processing web service [closed]

The title of the question may not be clear enough, so allow me to explain the background here:
I would like to design a web service that generates a PDF and submits it to a printer. Here is the workflow:
The user submits a request to the web service; the request will probably be one-off, so that the user doesn't have to wait for the job to complete. The user may receive an HTTP 200 and continue their work.
Once the web service receives the request, it generates the PDF and submits it to the designated printer, and this process could take some time and CPU resources. As I don't want to drain all resources on that server, I may use the producer-consumer pattern here: there might be a queue for client jobs, processed one by one.
My questions are:
I'm new to C#; what is the proper pattern to queue and process jobs? Should I use ConcurrentQueue and ThreadPool to achieve it?
What is the proper way to notify the user whether the job succeeded or failed? Instead of using a callback service, is async an ideal way? My concern is that there may be lots of jobs in the queue and I don't want the client to suffer waiting for completion.
The web service is placed behind a load balancer; how can I maintain a 'process queue' among the instances? I've tried Hangfire and it seems okay, but I'm looking for alternatives.
How can I know the number of jobs in the queue and how many threads are currently running? The web service will be deployed on IIS; is there a native way to achieve this, or should I implement a web service call to obtain them?
Any help will be appreciated, thanks!
WCF supports the idea of fire-and-forget methods. You just mark your contract interface method as one-way, and there will be no waiting for a return:
[OperationContract( IsOneWay = true )]
void PrintPDF( PrintRequest request );
The only downside, of course, is that you won't get any notification from the server that your request was successful or even valid. You'd have to do some kind of periodic polling to see what's going on. I guess you could put a Guid into the PrintRequest, so you could interrogate for that job later.
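For example, the request contract might carry the job id like this (a hypothetical shape, not a prescribed one):

[DataContract]
public class PrintRequest
{
    [DataMember]
    public Guid JobId { get; set; } // client-generated; use it to poll for status later

    [DataMember]
    public string DocumentName { get; set; }
}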
If you're not married to wcf, you might consider signalR...there's a comprehensive sample app of both a server and simple wpf client here. It has the advantage that either party can initiate an exchange once the connection has been established.
If you need to stick with wcf, there's the possibility of doing dualHttp. The client connects with an endpoint to callback to...and the server can then post notifications as work completes. You can get a feel for it from this sample.
Both signalR and wcf dualHttp are pretty straightforward. I guess my preference would be based on the experience of the folks doing the work. signalR has the advantage of playing nicely with browser-based clients...if that ever turns into a concern for you.
As for the queue itself...and keeping with the wcf model, you want to make sure your requests are serializable...so if need be, you can drain the queue and restart it later. In wcf, that typically means making data contracts for queue items. As an aside, I never like to send a boatload of arguments to a service, I prefer instead to make a data contract for method parameters and return types.
Data contracts are typically just simple types marked up with attributes to control serialization. The wcf methods do the magic of serializing/deserializing your types over the wire without you having to do much thinking. The client sends a whizzy and the server receives a whizzy as its parameter.
There are caveats...in particular, the deserialization doesn't call your constructor (I believe it creates the object uninitialized, via something like FormatterServices.GetUninitializedObject, rather than running a constructor)...so you can't rely on the constructor to initialize properties. To that end, you have to remember that, for example, collection types that aren't required might need to be lazily initialized. For example:
[DataContract]
public class ClientState
{
    private static object sync = new object( );

    //--> and then somewhat later...

    [DataMember( Name = "UpdateProblems", IsRequired = false, EmitDefaultValue = false )]
    List<UpdateProblem> updateProblems;

    /// <summary>Problems encountered during previous Windows Update sessions</summary>
    public List<UpdateProblem> UpdateProblems
    {
        get
        {
            lock ( sync )
            {
                if ( updateProblems == null ) updateProblems = new List<UpdateProblem>( );
            }
            return updateProblems;
        }
    }

    //--> ...and so on...
}
Something I always do is to mark the backing variable as the serializable member, so deserialization doesn't invoke the property logic. I've found this to be an important "trick".
Producer/consumer is easy to write...and easy to get wrong. Look around on StackOverflow...you'll find plenty of examples. One of the best is here. You can do it with ConcurrentQueue and avoid the locks, or just go at it with a good ol' simple Queue as in the example.
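A minimal sketch of that pattern using BlockingCollection, which wraps a ConcurrentQueue by default so you avoid explicit locks (ProcessJob is a placeholder):

using System.Collections.Concurrent;
using System.Threading.Tasks;
...
var jobs = new BlockingCollection<PrintRequest>();

// one consumer thread drains the queue in order
Task.Run(() =>
{
    foreach (var job in jobs.GetConsumingEnumerable())
        ProcessJob(job);
});

// producers: each service call just enqueues and returns
jobs.Add(new PrintRequest { JobId = Guid.NewGuid() });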
But really...you're so much better off using some kind of service bus architecture and not rolling your own queue.
Being behind a load balancer means you probably want them all calling to a service instance to manage a single queue. You could roll your own or you could let each instance manage its own queue. That might be more processing than you want going on on your server instances...that's your call. With wcf dual http, you may need your load balancer to be configured to have client affinity...so you can have session-oriented two-way communications. signalR supports a message bus backed by Sql Server, Redis, or Azure Service Bus, so you don't have to worry about affinity with a particular server instance. It has performance implications that are discussed here.
I guess the most salient advice is...find out what's out there and try to avoid reinventing the wheel. By all means, go for it if you're in burning/learning mode and can afford the time. But, if you're getting paid, find and learn the tools that are already in the field.
Since you're using .Net on both sides, you might consider writing all your contracts (service contracts and data contracts) into a .DLL that you use on both the client and the service. The nice thing about that is it's easy to keep things in sync, and you don't have to use the (rather weak) generated data contract types that come through WSDL discovery or the service reference wizard, and you can spin up client instances using ChannelFactory<IYourServiceContract>.
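For instance, spinning up a client from that shared .DLL might look like this (IPrintService, the binding and the address are placeholders):

var factory = new ChannelFactory<IPrintService>(
    new BasicHttpBinding(),
    new EndpointAddress("http://example.com/printservice"));

IPrintService client = factory.CreateChannel();
client.PrintPDF(new PrintRequest { JobId = Guid.NewGuid() }); // one-way call returns immediately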

AKKA.NET with Asp.net web api [closed]

We are planning to develop an audit service web API using AKKA.NET.
The API will receive the audit log data and will spawn an audit actor which will do some security checks, insert the data into a database and send an email notification in case of any major security breaches. The plan is to use the TELL method to do the processing in a fire-and-forget manner. The API will always return 200 OK so that the calling service is not affected.
When we did some research, we found that most posts suggest creating a static instance of ActorSystem in global.asax. We found 2 ways of creating an actor:
Spawn an actor [with unique names] inside the API, initiating a new instance of the actor for every call, and call the Tell method
Create a single static instance of the actor and call the Tell method
We feel approach 2 is the best way to leverage AKKA.NET. Are we on the right path?
I would normally go for option 1 in this type of scenario. You need to think about how you handle failure. I would probably create some sort of supervisor actor to handle this process each time you get a command to do it. The supervisor could then create and supervise (i.e. determine how failure is handled) two actors - one for saving to the DB, another to send the email. The supervisor would be responsible for managing the process and killing itself (and children) when finished.
If you go for the second option, you will have a single queue for all messages, so you will run into problems with scaling. You could get around this by having a pool of actors, but I think it will be more difficult to handle retries and failures, especially if you need to know which things have been saved but didn't send emails. You could probably still get it to work, especially if you don't care if the save-then-email process fully completes, but I just think the first option fits the actor model better.
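A rough sketch of what option 1 could look like (all actor and message names here are made up for illustration):

public class AuditSupervisor : ReceiveActor
{
    public AuditSupervisor()
    {
        // children are supervised here; override SupervisorStrategy() to decide how failures are handled
        var db = Context.ActorOf(Props.Create<DbWriterActor>(), "db-writer");
        var mail = Context.ActorOf(Props.Create<EmailActor>(), "emailer");

        Receive<AuditCommand>(cmd =>
        {
            db.Tell(cmd);
            mail.Tell(cmd);
            // track acknowledgements from both children, then Context.Stop(Self)
        });
    }
}

// in the API, per incoming audit call:
actorSystem.ActorOf(Props.Create<AuditSupervisor>()).Tell(auditCommand);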

Detect and process XML files when they appear in a known folder [closed]

Background
A common way to integrate systems that know nothing of one another, such as banking software on mainframes and software on PCs, is to have one system, the provider (usually a mainframe, an AS400 or similar), export data to some well understood file format such as CSV or XML on a shared network file system, and then have the other system, the consumer, import and process the data. Typically the processing involves merging the data into a database for downstream access and processing, but this is not relevant to the transfer system.
A fundamental problem facing builders of such interfaces is how to coordinate the two systems with minimum latency, and ideally with a failure detection and reporting mechanism.
The oldest and crudest method is common scheduling: the provider emits the file at a scheduled time, and the consumer polls for the file at a scheduled time not long after. Margins are allowed for clock error and time is allowed for the file to be produced and copied, and at the appointed hour the consumer attempts to access the file and either processes it or reports that it couldn't.
This is a variety of polling optimised using foreknowledge of when the provider will provide.
The platform on which I have to implement this is Windows, and the technology available to me is C# and the .NET framework.
The question
As I have never tried to do this on the Windows platform I need guidance from anyone with applicable experience. Is polling the right way to go about this, or if not what approach would you recommend? If the strategy you suggest introduces dependencies, what are they and what are their major consequences? For example if your approach requires a process to run all the time it should probably be implemented as a Windows Service.
I would really appreciate links to relevant material including any pertinent Stack Overflow Q&A so I can do the necessary self-education.
Additional context
This is the requirement as it was presented to me:
I have a service that will provide me some XML files in regular interval to a folder e.g. C://Folder . In this folder, I have to check if a particular XML is there or not. Suppose File.XML should be in Folder at 12 noon then I have to run the window service to check folder of that file is in folder or not. If it is there then I have to process it otherwise I have to log a alert that File was not available at 12 noon.
If I correctly interpret your requirements, you want a process to run whenever XML files are put into a specified folder, and you want to do this using C#.
There are two fundamental ways you can approach this problem.
Poll, which means check for files at regular intervals.
Use a FileSystemWatcher.
Polling is the strategy to which you allude in your question. You don't need to write a Windows Service to have a background task run at regular intervals. You can use the Windows Task Scheduler to run a program at regular intervals. If you let Task Scheduler take care of the scheduling, all your program has to do is process any files it finds.
The other way relies on the fact that the NTFS file system raises events when files are created, modified, renamed or deleted. You can use a FileSystemWatcher object to bind an event handler directly to any of these events. If you use this approach your program must not exit, so in this case a service is exactly what you need. A FileSystemWatcher can be set up to fire events for a particular folder and you can specify a file mask like *.xml
Either way, after you process each file I imagine you want to either remove it or mark it as processed.
Instead of checking the folder periodically, you could also work event-driven with the FileSystemWatcher:
var fsw = new FileSystemWatcher()
{
    Path = @"c:\my\path",
    Filter = "*.xml"
};
fsw.Created += (sender, args) =>
{
    // do whatever you like; args contains information about the file that was created
};
fsw.EnableRaisingEvents = true; // without this, no events are raised
Advantages:
no need for complex job-scheduling
event-driven, there is no delay
you can also react to other events like changed or deleted
Edit:
If you'd like more control over the events, you could combine the FileSystemWatcher with the Reactive Extensions.
For example, you could use the Buffer() extension to collect all events over a timespan of x minutes and invoke the event handler only once per window, as sketched below.
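A sketch of that combination, reusing the fsw instance from the example above (ProcessFiles is a placeholder):

using System.Reactive.Linq;
...
var created = Observable.FromEventPattern<FileSystemEventArgs>(fsw, "Created");

created.Buffer(TimeSpan.FromMinutes(5))          // collect Created events for 5 minutes
       .Where(batch => batch.Count > 0)          // skip empty windows
       .Subscribe(batch => ProcessFiles(batch)); // handler fires once per window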
First idea:
You can use Quartz.NET (more info). It is a very good and easy-to-use task scheduler.
To easily create Windows services, I use TopShelf (more info).
Example of configuring TopShelf with a Quartz server:
Host host = HostFactory.New(x =>
{
    x.Service<IQuartzServer>(s =>
    {
        s.SetServiceName("Services");
        s.ConstructUsing(builder =>
        {
            IQuartzServer server = QuartzServerFactory.CreateServer();
            server.Initialize();
            return server;
        });
        s.WhenStarted(server => server.Start());
        s.WhenPaused(server => server.Pause());
        s.WhenContinued(server => server.Resume());
        s.WhenStopped(server => server.Stop());
    });
    x.RunAsLocalSystem();
    x.SetDescription(Configuration.ServiceDescription);
    x.SetDisplayName(Configuration.ServiceDisplayName);
    x.SetServiceName(Configuration.ServiceName);
});
host.Run();
In this config you host the Quartz server as a Windows service.
To create a job you must implement the IJob interface from the Quartz namespace; in that implementation you put all the logic you need (a minimal sketch follows).
Then you can define a quartz.xml file with the job scheduling details, such as when the job must start.
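A minimal sketch of such a job, assuming Quartz.NET 2.x (the folder path and ProcessFile are placeholders):

using Quartz;
...
public class CheckXmlFolderJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // look for the expected XML files and process them, or log an alert
        foreach (var file in Directory.GetFiles(@"C:\Folder", "*.xml"))
            ProcessFile(file);
    }
}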
Other approach:
If you don't want to use external DLLs, as you mentioned in the comments, you must create a Windows Service from the Visual Studio template and then implement the main method.
This example is very simple and bare, but it conveys the main idea:
while (true)
{
    if (Directory.GetFiles("path").Any())
    {
        // there are some files; process them here
    }
    Thread.Sleep(50000); // wait before polling again
}
Using LINQ with Select and Where, you can filter for exactly the files you want.

Bus instances Lifecycle and Best practices [closed]

While evaluating queueing mechanisms in general and Rebus in particular, I came up with the following questions about bus instance lifecycles:
When you need access to the bus instance (one-way client mode) from several WCF services hosted in a Windows service, is singleton the only option for instancing?
Is there a way to pause a bus (stop dispatching messages to the message handlers) and then start it again? Or is the only option to dispose it and create a new one?
A use case for this is when you connect to systems that have throughput limitations or transactions-per-hour limits.
Can sagas have multiple workers? If so, and assuming the events were sent in the correct order (initiator first), is there a way to guarantee that the initiator is handled first, thereby creating the saga, before the following events are handled by multiple workers?
If several bus instances are used in the same host, and inside a message handler we call Send on another bus instance based on the same configuration, the correlation ID won't be transmitted, and things like Reply won't work properly, right?
I prefer concrete answers on how Rebus can or cannot support this, with code references/examples.
1: It's really simple: The bus instance (i.e. the implementation of IBus that gets put into the container and is handed to you when you do the Configure.With(...) configuration spells) is supposed to be a singleton instance that you keep around for the entire duration of your application's lifetime.
You can easily create multiple instances though, but that would be useful only for hosting multiple Rebus endpoints in the same process.
IOW the bus is fully reentrant and can safely be shared among threads in your web application.
2: Not readily, no - at least not in a way that is supported by the public API. You can do this though: ((RebusBus)bus).SetNumberOfWorkers(0) (i.e. cast the IBus instance to RebusBus and change the number of worker threads), which will block until the number of workers has been adjusted to the desired number.
This way, you can actually achieve what you're after. It's just not an official feature of Rebus (yet), but it might be in the future. I can guarantee, though, that the ability to adjust the number of workers at runtime will not go away.
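In code, that pause/resume looks like this:

var rebus = (RebusBus)bus;    // cast the container-held IBus

rebus.SetNumberOfWorkers(0);  // blocks until all worker threads have stopped; dispatch is paused
// ...wait out the throttling window...
rebus.SetNumberOfWorkers(1);  // resume dispatching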
3: Yes, sagas are guarded by an optimistic concurrency scheme no matter which persistence layer you choose. If you're unsure which type of message will arrive first at your saga, you should make your saga tolerant to this - i.e. just implement IAmInitiatedBy<> for each potentially initiating message type and make the saga handle that properly.
Being (fairly) tolerant to out-of-order messages is a good general robustness principle that will serve you well also when messages are redelivered after having stayed a while in an error queue.
4: Rebus will pick up the current message context even though you're using multiple bus instances because it uses an "ambient context" (i.e. a MessageContext instance mounted on the worker thread) to pick up the fact that you're sending a message from within a handler, which in turn will cause the correlation ID of the handled message to be copied to any outgoing messages.
Thus bus.Reply will work, too.
But as I stated in (1), the bus instance is fully reentrant and there's no need to have multiple instances around, unless they're actually logically different endpoints.
I hope this answers your questions :)
