What is a safe overhead for RequestAdditionalTime()? - c#

I have a Windows service that spawns a set of child activities on separate threads and that should only terminate when all those activities have successfully completed. I do not know in advance how long it might take to terminate an activity after a stop signal is received. During OnStop(), I signal the activities to stop, wait in intervals for them to finish, and keep requesting additional time for as long as the system is willing to grant it.
Here is the basic structure:
class MyService : ServiceBase
{
    private CancellationTokenSource stopAllActivities;
    private CountdownEvent runningActivities;

    protected override void OnStart(string[] args)
    {
        // ... start a set of activities that signal runningActivities
        //     when they stop
        // ... initialize runningActivities to the number of activities
    }

    protected override void OnStop()
    {
        stopAllActivities.Cancel();
        while (!runningActivities.Wait(10000))
        {
            RequestAdditionalTime(15000); // NOTE: 5000 added for overhead
        }
    }
}
Just how much "overhead" should I be adding in the RequestAdditionalTime call? I'm concerned that the requests are cumulative, instead of based on the point in time when each RequestAdditionalTime call is made. If that's the case, adding overhead could result in the system eventually denying the request because it's too far out in the future. But if I don't add any overhead then my service could be terminated before it has a chance to request the next block of additional time.

This post wasn't exactly encouraging:
The MSDN documentation doesn’t mention this but it appears that the value specified in RequestAdditionalTime is not actually ‘additional’ time. Instead, it replaces the value in ServicesPipeTimeout. Worse still, any value greater than two minutes (120000 milliseconds) is ignored, i.e. capped at two minutes.
I hope that's not the case, but I'm posting this as a worst-case answer.
UPDATE: The author of that post was kind enough to post a very detailed reply to my comment, which I've copied below.
Lars, the short answer is no.
What I would say is that I now realise that Windows Services ought to be designed to start and terminate processing quickly when requested to do so.
As developers, we tend to focus on the implementation of the processing and then package it up and deliver it as a Windows Service.
However, this really isn’t the correct approach to designing Windows Services. Services must be able to respond quickly to requests to start and stop, not only when an administrator makes the request from the services console, but also when the operating system requests a start as part of its startup processing, or a stop because it is shutting down.
Consider what happens when Windows is configured to shut down when a UPS signals that the power has failed. It’s not appropriate for the service to respond with “I need a few more minutes…”.
It’s possible to write services that react quickly to stop requests even when they implement long running processing tasks. Usually a long running process will consist of batch processing of data and the processing should check if a stop has been requested at the level of the smallest unit of work that ensures data consistency.
As an example, the first service where I found the stop timeout was a problem involved the processing of a notifications queue on a remote server. The processing retrieved a notification from the queue, called a web service to retrieve data related to the subject of the notification, and then wrote a data file for processing by another application.
I implemented the processing as a timer driven call to a single method. Once the method is called it doesn’t return until all the notifications in the queue have been processed. I realised this was a mistake for a Windows Service because occasionally there might be tens of thousands of notifications in the queue and processing might take several minutes.
The method is capable of processing 50 notifications per second. So, what I should have done was implement a check to see if a stop had been requested before processing each notification. This would have allowed the method to return when it has completed the processing of a notification but before it has started to process the next notification. This would have ensured that the service responds quickly to a stop request and any pending notifications remained queued for processing when the service is restarted.
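In code, that per-notification stop check might look roughly like this (the queue accessor and ProcessNotification are hypothetical stand-ins for the processing described above):

private volatile bool stopRequested; // set when OnStop() is called

private void ProcessNotifications()
{
    // Check for a stop request before each notification, so the method
    // returns between units of work instead of draining the whole queue.
    while (!stopRequested)
    {
        var notification = queue.GetNextNotification(); // hypothetical accessor
        if (notification == null)
            break; // queue is empty

        // Call the web service and write the data file for this notification.
        ProcessNotification(notification);
    }
    // Anything still queued is picked up when the service is restarted.
}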

Related

Is there a way to return a response from an API call to ASP.NET while keeping the instance running?

I am writing an API using ASP.NET, and I have some potentially long-running code behind the different endpoints. The system uses CQRS and Event Sourcing. A Command comes in to an endpoint and is then published as an event using MediatR. However, the handlers are potentially long running, since some of the requests coming in might be sent to multiple handlers. This process could take longer than the 12s that AWS allows before returning an error code.
Is there a way to return a response back to the caller to say that the event has been created while still continuing with the process? That is to say, fire off a separate task that performs the long-running piece of code and also catches and logs errors, then return a value back to the user saying the event has been successfully created?
I believe that ASP.NET spins up a new instance each time a call is made; will the old instance die once a value is returned, killing the task?
I could be wrong on a number of points here; this is knowledge gleaned from the internet, so I could have misunderstood articles.
Thanks.
Yes, you should pass the long-running task off to a background process and return to the user. When the task is complete, notify the user with whatever mechanism is appropriate for your site.
But do not start a new thread; what you want is a background service running for this, and to use that to manage your request.
If a new thread runs the long operation, it will remain "open/live" until it finishes. You can also configure the app pool to always be active.
There are a lot of frameworks for working with long-running tasks, like Hangfire.
And to keep the user updated with the status of the task, you can use SignalR to push notifications to the UI.
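As a rough sketch of that shape, assuming ASP.NET Core with Hangfire already configured; CreateCommand and CommandProcessor are hypothetical names, with the MediatR dispatch folded into the processor:

using Hangfire;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/commands")]
public class CommandController : ControllerBase
{
    [HttpPost]
    public IActionResult Post([FromBody] CreateCommand command)
    {
        // Hangfire persists the job in its storage, so it outlives this HTTP
        // request and isn't bound by the 12s gateway limit.
        BackgroundJob.Enqueue<CommandProcessor>(p => p.Handle(command));

        // 202 Accepted: the command was received; processing continues in background.
        return Accepted();
    }
}

public class CommandProcessor
{
    public void Handle(CreateCommand command)
    {
        // ...dispatch to the long-running handlers, catching and logging errors;
        // push a status update to the UI (e.g. via a SignalR hub) when finished.
    }
}

Because the job is stored before the response is returned, an app-pool recycle doesn't lose the work; Hangfire simply retries it.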

webapi 2 - how to properly invoke long running method async/in new thread, and return response to client

I am developing a web API that takes data from a client and saves it for later use. Now I have an external system that needs to know of all events, so I want to set up a notification component in my web API.
What I do is, after the data is saved, execute a SendNotification(message) method in my new component. Meanwhile, I don't want my client to wait or even know that we're sending notifications, so I want to return a 201 Created / 200 OK response as fast as possible to my clients.
Yes, this is a fire-and-forget scenario. I want the notification component to handle all exception cases (if notification fails, the client of the API doesn't really care at all).
I have tried using async/await, but this does not work in Web API, since when the request thread terminates, the async operation does so as well.
So I took a look at Task.Run().
My controller looks like so:
public IHttpActionResult PostData([FromBody] Data data)
{
    _dataService.saveData(data);

    // This could fail, and retry strategy takes time.
    Task.Run(() => _notificationHandler.SendNotification(new Message(data)));

    return CreatedAtRoute<object>(...);
}
And the method in my NotificationHandler:

public void SendNotification(Message message)
{
    // ...send stuff to a notification server somewhere, synchronously.
}
I am relatively new in the C# world, and I don't know if there is a more elegant (or proper) way of doing this. Are there any pitfalls with using this method?
It really depends how long. Have you looked into the possibility of QueueBackgroundWorkItem, as detailed here? If you want to implement a very fast fire-and-forget, you also might want to consider a queue to pop these messages onto so you can return from the controller immediately. You'd then have to have something which polls the queue and sends out the notifications, i.e. a scheduled task, Windows service, etc. IIRC, if IIS recycles during a task, the process is killed, whereas with QueueBackgroundWorkItem there is a grace period for which ASP.NET will let the work item finish its job.
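For the QueueBackgroundWorkItem route (available from .NET 4.5.2), the controller from the question could look roughly like this; _dataService and _notificationHandler are the same fields as above:

using System;
using System.Net;
using System.Web.Hosting;
using System.Web.Http;

public class DataController : ApiController
{
    // _dataService and _notificationHandler as in the question

    public IHttpActionResult PostData([FromBody] Data data)
    {
        _dataService.saveData(data);

        // ASP.NET tracks this work item and gives it a grace period to finish
        // when the app domain shuts down, unlike a bare Task.Run.
        HostingEnvironment.QueueBackgroundWorkItem(cancellationToken =>
        {
            try
            {
                _notificationHandler.SendNotification(new Message(data));
            }
            catch (Exception)
            {
                // Log and swallow: the API client doesn't care if notification fails.
            }
        });

        return StatusCode(HttpStatusCode.Created); // or CreatedAtRoute as in the question
    }
}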
I would take a look at Hangfire. It is fairly easy to set up, it should be able to run within your ASP.NET process, and it is easy to migrate to a standalone process in case your IIS load suddenly increases.
I experimented with Hangfire a while ago, but in standalone mode. It has decent docs and an easy-to-understand API.

Call a method after a certain period of time without blocking

I'm making a webserver application, and I have a Listener class which waits for connections and spawns an HTTPConnection, passing it the new Socket created, each time a connect request is made. The HTTPConnection class waits for data asynchronously (using Socket.BeginReceive).
I need the delayed execution for a timeout. If the client fails to send a full HTTP request after a certain amount of time, I want to close the connection. As soon as the HTTPConnection object is constructed, the waiting period should begin, and a Timeout function should be called if the client fails to send the request in time. Obviously, I can't have the constructor method paused for a few seconds, so the waiting needs to happen asynchronously. I also need to be able to cancel the task.
I could do new Thread(...) and all, but that's very poor design. Are there any other ways to schedule a method to be called later?
You could append all postponed events to some ordered data structure and have a background task check at a certain interval whether there's a timeout event that has to be executed.
You could also save these events in a database (with a lot of clients, I imagine keeping them all in memory could lead to high memory usage).
Your background task could then get all the expired events from the database and handle them at once.
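A minimal in-memory sketch of that idea (no database; timeouts are detected with the granularity of the check interval):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

class TimeoutScheduler
{
    private readonly object _gate = new object();
    // Pending timeouts, keyed by a handle the caller can use to cancel.
    private readonly Dictionary<Guid, (DateTime due, Action callback)> _pending
        = new Dictionary<Guid, (DateTime, Action)>();

    public TimeoutScheduler(TimeSpan checkInterval)
    {
        // One background loop serves all connections.
        Task.Run(async () =>
        {
            while (true)
            {
                await Task.Delay(checkInterval);

                List<Action> expired;
                lock (_gate)
                {
                    var now = DateTime.UtcNow;
                    var dueKeys = _pending.Where(p => p.Value.due <= now)
                                          .Select(p => p.Key).ToList();
                    expired = dueKeys.Select(k => _pending[k].callback).ToList();
                    foreach (var key in dueKeys) _pending.Remove(key);
                }
                // Run callbacks outside the lock, e.g. to close a stale socket.
                foreach (var callback in expired) callback();
            }
        });
    }

    // Schedule 'callback' to run after 'delay'; the returned handle cancels it.
    public Guid Schedule(TimeSpan delay, Action callback)
    {
        var id = Guid.NewGuid();
        lock (_gate) _pending[id] = (DateTime.UtcNow + delay, callback);
        return id;
    }

    // Cancel a pending timeout, e.g. once the full HTTP request has arrived.
    public void Cancel(Guid id)
    {
        lock (_gate) _pending.Remove(id);
    }
}

An HTTPConnection would call Schedule in its constructor and Cancel once the full request has been received.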

Azure Service Bus Subscriber regularly phoning home?

We have pub/sub application that involves an external client subscribing to a Web Role publisher via an Azure Service Bus Topic. Our current billing cycle indicates we've sent/received >25K messages, while our dashboard indicates we've sent <100. We're investigating our implementation and checking our assumptions in order to understand the disparity.
As part of our investigation we've gathered wireshark captures of client<=>service bus traffic on the client machine. We've noticed a regular pattern of communication that we haven't seen documented and would like to better understand. The following exchange occurs once every 50s when there is otherwise no activity on the bus:
The client pushes ~200B to the service bus.
10s later, the service bus pushes ~800B to the client. The client registers the receipt of an empty message (determined via a breakpoint).
The client immediately responds by pushing ~1000B to the service bus.
Some relevant information:
This occurs when our web role is not actively pushing data to the service bus.
Upon receiving a legit message from the Web Role, the pattern described above will not occur again until a full 50s has passed.
Both client and server connect to sb://namespace.servicebus.windows.net via TCP.
Our application messages are <64 KB
Questions
What is responsible for the regular, 3-packet message exchange we're seeing? Is it some sort of keep-alive?
Do each of the 3 packets count as a separately billable message?
Is this behavior configurable or otherwise documented?
EDIT:
This is the code that receives the messages:
private void Listen()
{
    _subscriptionClient.ReceiveAsync().ContinueWith(MessageReceived);
}

private void MessageReceived(Task<BrokeredMessage> task)
{
    if (task.Status != TaskStatus.Faulted && task.Result != null)
    {
        task.Result.CompleteAsync();
        // Do some things...
    }
    Listen();
}
I think what you are seeing is the Receive call in the background. Behind the scenes, the Receive calls all use long polling. That means they call out to the Service Bus endpoint and ask for a message. The Service Bus service gets that request, and if it has a message it will return it immediately. If it doesn't have a message, it will hold the connection open for a period in case a message arrives. If a message arrives within that time frame, it will be returned to the client. If a message is not available by the end of the time frame, a response is sent to the client indicating that no message was there (aka your null BrokeredMessage). If you call Receive with no overloads (like you've done here), it will immediately make another request. This loop continues to happen until a message is received.
Thus, what you are seeing are the number of times the client requests a message but there isn't one there. The long polling makes it nicer than what the Windows Azure Storage Queues have because they will just immediately return a null result if there is no message. For both technologies it is common to implement an exponential back off for requests. There are lots of examples out there of how to do this. This cuts back on how often you need to go check the queue and can reduce your transaction count.
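A rough sketch of such a back-off, using the same SubscriptionClient API as the code above (the delay bounds are arbitrary assumptions):

private async Task ListenWithBackOffAsync(CancellationToken cancellationToken)
{
    TimeSpan delay = TimeSpan.Zero;
    TimeSpan maxDelay = TimeSpan.FromMinutes(5);

    while (!cancellationToken.IsCancellationRequested)
    {
        if (delay > TimeSpan.Zero)
            await Task.Delay(delay, cancellationToken);

        BrokeredMessage message = await _subscriptionClient.ReceiveAsync();
        if (message != null)
        {
            await message.CompleteAsync();
            // Do some things...
            delay = TimeSpan.Zero; // traffic seen: poll again immediately
        }
        else
        {
            // Idle: double the wait (1s, 2s, 4s, ...) up to the cap, cutting
            // down on billable idle requests.
            delay = delay == TimeSpan.Zero
                ? TimeSpan.FromSeconds(1)
                : TimeSpan.FromTicks(Math.Min(delay.Ticks * 2, maxDelay.Ticks));
        }
    }
}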
To answer your questions:
Yes, this is normal expected behaviour.
No, this is only one transaction. For Service Bus you get charged a transaction each time you put a message on a queue and each time a message is requested (which can be a little opaque given that Receive makes multiple calls in the background). Note that the docs point out that you get charged for each idle transaction (meaning a null result from a Receive call).
Again, you can implement a back-off methodology so that you aren't hitting the queue so often. Another suggestion I've recently heard: if you have a queue that isn't seeing a lot of traffic, you could check whether the queue depth is > 0 before entering the processing loop, and if you get no messages back from a receive call, go back to watching the queue depth. I've not tried that, and I'd think it's possible you could get throttled if you did the queue depth check too often.
If these are your production numbers, then your subscription isn't really processing a lot of messages. It would likely be a really good idea to have a back-off policy up to a time that is acceptable to wait before a message is processed. For instance, if it is okay for a message to sit for up to 10 minutes, create a back-off approach that eventually just checks for a message every 10 minutes; then, when it gets one, process it and immediately check again.
Oh, there is a Receive overload that takes a timeout, but I'm not 100% sure whether that is a server timeout or a local timeout. If it is local, then it could still be making the calls every X seconds to the service. I think this is based on the OperationTimeout value set in the MessagingFactory settings when creating the SubscriptionClient. You'd have to test that.

How to implement an IIS-like threadpool on a worker-server

EDIT I realised my question was not stated clearly enough and have edited it heavily.
This is a bit of an open ended question so apologies in advance.
In a nutshell, I want to implement IIS-style asynchronous request processing in an Azure worker role.
It may be very simple or it may be insanely hard - I am looking for pointers to where to research.
While my implementation will use Azure Workers and Service Bus Queues, the general principle is applicable to any scenario where a worker process is listening for incoming requests and then servicing them.
What IIS does
In IIS there is a fixed-size threadpool. If you deal with all requests synchronously, then the maximum number of requests you can deal with in parallel == maxthreads. However, if you have to do slow external I/O to serve requests, this is highly inefficient because you can end up with the server being idle yet have all threads tied up waiting for external I/O to complete.
From MSDN:
On the Web server, the .NET Framework maintains a pool of threads that are used to service ASP.NET requests. When a request arrives, a thread from the pool is dispatched to process that request. If the request is processed synchronously, the thread that processes the request is blocked while the request is being processed, and that thread cannot service another request.
This might not be a problem, because the thread pool can be made large enough to accommodate many blocked threads. However, the number of threads in the thread pool is limited. In large applications that process multiple simultaneous long-running requests, all available threads might be blocked. This condition is known as thread starvation. When this condition is reached, the Web server queues requests. If the request queue becomes full, the Web server rejects requests with an HTTP 503 status (Server Too Busy).
In order to overcome this issue, IIS has some clever logic that allows you to deal with requests asynchronously:
When an asynchronous action is invoked, the following steps occur:
The Web server gets a thread from the thread pool (the worker thread) and schedules it to handle an incoming request. This worker thread initiates an asynchronous operation.
The worker thread is returned to the thread pool to service another Web request.
When the asynchronous operation is complete, it notifies ASP.NET.
The Web server gets a worker thread from the thread pool (which might be a different thread from the thread that started the asynchronous operation) to process the remainder of the request, including rendering the response.
The important point here is when the asynchronous request returns, the return action is scheduled to run on one of the same pool of threads that serves the initial incoming requests. This means that the system is limiting how much work it is doing concurrently and this is what I would like to replicate.
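In code, that pattern is simply an async action; a minimal Web API sketch (the _reportService call stands in for any hypothetical I/O-bound operation):

public class ReportController : ApiController
{
    public async Task<IHttpActionResult> GetReport(int id)
    {
        // Steps 1-2: the worker thread starts the I/O and returns to the
        // pool while the operation is pending...
        var report = await _reportService.FetchAsync(id); // hypothetical call

        // Step 4: a pool thread (possibly a different one) resumes here to
        // render the response.
        return Ok(report);
    }
}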
What I want to do
I want to create a Worker role which will listen for incoming work requests on Azure Service Bus Queues and also potentially on TCP sockets. Like IIS, I want to have a maximum threadpool size and I want to limit how much actual work the worker is doing in parallel; if the worker is busy serving existing requests - whether new incoming ones or the callbacks from previous async calls - I don't want to pick up any new incoming requests until some threads have been freed up.
It is not a problem to limit how many jobs I start concurrently - that is easy to control; the problem is limiting how many I am actually working on concurrently.
Let's assume a threadpool of 100 threads.
Say I get 100 requests to send an email, and each email takes 5 seconds to send to the SMTP server. If I limit my server to only process 100 requests at the same time, then my server will be unable to do anything else for 5 seconds, while the CPU is completely idle. So, I don't really mind starting to send 1,000 or 10,000 emails at the same time, because 99% of the "request process time" will be spent waiting for external I/O and my server will still be very quiet.
So, that particular scenario I could deal with by just keeping on accepting incoming requests with no limit (or only limit the start of the request until I fire off the async call; as soon as the BeginSend is called, I'll return and start serving another request).
Now, imagine instead that I have a type of request that goes to the database to read some data, does some heavy calculation on it and then writes that back to the database. There are two database requests there that should be made asynchronous, but 90% of the request processing time will be spent on my worker. So, if I follow the same logic as above - keep starting async calls and just let the continuations grab threads as they need them - then I will end up with a server that is very overloaded.
Somehow, what IIS does is make sure that when an async call returns it uses the same fixed-size thread pool. This means that if I fire off a lot of async calls and they then return and start using my threads, IIS will not accept new requests until those returns have finished. And that is perfect because it ensures a sensible load on the server, especially when I have multiple load-balanced servers and a queue system that the servers pick work from.
I have this sneaky suspicion that this might be very simple to do, there is just something basic I am missing. Or maybe it is insanely hard.
Creating a threadpool should be considered as independent of Windows Azure. Since a Worker Role instance is effectively Windows 2008 Server R2 (or SP2), there's nothing really different. You'd just need to set things up from your OnStart() or Run().
One thing you wanted to do was use queue length as a determining factor when scaling to more/fewer worker instances. Note that Service Bus Queues don't advertise queue length, whereas Windows Azure Queues (based on Storage, vs. Service Bus) do. With Windows Azure Queues, you'll need to poll synchronously for messages (whereas Service Bus Queues have long-polling operations). Probably a good idea to review the differences between Service Bus Queues and Windows Azure Queues, here.
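If queue depth does become your scaling signal, a small sketch against the WindowsAzure.Storage client (the connection string, queue name, and threshold are assumptions):

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

// Windows Azure (Storage) Queues expose an approximate length; Service Bus
// Queues have no equivalent, so this only works with the Storage flavour.
CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("jobs");

queue.FetchAttributes(); // refresh the locally cached metadata
int depth = queue.ApproximateMessageCount ?? 0;
if (depth > scaleOutThreshold)
{
    // ... ask for more worker instances
}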
Have you considered having a dedicated WCF instance (not WAS or IIS hosted) to buffer the long running requests? It will have its own dedicated app pool, with a separate Max value setting from IIS that won't contend with your ASP.NET HTTP requests. (HTTP requests are served by
Then use IIS Async methods to call WCF with the constrained app pool.
I've used the SmartThreadPool project in the past as a per-instance pool and, if I'm reading you correctly, it should have all the callback and worker-limiting functionality you need. My company actually has it running currently on Azure for the exact purpose you describe of reading message bus requests asynchronously.
I have been digging around in this and found that it is indeed relatively easy.
http://www.albahari.com/threading/ has got some good information and I actually ended up buying the book which that website is essentially promoting.
What I found out is that:
Your application has a ThreadPool available to it by default.
You can limit the number of threads available in the ThreadPool.
When you use QueueUserWorkItem or Task.Factory.StartNew, the job you start runs on a thread in the ThreadPool.
When you use one of the asynchronous IO calls in the framework (Begin... methods, WebClient.DownloadStringAsync, etc.), the callbacks will also run on a thread from the ThreadPool (what happens with the IO request itself is outside the scope of this discussion).
So far, so good. The problem is that I can keep calling Task.Factory.StartNew as much as I like and the ThreadPool will simply queue up the work until there are free threads to service them. So, in the case of an Azure Worker, I could easily empty the Queue even though my worker is busy servicing existing requests (and callbacks from existing requests). That is the core of my problem. What I want is to not take anything out of the queue until I actually have some free threads to service the request.
This is a very simple example of how this could be achieved. In essence, I am using an AutoResetEvent to make sure that I don't start another task from the queue until the previous task has actually started. Granted, I do actually take stuff out of the queue before there is a free thread, but on balance this should avoid crazy overloads of the worker and allow me to spin up more workers to share the load.
ThreadPool.SetMaxThreads(5, 1000); // Limit to 5 concurrent threads
ThreadPool.SetMinThreads(5, 10);   // Ensure we spin up all threads

var jobStart = new AutoResetEvent(true);

// The "listen" loop
while (true)
{
    var job = this.jobQueue.Dequeue();
    jobStart.WaitOne(); // Wait until the previous job has actually been started
    Task.Factory.StartNew(
        () =>
        {
            jobStart.Set(); // Will happen when the threadpool allocates this job to a thread
            this.Download(job);
        });
}
This can - and probably should - be made a lot more sophisticated, including having timeouts, putting the work item back in the queue if a thread can't be allocated within a reasonable time and so on.
An alternative would be to use ThreadPool.GetAvailableThreads to check if there are free threads before starting to listen to the queue but that feels rather more error prone.
Somehow, what IIS does is make sure that when an async call returns
it uses the same fixed-size thread pool.
This is not true: when your code runs in response to an HTTP request, you decide on which threads the continuation function executes. Usually, this is the thread pool, and the thread pool is an AppDomain-wide resource that is shared among all requests.
I think IIS does less "magic" than you think it does. All it does is limit the number of parallel HTTP requests and the backlog size. You decide what happens once you have been given control by ASP.NET.
If your code is not protected against overloading the server, you will overload the server even on IIS.
From what I understand you want to constrain the number of threads used for processing a certain type of message at the same time.
One approach would be to simply wrap the message processor, invoked on a new thread with something like
try
{
    Interlocked.Increment(ref count);
    Process(message);
}
finally
{
    Interlocked.Decrement(ref count);
}
Before invoking the wrapper, simply check whether 'count' is less than your threshold count, and stop polling/handling more messages till the count is sufficiently lower.
EDIT Added more information based on comment
Frans, I'm not sure why you see the infrastructure and business code as being coupled. Once you place your business process on a new thread to run asynchronously as a task, you need not worry about making the additional I/O-bound calls asynchronous as well. This is a simpler model to program in.
Here is what I am thinking.
// semi-pseudo-code

// Infrastructure – reads messages from the queue
// (independent thread, could be triggered by a timer)
while (count < maxCount && (message = Queue.GetMessage()) != null)
{
    Interlocked.Increment(ref count);

    // process message asynchronously on a new thread
    Task.Factory.StartNew(() => ProcessWrapper(message));
}

// glue / semi-infrastructure - deals with message deletion and exceptions
void ProcessWrapper(Message message)
{
    try
    {
        Process(message);
        Queue.DeleteMessage(message);
    }
    catch (Exception ex)
    {
        // Handle exception here.
        // Log, write to poison message queue, etc...
    }
    finally
    {
        Interlocked.Decrement(ref count);
    }
}

// business process
void Process(Message message)
{
    // actual work done here
}
